Abstract:
Embodiments relate to a companion animal identification method including acquiring a preview image for capturing a face of a target companion animal, checking if the face of the target companion animal is aligned according to a preset criterion, capturing the face of the target companion animal when it is determined that the face of the target companion animal is aligned, and identifying the target companion animal by extracting features from a face image of the target companion animal having an aligned face view, and an identification system for performing the same.
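As a rough illustration of the capture-gated pipeline, the Python/NumPy sketch below checks a hypothetical alignment criterion (small roll of the eye line and a centered nose) before extracting a placeholder block-mean feature vector; the landmark inputs, thresholds, and features are assumptions rather than the claimed preset criterion or feature extractor.

    import numpy as np

    def is_face_aligned(left_eye, right_eye, nose, max_roll_deg=10.0, max_offset=0.1):
        """Hypothetical preset criterion: small in-plane roll and a nose centered between the eyes."""
        left_eye, right_eye, nose = map(np.asarray, (left_eye, right_eye, nose))
        dx, dy = right_eye - left_eye
        roll = np.degrees(np.arctan2(dy, dx))                       # rotation of the eye line
        offset = abs(nose[0] - (left_eye[0] + right_eye[0]) / 2.0)  # horizontal nose offset
        return abs(roll) <= max_roll_deg and offset <= max_offset

    def extract_features(face_image):
        """Placeholder feature extractor: normalized 8x8 block-mean intensities."""
        h, w = face_image.shape
        f = face_image[:h - h % 8, :w - w % 8].reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3)).ravel()
        return f / (np.linalg.norm(f) + 1e-8)

    if __name__ == "__main__":
        if is_face_aligned((0.35, 0.40), (0.65, 0.41), (0.50, 0.55)):
            face = np.random.rand(128, 128)        # stand-in for the captured face image
            print("aligned; feature vector length:", extract_features(face).size)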
Abstract:
Embodiments relate to a method for generating a video synopsis including receiving a user query; performing an object-based analysis of a source video; and generating a synopsis video in response to a video synopsis generation request from a user, and a system therefor. The video synopsis generated by the embodiments reflects the user's desired interaction.
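A minimal Python sketch of the query-driven assembly follows; the Tube structure, the label-based query, and the greedy temporal packing are illustrative assumptions and do not reproduce the disclosed object-based analysis or collision handling.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Tube:                       # one tracked object from the source video
        label: str
        start: int                    # first source frame
        length: int                   # number of frames the object is visible

    def generate_synopsis(tubes: List[Tube], query_label: str, gap: int = 5):
        """Select tubes matching the user query and pack them densely in time."""
        selected = sorted((t for t in tubes if t.label == query_label), key=lambda t: t.start)
        placements, cursor = [], 0
        for t in selected:
            placements.append((t, cursor))        # (tube, synopsis start frame)
            cursor += t.length + gap              # overlap handling omitted
        return placements

    if __name__ == "__main__":
        tubes = [Tube("person", 100, 40), Tube("car", 300, 60), Tube("person", 900, 30)]
        for tube, start in generate_synopsis(tubes, "person"):
            print(f"{tube.label}: source frame {tube.start} -> synopsis frame {start}")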
Abstract:
Exemplary embodiments relate to a method for unlocking a mobile device using authentication based on ear recognition including obtaining, in a lock state, an image of a target showing at least part of the target's body, extracting a set of ear features of the target from the image of the target, when the image of the target includes at least part of the target's ear, and determining if the extracted set of ear features of the target satisfies a preset condition, and a mobile device performing the same.
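The preset condition might be prototyped as a simple template comparison, as in the hedged Python sketch below; the cosine-similarity threshold and the 64-dimensional ear feature vectors are assumptions, not the claimed condition.

    import numpy as np

    def satisfies_unlock_condition(ear_features, enrolled_template, threshold=0.85):
        """Return True when the target's ear features match the enrolled template."""
        a = np.asarray(ear_features, dtype=float)
        b = np.asarray(enrolled_template, dtype=float)
        similarity = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
        return similarity >= threshold

    if __name__ == "__main__":
        template = np.random.rand(64)                     # enrolled ear feature vector
        probe = template + 0.05 * np.random.randn(64)     # same ear, slight noise
        print("unlock" if satisfies_unlock_condition(probe, template) else "stay locked")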
Abstract:
Embodiments relate to a method including obtaining m measured values for each field sensor by measuring, at m time steps, a first sensor group including a first type of field sensor and a second sensor group including a different second type of field sensor, both attached to a rigid body; and calibrating a sensor frame of the first type of field sensor and a sensor frame of the second type of field sensor by using a correlation between the first type of field sensor and the second type of field sensor based on the measured values of at least some of the m time steps, wherein the multiple field sensors include different ones of a magnetic field sensor, an acceleration sensor, and a force sensor, and a system therefor.
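One generic way to exploit such a correlation is to fit the rotation between the two sensor frames from paired directional measurements (a Kabsch-style solve); the NumPy sketch below illustrates this under the assumption of noiseless, paired readings and is not the patented calibration.

    import numpy as np

    def estimate_frame_rotation(meas_a, meas_b):
        """Estimate rotation R such that meas_b[i] ~ R @ meas_a[i] over the m time steps."""
        A = np.array(meas_a, dtype=float)                 # (m, 3) values from sensor type 1
        B = np.array(meas_b, dtype=float)                 # (m, 3) values from sensor type 2
        A /= np.linalg.norm(A, axis=1, keepdims=True)     # use directions only
        B /= np.linalg.norm(B, axis=1, keepdims=True)
        U, _, Vt = np.linalg.svd(B.T @ A)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # enforce a proper rotation
        return U @ D @ Vt

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        true_R = q * np.sign(np.linalg.det(q))            # random proper rotation
        a = rng.normal(size=(10, 3))                      # m = 10 time steps
        b = a @ true_R.T                                  # same field seen in the other frame
        print(np.allclose(estimate_frame_rotation(a, b), true_R))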
Abstract:
A method for automatic facial impression transformation includes extracting landmark points for elements of a target face whose facial impression is to be transformed, as well as distance vectors respectively representing distances between the landmark points, comparing the distance vectors to select a learning data set similar to the target face from a database, extracting landmark points and distance vectors from the learning data set, transforming a local feature of the target face based on the landmark points of the learning data set and score data for a facial impression, and transforming a global feature of the target face based on the distance vectors of the learning data set and the score data for the facial impression. Accordingly, a facial impression may be transformed in various ways while keeping the identity of the corresponding person.
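A toy NumPy sketch of two of the steps, selecting a similar learning set by distance-vector comparison and shifting the target landmarks toward a score-weighted mean, is given below; the data shapes, weighting, and blending factor are illustrative assumptions.

    import numpy as np

    def select_similar_set(target_dvec, learning_dvecs, k=3):
        """Pick the k learning samples whose distance vectors are closest to the target's."""
        errs = np.linalg.norm(learning_dvecs - target_dvec, axis=1)
        return np.argsort(errs)[:k]

    def transform_global(target_landmarks, learning_landmarks, scores, idx, alpha=0.5):
        """Shift target landmarks toward the score-weighted mean of the selected set."""
        w = scores[idx] / scores[idx].sum()
        goal = np.tensordot(w, learning_landmarks[idx], axes=1)   # weighted mean shape
        return (1 - alpha) * target_landmarks + alpha * goal

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        target_lm = rng.random((68, 2))                 # 68 landmark points of the target face
        db_lm = rng.random((50, 68, 2))                 # landmark points of the learning data
        db_dvec = rng.random((50, 10))                  # distance vectors per learning sample
        db_scores = rng.random(50)                      # facial-impression score data
        idx = select_similar_set(rng.random(10), db_dvec)
        print(transform_global(target_lm, db_lm, db_scores, idx).shape)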
Abstract:
A video deblurring method based on a layered blur model includes, when a blurred video frame is received, estimating a latent image, an object motion and a mask for each layer in each frame using images consisting of a combination of layers during an exposure time of a camera, applying the estimated latent image, object motion and mask for each layer in each frame to the layered blur model to generate a blurry frame, comparing the generated blurry frame and the received blurred video frame, and outputting a final latent image based on the estimated object motion and mask for each layer in each frame when the generated blurry frame and the received blurred video frame match. Accordingly, by modeling a blurred image as an overlap of images consisting of a combination of foreground and background during exposure, more accurate deblurring results can be obtained at object boundaries.
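The forward (frame-synthesis) step of such a layered model can be sketched as averaging foreground/background composites over the exposure, as below; the two-layer setup, integer translational motion, and step count are assumptions made for illustration.

    import numpy as np

    def shift(img, dx):
        """Translate an image horizontally by an integer number of pixels."""
        return np.roll(img, dx, axis=1)

    def synthesize_blurry_frame(background, foreground, mask, motion_px, steps=8):
        """Average the layer composite over the exposure time (discretized into steps)."""
        acc = np.zeros_like(background, dtype=float)
        for s in range(steps):
            dx = int(round(motion_px * s / (steps - 1)))
            fg, m = shift(foreground, dx), shift(mask, dx)
            acc += m * fg + (1 - m) * background          # composite at this sub-time
        return acc / steps

    if __name__ == "__main__":
        bg = np.zeros((32, 32)); fg = np.ones((32, 32))
        mask = np.zeros((32, 32)); mask[12:20, 4:10] = 1.0
        blurry = synthesize_blurry_frame(bg, fg, mask, motion_px=10)
        observed = blurry.copy()                          # stand-in for the received frame
        print("model matches observation:", np.allclose(blurry, observed))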
Abstract:
Disclosed is an apparatus for generating a facial composite image, which includes: a database in which face image and partial feature image information is stored; a wireframe unit configured to apply a face wireframe to a basic face sketch image, the face wireframe applying an active weight to each intersecting point; a face composing unit configured to form a two-dimensional face model to which the wireframe is applied, by composing images selected from the database; and a model transforming unit configured to transform the two-dimensional face model according to a user input on the basis of the two-dimensional face model to which the wireframe is applied. Accordingly, a facial composite image with improved accuracy may be generated efficiently.
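A hedged sketch of how per-point active weights could drive a user-driven transformation follows; the radius-based falloff and the drag interaction are illustrative stand-ins for the disclosed model transforming unit.

    import numpy as np

    def transform_model(points, weights, drag_point, drag_vector, radius=0.3):
        """Displace wireframe points near drag_point, scaled by each point's active weight."""
        pts = np.asarray(points, dtype=float)
        dists = np.linalg.norm(pts - drag_point, axis=1)
        falloff = np.clip(1.0 - dists / radius, 0.0, 1.0)        # only nearby points move
        return pts + (weights * falloff)[:, None] * drag_vector

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        pts = rng.random((20, 2))                          # wireframe intersection points
        w = rng.random(20)                                 # active weight per point
        moved = transform_model(pts, w, drag_point=np.array([0.5, 0.5]),
                                drag_vector=np.array([0.05, 0.0]))
        print(np.abs(moved - pts).max())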
Abstract:
Provided are a method and apparatus for inferring a facial composite, whereby a user's designation regarding at least one point of a facial image is received, facial feature information is extracted based on the received user's designation, a facial type that coincides with the extracted facial feature information is retrieved from a facial composite database to generate a face shape model based on the retrieved facial type, and an initial facial composite model having a facial type similar to the face in the facial image is provided from a low-resolution facial image through which face recognition or identification cannot be accurately performed, so that the face shape model contributes to criminal arrest and a low-resolution facial image captured by a surveillance camera can be used more efficiently.
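The database search step might look roughly like the following NumPy sketch; the patch-based features around the user-designated point and the nearest-neighbor lookup are assumptions, not the disclosed inference procedure.

    import numpy as np

    def extract_features_at(image, point, patch=8):
        """Take a small patch around the user-designated point as a feature vector."""
        y, x = point
        region = image[max(0, y - patch):y + patch, max(0, x - patch):x + patch]
        return region.ravel() / (np.linalg.norm(region) + 1e-8)

    def search_facial_type(features, facial_types):
        """Return the index of the database facial type closest to the features."""
        dists = [np.linalg.norm(features - ft) for ft in facial_types]
        return int(np.argmin(dists))

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        low_res = rng.random((32, 32))                     # low-resolution surveillance image
        feats = extract_features_at(low_res, (16, 16))
        db = [rng.random(feats.size) for _ in range(5)]    # facial composite database entries
        print("best matching facial type:", search_facial_type(feats, db))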
Abstract:
Provided is an image recognition apparatus that identifies a person through a face of the person recognized from a photographed image, an image recognition method thereof, and a face image generation method thereof. The method for recognizing a face image includes generating candidate images including at least one morphable image which is generated using an object image, extracting first features from the generated candidate images, extracting a second feature from a reference image, generating at least one score corresponding to each of the at least one morphable image by comparing the first features with the second feature, and performing matching to calculate a final score from the generated at least one score.
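A toy version of the candidate-scoring and matching flow is sketched below; the flip/blend "morphable" candidates, block-mean features, and max-score matching are illustrative assumptions only.

    import numpy as np

    def extract(img):
        """Toy feature: normalized 4x4 block-mean intensities."""
        h, w = img.shape
        f = img[:h - h % 4, :w - w % 4].reshape(4, h // 4, 4, w // 4).mean(axis=(1, 3)).ravel()
        return f / (np.linalg.norm(f) + 1e-8)

    def recognize(object_img, reference_img):
        candidates = [object_img, np.fliplr(object_img),
                      0.5 * (object_img + np.fliplr(object_img))]   # stand-in morphable images
        ref = extract(reference_img)                                 # second feature
        scores = np.array([extract(c) @ ref for c in candidates])    # one score per candidate
        return scores.max()                                          # matching: final score

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        face, ref = rng.random((64, 64)), rng.random((64, 64))
        print("final score:", recognize(face, ref))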
Abstract:
Provided are an image generator which obtains a specular image and a diffuse image from an image acquired by a polarized light field camera by separating two reflection components of a subject, and a control method thereof. The image generator may include a main lens, a polarizing filter part, a photosensor, a microlens array, and a controller that generates a single image in response to an electrical image signal and extracts, from the generated image, a specular image and a diffuse image that exhibit different reflection characteristics of the subject.
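One common separation rule, estimating the diffuse component from the per-pixel minimum over polarizer angles and the specular component as the remainder, is sketched below; this is an assumption about the separation step, not necessarily the controller's method.

    import numpy as np

    def separate_reflections(polarized_stack):
        """polarized_stack: (n_angles, H, W) intensities behind different polarizer angles."""
        i_min = polarized_stack.min(axis=0)               # unpolarized (diffuse) floor
        i_max = polarized_stack.max(axis=0)
        diffuse = 2.0 * i_min                             # diffuse light is split evenly
        specular = (i_max + i_min) - diffuse              # total minus diffuse estimate
        return specular, diffuse

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        stack = rng.random((4, 16, 16))                   # 4 polarizer angles
        spec, diff = separate_reflections(stack)
        print(spec.shape, diff.shape, float(spec.min()) >= 0.0)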