Abstract:
Embodiments relate to a user authentication device configured to detect a face region in a target object image that includes at least part of a target object's face, recognize whether the face region is masked or unmasked, extract target object characteristics data from the face region of the target object image, call reference data, and authenticate whether the target object is a registered device user based on the called reference data and the target object characteristics data. The reference data is generated from an unmasked image of the registered device user.
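The matching step described above can be sketched as a feature-vector comparison. The split of the vector into an upper-face half for masked inputs, the cosine-similarity metric, and the threshold are all illustrative assumptions, not details taken from the patent.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def authenticate(target_features, reference_features, masked, threshold=0.8):
    """Return True if the target matches the registered user's reference data.

    When a mask is detected, only the first half of the feature vector
    (assumed here to encode the unoccluded upper face) is compared.
    """
    if masked:
        half = len(target_features) // 2
        target_features = target_features[:half]
        reference_features = reference_features[:half]
    return cosine_similarity(target_features, reference_features) >= threshold
```

The reference vector would be extracted once, at registration time, from the unmasked enrollment image; only the comparison side changes when a mask is recognized.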
Abstract:
Embodiments relate to a companion animal identification method including acquiring a preview image for capturing a face of a target companion animal, checking whether the face of the target companion animal is aligned according to a preset criterion, capturing the face of the target companion animal when it is determined to be aligned, and identifying the target companion animal by extracting features from a face image of the target companion animal having an aligned face view, and an identification system for performing the same.
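A preset alignment criterion of the kind described could be checked on eye landmarks in the preview frame. The specific conditions below (near-horizontal inter-eye line, eye midpoint near the frame centre) and the thresholds are illustrative assumptions only.

```python
import math

def is_face_aligned(left_eye, right_eye, frame_width,
                    max_roll_deg=10.0, center_tol=0.2):
    """Check an illustrative alignment criterion on two eye landmarks.

    The face counts as aligned when the line through the eyes is nearly
    horizontal (small roll angle) and the eye midpoint lies near the
    horizontal centre of the preview frame.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    roll_deg = abs(math.degrees(math.atan2(dy, dx)))
    midpoint_x = (left_eye[0] + right_eye[0]) / 2
    centered = abs(midpoint_x / frame_width - 0.5) <= center_tol
    return roll_deg <= max_roll_deg and centered
```

The capture step would simply poll this check on each preview frame and trigger the camera once it returns True.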
Abstract:
A method for automatic facial impression transformation includes extracting landmark points for elements of a target face whose facial impression is to be transformed, along with distance vectors representing the distances between the landmark points, comparing the distance vectors to select a learning data set similar to the target face from a database, extracting landmark points and distance vectors from the learning data set, transforming a local feature of the target face based on the landmark points of the learning data set and score data for a facial impression, and transforming a global feature of the target face based on the distance vectors of the learning data set and the score data for the facial impression. Accordingly, a facial impression may be transformed in various ways while preserving the identity of the corresponding person.
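The distance-vector comparison used to pick a similar learning data set can be sketched as follows. Representing the distance vector as all pairwise landmark distances, and ranking database entries by Euclidean distance between vectors, are illustrative assumptions.

```python
import math

def distance_vector(landmarks):
    """All pairwise distances between facial landmark points."""
    vec = []
    for i in range(len(landmarks)):
        for j in range(i + 1, len(landmarks)):
            vec.append(math.dist(landmarks[i], landmarks[j]))
    return vec

def most_similar_entry(target_landmarks, database):
    """Pick the database face whose distance vector is closest to the target's.

    database: list of (name, landmarks) pairs with the same landmark count
    as the target face.
    """
    target_vec = distance_vector(target_landmarks)
    return min(database,
               key=lambda entry: math.dist(target_vec,
                                           distance_vector(entry[1])))[0]
```

The selected entry's landmarks and score data would then drive the local and global transformations described in the abstract.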
Abstract:
Embodiments relate to a method for re-identifying a target object based on location information of closed-circuit television (CCTV) and movement information of the target object, and a system for performing the same, the method including detecting at least one object of interest in a plurality of source videos based on a preset condition of the object of interest, tracking the detected object of interest on the corresponding source video to generate a tube of the object of interest, receiving an image query including a target patch and location information of the CCTV, determining at least one search candidate area based on the location information of the CCTV and the movement information of the target object, re-identifying whether the object of interest appearing in the tube of the object of interest is the target object, and providing a user with the tube of the re-identified object of interest.
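The search-candidate-area step can be sketched as a reachability filter over camera locations: only cameras the target could have reached since the last sighting are searched. The planar coordinates, the walking-speed bound, and the function names are illustrative assumptions.

```python
import math

def candidate_cameras(cctv_locations, last_seen_pos, elapsed_s,
                      max_speed_mps=2.0):
    """Return CCTV ids the target could have reached since last seen.

    cctv_locations: dict of camera id -> (x, y) position in metres.
    A camera is a search candidate when the target, moving at most
    max_speed_mps, could cover the distance within elapsed_s seconds.
    """
    radius = elapsed_s * max_speed_mps
    return sorted(cam_id for cam_id, pos in cctv_locations.items()
                  if math.dist(pos, last_seen_pos) <= radius)
```

Restricting re-identification to these candidate areas is what keeps the tube search tractable as the number of cameras grows.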
Abstract:
Exemplary embodiments relate to a method for selecting an image of interest to construct a retrieval database, the method including receiving an image captured by an imaging device, detecting an object of interest in the received image, selecting an image of interest based on at least one of the complexity of the image in which the object of interest is detected and the image quality of the object of interest, and storing information related to the image of interest in the retrieval database, and an image control system performing the same.
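The selection criterion can be sketched as a threshold filter over per-detection scores. The field names, the score ranges, and the choice to require both low scene complexity and high object quality are illustrative assumptions.

```python
def select_images_of_interest(detections, max_complexity=0.7, min_quality=0.5):
    """Filter detections down to images of interest.

    Each detection is a dict with a scene 'complexity' score and an
    object 'quality' score, both assumed to lie in [0, 1]. An image is
    kept when the scene is not too cluttered and the detected object
    is sharp enough to be useful for later retrieval.
    """
    return [d for d in detections
            if d["complexity"] <= max_complexity
            and d["quality"] >= min_quality]
```

Only the surviving images (and their metadata) would be written to the retrieval database, keeping it small and searchable.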
Abstract:
Disclosed is an apparatus for generating a facial composite image, which includes: a database in which face images and partial feature image information are stored; a wireframe unit configured to apply a face wireframe to a basic face sketch image, the face wireframe applying an active weight to each intersecting point; a face composing unit configured to form a two-dimensional face model to which the wireframe is applied, by composing images selected from the database; and a model transforming unit configured to transform the two-dimensional face model according to a user input on the basis of the two-dimensional face model to which the wireframe is applied. Accordingly, a facial composite image with improved accuracy may be generated efficiently.
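The effect of active weights on the wireframe's intersecting points can be sketched as a weighted deformation: each point follows a user drag in proportion to its weight. The per-point scalar weights and the uniform displacement are illustrative assumptions.

```python
def deform_wireframe(points, active_weights, displacement):
    """Move wireframe intersection points by a user displacement.

    Each intersecting point follows the dragged control by its active
    weight: a point with weight 0.0 stays fixed, a point with weight
    1.0 moves by the full displacement.
    """
    return [(x + w * displacement[0], y + w * displacement[1])
            for (x, y), w in zip(points, active_weights)]
```

In the apparatus described, such weights would let a user reshape one facial element (say, the jawline) without disturbing distant parts of the two-dimensional face model.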
Abstract:
Provided are an image generator that obtains a specular image and a diffuse image from an image acquired by a polarized light field camera by separating two reflection components of a subject, and a control method thereof. The image generator may include a main lens, a polarizing filter part, a photosensor, a microlens array, and a controller that generates a single image in response to an electrical image signal from the photosensor and extracts, from the generated image, a specular image and a diffuse image exhibiting different reflection characteristics of the subject.
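One common polarization-based way to separate the two reflection components (an assumption for illustration, not necessarily the patented method) uses the per-pixel minimum and maximum intensities observed across polarizer angles: diffuse light is unpolarized, so half of it passes at every angle, while the specular component is polarized and accounts for the variation.

```python
def separate_reflections(i_max, i_min):
    """Split per-pixel intensities into specular and diffuse components.

    i_max and i_min are per-pixel maximum and minimum intensities over
    the polarizing filter angles. Diffuse reflection contributes i_min
    at every angle (so the full diffuse term is 2 * i_min), and the
    polarized specular term is the remaining variation i_max - i_min.
    """
    diffuse = [2 * lo for lo in i_min]
    specular = [hi - lo for hi, lo in zip(i_max, i_min)]
    return specular, diffuse
```

A light field camera with a polarizing filter part can record the needed angle diversity in a single exposure, which is what makes the single-image separation above possible.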
Abstract:
Embodiments relate to a method for updating query information for tracing a target object across multiple cameras, the method including receiving a query information update command including query information for tracing the target object, searching for at least one image displaying the target object among a plurality of images captured by the multiple cameras, and updating the query information of a query image based on the at least one found image, and a multi-camera system performing the same.
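The update step can be sketched as folding the features of newly found images into the query's feature vector. The exponential-moving-average rule and the mixing weight are illustrative assumptions about what "updating the query information" could look like.

```python
def update_query_features(query_feat, found_feats, alpha=0.7):
    """Fold newly found views of the target into the query features.

    An illustrative update rule: for each matched image, the query
    vector drifts toward that image's features while keeping weight
    alpha on its previous state, so the query adapts to appearance
    changes across cameras without forgetting the original target.
    """
    for feat in found_feats:
        query_feat = [alpha * q + (1 - alpha) * f
                      for q, f in zip(query_feat, feat)]
    return query_feat
```

Keeping the query current in this way is what lets a multi-camera system keep tracing a target whose viewpoint and lighting differ from camera to camera.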
Abstract:
Embodiments relate to a method including obtaining m measured values for each field sensor at m time steps by measuring a first sensor group including field sensors of a first type and a second sensor group including field sensors of a different second type, both attached to a rigid body; and calibrating a sensor frame of the first type of field sensor and a sensor frame of the second type of field sensor by using a correlation between the first type of field sensor and the second type of field sensor based on the measured values of at least some of the m time steps, wherein the multiple field sensors include different sensors among a magnetic field sensor, an acceleration sensor, and a force sensor; and a system therefor.
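The correlation between the two sensor types over the m time steps can be sketched as an empirical 3×3 cross-correlation matrix of their 3-axis measurements. This statistic is the kind of quantity a frame calibration could be built on; the calibration itself (solving for the rotation between sensor frames) is omitted, and the data layout is an illustrative assumption.

```python
def cross_correlation_matrix(meas_a, meas_b):
    """Empirical 3x3 cross-correlation between two field-sensor streams.

    meas_a and meas_b each hold m paired 3-axis samples, one per time
    step, taken from the first and second sensor groups on the same
    rigid body. Entry (i, j) correlates axis i of sensor A with axis j
    of sensor B across the time steps.
    """
    m = len(meas_a)
    mean_a = [sum(v[i] for v in meas_a) / m for i in range(3)]
    mean_b = [sum(v[i] for v in meas_b) / m for i in range(3)]
    return [[sum((a[i] - mean_a[i]) * (b[j] - mean_b[j])
                 for a, b in zip(meas_a, meas_b)) / m
             for j in range(3)]
            for i in range(3)]
```

Because both sensor groups ride on the same rigid body, the structure of this matrix reflects the fixed rotation between their sensor frames, which is what the calibration step recovers.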