Abstract:
A method with face recognition includes: determining a first global feature of a first face image and a first global feature of a second face image based on a local feature of the first face image and a local feature of the second face image, respectively; determining a final global feature of the first face image based on the first global feature of the first face image and a second global feature of the first face image; determining a final global feature of the second face image based on the first global feature of the second face image and a second global feature of the second face image; and recognizing the first face image and the second face image based on the final global feature of the first face image and the final global feature of the second face image, wherein the second global feature of the first face image is determined based on the local feature of the first face image, and the second global feature of the second face image is determined based on the local feature of the second face image.
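The pipeline above can be sketched as follows. This is a minimal illustration, not the patented method: mean and max pooling stand in for the two learned local-to-global aggregations, concatenation plus L2 normalization stands in for the fusion, and the similarity threshold is arbitrary.

```python
import numpy as np

def first_global(local_feats):
    # First global feature derived from the local features of one face
    # image; mean pooling stands in for the learned aggregation.
    return local_feats.mean(axis=0)

def second_global(local_feats):
    # Second global feature, also derived from the same local features;
    # max pooling is used as an illustrative second aggregation.
    return local_feats.max(axis=0)

def final_global(local_feats):
    # Final global feature: fuse the two global features; concatenation
    # followed by L2 normalization is one plausible fusion.
    v = np.concatenate([first_global(local_feats), second_global(local_feats)])
    return v / np.linalg.norm(v)

def recognize(local_a, local_b, threshold=0.5):
    # Recognize the pair by cosine similarity of final global features
    # (both vectors are unit-norm, so the dot product is the cosine).
    return float(final_global(local_a) @ final_global(local_b)) >= threshold
```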
Abstract:
A user authentication method and a user authentication apparatus acquire an input image including a frontalized face of a user, calculate a confidence map including confidence values for authenticating the user, the confidence values corresponding to pixels, among the pixels included in the input image, whose values are maintained in a depth image of the frontalized face of the user, extract a second feature vector from a second image generated based on the input image and the confidence map, acquire a first feature vector corresponding to an enrolled image, and perform authentication of the user based on a correlation between the first feature vector and the second feature vector.
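A rough sketch of this flow, under stated assumptions: confidence is taken as binary (1 where the depth value is maintained, 0 elsewhere), the second image is a confidence-weighted input image, the feature extractor is a flatten-and-normalize stand-in, and correlation is computed as a cosine.

```python
import numpy as np

def confidence_map(depth_image):
    # Hypothetical confidence values: 1.0 for pixels whose value was
    # maintained in the frontalized depth image, 0.0 elsewhere.
    return (depth_image > 0).astype(float)

def second_image(input_image, conf):
    # Second image generated from the input image and the confidence
    # map: pixels are weighted by their confidence.
    return input_image * conf

def feature_vector(image):
    # Stand-in feature extractor: flatten and L2-normalize.
    v = image.ravel().astype(float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def authenticate(first_feat, input_image, depth_image, threshold=0.8):
    # Authenticate based on the correlation (here, cosine) between the
    # enrolled first feature vector and the second feature vector.
    feat = feature_vector(second_image(input_image, confidence_map(depth_image)))
    return float(first_feat @ feat) >= threshold
```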
Abstract:
Face recognition of a face, to determine whether the face correlates with an enrolled face, may include generating a personalized three-dimensional (3D) face model based on a two-dimensional (2D) input image of the face, acquiring 3D shape information and a normalized 2D input image of the face based on the personalized 3D face model, generating feature information based on the 3D shape information and pixel color values of the normalized 2D input image, and comparing the feature information with feature information associated with the enrolled face. The feature information may include first and second feature information generated based on applying first and second deep neural network models to the pixel color values of the normalized 2D input image and the 3D shape information, respectively. The personalized 3D face model may be generated based on transforming a generic 3D face model based on landmarks detected in the 2D input image.
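The two-branch feature generation can be outlined as below. The `tanh`-of-mean "models" are placeholders for the two deep neural network models, and the concatenation and cosine comparison are illustrative choices, not details from the abstract.

```python
import numpy as np

def first_model(color_values):
    # Stand-in for the first deep neural network model applied to the
    # pixel color values of the normalized 2D input image.
    return np.tanh(color_values.mean(axis=0))

def second_model(shape_info):
    # Stand-in for the second deep neural network model applied to the
    # 3D shape information.
    return np.tanh(shape_info.mean(axis=0))

def feature_information(color_values, shape_info):
    # Feature information combines the outputs of both branches.
    return np.concatenate([first_model(color_values), second_model(shape_info)])

def correlates_with_enrolled(feat, enrolled_feat, threshold=0.8):
    # Compare against the enrolled face by cosine similarity.
    sim = float(feat @ enrolled_feat /
                (np.linalg.norm(feat) * np.linalg.norm(enrolled_feat)))
    return sim >= threshold
```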
Abstract:
A processor-implemented verification method includes: detecting a characteristic of an input image; acquiring input feature transformation data and enrolled feature transformation data by respectively transforming input feature data and enrolled feature data based on the detected characteristic, wherein the input feature data is extracted from the input image using a feature extraction model; and verifying a user corresponding to the input image based on a result of comparison between the input feature transformation data and the enrolled feature transformation data.
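One way to picture characteristic-conditioned transformation is shown below. The brightness-based characteristic detector and the per-characteristic projection matrices are assumptions for illustration; the abstract does not specify what the characteristic is or how the transformation is parameterized.

```python
import numpy as np

def detect_characteristic(image):
    # Hypothetical detector: classify the image as "low_light" or
    # "normal" from its mean brightness.
    return "low_light" if image.mean() < 0.2 else "normal"

def transform(feature, characteristic, bases):
    # Transform a feature with the projection selected by the detected
    # characteristic; `bases` maps characteristic -> matrix (assumed given).
    return bases[characteristic] @ feature

def verify(input_feat, enrolled_feat, image, bases, threshold=0.9):
    # Verify by comparing the transformed input and enrolled features.
    c = detect_characteristic(image)
    ti, te = transform(input_feat, c, bases), transform(enrolled_feat, c, bases)
    sim = float(ti @ te / (np.linalg.norm(ti) * np.linalg.norm(te)))
    return sim >= threshold
```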
Abstract:
A fingerprint verification method and a fingerprint verification apparatus performing the fingerprint verification method are disclosed. The fingerprint verification apparatus determines a first similarity between a query fingerprint image and each of registered fingerprint images, selects a target registered fingerprint image group from registered fingerprint image groups based on the first similarity, determines a second similarity between the query fingerprint image and each of registered fingerprint images in the target registered fingerprint image group based on matching relationship information between the registered fingerprint images in the target registered fingerprint image group, and determines whether fingerprint verification of the query fingerprint image is successful based on the second similarity.
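The two-stage structure can be sketched as follows, assuming fingerprints are already encoded as feature vectors. The matching-relationship refinement within the target group is simplified here to a plain re-scoring, and the thresholds and group layout are illustrative.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_target_group(query_feat, groups):
    # First similarity: score the query against every registered image
    # and select the group containing the best match.
    best_gid, best = None, -1.0
    for gid, feats in groups.items():
        for f in feats:
            s = cosine(query_feat, f)
            if s > best:
                best_gid, best = gid, s
    return best_gid

def verify(query_feat, groups, threshold=0.9):
    # Second similarity: re-score only within the target group. (The
    # abstract refines this step with matching-relationship information
    # between the group's images; a plain re-scoring stands in here.)
    gid = select_target_group(query_feat, groups)
    second = max(cosine(query_feat, f) for f in groups[gid])
    return second >= threshold, gid
```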
Abstract:
A method and an apparatus for registering a face, and a method and an apparatus for recognizing a face are disclosed, in which a face registering apparatus may change a stored three-dimensional (3D) facial model to an individualized 3D facial model based on facial landmarks extracted from two-dimensional (2D) face images, match the individualized 3D facial model to a current 2D face image of the 2D face images, and extract an image feature of the current 2D face image from regions in the current 2D face image to which 3D feature points of the individualized 3D facial model are projected, and a face recognizing apparatus may perform facial recognition based on image features of the 2D face images extracted by the face registering apparatus.
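The projection-and-sampling step can be illustrated as below. The orthographic projection and single-pixel sampling are simplifying assumptions; an actual system fits the individualized model to the image and extracts features from regions around each projected point.

```python
import numpy as np

def project_points(points_3d, scale=1.0, tx=0.0, ty=0.0):
    # Orthographic projection of the individualized model's 3D feature
    # points onto the image plane (a stand-in for the fitted alignment).
    return scale * points_3d[:, :2] + np.array([tx, ty])

def extract_image_feature(image, points_3d):
    # Sample the current 2D face image at the locations to which the 3D
    # feature points are projected (single-pixel samples for brevity).
    pts = np.round(project_points(points_3d)).astype(int)
    h, w = image.shape
    xs = np.clip(pts[:, 0], 0, w - 1)
    ys = np.clip(pts[:, 1], 0, h - 1)
    return image[ys, xs]
```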
Abstract:
A processor-implemented method includes: identifying input components of a semiconductor pattern of an original input image from the original input image corresponding to an application target of a process for manufacturing a semiconductor; generating an augmented input image by transforming a transformation target comprising one or more of the input components from the original input image; and executing a neural model for estimating pattern transformation according to the process based on the augmented input image.
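The component-level augmentation can be sketched as follows. Treating each distinct nonzero label in the pattern image as a component, and shifting a target component as the transformation, are assumptions made purely for illustration.

```python
import numpy as np

def identify_components(pattern):
    # Hypothetical component labeling: each distinct nonzero label in
    # the pattern image is treated as one input component.
    return [int(v) for v in np.unique(pattern) if v != 0]

def augment(pattern, targets, shift=1):
    # Generate an augmented input image by transforming (here, shifting
    # right) only the pixels of the transformation-target components.
    out = pattern.copy()
    for v in targets:
        mask = pattern == v
        out[mask] = 0                          # remove the component
        out[np.roll(mask, shift, axis=1)] = v  # re-place it shifted
    return out
```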
Abstract:
A fingerprint verification method and apparatus are disclosed. The fingerprint verification method may include obtaining an input fingerprint image, determining a matching region between the input fingerprint image and a registered fingerprint image, determining a similarity corresponding to the matching region that represents an indication of similarity between the input fingerprint image and the registered fingerprint image, relating the determined similarity to the matching region as a matching region-based similarity, determining a result of a verification of the input fingerprint image based on the matching region-based similarity, and indicating the result of the verification.
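A minimal sketch of a matching region-based similarity follows. It assumes the matching region is already known as an offset plus size (standing in for the alignment search), uses normalized cross-correlation as the similarity, and weights it by the region's coverage of the input image; all three choices are illustrative.

```python
import numpy as np

def region_similarity(a, b):
    # Normalized cross-correlation between the two crops of the
    # matching region.
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def verify(input_img, registered_img, top_left, size, threshold=0.7):
    # Matching region-based similarity: the region's correlation is
    # weighted by its coverage of the input image, so a high score on a
    # tiny overlap counts for less than the same score on a large one.
    y, x = top_left
    h, w = size
    score = ((h * w) / input_img.size) * region_similarity(
        input_img[y:y + h, x:x + w], registered_img[y:y + h, x:x + w])
    return score >= threshold, score
```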
Abstract:
A method of generating a three-dimensional (3D) face model includes extracting feature points of a face from input images comprising a first face image and a second face image; deforming a generic 3D face model to a personalized 3D face model based on the feature points; projecting the personalized 3D face model to each of the first face image and the second face image; and refining the personalized 3D face model based on a difference in texture patterns between the first face image to which the personalized 3D face model is projected and the second face image to which the personalized 3D face model is projected.
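The refinement step can be illustrated as a single least-squares update. The Jacobian mapping parameter changes to texture changes is assumed to be supplied by the projection model; the abstract itself only specifies that refinement is driven by the texture-pattern difference between the two projected images.

```python
import numpy as np

def refine(shape_params, jacobian, tex_first, tex_second, step=0.5):
    # One least-squares refinement step: adjust the personalized model's
    # parameters to reduce the texture-pattern difference between the
    # first and second projected face images. `jacobian` maps parameter
    # changes to texture changes (assumed known).
    residual = tex_first - tex_second
    delta, *_ = np.linalg.lstsq(jacobian, residual, rcond=None)
    return shape_params - step * delta
```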