Abstract:
A processor-implemented method including implementing a deep neural network (DNN) model using input data to generate first output data, changing the DNN model, implementing the changed DNN model using the input data to generate second output data, and determining result data by combining the first output data and the second output data.
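A minimal sketch of the flow this abstract describes, assuming a toy one-layer model and small weight noise as the way the DNN is "changed" (the abstract specifies neither the model nor the change), with the two outputs combined by averaging:

```python
import numpy as np

rng = np.random.default_rng(0)

def dnn(x, w):
    """Toy one-layer 'DNN' stand-in: linear map followed by ReLU."""
    return np.maximum(w @ x, 0.0)

x = rng.standard_normal(4)           # input data
w = rng.standard_normal((3, 4))      # model parameters

first_output = dnn(x, w)             # output of the original model

# "change" the model: here, perturb the weights slightly (an assumption)
w_changed = w + 0.01 * rng.standard_normal(w.shape)
second_output = dnn(x, w_changed)    # output of the changed model

# determine result data by combining the two outputs (averaging assumed)
result = 0.5 * (first_output + second_output)
```

Averaging outputs of a model and a perturbed copy is one plausible reading of "combining"; other combination rules (max, weighted sum) fit the abstract equally well.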
Abstract:
A processor-implemented neural network method includes: receiving input data; obtaining a plurality of parameter vectors representing a hierarchical-hyperspherical space comprising a plurality of spheres belonging to a plurality of layers; applying the plurality of parameter vectors to generate a neural network; and generating an inference result by processing the input data using the neural network.
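A heavily hedged sketch of one ingredient of this abstract: per-layer parameter vectors constrained to unit hyperspheres, applied to build a small network that produces an inference result. The hierarchical structure of the sphere space is not modeled here; only the sphere constraint and the apply-then-infer flow are illustrated.

```python
import numpy as np

rng = np.random.default_rng(8)

# one parameter vector (here, a weight matrix) per layer, each projected
# onto a unit hypersphere via Frobenius-norm normalization (assumption)
layer_shapes = [(8, 4), (3, 8)]
params = [rng.standard_normal(s) for s in layer_shapes]
params = [w / np.linalg.norm(w) for w in params]

def forward(x):
    """Apply the sphere-constrained parameter vectors as a network."""
    for w in params[:-1]:
        x = np.maximum(w @ x, 0.0)   # hidden layers with ReLU
    return params[-1] @ x            # output layer

inference_result = forward(rng.standard_normal(4))
```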
Abstract:
A method and apparatus with emotion recognition acquires a plurality of pieces of data corresponding to a plurality of inputs for each modality and corresponding to a plurality of modalities; determines a dynamics representation vector corresponding to each of the plurality of modalities based on a plurality of features for each modality extracted from the plurality of pieces of data; determines a fused representation vector based on the plurality of dynamics representation vectors corresponding to the plurality of modalities; and recognizes an emotion of a user based on the fused representation vector.
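A toy sketch of the per-modality dynamics → fusion → recognition pipeline, assuming two modalities (audio, video), frame-difference statistics as the "dynamics" vector, concatenation as fusion, and a linear classifier; all of these specifics are assumptions, not the patented method:

```python
import numpy as np

rng = np.random.default_rng(1)

def dynamics_vector(frames):
    """Toy dynamics representation: mean frame-to-frame feature change."""
    return np.diff(frames, axis=0).mean(axis=0)

audio_feats = rng.standard_normal((10, 8))   # per-frame audio features
video_feats = rng.standard_normal((10, 8))   # per-frame video features

dyn_audio = dynamics_vector(audio_feats)     # dynamics vector, modality 1
dyn_video = dynamics_vector(video_feats)     # dynamics vector, modality 2

# fused representation vector: concatenation of modality dynamics vectors
fused = np.concatenate([dyn_audio, dyn_video])

# recognize an emotion from the fused vector (toy linear classifier)
emotions = ["neutral", "happy", "sad"]
w = rng.standard_normal((len(emotions), fused.size))
recognized = emotions[int(np.argmax(w @ fused))]
```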
Abstract:
A facial expression recognition apparatus and method and a facial expression training apparatus and method are provided. The facial expression recognition apparatus generates a speech map indicating a correlation between a speech and each portion of an object based on a speech model, extracts a facial expression feature associated with a facial expression based on a facial expression model, and recognizes a facial expression of the object based on the speech map and the facial expression feature. The facial expression training apparatus trains the speech model and the facial expression model.
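One hedged way to picture how a speech map could modulate expression features: down-weight face regions highly correlated with speech (e.g., the mouth while talking) before computing the expression feature. The region layout, weighting rule, and classifier below are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(7)

face = rng.random((16, 16))              # stand-in for the object's face

# speech map: per-pixel correlation with speech; assumed high near the mouth
speech_map = np.zeros((16, 16))
speech_map[10:, 4:12] = 0.9

# suppress speech-correlated regions when extracting the expression feature
weights = 1.0 - speech_map
expression_feature = (face * weights).sum() / weights.sum()

# recognize the facial expression from the speech-aware feature (toy rule)
labels = ["neutral", "smile"]
recognized = labels[int(expression_feature > 0.5)]
```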
Abstract:
Disclosed is a facial verification apparatus and method. The facial verification apparatus is configured to detect a face area of a user from an obtained input image, generate a plurality of image patches, differently including respective portions of the detected face area, based on a consideration of an image patch set determination criterion with respect to the detected face area, extract a feature value corresponding to a face of the user based on an image patch set including the generated plurality of image patches, determine whether a facial verification is successful based on the extracted feature value, and indicate a result of the determination of whether the facial verification is successful.
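A minimal sketch of the patch-set idea: build several image patches that differently include portions of the detected face area, extract one feature value from the set, and verify by comparing against an enrolled feature. The patch criterion, feature, and threshold are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

face_area = rng.random((32, 32))   # stand-in for the detected face area

# image patch set: whole face plus two sub-regions (criterion assumed)
patches = [face_area, face_area[:16, :], face_area[16:, :]]

def extract_feature(patch):
    """Toy per-patch feature: mean and standard deviation."""
    return np.array([patch.mean(), patch.std()])

# feature value for the patch set: concatenated per-patch features
feature = np.concatenate([extract_feature(p) for p in patches])

# compare against an enrolled feature of the same user (simulated here)
enrolled = feature + 0.01
distance = np.linalg.norm(feature - enrolled)
verified = bool(distance < 0.1)    # threshold is an assumed criterion
```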
Abstract:
A liveness test method and apparatus is disclosed. The liveness test method includes detecting a face region in an input image for a test target, implementing a first liveness test to determine a first liveness value based on a first image corresponding to the detected face region, implementing a second liveness test to determine a second liveness value based on a second image corresponding to a partial face region of the detected face region, implementing a third liveness test to determine a third liveness value based on an entirety of the input image or a full region of the input image that includes the detected face region and a region beyond the detected face region, and determining a result of the liveness test based on the first liveness value, the second liveness value, and the third liveness value.
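The three-test structure can be sketched as three scores from three crops of the same input, combined into one decision. The per-test score function (image contrast), the weights, and the decision threshold below are placeholders; in the described method each test would use its own model:

```python
import numpy as np

rng = np.random.default_rng(3)

frame = rng.random((64, 64))        # entirety of the input image
face = frame[16:48, 16:48]          # detected face region
partial = face[:16, :]              # partial face region (e.g., eye area)

def liveness_score(image):
    """Toy per-test liveness value in [0, 0.5]: image contrast."""
    return float(image.std())

s1 = liveness_score(face)           # first liveness value (face region)
s2 = liveness_score(partial)        # second liveness value (partial face)
s3 = liveness_score(frame)          # third liveness value (full image)

# combine the three values (weighted average assumed) and decide
final = 0.4 * s1 + 0.2 * s2 + 0.4 * s3
is_live = bool(final > 0.2)
```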
Abstract:
A method and apparatus for generating a facial expression may receive an input image, and generate facial expression images that change from the input image based on an index indicating a facial expression intensity of the input image, the index being obtained from the input image.
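A toy sketch of intensity-indexed generation, assuming the expression varies linearly between a neutral image and the input image, with the index giving the input's position on that axis (a strong simplification of any real generator):

```python
import numpy as np

rng = np.random.default_rng(6)

neutral = np.zeros((8, 8))        # assumed neutral-expression image
input_img = rng.random((8, 8))    # input image showing some expression
index = 0.5                       # assumed intensity index of the input

def at_intensity(t):
    """Image at target intensity t by scaling the neutral→input direction."""
    return neutral + (t / index) * (input_img - neutral)

# generate images whose expression intensity sweeps from 0 to 1
images = [at_intensity(t) for t in np.linspace(0.0, 1.0, 5)]
```

By construction, `at_intensity(index)` reproduces the input image, and `t` values above the index extrapolate to stronger expressions.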
Abstract:
Provided are a method and apparatus for recognizing an object based on an attribute of the object, and for training, that may calculate object age information from input data using an attribute layer trained with respect to an attribute of an object and a classification layer trained with respect to a classification of the object. The method of recognizing the object includes extracting feature data from input data including an object using an object model, determining attribute classification information related to the input data from the feature data using a classification layer, determining attribute age information related to an attribute from the feature data using an attribute layer, and estimating object age information based on the attribute classification information and the attribute age information.
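A hedged numeric sketch of the two-layer estimate: attribute probabilities from a classification layer, per-attribute age estimates from an attribute layer, and an object age obtained by weighting the latter with the former. The linear layers, softmax, and weighted-sum combination are assumptions; the abstract does not fix the combination rule:

```python
import numpy as np

rng = np.random.default_rng(4)

feature = rng.standard_normal(8)   # feature data from the object model

# classification layer: attribute classification information (probabilities)
w_cls = rng.standard_normal((3, 8))
logits = w_cls @ feature
attr_prob = np.exp(logits) / np.exp(logits).sum()

# attribute layer: attribute age information (an age per attribute, 20-40)
w_age = rng.standard_normal((3, 8))
attr_age = 30.0 + 10.0 * np.tanh(w_age @ feature)

# estimate object age: attribute ages weighted by attribute probabilities
object_age = float(attr_prob @ attr_age)
```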
Abstract:
A method of preprocessing an image including biological information is disclosed, in which an image preprocessor may set an edge line in an input image including biological information, calculate an energy value corresponding to the edge line, and adaptively crop the input image based on the energy value.
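A small sketch of energy-driven adaptive cropping: slide an edge line inward while the energy along it stays low (little biological information), then crop at the first high-energy line. The gradient-sum energy and the threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

img = rng.random((20, 20))          # input image with biological information
img[:, :3] = 0.0                    # simulate a low-information left border

def edge_energy(column):
    """Toy energy of an edge line: total gradient magnitude along it."""
    return float(np.abs(np.diff(column)).sum())

# advance the left edge line inward while its energy stays below threshold
threshold = 1.0
left = 0
while left < img.shape[1] - 1 and edge_energy(img[:, left]) < threshold:
    left += 1

cropped = img[:, left:]             # adaptively cropped input image
```

Here the three flat border columns have zero energy and are cropped away; the first column with real content stops the scan.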