Abstract:
A method and apparatus for modeling a three-dimensional (3D) face, and a method and apparatus for tracking a face. The method for modeling the 3D face may set a predetermined reference 3D face to be a working model, and may generate, from a video frame and based on the working model, a tracking result including at least one of a face characteristic point, an expression parameter, and a head pose parameter, and output the tracking result.
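The tracking loop described above can be sketched roughly as follows. This is an illustration, not the patented implementation: the `fit` routine, the model fields, and the result keys are all hypothetical stand-ins.

```python
def track(frames, reference_model, fit):
    """Set the reference 3D face as the working model, then fit the working
    model to each video frame and emit a tracking result per frame."""
    working_model = dict(reference_model)  # the predetermined reference 3D face
    results = []
    for frame in frames:
        # fit() is a hypothetical routine returning the three result elements
        landmarks, expression, pose = fit(working_model, frame)
        # carry the fitted parameters forward in the working model
        working_model["expression"] = expression
        working_model["pose"] = pose
        results.append({"landmarks": landmarks,
                        "expression": expression,
                        "pose": pose})
    return results
```

A dummy `fit` that echoes the frame shows the shape of the output: one result dictionary per input frame.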
Abstract:
An image processing method and apparatus using a neural network are provided. The image processing method includes generating a plurality of augmented features by augmenting an input feature, and generating a prediction result based on the plurality of augmented features.
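One common way to realize the idea above is test-time augmentation: perturb the input feature several times, predict on each copy, and combine the predictions. The sketch below assumes additive-noise augmentation and mean pooling; the abstract does not specify either choice, and `predict` is a hypothetical stand-in for the neural network.

```python
import random

def augment(feature, n_aug=4, noise=0.01, seed=0):
    """Generate n_aug perturbed copies of the input feature vector
    (assumed augmentation: small additive uniform noise)."""
    rng = random.Random(seed)
    return [[x + rng.uniform(-noise, noise) for x in feature]
            for _ in range(n_aug)]

def predict(feature):
    """Stand-in predictor: here simply the mean of the feature vector."""
    return sum(feature) / len(feature)

def ensemble_predict(feature):
    """Generate a prediction result based on the plurality of augmented
    features, by averaging the per-copy predictions."""
    preds = [predict(f) for f in augment(feature)]
    return sum(preds) / len(preds)
```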
Abstract:
A gaze estimation method and apparatus are disclosed. The gaze estimation method includes obtaining an image including an eye region of a user, extracting a first feature from the obtained image, obtaining a second feature used for calibration of a neural network model, and estimating a gaze of the user using the first feature and the second feature.
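A minimal sketch of "estimating a gaze using the first feature and the second feature" might concatenate the two feature vectors and apply a linear head. The concatenation, the linear head, and the (yaw, pitch) output are all assumptions for illustration; the abstract does not describe the combination step.

```python
def estimate_gaze(image_feature, calib_feature, weights):
    """Combine the image-derived (first) feature with the calibration
    (second) feature and map them to a (yaw, pitch) gaze estimate via a
    hypothetical linear head."""
    combined = image_feature + calib_feature  # list concatenation
    yaw = sum(w * x for w, x in zip(weights["yaw"], combined))
    pitch = sum(w * x for w, x in zip(weights["pitch"], combined))
    return yaw, pitch
```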
Abstract:
A processor-implemented method of processing a facial expression image includes: acquiring an expression feature of each of at least two reference facial expression images; generating a new expression feature based on an interpolation value of the expression feature; and adjusting a target facial expression image based on the new expression feature to create a new facial expression image.
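The interpolation step above, read as linear interpolation between two reference expression feature vectors, can be sketched as below. Linear interpolation and the blending factor `alpha` are assumptions; the abstract only says "an interpolation value".

```python
def interpolate_expression(feat_a, feat_b, t):
    """Generate a new expression feature by linearly interpolating between
    two reference expression features (t in [0, 1])."""
    return [(1 - t) * a + t * b for a, b in zip(feat_a, feat_b)]

def adjust_target(target_feat, new_feat, alpha=1.0):
    """Adjust the target expression feature toward the new expression
    feature; alpha controls how far to move (hypothetical knob)."""
    return [x + alpha * (n - x) for x, n in zip(target_feat, new_feat)]
```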
Abstract:
A method and apparatus of generating a three-dimensional (3D) image are provided. The method of generating a 3D image involves acquiring a plurality of images of a 3D object with a camera, calculating pose information of the plurality of images based on pose data for each of the plurality of images measured by an inertial measurement unit, and generating a 3D image corresponding to the 3D object based on the pose information.
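A crude sketch of "calculating pose information based on pose data measured by an inertial measurement unit" is to integrate per-image IMU pose deltas into an absolute pose per image. The additive small-angle treatment of (yaw, pitch, roll) below is a deliberate simplification, not the patented pose calculation.

```python
def compose_poses(imu_deltas):
    """Accumulate per-image IMU pose deltas (yaw, pitch, roll in degrees,
    additive small-angle sketch) into absolute pose information per image."""
    poses, current = [], (0.0, 0.0, 0.0)
    for delta in imu_deltas:
        current = tuple(c + d for c, d in zip(current, delta))
        poses.append(current)
    return poses
```

The resulting per-image poses would then feed the 3D-image generation step (e.g. multi-view reconstruction), which is outside this sketch.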
Abstract:
An image capturing apparatus and an image capturing method are provided. The image capturing apparatus includes an image capturing unit configured to capture an image; and a controller connected to the image capturing unit, wherein the controller is configured to obtain a background image with depth information, position a three-dimensional (3D) virtual image representing a target object in the background image based on the depth information, and control the image capturing unit to capture the target object based on a difference between the target object viewed from the image capturing apparatus and the 3D virtual image in the background image.
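The "difference between the target object viewed from the image capturing apparatus and the 3D virtual image" can be illustrated as a simple alignment error between corresponding points, with capture triggered when the error falls below a threshold. The point correspondence, the mean-distance metric, and the threshold are all assumed for illustration.

```python
import math

def alignment_error(target_points, virtual_points):
    """Mean Euclidean distance between observed target-object points and
    the corresponding points of the positioned 3D virtual image."""
    dists = [math.dist(p, q) for p, q in zip(target_points, virtual_points)]
    return sum(dists) / len(dists)

def should_capture(target_points, virtual_points, threshold=0.1):
    """Control sketch: trigger capture once the target is close enough to
    the virtual image (hypothetical threshold)."""
    return alignment_error(target_points, virtual_points) < threshold
```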
Abstract:
A face tracking apparatus includes: a face region detector; a segmentation unit; an occlusion probability calculator; and a tracking unit. The face region detector is configured to detect a face region based on an input image. The segmentation unit is configured to segment the face region into a plurality of sub-regions. The occlusion probability calculator is configured to calculate occlusion probabilities for the plurality of sub-regions. The tracking unit is configured to track a face included in the input image based on the occlusion probabilities.
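One plausible way to "track a face based on the occlusion probabilities" is to down-weight heavily occluded sub-regions when combining per-sub-region estimates. The weighting scheme below (normalized `1 - p`) and the per-sub-region estimates are illustrative assumptions, not the patented tracking unit.

```python
def occlusion_weights(occ_probs):
    """Turn per-sub-region occlusion probabilities into normalized tracking
    weights: a fully occluded sub-region gets weight zero."""
    raw = [1.0 - p for p in occ_probs]
    total = sum(raw)
    return [r / total for r in raw]

def weighted_estimate(sub_region_estimates, occ_probs):
    """Combine per-sub-region estimates (e.g. 2D face positions),
    down-weighting occluded sub-regions."""
    w = occlusion_weights(occ_probs)
    dim = len(sub_region_estimates[0])
    return [sum(wi * e[d] for wi, e in zip(w, sub_region_estimates))
            for d in range(dim)]
```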
Abstract:
A storage system includes: a storage device to store an array of data elements associated with a sort operation; a storage interface to facilitate communications between the storage device and a host computer; and a reconfigurable processing device communicably connected to the storage device, the reconfigurable processing device including: memory to store input data read from the storage device, the input data corresponding to the array of data elements stored in the storage device; and a kernel including one or more compute components to execute the sort operation on the input data stored in the memory according to a SORT command received from the host computer. The reconfigurable processing device is to dynamically instantiate the one or more compute components to accelerate the sort operation.
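The sort offload above can be emulated in software: a SORT command splits the input across several "compute components", each sorts its slice, and merge units combine the sorted runs. The round-robin partitioning, the merge-unit analogue, and `num_components` are illustrative assumptions about how the kernel might be instantiated.

```python
def merge(a, b):
    """Analogue of a single merge compute component: merges two sorted runs."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def sort_command(input_data, num_components=4):
    """Emulate a SORT command: distribute the input across the instantiated
    compute components, sort each slice, then merge runs pairwise."""
    runs = [sorted(input_data[k::num_components]) for k in range(num_components)]
    while len(runs) > 1:
        runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0]
```

On a reconfigurable device the number of merge units would be chosen dynamically; here it is just a parameter.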
Abstract:
A method to analyze a facial image includes: inputting a facial image to a residual network including residual blocks that are sequentially combined and arranged in a direction from an input to an output; processing the facial image using the residual network; and acquiring an analysis map from an output of an N-th residual block among the residual blocks using a residual deconvolution network, wherein the residual network transfers the output of the N-th residual block to the residual deconvolution network, and N is a natural number that is less than a number of all of the residual blocks, and wherein the residual deconvolution network includes residual deconvolution blocks that are sequentially combined, and the residual deconvolution blocks correspond to respective residual blocks from a first residual block among the residual blocks to the N-th residual block.
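The structural idea above — run the residual network and tap the output of the N-th residual block for the deconvolution branch — can be sketched as follows. The residual update `x + F(x)` is the standard residual-block form; everything else (the block list, the tap mechanism) is an illustrative simplification.

```python
def residual_block(x, transform):
    """One residual block on a feature vector: output = input + F(input)."""
    return [xi + fi for xi, fi in zip(x, transform(x))]

def run_encoder(x, residual_blocks, n):
    """Run the sequentially combined residual blocks and tap the output of
    the N-th block (n less than the number of blocks) to hand to the
    residual deconvolution network."""
    tap = None
    for i, block in enumerate(residual_blocks, start=1):
        x = block(x)
        if i == n:
            tap = x  # this is what the deconvolution branch receives
    return x, tap
```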
Abstract:
Provided is a method of detecting and tracking lips accurately despite a change in head pose. A plurality of lips rough models and a plurality of lips precision models may be provided. A lips rough model corresponding to the head pose may be selected, such that lips may be detected by the selected lips rough model; a lips precision model having the lip shape most similar to the detected lips may then be selected, and the lips may be detected accurately using the lips precision model.
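The two selection steps — rough model by head pose, precision model by shape similarity — can be sketched as nearest-neighbor lookups. The yaw-only pose bins and the sum-of-squared-differences shape metric are assumptions for illustration; the abstract does not specify either.

```python
def select_rough_model(head_yaw, rough_models):
    """Pick the lips rough model whose pose bin (here: yaw only) is
    closest to the estimated head pose."""
    return min(rough_models, key=lambda m: abs(m["yaw"] - head_yaw))

def select_precision_model(detected_shape, precision_models):
    """Pick the lips precision model whose lip shape is most similar
    (smallest sum of squared differences) to the roughly detected lips."""
    def shape_dist(m):
        return sum((a - b) ** 2 for a, b in zip(m["shape"], detected_shape))
    return min(precision_models, key=shape_dist)
```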