Abstract:
A method and apparatus for modeling a three-dimensional (3D) face, and a method and apparatus for tracking a face. The method for modeling the 3D face may set a predetermined reference 3D face to be a working model and, based on the working model, generate and output a tracking result including at least one of a face characteristic point, an expression parameter, and a head pose parameter from a video frame.
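As a minimal sketch of the working-model loop this abstract describes, assuming a toy representation in which the reference 3D face is a dictionary of feature points, expression parameters, and head pose (all names, structures, and the stubbed fitting step are illustrative assumptions, not from the patent):

```python
import copy

# Hypothetical reference 3D face: a simplified landmark set, expression
# parameters, and a head pose (yaw, pitch, roll). Purely illustrative.
REFERENCE_3D_FACE = {
    "feature_points": [(0.0, 0.0, 0.0)] * 3,
    "expression": [0.0, 0.0],
    "head_pose": [0.0, 0.0, 0.0],
}

def track_frame(working_model, frame):
    """Fit the working model to one video frame and return a tracking result.

    The actual model fitting is stubbed out: this sketch only applies the
    frame's (hypothetical) yaw-motion estimate to illustrate the data flow.
    """
    result = copy.deepcopy(working_model)
    result["head_pose"][0] += frame.get("yaw_delta", 0.0)
    return result

def track_video(frames):
    # Step 1: set the predetermined reference 3D face to be the working model.
    working_model = copy.deepcopy(REFERENCE_3D_FACE)
    results = []
    # Step 2: per frame, generate and output a tracking result containing
    # feature points, expression parameters, and head pose parameters.
    for frame in frames:
        result = track_frame(working_model, frame)
        working_model = result  # the result drives the next frame's fit
        results.append(result)
    return results
```

The design choice of carrying the previous frame's result forward as the next working model is what makes this a tracking loop rather than per-frame detection.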
Abstract:
Provided are a positioning method and apparatus. The positioning method includes: acquiring a plurality of positioning results, each including positions of key points of a facial area in an input image, using a plurality of predetermined positioning models; evaluating the plurality of positioning results using an evaluation model of the key-point positions; and updating at least one of the predetermined positioning models and the evaluation model based on a positioning result selected, according to a result of the evaluating, from among the plurality of positioning results.
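The acquire-evaluate-select flow above can be sketched as follows, assuming toy positioning models and a toy evaluation model (every function here is a hypothetical stand-in; the patent does not specify the models' form):

```python
def acquire_positioning_results(image, positioning_models):
    """Run each predetermined positioning model on the facial area."""
    return [model(image) for model in positioning_models]

def select_best(results, evaluation_model):
    """Score each set of key-point positions; higher score is better here."""
    scores = [evaluation_model(r) for r in results]
    best_idx = max(range(len(results)), key=lambda i: scores[i])
    return results[best_idx], best_idx

# Toy positioning models: each "positions key points" with a different shift.
def model_a(image):
    return [(x + 1, y) for x, y in image["keypoints"]]

def model_b(image):
    return [(x, y + 5) for x, y in image["keypoints"]]

# Toy evaluation model: prefers key points closer to the origin.
def evaluator(points):
    return -sum(x * x + y * y for x, y in points)

def position(image, models, evaluation_model):
    results = acquire_positioning_results(image, models)
    best_result, _ = select_best(results, evaluation_model)
    # The updating step (adapting the positioning models and/or the
    # evaluation model using the selected result) would go here; it is
    # omitted because the abstract does not describe its mechanics.
    return best_result
```

The selected result serves double duty: it is both the output and the supervision signal for updating the models, which is what lets the system adapt online.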
Abstract:
A face tracking apparatus includes: a face region detector; a segmentation unit; an occlusion probability calculator; and a tracking unit. The face region detector is configured to detect a face region based on an input image. The segmentation unit is configured to segment the face region into a plurality of sub-regions. The occlusion probability calculator is configured to calculate occlusion probabilities for the plurality of sub-regions. The tracking unit is configured to track a face included in the input image based on the occlusion probabilities.
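A sketch of the segment-then-weight pipeline, assuming the face region is a 2D grid of pixel values and using a deliberately naive occlusion measure (dark pixels count as occluded); the grid split, the occlusion heuristic, and the weighting scheme are all illustrative assumptions:

```python
def segment(face_region, rows=2, cols=2):
    """Split a face region (a 2D list of pixel values) into a grid of sub-regions."""
    h, w = len(face_region), len(face_region[0])
    rh, rw = h // rows, w // cols
    subs = []
    for r in range(rows):
        for c in range(cols):
            subs.append([row[c * rw:(c + 1) * rw]
                         for row in face_region[r * rh:(r + 1) * rh]])
    return subs

def occlusion_probability(sub_region):
    """Toy stand-in: the fraction of zero-valued (treated as occluded) pixels."""
    flat = [v for row in sub_region for v in row]
    return sum(1 for v in flat if v == 0) / len(flat)

def track(face_region):
    subs = segment(face_region)
    probs = [occlusion_probability(s) for s in subs]
    # Weight each sub-region by its visibility (1 - occlusion probability),
    # so occluded sub-regions contribute less to the tracking update.
    weights = [1.0 - p for p in probs]
    return probs, weights
```

Down-weighting occluded sub-regions rather than discarding the whole detection is what lets tracking continue through partial occlusions such as a hand over the mouth.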
Abstract:
Provided is a method of detecting and tracking lips accurately despite changes in head pose. A plurality of rough lip models and a plurality of precision lip models may be provided. A rough lip model corresponding to the head pose may be selected and used to detect the lips; a precision lip model having a lip shape most similar to the detected lips may then be selected and used to detect the lips accurately.
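The two-stage selection described above can be sketched as follows, assuming each rough model is keyed by a yaw angle and each precision model by a lip-shape vector, with L1 distance as the shape-similarity measure (all of these representations are assumptions for illustration; the rough detection itself is stubbed out):

```python
def select_rough_model(head_pose_yaw, rough_models):
    """Pick the rough lip model whose pose bucket is nearest the head pose."""
    return min(rough_models, key=lambda m: abs(m["yaw"] - head_pose_yaw))

def select_precision_model(detected_shape, precision_models):
    """Pick the precision model whose lip shape is most similar (L1 distance)."""
    def shape_distance(model):
        return sum(abs(a - b) for a, b in zip(model["shape"], detected_shape))
    return min(precision_models, key=shape_distance)

def detect_lips(head_pose_yaw, observed_shape, rough_models, precision_models):
    # Stage 1: rough detection with the pose-matched rough model.
    rough = select_rough_model(head_pose_yaw, rough_models)
    detected = observed_shape  # rough detection stub: pass the shape through
    # Stage 2: accurate detection with the shape-matched precision model;
    # the refinement itself is omitted, so this sketch returns the selected
    # models to show the control flow.
    precision = select_precision_model(detected, precision_models)
    return rough, precision
```

Splitting the model bank by pose first and by shape second keeps each precision model specialized to a narrow range of lip appearances, which is the source of the accuracy the abstract claims.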