Abstract:
Apparatuses, methods and storage medium associated with 3D face model reconstruction are disclosed herein. In embodiments, an apparatus may include a facial landmark detector, a model fitter and a model tracker. The facial landmark detector may be configured to detect a plurality of landmarks of a face and their locations within each of a plurality of image frames. The model fitter may be configured to generate a 3D model of the face from a 3D model of a neutral face, in view of detected landmarks of the face and their locations within a first one of the plurality of image frames. The model tracker may be configured to maintain the 3D model to track the face in subsequent image frames, successively updating the 3D model in view of detected landmarks of the face and their locations within each of successive ones of the plurality of image frames. In embodiments, the facial landmark detector may include a face detector, an initial facial landmark detector, and one or more facial landmark detection linear regressors. Other embodiments may be described and/or claimed.
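The fit-then-track loop described above can be sketched as follows. This is a minimal illustration, not the disclosed apparatus: the landmark "detection" simply reads pre-labelled points off each frame (standing in for the face detector, initial detector, and linear-regressor cascade), and the model is reduced to a centroid so the fitter/tracker roles stay visible. All names (`NEUTRAL_MODEL`, `fit_model`, `track`) are invented for illustration.

```python
# Hedged sketch of the fit-then-track pipeline; all names are illustrative.

NEUTRAL_MODEL = {"shape": [0.0, 0.0]}  # stand-in for a neutral-face 3D model

def detect_landmarks(frame):
    """Stand-in for the landmark detection cascade: frames are pre-labelled."""
    return frame["landmarks"]

def centroid(landmarks):
    return [sum(coords) / len(landmarks) for coords in zip(*landmarks)]

def fit_model(neutral, landmarks):
    """Model fitter: derive a per-user model from the neutral model and
    the landmarks detected in the first frame (here, just the centroid)."""
    model = dict(neutral)
    model["shape"] = centroid(landmarks)
    return model

def track(model, frames):
    """Model tracker: successively update the model from each later frame."""
    for frame in frames:
        observed = centroid(detect_landmarks(frame))
        # blend the previous model state toward the new observation
        model["shape"] = [0.5 * a + 0.5 * b
                          for a, b in zip(model["shape"], observed)]
    return model

frames = [{"landmarks": [(0, 0), (2, 2)]},
          {"landmarks": [(2, 2), (4, 4)]}]
model = fit_model(NEUTRAL_MODEL, detect_landmarks(frames[0]))
model = track(model, frames[1:])
print(model["shape"])  # [2.0, 2.0]
```

The split mirrors the claim structure: fitting runs once on the first frame, tracking updates the same model object on every subsequent frame.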
Abstract:
A mechanism is described to facilitate gesture matching according to one embodiment. A method of embodiments, as described herein, includes selecting a gesture from a database during an authentication phase, translating the selected gesture into an animated avatar, displaying the avatar, prompting a user to perform the selected gesture, capturing a real-time image of the user and comparing the gesture performed by the user in the captured image to the selected gesture to determine whether there is a match.
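The select-prompt-capture-compare flow might look like the sketch below. The gesture database, the point-wise tolerance metric, and the simulated capture are all assumptions for illustration; the avatar animation and display steps are elided as comments.

```python
# Illustrative sketch of the gesture-matching authentication flow;
# gesture names, the database, and the comparison metric are assumptions.
import random

GESTURE_DB = {"wave": [(0, 0), (1, 1), (2, 0)],
              "circle": [(0, 0), (1, 1), (0, 2), (-1, 1)]}

def select_gesture(db, rng):
    """Pick a challenge gesture at the start of the authentication phase."""
    return rng.choice(sorted(db))

def compare(expected, performed, tol=0.5):
    """Match if every captured point lies within `tol` of the prompt."""
    if len(expected) != len(performed):
        return False
    return all(abs(ex - px) <= tol and abs(ey - py) <= tol
               for (ex, ey), (px, py) in zip(expected, performed))

rng = random.Random(0)
name = select_gesture(GESTURE_DB, rng)
# ...translate GESTURE_DB[name] into an animated avatar, display it,
# and prompt the user; then capture the user's attempt in real time
# (simulated here as the prompted gesture plus slight noise):
captured = [(x + 0.1, y - 0.1) for x, y in GESTURE_DB[name]]
print(name, compare(GESTURE_DB[name], captured))
```

A real implementation would compare extracted gesture trajectories from the captured image stream, not raw coordinates; the tolerance check stands in for that matching step.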
Abstract:
A mechanism is described to facilitate dynamic selection of avatars according to one embodiment. A method of embodiments, as described herein, includes acquiring user attributes, analyzing the user attributes and facilitating selection of an avatar based on the user attributes.
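The acquire-analyze-select chain can be sketched as a simple rule-based selector. The attribute names, thresholds, and avatar identifiers below are invented for illustration; the abstract specifies only the three-step method.

```python
# Minimal sketch of attribute-driven avatar selection; the attributes
# and selection rules are assumptions, not part of the claimed method.

def acquire_attributes(profile):
    """Acquire user attributes (here, from a pre-built profile dict)."""
    return {"age": profile.get("age", 0),
            "mood": profile.get("mood", "neutral")}

def select_avatar(attrs):
    """Analyze the attributes and select an avatar accordingly."""
    if attrs["mood"] == "happy":
        return "sunny_avatar"
    if attrs["age"] < 13:
        return "cartoon_avatar"
    return "default_avatar"

attrs = acquire_attributes({"age": 10, "mood": "neutral"})
chosen = select_avatar(attrs)
print(chosen)  # cartoon_avatar
```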
Abstract:
Apparatuses, methods and storage medium associated with generating and animating avatars are disclosed. The apparatus may comprise an avatar generator to receive an image having a face of a user; analyze the image to identify various facial and related components of the user; access an avatar database to identify corresponding artistic renditions for the various facial and related components stored in the database; and combine the corresponding artistic renditions for the various facial and related components to form an avatar, without user intervention. The apparatus may further comprise an avatar animation engine to animate the avatar in accordance with a plurality of animation messages having facial expression or head pose parameters that describe facial expressions or head poses of a user determined from an image of the user.
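The analyze-lookup-combine step of the avatar generator might be sketched as below. The component labels and the "database" of artistic renditions are assumptions for illustration, and facial analysis is reduced to reading pre-labelled components off the input.

```python
# Hedged sketch of the avatar generator; component names and the
# rendition database are invented for illustration.

AVATAR_DB = {
    ("eyes", "round"): "art_eyes_07",
    ("nose", "narrow"): "art_nose_03",
    ("mouth", "wide"): "art_mouth_12",
}

def analyze_face(image):
    """Stand-in for facial analysis: the image is pre-labelled here."""
    return image["components"]          # e.g. {"eyes": "round", ...}

def generate_avatar(image, db):
    """Identify components, look up their artistic renditions, and
    combine them into an avatar, with no user intervention."""
    parts = analyze_face(image)
    return {component: db[(component, style)]
            for component, style in parts.items()}

image = {"components": {"eyes": "round", "nose": "narrow", "mouth": "wide"}}
avatar = generate_avatar(image, AVATAR_DB)
print(avatar)
```

The companion animation engine would then drive this avatar from facial-expression and head-pose parameters carried in animation messages, which the sketch omits.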
Abstract:
A mechanism is described for facilitating efficient free in-plane rotation landmark tracking of images on computing devices according to one embodiment. A method of embodiments, as described herein, includes detecting a first frame having a first image and a second frame having a second image, where the second image is rotated to a position away from the first image. The method may further include assigning a first parameter line and a second parameter line to the second image based on landmark positions associated with the first and second images, detecting a rotation angle between the first parameter line and the second parameter line, and rotating the second image back and forth within a distance associated with the rotation angle to verify positions of the first and second images.
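The rotation-angle step can be illustrated with plain geometry: each "parameter line" is taken through two landmark positions, and the in-plane rotation is the difference of the two line angles. Which landmarks define the lines is an assumption here, not specified by the abstract; the back-and-forth verification step is omitted.

```python
# Sketch of detecting the rotation angle between two parameter lines,
# each defined by two landmark positions (an assumed choice).
import math

def line_angle(p, q):
    """Angle of the line through landmarks p and q, in degrees."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def rotation_between(line1, line2):
    """In-plane rotation of the second image relative to the first."""
    a = line_angle(*line2) - line_angle(*line1)
    return (a + 180.0) % 360.0 - 180.0   # wrap into (-180, 180]

first = ((0.0, 0.0), (1.0, 0.0))         # horizontal line in frame 1
second = ((0.0, 0.0), (0.0, 1.0))        # vertical line in frame 2
angle = rotation_between(first, second)
print(angle)
```

Given this angle, the second image would be counter-rotated (and nudged back and forth within a range around the angle) to verify the landmark positions against the first image.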
Abstract:
Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
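The analyze-then-augment behavior might be sketched like this. The feature names, thresholds, and the supplemental animations each emotion triggers are all invented for illustration; the abstract only specifies that an emotion state is determined from facial data and drives additional animation.

```python
# Illustrative sketch of emotion-augmented animation; the classifier
# features and supplement table are assumptions.

def classify_emotion(facial_data):
    """Toy emotion classifier over assumed facial features."""
    if facial_data.get("mouth_curve", 0.0) > 0.3:
        return "happy"
    if facial_data.get("brow_lower", 0.0) > 0.5:
        return "angry"
    return "neutral"

SUPPLEMENTS = {"happy": ["sparkles"], "angry": ["storm_cloud"], "neutral": []}

def augment_animation(base_frames, facial_data):
    """Drive extra animation layers on top of the base avatar animation,
    based on the determined emotion state."""
    state = classify_emotion(facial_data)
    return base_frames + SUPPLEMENTS[state]

result = augment_animation(["blink", "smile"], {"mouth_curve": 0.6})
print(result)  # ['blink', 'smile', 'sparkles']
```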
Abstract:
Apparatuses, methods and storage medium associated with capturing images are provided. An apparatus may include a face tracker to receive an image frame, analyze the image frame for a face, and on identification of a face in the image frame, evaluate the face to determine whether the image frame comprises an acceptable or unacceptable face pose. Further, the face tracker may be configured to provide instructions for taking another image frame, on determination that the image frame has an unacceptable face pose, with the instructions designed to improve the likelihood that the other image frame will comprise an acceptable face pose.
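The evaluate-then-advise loop can be sketched as below. The yaw/pitch representation, the acceptance threshold, and the instruction strings are assumptions for illustration; the abstract only requires an acceptability decision plus corrective instructions for the retake.

```python
# Hedged sketch of face-pose evaluation with retake instructions;
# the threshold and instruction text are invented.

LIMIT = 15.0  # assumed maximum acceptable head rotation, in degrees

def evaluate_pose(pose):
    """Return (acceptable, instruction_for_retaking_the_image)."""
    if abs(pose["yaw"]) > LIMIT:
        return False, "turn your face toward the camera"
    if abs(pose["pitch"]) > LIMIT:
        return False, "hold the camera at eye level"
    return True, None

ok, hint = evaluate_pose({"yaw": 30.0, "pitch": 2.0})
print(ok, hint)  # False turn your face toward the camera
```

The instruction returned for a rejected pose is what makes the next capture more likely to yield an acceptable one.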
Abstract:
Avatar animation systems disclosed herein provide high quality, real-time avatar animation that is based on the varying countenance of a human face. The real-time provision of high quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor. The machine learning component may be trained using a set of training images that depict human facial expressions and avatar animation authored by professional animators to reflect facial expressions depicted in the set of training images.
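The core multi-frame idea — mapping expression features from two temporally sequential frames to a single set of blend-shape weights — can be reduced to the sketch below. A hand-set linear map stands in for the trained machine learning component; the feature and weight shapes are assumptions.

```python
# Minimal sketch of a multi-frame regressor: features from two
# sequential frames are concatenated and mapped to blend-shape
# weights. W and b are hand-set stand-ins for trained parameters.

def multi_frame_regress(prev_feats, cur_feats, W, b):
    """Map concat(prev, cur) -> blend-shape weights (linear stand-in)."""
    x = prev_feats + cur_feats
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

# One blend shape driven by one feature per frame: average the frames.
W = [[0.5, 0.5]]
b = [0.0]
weights = multi_frame_regress([0.2], [0.8], W, b)
print(weights)  # [0.5]
```

Conditioning on the previous frame (or on previously generated animation frames) is what lets the regressor smooth the output over time rather than reacting to each frame in isolation.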