Abstract:
An example apparatus is disclosed herein that includes a memory and at least one processor. The at least one processor is to execute instructions to: select a gesture from a database, the gesture including a sequence of poses; translate the selected gesture into an animated avatar performing the selected gesture for display at a display device; display a prompt for a user to perform the selected gesture as performed by the animated avatar; capture an image of the user performing the selected gesture; and perform a comparison between the gesture performed by the user in the captured image and the selected gesture to determine whether there is a match.
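As an illustration only, the recited flow might be sketched in Python as below. The db, renderer, camera, and matcher objects are hypothetical stand-ins for the disclosed apparatus, not elements taken from the claims:

```python
import random

def authenticate_by_gesture(db, renderer, camera, matcher, threshold=0.8):
    """Hypothetical end-to-end flow mirroring the recited steps."""
    gesture = random.choice(db.gestures)   # select a gesture (a sequence of poses)
    renderer.animate_avatar(gesture)       # animated avatar performs the gesture
    renderer.show_prompt("Perform the gesture shown by the avatar")
    frames = camera.capture()              # capture the user performing the gesture
    user_gesture = matcher.extract_gesture(frames)
    score = matcher.compare(user_gesture, gesture)
    return score >= threshold              # match / no-match decision
```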
Abstract:
A mechanism is described to facilitate gesture matching according to one embodiment. A method of embodiments, as described herein, includes selecting a gesture from a database during an authentication phase, translating the selected gesture into an animated avatar, displaying the avatar, prompting a user to perform the selected gesture, capturing a real-time image of the user, and comparing the gesture performed by the user in the captured image to the selected gesture to determine whether there is a match.
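Because the gesture is a sequence of poses, the comparison step could, for example, average a per-joint distance over aligned frames. The sketch below assumes each pose is a list of 2D joint coordinates; the 0.15 tolerance is illustrative, not a value from the disclosure:

```python
import math

def pose_distance(p, q):
    """Mean Euclidean distance between corresponding joints of two poses."""
    return sum(math.dist(a, b) for a, b in zip(p, q)) / len(p)

def sequences_match(captured, reference, tol=0.15):
    """Frame-by-frame comparison of two equal-length pose sequences.

    Each pose is a list of (x, y) joint coordinates. Returns True when the
    average per-frame distance stays under the tolerance.
    """
    if len(captured) != len(reference):
        return False
    avg = sum(pose_distance(p, q) for p, q in zip(captured, reference)) / len(reference)
    return avg < tol
```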
Abstract:
Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, an apparatus may include a facial expression and speech tracker to respectively receive a plurality of image frames and audio of a user, and analyze the image frames and the audio to determine and track facial expressions and speech of the user. The tracker may further select a plurality of blend shapes, including assignment of weights of the blend shapes, for animating the avatar, based on tracked facial expressions or speech of the user. The tracker may select the plurality of blend shapes, including assignment of weights of the blend shapes, based on the tracked speech of the user, when visual conditions for tracking facial expressions of the user are determined to be below a quality threshold. Other embodiments may be disclosed and/or claimed.
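The quality-threshold fallback might look like the following sketch, where face_weights_fn and viseme_weights_fn are hypothetical stand-ins for the tracker's vision-driven and speech-driven blend-shape selectors:

```python
def select_blend_shapes(visual_quality, face_weights_fn, viseme_weights_fn,
                        frames, audio, quality_threshold=0.5):
    """Choose blend-shape weights from video when facial tracking is
    reliable; otherwise fall back to speech-driven (viseme) weights, as
    the abstract describes. All callables are illustrative placeholders.
    """
    if visual_quality >= quality_threshold:
        return face_weights_fn(frames)   # e.g. {"jaw_open": 0.4, "smile": 0.7}
    return viseme_weights_fn(audio)      # lip-sync weights derived from speech
```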
Abstract:
Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
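One way to picture the augmentation engine is as a lookup from the classified emotion state to extra animation cues layered over the base avatar animation. The emotion labels, cue names, and callables below are invented for illustration:

```python
# Hypothetical mapping from a detected emotion state to supplemental
# animation cues layered on top of the base avatar animation.
SUPPLEMENTAL_ANIMATIONS = {
    "happy":   ["sparkle_particles", "bounce_idle"],
    "sad":     ["rain_overlay", "slumped_posture"],
    "angry":   ["steam_particles", "red_tint"],
    "neutral": [],
}

def augment_animation(facial_data, classify_emotion, play):
    """classify_emotion and play stand in for the engine's internals."""
    emotion = classify_emotion(facial_data)          # e.g. "happy"
    for cue in SUPPLEMENTAL_ANIMATIONS.get(emotion, []):
        play(cue)                                    # drive the extra animation
    return emotion
```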
Abstract:
Methods, systems, and storage media for generating and displaying animations of simulated biomechanical motions are disclosed. In embodiments, a computer device may obtain sensor data from a sensor affixed to a user's body or to equipment used by the user, and may use inverse kinematics to determine desired positions and orientations of an avatar based on the sensor data. In embodiments, the computer device may adjust or alter the avatar based on the inverse kinematics, and generate an animation for display based on the adjusted avatar. Other embodiments may be disclosed and/or claimed.
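As a concrete (if much simplified) instance of the inverse-kinematics step, a two-segment limb in 2D can be solved analytically with the law of cosines. The disclosed system would operate on full 3D skeletons, so this is only a sketch:

```python
import math

def two_bone_ik(target_x, target_y, l1, l2):
    """Analytic two-bone IK in 2D: given a target for the end effector and
    two segment lengths, return the base (hip) and middle (knee) angles."""
    d = math.hypot(target_x, target_y)
    d = max(min(d, l1 + l2 - 1e-9), abs(l1 - l2) + 1e-9)  # clamp to reachable range
    cos_knee = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    knee = math.pi - math.acos(max(-1.0, min(1.0, cos_knee)))
    cos_hip = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
    hip = math.atan2(target_y, target_x) - math.acos(max(-1.0, min(1.0, cos_hip)))
    return hip, knee
```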
Abstract:
Apparatuses, methods and storage medium associated with generating and animating avatars are disclosed herein. In embodiments, an apparatus may comprise an avatar generator to receive an image having a face of a user; analyze the image to identify various facial and related components of the user; access an avatar database to identify corresponding artistic renditions for the various facial and related components stored in the database; and combine the corresponding artistic renditions for the various facial and related components to form an avatar, without user intervention. In embodiments, the apparatus may further comprise an avatar animation engine to animate the avatar in accordance with a plurality of animation messages having facial expression or head pose parameters that describe facial expressions or head poses of a user determined from an image of the user. Other embodiments may be disclosed and/or claimed.
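The component-to-artwork lookup could be sketched as follows, with detect_components, rendition_db, and compose standing in for the generator's internals; none of these names come from the disclosure:

```python
def generate_avatar(image, detect_components, rendition_db, compose):
    """Hypothetical pipeline: identify facial components in the image, look
    up an artistic rendition for each in the database, and compose them
    into an avatar without user intervention.
    """
    components = detect_components(image)            # e.g. {"eyes": "almond", ...}
    renditions = {part: rendition_db[part][style]    # artwork keyed by part/style
                  for part, style in components.items()}
    return compose(renditions)                       # assembled avatar
```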
Abstract:
Apparatuses, methods and storage medium associated with capturing images are disclosed herein. In embodiments, the apparatus may include a face tracker to receive an image frame, analyze the image frame for a face, and on identification of a face in the image frame, evaluate the face to determine whether the image frame comprises an acceptable or unacceptable face pose. Further, the face tracker may be configured to provide instructions for taking another image frame, on determination of the image frame having an unacceptable face pose, with the instructions designed to improve the likelihood that the other image frame will comprise an acceptable face pose. Other embodiments may be described and/or claimed.
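A minimal sketch of the pose evaluation and corrective-instruction step, assuming head orientation is available as yaw/pitch/roll angles in degrees; the 15-degree bound is an illustrative threshold, not one taken from the disclosure:

```python
def evaluate_face_pose(yaw, pitch, roll, max_angle=15.0):
    """Return (acceptable, instruction) for a detected face pose."""
    if abs(yaw) > max_angle:
        return False, "Turn your head slightly " + ("left" if yaw > 0 else "right")
    if abs(pitch) > max_angle:
        return False, "Tilt your head slightly " + ("down" if pitch > 0 else "up")
    if abs(roll) > max_angle:
        return False, "Straighten your head"
    return True, "Hold still"
```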
Abstract:
Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, an apparatus may include an avatar animation engine configured to receive a plurality of facial motion parameters and a plurality of head gesture parameters, respectively associated with a face and a head of a user. The plurality of facial motion parameters may depict facial action movements of the face, and the plurality of head gesture parameters may depict head pose gestures of the head. Further, the avatar animation engine may be configured to drive an avatar model with facial and skeleton animations to animate an avatar, using the facial motion parameters and the head gesture parameters, to replicate a facial expression of the user on the avatar that includes the impact of head pose rotation of the user. Other embodiments may be described and/or claimed.
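Per frame, the animation engine described above might apply the two parameter sets roughly as sketched below, with model.set_blend_shape and model.set_bone_rotation as hypothetical stand-ins for the avatar model's interface:

```python
def animate_frame(model, facial_params, head_pose):
    """Apply one frame of animation: facial motion parameters drive blend
    shapes, head gesture parameters drive the skeleton, so the rendered
    expression includes the effect of head rotation. All names hypothetical.
    """
    for name, weight in facial_params.items():   # e.g. {"mouth_open": 0.3}
        model.set_blend_shape(name, weight)
    model.set_bone_rotation("head", head_pose)   # (yaw, pitch, roll)
    model.update()                               # recompute the deformed mesh
```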
Abstract:
A video capture and processing system includes a memory configured to store a pose database. The pose database includes poses that indicate a start or stoppage in an event. The system also includes a processor operatively coupled to the memory. The processor is configured to generate a pose of an individual in a video frame of captured video of the event. The pose can be a three-dimensional pose or a two-dimensional pose. The processor is also configured to determine, based on the pose database, whether the pose of the individual indicates a start or a stoppage in the event. The processor is further configured to control an upload of video of the event based on the determination of whether the pose indicates the start or the stoppage in the event.
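The upload-control logic could be sketched as a small state toggle driven by pose-database matches; pose_db.match and the uploader object are hypothetical placeholders for whatever 2D/3D pose similarity and transport the system uses:

```python
def control_upload(frame_pose, pose_db, uploader, is_uploading):
    """Compare the estimated pose against stored start/stop poses and
    toggle the video upload accordingly. Returns the new upload state."""
    if pose_db.match(frame_pose, kind="start") and not is_uploading:
        uploader.start()                 # pose signals the event has started
        return True
    if pose_db.match(frame_pose, kind="stop") and is_uploading:
        uploader.stop()                  # pose signals a stoppage in the event
        return False
    return is_uploading                  # no state change
```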