Abstract:
In response to movement of an underlying structure, motion of complex objects connected to that structure may be simulated relatively quickly and without requiring extensive processing capabilities. A skeleton extraction method is used to simplify the complex object. The motion of the underlying structure is tracked, such as the user's head in a case where motion of hair is being simulated. Thus, the simulated motion is driven in response to the extent and direction of head or facial movement. A mass-spring model may be used to accelerate the simulation in some embodiments.
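As a rough illustration of the approach described above, the following Python sketch treats one extracted skeleton strand as a mass-spring chain whose root particle is pinned to the tracked head position, so head motion drives the simulated strand. All constants, names, and the integration scheme are illustrative assumptions, not the patented implementation.

```python
# Minimal mass-spring sketch of a hair strand driven by tracked head motion.
import numpy as np

N = 10          # particles along one extracted skeleton strand
REST = 0.05     # rest length between neighbouring particles
K = 40.0        # spring stiffness (assumed)
DAMP = 0.92     # velocity damping (assumed)
GRAV = np.array([0.0, -9.8, 0.0])
DT = 1.0 / 60.0

pos = np.array([[0.0, -i * REST, 0.0] for i in range(N)])
vel = np.zeros_like(pos)

def step(head_pos):
    """Advance the strand one frame; the root particle tracks the head."""
    global pos, vel
    pos[0] = head_pos                      # root pinned to the tracked head
    force = np.tile(GRAV, (N, 1))
    for i in range(N - 1):                 # springs between neighbours
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        f = K * (length - REST) * d / max(length, 1e-8)
        force[i] += f
        force[i + 1] -= f
    vel = DAMP * (vel + force * DT)
    vel[0] = 0.0                           # root is kinematic
    pos = pos + vel * DT

# Drive the strand with a synthetic side-to-side head motion.
for frame in range(120):
    head = np.array([0.1 * np.sin(frame * 0.1), 0.0, 0.0])
    step(head)
```

Because each particle interacts only with its chain neighbours, one simulation step is linear in the number of particles, which is what makes the skeleton-plus-mass-spring combination fast enough for interactive use.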
Abstract:
Techniques related to key person recognition in multi-camera immersive video captured for a scene are discussed. Such techniques include detecting predefined person formations in the scene based on an arrangement of the persons in the scene, generating a feature vector for each person in the detected formation, and applying a classifier to the feature vectors to indicate one or more key persons in the scene.
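A hedged sketch of that pipeline is shown below: per-person feature vectors are computed relative to the detected formation (here simplified to centroid-based arrangement features), and a classifier scores each person as key or not. The feature choices, the logistic-regression classifier, and the toy training labels are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def person_features(positions):
    """Per-person features relative to the formation: distance to the
    formation centroid and mean distance to the other persons."""
    centroid = positions.mean(axis=0)
    feats = []
    for i, p in enumerate(positions):
        others = np.delete(positions, i, axis=0)
        feats.append([np.linalg.norm(p - centroid),
                      np.linalg.norm(others - p, axis=1).mean()])
    return np.asarray(feats)

# Toy training data: in a huddle-like formation, label the most central
# person as the key person (a stand-in for real annotated footage).
rng = np.random.default_rng(0)
X_train, y_train = [], []
for _ in range(200):
    pts = rng.normal(size=(6, 2))          # six persons on the ground plane
    f = person_features(pts)
    key = f[:, 0].argmin()                 # most central person is "key"
    X_train.append(f)
    y_train.append(np.eye(6)[key])
X_train = np.concatenate(X_train)
y_train = np.concatenate(y_train)

clf = LogisticRegression().fit(X_train, y_train)

# Score a new scene: the highest-probability person is the key person.
scene = rng.normal(size=(6, 2))
scores = clf.predict_proba(person_features(scene))[:, 1]
print("key person index:", scores.argmax())
```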
Abstract:
Techniques related to performing object or person association or correspondence in multi-view video are discussed. Such techniques include determining correspondences at a particular time instance by separately optimizing correspondence sub-matrices for distance sub-matrices derived from two-way minimum distance pairs between frame pairs, generating and fusing tracklets across time instances, and, after such tracklet processing, adjusting correspondences by eliminating outlier object positions and rearranging object correspondences.
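The sketch below illustrates one piece of that idea for a single frame pair: two-way (mutual) minimum-distance pairs are accepted first, and the remaining distance sub-matrix is then solved as an assignment problem. The feature distance, the threshold, and the use of SciPy's Hungarian solver for the sub-matrix step are assumptions, not the patented method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def correspond(feats_a, feats_b, max_dist=1.0):
    """Match detections between two views of the same time instance."""
    d = np.linalg.norm(feats_a[:, None] - feats_b[None, :], axis=2)
    pairs = []
    # Two-way minimum: i's nearest neighbour is j AND j's nearest is i.
    for i in range(d.shape[0]):
        j = d[i].argmin()
        if d[:, j].argmin() == i and d[i, j] < max_dist:
            pairs.append((i, j))
    matched_a = {i for i, _ in pairs}
    matched_b = {j for _, j in pairs}
    rest_a = [i for i in range(d.shape[0]) if i not in matched_a]
    rest_b = [j for j in range(d.shape[1]) if j not in matched_b]
    if rest_a and rest_b:                  # optimize the leftover sub-matrix
        sub = d[np.ix_(rest_a, rest_b)]
        ri, rj = linear_sum_assignment(sub)
        pairs += [(rest_a[i], rest_b[j]) for i, j in zip(ri, rj)
                  if sub[i, j] < max_dist]
    return pairs

# Example: 2-D feature vectors for detections in two camera views.
a = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0]])
b = np.array([[1.1, 0.9], [0.1, -0.1]])
print(correspond(a, b))   # [(0, 1), (1, 0)]; detection 2 stays unmatched
```

Chaining such per-instance matches over time yields the tracklets the abstract refers to, which can then be fused and pruned of outlier positions.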
Abstract:
Example gesture matching mechanisms are disclosed herein. An example machine-readable storage device or disc includes instructions that, when executed, cause programmable circuitry to at least: prompt a user to perform gestures to register the user, randomly select at least one of the gestures for authentication of the user, prompt the user to perform the at least one selected gesture, translate the gesture into an animated avatar for display at a display device, the animated avatar including a face, analyze performance of the gesture by the user, and authenticate the user based on the performance of the gesture.
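A minimal sketch of the enrol-and-challenge flow follows: several gestures are registered as keypoint trajectories, one is randomly selected as the challenge, and the user is accepted if the performed gesture is close enough to the stored template. The class name, the similarity measure, and the threshold are assumptions; avatar rendering is omitted.

```python
import random
import numpy as np

class GestureAuthenticator:
    def __init__(self, threshold=0.15):
        self.templates = {}        # gesture name -> keypoint trajectory
        self.threshold = threshold # assumed acceptance threshold

    def register(self, name, trajectory):
        """Store a gesture performed during user registration."""
        self.templates[name] = np.asarray(trajectory, dtype=float)

    def challenge(self):
        """Randomly select one registered gesture for authentication."""
        return random.choice(list(self.templates))

    def authenticate(self, name, performed):
        """Compare the performed gesture against the stored template."""
        ref = self.templates[name]
        performed = np.asarray(performed, dtype=float)
        n = min(len(ref), len(performed))   # crude temporal alignment
        err = np.linalg.norm(ref[:n] - performed[:n], axis=1).mean()
        return err < self.threshold

auth = GestureAuthenticator()
auth.register("wave", [[0.0, 0.0], [0.1, 0.2], [0.2, 0.0]])
auth.register("circle", [[0.0, 0.0], [0.1, 0.1], [0.0, 0.2]])
gesture = auth.challenge()                  # random selection per abstract
print(gesture, auth.authenticate(gesture, auth.templates[gesture]))
```

Randomly selecting the challenge gesture is what resists replay: an attacker who recorded one gesture cannot predict which registered gesture will be requested next.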
Abstract:
Techniques related to validating an image-based 3D model of a scene are discussed. Such techniques include detecting an object within a captured image used to generate the scene, projecting the 3D model to a view corresponding to the captured image to generate a reconstructed image, and comparing image regions of the captured and reconstructed images corresponding to the object to validate the 3D model.
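The sketch below illustrates the comparison step only: given the captured image and a reconstructed image rendered from the 3D model at the same viewpoint, the region inside each detected object's bounding box is compared. Rendering and object detection are stubbed out, and the normalized cross-correlation metric and threshold are assumptions.

```python
import numpy as np

def region_similarity(captured, reconstructed, box):
    """Compare the object's bounding-box region in both grayscale images
    using normalized cross-correlation (1.0 = identical up to gain/offset)."""
    x0, y0, x1, y1 = box
    a = captured[y0:y1, x0:x1].astype(float).ravel()
    b = reconstructed[y0:y1, x0:x1].astype(float).ravel()
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def validate_model(captured, reconstructed, boxes, threshold=0.8):
    """Flag the 3D model as valid/invalid per detected object region."""
    return [region_similarity(captured, reconstructed, b) >= threshold
            for b in boxes]

# Synthetic example: the reconstruction matches except for added noise.
rng = np.random.default_rng(1)
cap = rng.integers(0, 255, size=(240, 320)).astype(float)
rec = cap + rng.normal(0, 10, size=cap.shape)
print(validate_model(cap, rec, boxes=[(40, 40, 120, 120)]))  # [True]
```

A low similarity score in an object's region signals that the 3D model failed to reconstruct that object, which is exactly the validation signal the abstract describes.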
Abstract:
Techniques for media quality control may include receiving media information and determining the quality of the media information. The media information may be presented when the quality of the media information meets a quality control threshold. A warning may be generated when the quality of the media information does not meet the quality control threshold. Other embodiments are described and claimed.
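A hedged sketch of such a quality gate is shown below, using frame sharpness (variance of a 4-neighbour Laplacian) as a stand-in quality score. The metric, the threshold value, and the function names are assumptions; the abstract does not specify how quality is measured.

```python
import numpy as np

QUALITY_THRESHOLD = 50.0   # assumed units: Laplacian variance

def frame_quality(frame):
    """Crude sharpness score: variance of a 4-neighbour Laplacian."""
    f = frame.astype(float)
    lap = (-4 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return float(lap.var())

def handle_media(frame):
    """Present the media if it meets the threshold, else warn."""
    if frame_quality(frame) >= QUALITY_THRESHOLD:
        return "present"
    return "warning: media quality below threshold"

# A noisy (high-variance) frame passes; a flat frame triggers the warning.
rng = np.random.default_rng(2)
print(handle_media(rng.integers(0, 255, size=(120, 160))))  # present
print(handle_media(np.full((120, 160), 128)))               # warning
```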
Abstract:
Apparatuses, methods and storage medium associated with the provision of an avatar keyboard to a communication/computing device are disclosed herein. In embodiments, an apparatus for communicating may comprise one or more processors to execute an application; and a keyboard module coupled with the one or more processors to provide a plurality of keyboards in a corresponding plurality of keyboard modes for inputting to the application, including an avatar keyboard, in an avatar keyboard mode. The avatar keyboard may include a plurality of avatar keys with corresponding avatars that can be dynamically customized or animated based at least in part on facial expressions or head poses of a user, prior to input to the application. Other embodiments may be disclosed and/or claimed.
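The structural sketch below shows one plausible shape for such a keyboard module: it switches among keyboard modes, and in the avatar mode it re-animates each avatar key from the user's tracked expression parameters before a pressed key is delivered to the application. All class, method, and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AvatarKey:
    avatar_id: str
    expression: dict = field(default_factory=dict)  # blend-shape weights

class KeyboardModule:
    def __init__(self):
        self.mode = "text"
        self.avatar_keys = [AvatarKey("fox"), AvatarKey("robot")]

    def set_mode(self, mode):
        """Switch keyboard modes, e.g. "text", "emoji", "avatar"."""
        self.mode = mode

    def update_from_tracker(self, expression_params):
        """Animate every avatar key with the user's current expression."""
        if self.mode == "avatar":
            for key in self.avatar_keys:
                key.expression = dict(expression_params)

    def press(self, index):
        """Return the customized avatar to be input into the application."""
        return self.avatar_keys[index]

kb = KeyboardModule()
kb.set_mode("avatar")
kb.update_from_tracker({"mouth_open": 0.4, "brow_raise": 0.8})
print(kb.press(0))   # AvatarKey carrying the user's current expression
```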
Abstract:
Apparatuses, methods and storage medium associated with creating an avatar video are disclosed herein. In embodiments, the apparatus may include one or more facial expression engines, an animation-rendering engine, and a video generator. The one or more facial expression engines may be configured to receive video, voice and/or text inputs, and, in response, generate a plurality of animation messages having facial expression parameters that depict facial expressions for a plurality of avatars based at least in part on the video, voice and/or text inputs received. The animation-rendering engine may be configured to receive the animation messages, and drive a plurality of avatar models, to animate and render the plurality of avatars with the facial expressions depicted. The video generator may be configured to capture the animation and rendering of the plurality of avatars, to generate a video. Other embodiments may be described and/or claimed.
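A pipeline sketch matching the abstract's three stages follows: a facial expression engine emits animation messages, an animation-rendering stage drives avatar models with them, and a video generator collects the rendered frames. The message fields are simplified assumptions and the rendering is stubbed out.

```python
from dataclasses import dataclass

@dataclass
class AnimationMessage:
    avatar_id: int
    facial_params: dict     # e.g. {"mouth_open": 0.4, "blink": 1.0}

def expression_engine(inputs):
    """Turn video/voice/text inputs into animation messages (stubbed:
    each scalar input stands in for an analyzed expression strength)."""
    for strength in inputs:
        yield AnimationMessage(avatar_id=0,
                               facial_params={"mouth_open": strength})

def render(message):
    """Drive the avatar model with one message (returns a fake frame)."""
    return f"frame(avatar={message.avatar_id}, {message.facial_params})"

def generate_video(inputs):
    """Capture the rendered animation as a sequence of frames."""
    frames = [render(m) for m in expression_engine(inputs)]
    return frames           # a real implementation would encode these

print(generate_video([0.1, 0.5, 0.9]))
```

Keeping the stages decoupled through animation messages is what lets video, voice, and text front-ends all drive the same rendering back-end, as the abstract suggests.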
Abstract:
Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, an apparatus may include a facial mesh tracker to receive a plurality of image frames, detect facial action movements of a face and head pose gestures of a head within the plurality of image frames, and output a plurality of facial motion parameters and head pose parameters that depict facial action movements and head pose gestures detected, all in real time, for animation and rendering of an avatar. The facial action movements and head pose gestures may be detected through inter-frame differences for a mouth and an eye, or the head, based on pixel sampling of the image frames. The facial action movements may include opening or closing of a mouth, and blinking of an eye. The head pose gestures may include head rotation such as pitch, yaw, roll, and head movement along horizontal and vertical directions, as well as toward or away from the camera. Other embodiments may be described and/or claimed.
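The sketch below illustrates only the inter-frame-difference idea: pixels are sampled in fixed mouth and eye regions of consecutive frames, and the change magnitude is converted into coarse facial motion parameters. The region coordinates and thresholds are assumptions, and a real tracker would additionally estimate the head pose parameters (pitch, yaw, roll, translation, and distance to the camera) named in the abstract.

```python
import numpy as np

MOUTH = (slice(150, 190), slice(80, 160))   # assumed row/col regions
EYE = (slice(60, 80), slice(70, 110))

def region_change(prev, curr, region):
    """Mean absolute inter-frame difference over the sampled pixels."""
    return float(np.abs(curr[region].astype(float)
                        - prev[region].astype(float)).mean())

def facial_motion_params(prev, curr, mouth_thr=8.0, eye_thr=8.0):
    """Coarse per-frame parameters for driving an avatar."""
    return {
        "mouth_moving": region_change(prev, curr, MOUTH) > mouth_thr,
        "eye_blinking": region_change(prev, curr, EYE) > eye_thr,
    }

# Synthetic frames: only the mouth region changes between frames.
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[MOUTH] = 60
print(facial_motion_params(prev, curr))
# {'mouth_moving': True, 'eye_blinking': False}
```

Because it relies on sampled pixel differences rather than a full per-frame fit, this style of detection is cheap enough to run in real time, consistent with the abstract's real-time claim.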