Abstract:
Video analysis may be used to determine who is watching television and their level of interest in the current programming. Lists of favorite programs may be derived for each of a plurality of viewers of programming on the same television receiver.
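As a minimal sketch (not from the abstract), per-viewer favorite lists could be aggregated from hypothetical `(viewer, program, interest)` events emitted by the video analysis, ranking each viewer's programs by accumulated interest:

```python
from collections import defaultdict

def build_favorites(view_events, top_n=3):
    """Aggregate hypothetical (viewer, program, interest) events into
    per-viewer favorite-program lists, ranked by accumulated interest."""
    scores = defaultdict(lambda: defaultdict(float))
    for viewer, program, interest in view_events:
        scores[viewer][program] += interest
    return {
        viewer: [p for p, _ in sorted(progs.items(),
                                      key=lambda kv: kv[1],
                                      reverse=True)[:top_n]]
        for viewer, progs in scores.items()
    }

events = [("alice", "news", 0.9), ("alice", "sports", 0.4),
          ("alice", "news", 0.8), ("bob", "sports", 1.0)]
favorites = build_favorites(events)
# alice's accumulated interest: news 1.7, sports 0.4
```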
Abstract:
Systems, devices and methods are described including recovering camera parameters and sparse key points for multiple 2D facial images and applying a multi-view stereo process to generate a dense avatar mesh using the camera parameters and sparse key points. The dense avatar mesh may then be used to generate a 3D face model and multi-view texture synthesis may be applied to generate a texture image for the 3D face model.
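To illustrate the kind of camera parameters such a pipeline recovers before multi-view stereo densification, here is a toy pinhole projection of a 3D point using recovered intrinsics and extrinsics (all values illustrative, not from the abstract):

```python
def project_point(K, R, t, X):
    """Project a 3D point X using recovered camera parameters:
    intrinsic matrix K, rotation R, translation t (pinhole model)."""
    # transform into camera space
    xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # perspective divide and map to pixel coordinates
    u = K[0][0] * xc[0] / xc[2] + K[0][2]
    v = K[1][1] * xc[1] / xc[2] + K[1][2]
    return u, v

K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]
# a point on the optical axis projects to the principal point
u, v = project_point(K, R, t, (0.0, 0.0, 2.0))
```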
Abstract:
Techniques related to key person recognition in multi-camera immersive video captured for a scene are discussed. Such techniques include detecting predefined person formations in the scene based on an arrangement of the persons in the scene, generating a feature vector for each person in the detected formation, and applying a classifier to the feature vectors to indicate one or more key persons in the scene.
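The classify-per-person-feature-vector step can be sketched with a toy linear classifier (the weights, threshold, and features are placeholders, not the patent's model):

```python
def key_person_scores(feature_vectors, weights, bias=0.0):
    """Score each person's feature vector with a toy linear classifier;
    higher scores indicate more likely key persons."""
    return [sum(w * f for w, f in zip(weights, fv)) + bias
            for fv in feature_vectors]

def key_persons(feature_vectors, weights, threshold=0.5):
    """Return indices of persons whose score exceeds the threshold."""
    scores = key_person_scores(feature_vectors, weights)
    return [i for i, s in enumerate(scores) if s > threshold]

# two persons, two illustrative features each (e.g. centrality, ball distance)
features = [[1.0, 0.0], [0.2, 0.1]]
chosen = key_persons(features, weights=[1.0, 1.0])
```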
Abstract:
Methods and apparatus to detect collision of a virtual camera with objects in a three-dimensional volumetric model are disclosed herein. An example virtual camera system disclosed herein includes cameras to obtain images of a scene in an environment. The example virtual camera system also includes a virtual camera generator to create a 3D volumetric model of the scene based on the images, identify a 3D location of a virtual camera to be disposed in the 3D volumetric model, and detect whether a collision occurs between the virtual camera and one or more objects in the 3D volumetric model.
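One common way to implement such a collision test, shown here as a sketch rather than the patented method, is to model the virtual camera as a sphere and the scene objects as axis-aligned bounding boxes:

```python
def camera_collides(cam_pos, cam_radius, boxes):
    """Detect collision between a virtual camera (modeled as a sphere)
    and axis-aligned bounding boxes approximating scene objects."""
    cx, cy, cz = cam_pos
    for min_pt, max_pt in boxes:
        # closest point on the box to the camera center
        nx = min(max(cx, min_pt[0]), max_pt[0])
        ny = min(max(cy, min_pt[1]), max_pt[1])
        nz = min(max(cz, min_pt[2]), max_pt[2])
        d2 = (cx - nx) ** 2 + (cy - ny) ** 2 + (cz - nz) ** 2
        if d2 < cam_radius ** 2:
            return True
    return False

box = ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))  # unit cube
# camera hovering just above the cube's top face
hit = camera_collides((0.5, 0.5, 1.2), 0.4, [box])
```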
Abstract:
Examples of systems and methods for three-dimensional model customization for avatar animation using a sketch image selection are generally described herein. A method for rendering a three-dimensional model may include presenting a plurality of sketch images to a user on a user interface, and receiving a selection of sketch images from the plurality of sketch images to compose a face. The method may include rendering the face as a three-dimensional model, the three-dimensional model for use as an avatar.
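The compose-a-face-from-selections step might look like the following sketch, where the part names and selection schema are hypothetical:

```python
FACE_PARTS = ("outline", "eyes", "nose", "mouth")  # illustrative part set

def compose_face(selections):
    """Compose a face description from per-part sketch-image selections.
    `selections` maps part name -> chosen sketch image id."""
    missing = [p for p in FACE_PARTS if p not in selections]
    if missing:
        raise ValueError(f"missing selections for: {missing}")
    return {part: selections[part] for part in FACE_PARTS}

face = compose_face({"outline": 7, "eyes": 2, "nose": 5, "mouth": 1})
# `face` would then be handed to the 3D rendering step
```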
Abstract:
Apparatuses, methods and storage medium associated with capturing images are provided. An apparatus may include a face tracker to receive an image frame, analyze the image frame for a face, and on identification of a face in the image frame, evaluate the face to determine whether the image frame comprises an acceptable or unacceptable face pose. Further, on determination that the image frame has an unacceptable face pose, the face tracker may be configured to provide instructions for taking another image frame, with the instructions designed to improve the likelihood that the other image frame will comprise an acceptable face pose.
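A minimal sketch of such a pose check, assuming the tracker yields head angles in degrees and using illustrative thresholds and instruction strings:

```python
def evaluate_pose(yaw, pitch, roll, max_angle=15.0):
    """Judge whether a face pose (angles in degrees) is acceptable;
    if not, return a corrective instruction for the next capture.
    Thresholds and wording are illustrative, not from the abstract."""
    if abs(yaw) > max_angle:
        return False, "turn your head " + ("left" if yaw > 0 else "right")
    if abs(pitch) > max_angle:
        return False, "tilt your head " + ("down" if pitch > 0 else "up")
    if abs(roll) > max_angle:
        return False, "straighten your head"
    return True, None

ok, instruction = evaluate_pose(yaw=30.0, pitch=0.0, roll=0.0)
```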
Abstract:
Apparatuses, methods and storage medium associated with creating an avatar video are disclosed herein. In embodiments, the apparatus may include one or more facial expression engines, an animation-rendering engine, and a video generator. The one or more facial expression engines may be configured to receive video, voice and/or text inputs, and, in response, generate a plurality of animation messages having facial expression parameters that depict facial expressions for a plurality of avatars based at least in part on the video, voice and/or text inputs received. The animation-rendering engine may be configured to receive the animation messages, and drive a plurality of avatar models, to animate and render the plurality of avatars with the facial expressions depicted. The video generator may be configured to capture the animation and rendering of the plurality of avatars, to generate a video. Other embodiments may be described and/or claimed.
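The handoff between the expression engines and the rendering engine can be sketched with a hypothetical message format (field names are placeholders, not from the abstract):

```python
def make_animation_message(avatar_id, expression_params, timestamp):
    """Bundle facial-expression parameters into an animation message
    (field names are illustrative)."""
    return {"avatar": avatar_id, "params": dict(expression_params),
            "t": timestamp}

def drive_avatars(messages):
    """Resolve the latest expression parameters per avatar, as a
    rendering engine might before animating each avatar model."""
    latest = {}
    for msg in sorted(messages, key=lambda m: m["t"]):
        latest[msg["avatar"]] = msg["params"]
    return latest

msgs = [make_animation_message("a1", {"smile": 0.2}, 0),
        make_animation_message("a1", {"smile": 0.9}, 1)]
state = drive_avatars(msgs)
```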
Abstract:
Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. An apparatus may include an avatar animation engine (104) configured to receive a plurality of facial motion parameters and a plurality of head gesture parameters, respectively associated with a face and a head of a user. The plurality of facial motion parameters may depict facial action movements of the face, and the plurality of head gesture parameters may depict head pose gestures of the head. Further, the avatar animation engine (104) may be configured to drive an avatar model with facial and skeleton animations to animate an avatar, using the facial motion parameters and the head gesture parameters, to replicate a facial expression of the user on the avatar that includes the impact of head pose rotation of the user.
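A common way to realize this kind of animation, shown here only as an illustrative sketch, is to blend facial-motion weights over blendshape offsets and then apply the head pose as a rotation of the deformed vertices:

```python
import math

def animate_vertices(neutral, blendshapes, weights, yaw_deg):
    """Blend facial-motion weights over blendshape offsets, then apply
    head yaw so the expression reflects head pose rotation.
    (Toy single-axis model; data layout is illustrative.)"""
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    animated = []
    for i, (x, y, z) in enumerate(neutral):
        # weighted sum of per-vertex blendshape offsets
        dx = sum(w * bs[i][0] for w, bs in zip(weights, blendshapes))
        dy = sum(w * bs[i][1] for w, bs in zip(weights, blendshapes))
        dz = sum(w * bs[i][2] for w, bs in zip(weights, blendshapes))
        px, py, pz = x + dx, y + dy, z + dz
        # rotate the deformed vertex about the y axis (yaw)
        animated.append((c * px + s * pz, py, -s * px + c * pz))
    return animated

neutral = [(1.0, 0.0, 0.0)]             # one vertex
blendshapes = [[(0.0, 1.0, 0.0)]]       # one shape raising that vertex
out = animate_vertices(neutral, blendshapes, weights=[0.5], yaw_deg=90.0)
```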