Abstract:
Apparatus, systems, media and/or methods may involve animating avatars. User facial motion data may be extracted that corresponds to one or more user facial gestures observed by an image capture device when a user emulates a source object. An avatar animation may be provided based on the user facial motion data. Also, script data may be provided to the user and/or the user facial motion data may be extracted when the user utilizes the script data. Moreover, audio may be captured and/or converted to a predetermined tone. Source facial motion data may be extracted and/or an avatar animation may be provided based on the source facial motion data. A degree of match may be determined between the user facial motion data of a plurality of users and the source facial motion data. The user may select an avatar as a user avatar and/or a source object avatar.
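As a rough illustration of the degree-of-match determination described above, the following Python sketch scores each user's facial motion data against the source object's by mean per-frame cosine similarity. The frame-aligned vector layout and the similarity metric are assumptions made for illustration, not the method the abstract specifies.

    import math

    def degree_of_match(user_frames, source_frames):
        # Mean per-frame cosine similarity between motion vectors (assumed metric).
        n = min(len(user_frames), len(source_frames))
        total = 0.0
        for u, s in zip(user_frames, source_frames):
            dot = sum(a * b for a, b in zip(u, s))
            norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in s))
            total += dot / norm if norm else 0.0
        return total / n if n else 0.0

    # Toy motion vectors, e.g. (mouth_open, brow_raise, smile) per frame:
    source = [[0.8, 0.1, 0.3], [0.7, 0.2, 0.4]]
    users = {"user_a": [[0.75, 0.15, 0.32], [0.68, 0.22, 0.38]],
             "user_b": [[0.10, 0.90, 0.05], [0.15, 0.85, 0.10]]}
    ranked = sorted(users, key=lambda u: degree_of_match(users[u], source), reverse=True)
    print(ranked)  # "user_a" emulates the source object more closely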
Abstract:
Systems and methods may provide for identifying one or more facial expressions of a subject in a video signal and generating avatar animation data based on the one or more facial expressions. Additionally, the avatar animation data may be incorporated into an audio file associated with the video signal. In one example, the audio file is sent to a remote client device via a messaging application. Systems and methods may also facilitate the generation of avatar icons and doll animations that mimic the actual facial features and/or expressions of specific individuals.
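One plausible way to "incorporate" animation data into an audio file is to append it as a custom chunk in a RIFF/WAV container; the sketch below does that. The "anim" chunk id, the JSON payload, and the choice of WAV are illustrative assumptions, not the format the abstract specifies.

    import json, struct, wave

    def write_wav(path):
        # A short silent WAV file standing in for the captured/associated audio.
        with wave.open(path, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(16000)
            w.writeframes(b"\x00\x00" * 16000)

    def embed_animation(path, frames):
        data = json.dumps(frames).encode("utf-8")   # e.g. per-frame expression weights
        pad = b"\x00" if len(data) % 2 else b""     # RIFF chunks are word-aligned
        with open(path, "r+b") as f:
            f.seek(0, 2)                            # append after the audio data
            f.write(b"anim" + struct.pack("<I", len(data)) + data + pad)
            riff_size = f.tell() - 8
            f.seek(4)
            f.write(struct.pack("<I", riff_size))   # patch the RIFF size field

    write_wav("message.wav")
    embed_animation("message.wav", [{"t": 0.000, "smile": 0.7},
                                    {"t": 0.033, "smile": 0.8}])

Standard audio players ignore unknown chunks, so a file built this way still plays as ordinary audio while carrying the animation payload to the messaging client.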
Abstract:
The invention pertains to specialized software code incorporated within one or more subject software applications resident on an electronic device, whereby the subject software application's icon, as represented visually on the device's display, changes its visual properties to communicate changes to the user of such device as and when one or more of the functionalities, capabilities, and/or other characteristics of the subject software application have changed, whether by design, by changed circumstance, or per the instruction of a third party (e.g., the subject application software provider, device provider, or bandwidth provider).
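A minimal sketch of that icon-update behavior, assuming an event-driven mapping from capability changes to icon variants; the event names, variants, and render callback are invented for illustration.

    # Map assumed icon states to visual properties shown on the display.
    ICON_VARIANTS = {
        "normal":   {"badge": None,  "tint": "default"},
        "updated":  {"badge": "new", "tint": "blue"},
        "disabled": {"badge": "!",   "tint": "gray"},
    }

    class IconController:
        def __init__(self, render):
            self.render = render      # callback that redraws the icon on the display
            self.state = "normal"

        def on_capability_change(self, event):
            # Translate a functionality/capability change into an icon state.
            if event == "feature_unlocked":
                self.state = "updated"
            elif event == "service_suspended":   # e.g. per a third party's instruction
                self.state = "disabled"
            else:
                self.state = "normal"
            self.render(ICON_VARIANTS[self.state])

    controller = IconController(render=lambda props: print("redraw icon:", props))
    controller.on_capability_change("feature_unlocked")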
Abstract:
Examples of systems and methods for transmitting facial motion data and animating an avatar are generally described herein. A system may include an image capture device to capture a series of images of a face, a facial recognition module to compute facial motion data for each of the images in the series of images, and a communication module to transmit the facial motion data to an animation device, wherein the animation device is to use the facial motion data to animate an avatar on the animation device.
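The capture/recognize/transmit pipeline could look roughly like the sketch below, with the facial recognition module stubbed out and a newline-delimited JSON wire format assumed; the host name and port are placeholders, not values from the source.

    import json, socket

    def compute_facial_motion(frame):
        # Stand-in for the facial recognition module; a real system would fit a
        # face model to the captured image and emit motion parameters.
        return {"frame_id": frame, "mouth_open": 0.2, "eye_blink": 0.0}

    def stream_motion_data(frames, host="animation-device.local", port=9000):
        # Communication module: send one motion record per captured image.
        with socket.create_connection((host, port)) as sock:
            for frame in frames:
                record = compute_facial_motion(frame)
                sock.sendall((json.dumps(record) + "\n").encode("utf-8"))

    # stream_motion_data(range(30))  # requires a listening animation device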
Abstract:
A data connection method for establishing a data connection between a bandwidth user on a network connected user device and a data source on a network connected target device over a communication network, where the user's bandwidth consumption activities incur a bandwidth usage charge from the network operator providing the bandwidth connection, may comprise providing a bandwidth access software application with a bandwidth access software application identification code associated with both the bandwidth access software application and the provider of such bandwidth access software application; allowing user access to at least one specified online data address to access target content hosted on at least one target content source determined by the provider of the bandwidth access software application; and monitoring and recording the online access activity and providing to an application activity system registry server the end user device identification code, the application identification code, and the recorded online access activity.
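A hedged sketch of that allow/monitor/report flow: access is limited to provider-approved hosts, activity is recorded, and the log is posted to a registry server along with the device and application identification codes. The endpoint URL, field names, and whitelist policy are illustrative assumptions.

    import json, time
    from urllib import request

    APP_ID = "bwapp-001"                 # bandwidth access application id (example)
    DEVICE_ID = "device-abc123"          # end user device id (example)
    ALLOWED = {"content.example.com"}    # target sources set by the provider
    activity_log = []

    def access(host, path):
        if host not in ALLOWED:
            raise PermissionError(f"{host} is not a provider-approved target")
        activity_log.append({"host": host, "path": path, "ts": time.time()})
        # ... fetch the target content here ...

    def report(registry_url="https://registry.example.com/activity"):
        # Provide the recorded activity to the application activity registry server.
        body = json.dumps({"device_id": DEVICE_ID, "app_id": APP_ID,
                           "activity": activity_log}).encode("utf-8")
        req = request.Request(registry_url, data=body,
                              headers={"Content-Type": "application/json"})
        return request.urlopen(req)

    access("content.example.com", "/video/42")
    # report()  # would POST the recorded activity to the registry server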
Abstract:
Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. An apparatus may include an avatar animation engine (104) configured to receive a plurality of facial motion parameters and a plurality of head gesture parameters, respectively associated with a face and a head of a user. The plurality of facial motion parameters may depict facial action movements of the face, and the plurality of head gesture parameters may depict head pose gestures of the head. Further, the avatar animation engine (104) may be configured to drive an avatar model with facial and skeleton animations to animate an avatar, using the facial motion parameters and the head gesture parameters, to replicate a facial expression of the user on the avatar that includes the impact of head pose rotation of the user.
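To make the combined facial-plus-skeleton driving concrete, the sketch below applies facial motion parameters as blend-shape weights and then rotates the expressed vertices by a head-yaw pose, so the head rotation carries the facial expression with it. The blend-shape model and the single-axis rotation are simplifying assumptions, not the engine's actual representation.

    import math

    def rotate_yaw(v, yaw):
        # Rotate a vertex about the vertical axis (head yaw), angle in radians.
        x, y, z = v
        c, s = math.cos(yaw), math.sin(yaw)
        return (c * x + s * z, y, -s * x + c * z)

    def animate(neutral, blendshapes, facial_params, head_yaw):
        posed = []
        for i, v in enumerate(neutral):
            # Facial animation: displace each vertex by weighted blend-shape deltas.
            dx, dy, dz = (sum(w * blendshapes[name][i][axis]
                              for name, w in facial_params.items())
                          for axis in range(3))
            # Skeleton animation: rotate the expressed vertex with the head pose,
            # so the rotated head still shows the facial expression.
            posed.append(rotate_yaw((v[0] + dx, v[1] + dy, v[2] + dz), head_yaw))
        return posed

    neutral = [(0.0, 0.0, 1.0), (0.1, -0.2, 1.0)]                # toy head mesh
    blendshapes = {"smile": [(0.0, 0.0, 0.0), (0.02, 0.03, 0.0)]}
    print(animate(neutral, blendshapes, {"smile": 0.8}, head_yaw=math.radians(15)))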
Abstract:
A method for distributed generation of an avatar with a facial expression corresponding to a facial expression of a user includes capturing real-time video of a user of a local computing device (102). The computing device (102) extracts facial parameters of the user's facial expression using the captured video and transmits the extracted facial parameters to a server (106). The server (106) generates an avatar video of an avatar having a facial expression corresponding to the user's facial expression as a function of the extracted facial parameters and transmits the avatar video to a remote computing device (108).
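The division of labor might be sketched as follows, with all three roles reduced to stub functions: the local device extracts small parameter sets, the server does the heavy avatar rendering, and the result is delivered to the remote device. The function bodies and parameter names are illustrative stand-ins.

    def extract_facial_parameters(video_frame):
        # Runs on the local computing device (102); cheap compared to rendering.
        return {"mouth_open": 0.4, "brow_raise": 0.1}

    def render_avatar_frame(params):
        # Runs on the server (106); full avatar rendering happens here, so the
        # local device uploads only small parameter sets, never raw video.
        return f"avatar frame with {params}"

    def deliver(frame):
        # Server transmits the rendered avatar video to the remote device (108).
        print("sent to remote device:", frame)

    for raw_frame in ["frame0", "frame1"]:
        deliver(render_avatar_frame(extract_facial_parameters(raw_frame)))

The design point of such a split is bandwidth and client cost: a handful of floats per frame crosses the network instead of raw video, and the rendering workload stays off the capture device.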
Abstract:
Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, an apparatus may include a facial mesh tracker to receive a plurality of image frames, detect facial action movements of a face and head pose gestures of a head within the plurality of image frames, and output a plurality of facial motion parameters and head pose parameters that depict the facial action movements and head pose gestures detected, all in real time, for animation and rendering of an avatar. The facial action movements and head pose gestures may be detected through inter-frame differences for a mouth and an eye, or the head, based on pixel sampling of the image frames. The facial action movements may include opening or closing of a mouth and blinking of an eye. The head pose gestures may include head rotation such as pitch, yaw, and roll; head movement along the horizontal and vertical directions; and movement of the head closer to or farther from the camera. Other embodiments may be described and/or claimed.
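The inter-frame-difference idea can be illustrated with a toy sketch that samples pixels inside assumed mouth and eye regions of consecutive frames and thresholds the mean absolute difference; the region boxes, sample stride, and thresholds are invented for illustration and are not the tracker's actual parameters.

    def region_diff(prev, curr, box, stride=2):
        # Mean absolute difference over sampled pixels inside an (x0, y0, x1, y1) box.
        x0, y0, x1, y1 = box
        total = count = 0
        for y in range(y0, y1, stride):
            for x in range(x0, x1, stride):
                total += abs(curr[y][x] - prev[y][x])
                count += 1
        return total / count

    MOUTH_BOX, EYE_BOX = (2, 6, 8, 9), (2, 1, 8, 3)   # toy coordinates, 10x10 frame
    prev = [[0] * 10 for _ in range(10)]
    curr = [row[:] for row in prev]
    curr[6][4] = curr[6][6] = 200                      # bright change near the mouth

    if region_diff(prev, curr, MOUTH_BOX) > 5:
        print("mouth opening/closing detected")
    if region_diff(prev, curr, EYE_BOX) > 5:
        print("eye blink detected")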
Abstract:
Systems and methods may provide for detecting a condition with respect to one or more frames of a video signal associated with a set of facial motion data and modifying, in response to the condition, the set of facial motion data to indicate that the one or more frames lack facial motion data. Additionally, an avatar animation may be initiated based on the modified set of facial motion data. In one example, the condition is one or more of a buffer overflow condition and a tracking failure condition.
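A minimal sketch of that frame-marking behavior, assuming a None sentinel for "lacks facial motion data": frames hit by a buffer overflow or tracking failure are marked in the set, and the animator holds the previous pose for them. The sentinel value, data layout, and hold-pose policy are assumptions.

    NO_DATA = None  # sentinel meaning "this frame lacks facial motion data"

    def mark_bad_frames(motion_data, overflow_frames, tracking_failed_frames):
        # Modify the set so affected frames explicitly indicate missing data.
        bad = set(overflow_frames) | set(tracking_failed_frames)
        return [NO_DATA if i in bad else frame for i, frame in enumerate(motion_data)]

    def animate(motion_data):
        for i, frame in enumerate(motion_data):
            if frame is NO_DATA:
                print(f"frame {i}: hold previous pose (no motion data)")
            else:
                print(f"frame {i}: apply {frame}")

    motion = [{"mouth": 0.1}, {"mouth": 0.5}, {"mouth": 0.6}, {"mouth": 0.2}]
    animate(mark_bad_frames(motion, overflow_frames=[1], tracking_failed_frames=[2]))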
Abstract:
The invention pertains to specialized software code that is incorporated within one or more subject software applications resident on an electronic communications device and that monitors, audits, provisions, and bills for bandwidth paid for by any one or more of a bandwidth provider, a third party, or an end user, individually or in unison.
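A toy sketch of apportioning a metered bandwidth charge across multiple payers; the flat rate and the share-based split are invented for illustration and merely stand in for the provisioning and billing logic the abstract describes.

    RATE_PER_MB = 0.02  # example billing rate, an assumption

    def bill(usage_mb, shares):
        # Split a metered usage charge across payers by fractional share.
        assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
        charge = usage_mb * RATE_PER_MB
        return {payer: round(charge * share, 4) for payer, share in shares.items()}

    # A sponsored session where a third party covers most of the bandwidth:
    print(bill(usage_mb=250, shares={"third_party": 0.8, "end_user": 0.2}))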