    Emotion augmented avatar animation

    Publication No.: US10776980B2

    Publication Date: 2020-09-15

    Application No.: US16241937

    Filing Date: 2019-01-07

    Applicant: Intel Corporation

    Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
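
    The abstract describes a simple pipeline: facial data comes in, an emotion state is estimated from it, and supplemental animation is layered on top of the base avatar animation. The following is a minimal Python sketch of that flow; the types, thresholds, and effect names (FacialData, estimate_emotion, EMOTION_EFFECTS) are illustrative assumptions, not the patented implementation.

```python
# Minimal sketch of the emotion-augmentation idea in the abstract: facial data in,
# an estimated emotion state, and extra clips selected to supplement (not replace)
# the base avatar animation. All names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class FacialData:
    """Per-frame facial observation, e.g. blend-shape weights in [0, 1]."""
    blend_weights: Dict[str, float]


# Hypothetical mapping from a detected emotion to supplemental animation clips.
EMOTION_EFFECTS: Dict[str, List[str]] = {
    "happy": ["sparkle_eyes", "bounce_idle"],
    "angry": ["steam_ears", "red_tint"],
    "neutral": [],
}


def estimate_emotion(face: FacialData) -> str:
    """Toy emotion classifier from blend-shape weights (stand-in for a real model)."""
    smile = face.blend_weights.get("mouth_smile", 0.0)
    brow_down = face.blend_weights.get("brow_lower", 0.0)
    if smile > 0.6:
        return "happy"
    if brow_down > 0.6:
        return "angry"
    return "neutral"


def augment_animation(face: FacialData) -> List[str]:
    """Return supplemental clips to layer onto the base avatar animation."""
    return EMOTION_EFFECTS[estimate_emotion(face)]


if __name__ == "__main__":
    frame = FacialData(blend_weights={"mouth_smile": 0.8, "brow_lower": 0.1})
    print(augment_animation(frame))  # -> ['sparkle_eyes', 'bounce_idle']
```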

    Positional analysis using computer vision sensor synchronization

    Publication No.: US10828549B2

    Publication Date: 2020-11-10

    Application No.: US15574109

    Filing Date: 2016-12-30

    Applicant: Intel Corporation

    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
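
    The abstract outlines a sensor/video synchronization pipeline: sensor data gates video capture, a key stage found in the sensor data supplies a timestamp, that timestamp selects the matching key frame, and a skeletal map built from the frame's key points drives the choice of instructional feedback. Below is a minimal Python sketch of that chain; every type, threshold, and coaching rule (SensorSample, detect_key_stage, select_instruction) is an illustrative assumption, not the claimed implementation.

```python
# Minimal sketch of the pipeline in the abstract: sensor data -> key stage ->
# timestamp-matched key frame -> skeletal map -> instructional data.
# All data types and rules here are toy stand-ins.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class SensorSample:
    timestamp: float     # seconds
    acceleration: float  # magnitude, arbitrary units


@dataclass
class VideoFrame:
    timestamp: float
    key_points: Dict[str, Tuple[float, float]]  # joint name -> (x, y) in image coords


def detect_key_stage(samples: List[SensorSample]) -> SensorSample:
    """Toy 'key stage' detector: the sample with peak acceleration (e.g. impact)."""
    return max(samples, key=lambda s: s.acceleration)


def select_key_frame(frames: List[VideoFrame], timestamp: float) -> VideoFrame:
    """Pick the frame whose timestamp is closest to the sensor-derived key stage."""
    return min(frames, key=lambda f: abs(f.timestamp - timestamp))


def skeletal_map(frame: VideoFrame) -> Dict[str, Tuple[float, float]]:
    """Here the skeletal map is simply the key points extracted from the frame."""
    return frame.key_points


def select_instruction(skeleton: Dict[str, Tuple[float, float]]) -> str:
    """Toy rule: in image coordinates, smaller y is higher in the frame."""
    if skeleton["wrist"][1] < skeleton["shoulder"][1]:
        return "Good follow-through: wrist finishes above the shoulder."
    return "Raise your follow-through so the wrist ends above the shoulder."


if __name__ == "__main__":
    sensors = [SensorSample(t, a) for t, a in [(0.10, 0.2), (0.25, 3.1), (0.40, 0.5)]]
    frames = [
        VideoFrame(0.10, {"wrist": (0.4, 0.9), "shoulder": (0.5, 0.5)}),
        VideoFrame(0.25, {"wrist": (0.6, 0.3), "shoulder": (0.5, 0.5)}),
    ]
    key = detect_key_stage(sensors)
    frame = select_key_frame(frames, key.timestamp)
    print(select_instruction(skeletal_map(frame)))
```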

    Emotion augmented avatar animation

    Publication No.: US10176619B2

    Publication Date: 2019-01-08

    Application No.: US15102200

    Filing Date: 2015-07-30

    Applicant: Intel Corporation

    Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.

    Avatar animation system
    Granted Patent

    Publication No.: US10475225B2

    Publication Date: 2019-11-12

    Application No.: US15124811

    Filing Date: 2015-12-18

    Applicant: INTEL CORPORATION

    IPC Classes: G06T13/40 G06T7/73 G06T17/20

    Abstract: Avatar animation systems disclosed herein provide high quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor. The machine learning component may be trained using a set of training images that depict human facial expressions and avatar animation authored by professional animators to reflect facial expressions depicted in the set of training images.
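
    The abstract describes a multi-frame regressor: features from two or more temporally sequential images (optionally together with previously generated avatar frames) are mapped to a single set of avatar blend-shape parameters. The sketch below illustrates that input/output shape with an ordinary least-squares linear model standing in for whatever learned component the patent actually uses; the dimensions, variable names, and training data are illustrative assumptions only.

```python
# Toy stand-in for the multi-frame regressor idea: a window of consecutive-frame
# facial features plus the previously predicted avatar parameters is regressed to
# one blend-shape vector. A plain least-squares model replaces the patent's
# machine learning component; all dimensions and data here are synthetic.
import numpy as np

N_FRAMES = 3        # temporal window of input images
N_FEATURES = 16     # facial-expression features per frame (e.g. landmarks/AUs)
N_BLENDSHAPES = 8   # avatar blend-shape weights produced per output frame

rng = np.random.default_rng(0)

# Synthetic training set: in practice the targets would be animator-authored
# avatar frames matching the facial expressions in the training images.
X = rng.normal(size=(500, N_FRAMES * N_FEATURES + N_BLENDSHAPES))
true_w = rng.normal(size=(X.shape[1], N_BLENDSHAPES))
Y = X @ true_w + 0.01 * rng.normal(size=(500, N_BLENDSHAPES))

# "Train" the regressor with ordinary least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)


def regress_blend_shapes(frame_window: np.ndarray, prev_output: np.ndarray) -> np.ndarray:
    """Map N_FRAMES of facial features + the previous avatar frame to one blend-shape vector."""
    x = np.concatenate([frame_window.ravel(), prev_output])
    return x @ W


if __name__ == "__main__":
    prev = np.zeros(N_BLENDSHAPES)
    window = rng.normal(size=(N_FRAMES, N_FEATURES))  # features for 3 sequential images
    prev = regress_blend_shapes(window, prev)         # one animation frame out
    print(prev.shape)  # (8,)
```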

    Positional analysis using computer vision sensor synchronization

    Publication No.: US11383144B2

    Publication Date: 2022-07-12

    Application No.: US17093215

    Filing Date: 2020-11-09

    Applicant: Intel Corporation

    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.

    POSITIONAL ANALYSIS USING COMPUTER VISION SENSOR SYNCHRONIZATION

    Publication No.: US20210069571A1

    Publication Date: 2021-03-11

    Application No.: US17093215

    Filing Date: 2020-11-09

    Applicant: Intel Corporation

    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.

    AVATAR ANIMATION SYSTEM
    Patent Application

    Publication No.: US20200051306A1

    Publication Date: 2020-02-13

    Application No.: US16655686

    Filing Date: 2019-10-17

    Applicant: INTEL CORPORATION

    IPC Classes: G06T13/40 G06T7/73

    Abstract: Avatar animation systems disclosed herein provide high quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor. The machine learning component may be trained using a set of training images that depict human facial expressions and avatar animation authored by professional animators to reflect facial expressions depicted in the set of training images.