-
Publication number: US10475225B2
Publication date: 2019-11-12
Application number: US15124811
Application date: 2015-12-18
Applicant: INTEL CORPORATION
Inventor: Minje Park , Tae-Hoon Kim , Myung-Ho Ju , Jihyeon Yi , Xiaolu Shen , Lidan Zhang , Qiang Li
Abstract: Avatar animation systems disclosed herein provide high-quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high-quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. The multi-frame regressor implements a machine learning component that generates the high-quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor. The machine learning component may be trained using a set of training images that depict human facial expressions and avatar animation authored by professional animators to reflect the facial expressions depicted in the set of training images.
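The core idea of the abstract, mapping features from several temporally sequential face images to a single avatar blend-shape weight vector, can be sketched as follows. This is a minimal illustrative stand-in, not the patented method: the patent's regressor is a trained machine learning component, while here a plain least-squares linear map is used, and the class name, feature dimensions, and landmark/blend-shape counts are all hypothetical.

```python
import numpy as np

# Hypothetical sizes: D landmark features per frame, N sequential
# frames per input window, B blend-shape weights per avatar frame.
N_FRAMES, D_FEATURES, B_SHAPES = 3, 68, 51

class MultiFrameRegressor:
    """Toy linear stand-in for a multi-frame regressor: maps features
    from N temporally sequential face images to the blend-shape weight
    vector of a single avatar animation frame."""

    def __init__(self, n_frames=N_FRAMES, d=D_FEATURES, b=B_SHAPES):
        self.n_frames = n_frames
        self.W = None  # learned weights, shape (n_frames * d, b)

    def fit(self, frame_windows, target_weights):
        # frame_windows: (samples, n_frames, d) stacked sequential frames
        # target_weights: (samples, b) animator-authored blend-shape weights
        X = frame_windows.reshape(len(frame_windows), -1)
        self.W, *_ = np.linalg.lstsq(X, target_weights, rcond=None)

    def predict(self, frame_window):
        # frame_window: (n_frames, d) features from sequential images
        return frame_window.reshape(1, -1) @ self.W  # -> (1, b)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, N_FRAMES, D_FEATURES))
Y = rng.normal(size=(200, B_SHAPES))
reg = MultiFrameRegressor()
reg.fit(X, Y)
weights = reg.predict(X[0])  # one blend-shape weight vector per window
```

Feeding a sliding window of frames, rather than a single frame, is what lets the regressor exploit temporal context when producing each animation frame.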
-
Publication number: US20180353836A1
Publication date: 2018-12-13
Application number: US15574109
Application date: 2016-12-30
Applicant: Intel Corporation
Inventor: Qiang Eric Li , Wenlong Li , Shaohui Jiao , Yikai Fang , Xiaolu Shen
CPC classification number: A63B71/06 , G06K9/00342 , G06K9/00711 , G06K9/00724 , G06K9/44 , G06K2009/00738 , G06T7/20 , G06T2207/30221 , G06T2207/30241 , G09B19/0038
Abstract: Systems and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
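The key-frame selection step described above, picking the video frame whose timestamp best matches the sensor timestamp that marked the key stage, can be sketched as below. This is an illustrative sketch only; the function name and the assumption of a sorted list of frame timestamps are hypothetical, not taken from the patent.

```python
from bisect import bisect_left

def select_key_frame(frame_timestamps, key_stage_ts):
    """Return the index of the video frame whose timestamp is closest
    to the sensor timestamp identifying the key stage of the activity.
    frame_timestamps must be sorted ascending."""
    i = bisect_left(frame_timestamps, key_stage_ts)
    if i == 0:
        return 0
    if i == len(frame_timestamps):
        return len(frame_timestamps) - 1
    before, after = frame_timestamps[i - 1], frame_timestamps[i]
    # Pick whichever neighboring frame is nearer to the sensor event.
    return i if after - key_stage_ts < key_stage_ts - before else i - 1

# 30 fps video starting at t = 0.0 s; sensors flag the key stage at 1.01 s.
frames = [k / 30.0 for k in range(120)]
idx = select_key_frame(frames, 1.01)  # frame 30 (t = 1.000 s) is nearest
```

The selected frame index would then be handed to the pose-estimation stage, which extracts key points and builds the skeletal map for that frame.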
-