-
Publication No.: US09679412B2
Publication Date: 2017-06-13
Application No.: US14443337
Filing Date: 2014-06-20
Applicant: INTEL CORPORATION, Minje Park, Olivier Duchenne, Yeongjae Cheon, Tae-Hoon Kim, Xiaolu Shen, Yangzhou Du, Wooju Ryu, Myung-Ho Ju
Inventors: Minje Park, Olivier Duchenne, Yeongjae Cheon, Tae-Hoon Kim, Xiaolu Shen, Yangzhou Du, Wooju Ryu, Myung-Ho Ju
CPC Classes: G06T17/205, G06K9/00208, G06K9/00214, G06K9/00221, G06K9/00248, G06K9/00261, G06K9/00281, G06T7/251, G06T17/20, G06T2200/04, G06T2207/30201
Abstract: Apparatuses, methods and storage medium associated with 3D face model reconstruction are disclosed herein. In embodiments, an apparatus may include a facial landmark detector, a model fitter and a model tracker. The facial landmark detector may be configured to detect a plurality of landmarks of a face and their locations within each of a plurality of image frames. The model fitter may be configured to generate a 3D model of the face from a 3D model of a neutral face, in view of detected landmarks of the face and their locations within a first one of the plurality of image frames. The model tracker may be configured to maintain the 3D model to track the face in subsequent image frames, successively updating the 3D model in view of detected landmarks of the face and their locations within each of successive ones of the plurality of image frames. In embodiments, the facial landmark detector may include a face detector, an initial facial landmark detector, and one or more facial landmark detection linear regressors. Other embodiments may be described and/or claimed.
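The fit-then-track flow this abstract describes can be sketched as below. Every class and method name is hypothetical (not Intel's API), and the landmark regression and mesh-deformation math are stubbed with placeholders; only the control flow (fit the model once on the first frame, then successively update it on later frames) follows the abstract.

```python
import numpy as np

class FaceModelPipeline:
    """Sketch of the fit-then-track flow: fit a 3D face model from a
    neutral-face model on the first frame, then update it per frame."""

    def __init__(self, neutral_face_model):
        self.neutral = neutral_face_model  # neutral-face vertices, shape (N, 3)
        self.model = None                  # the fitted, tracked face model

    def detect_landmarks(self, frame):
        # Stand-in for the face detector, initial landmark detector, and
        # linear regressors; returns fixed 2D points for illustration.
        return np.zeros((68, 2))

    def fit(self, first_frame):
        # Generate the face model from the neutral model, in view of the
        # landmarks detected in the first frame.
        landmarks = self.detect_landmarks(first_frame)
        self.model = self.neutral.copy()  # a real fitter deforms toward landmarks
        return self.model

    def track(self, frame):
        # Successively update the existing model on each subsequent frame.
        assert self.model is not None, "call fit() on the first frame"
        landmarks = self.detect_landmarks(frame)
        return self.model  # a real tracker adjusts vertices per landmarks
```

A caller would invoke `fit()` once, then `track()` for every later frame of the stream.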
-
Publication No.: US11449592B2
Publication Date: 2022-09-20
Application No.: US17066138
Filing Date: 2020-10-08
Applicant: Intel Corporation
Inventors: Wenlong Li, Xiaolu Shen, Lidan Zhang, Jose E. Lorenzo, Qiang Li, Steven Holmes, Xiaofeng Tong, Yangzhou Du, Mary Smiley, Alok Mishra
IPC Classes: G06F3/048, G06F21/32, H04L9/40, H04L9/32, G06F21/30, H04W12/06, H04W12/065, H04W12/68, G06V40/20, G06F3/01, G06F21/36, G06T19/00
Abstract: An example apparatus is disclosed herein that includes a memory and at least one processor. The at least one processor is to execute instructions to: select a gesture from a database, the gesture including a sequence of poses; translate the selected gesture into an animated avatar performing the selected gesture for display at a display device; display a prompt for the user to perform the selected gesture performed by the animated avatar; capture an image of the user performing the selected gesture; and perform a comparison between a gesture performed by the user in the captured image and the selected gesture to determine whether there is a match between the gesture performed by the user and the selected gesture.
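The challenge-response flow above can be sketched in a few lines. The gesture database, pose names, and the exact-match comparison are all illustrative assumptions; a real system would render the avatar animation and run pose estimation on the captured images.

```python
import random

# Hypothetical gesture database: each gesture is a named sequence of poses.
GESTURES = {
    "wave": ["arm_up", "arm_left", "arm_right"],
    "salute": ["arm_up", "hand_to_brow"],
}

def select_gesture(db):
    # Pick a gesture at random as the authentication challenge.
    name = random.choice(sorted(db))
    return name, db[name]

def poses_match(observed, expected):
    # Crude comparison: authenticate only if the observed pose sequence
    # reproduces the expected one exactly.
    return observed == expected

name, poses = select_gesture(GESTURES)
# ... animate an avatar performing `poses`, prompt the user, capture
# images, and extract the user's pose sequence from them ...
observed = poses  # stand-in for the pose-estimation result
authenticated = poses_match(observed, poses)
```

In practice the comparison would tolerate timing and pose variation rather than require exact equality.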
-
Publication No.: US10776980B2
Publication Date: 2020-09-15
Application No.: US16241937
Filing Date: 2019-01-07
Applicant: Intel Corporation
Inventors: Shaohui Jiao, Xiaolu Shen, Lidan Zhang, Qiang Li, Wenlong Li
Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
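The augmentation engine's facial-data-to-supplemental-animation path might look like the sketch below. The emotion labels, the threshold classifier, and the animation names are all invented for illustration; real emotion classification would use a trained model over many facial features.

```python
# Hypothetical mapping from an inferred emotion state to a supplemental
# animation layered on top of the avatar's base animation.
SUPPLEMENTAL = {
    "happy": "sparkle_particles",
    "sad": "rain_cloud",
    "neutral": None,
}

def classify_emotion(facial_data):
    # Stand-in classifier: thresholds one facial feature instead of a
    # learned model over the full facial data.
    smile = facial_data.get("mouth_corner_lift", 0.0)
    if smile > 0.5:
        return "happy"
    if smile < -0.5:
        return "sad"
    return "neutral"

def augment_animation(facial_data):
    # Determine the emotion state, then pick the animation that
    # supplements the base avatar animation for that state.
    state = classify_emotion(facial_data)
    return SUPPLEMENTAL[state]
```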
-
Publication No.: US10828549B2
Publication Date: 2020-11-10
Application No.: US15574109
Filing Date: 2016-12-30
Applicant: Intel Corporation
Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
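The central synchronization step, selecting a key frame from the video stream using the sensor timestamp that identified the key stage, can be sketched as a nearest-timestamp lookup. The frame representation and function name are assumptions, not the patented implementation.

```python
def select_key_frame(frames, key_stage_ts):
    """Pick the video frame whose timestamp is closest to the sensor
    timestamp that identified the key stage of the activity.

    `frames` is a list of (timestamp, frame_id) pairs, assumed sorted
    by capture time.
    """
    return min(frames, key=lambda f: abs(f[0] - key_stage_ts))

# Example: a sensor event at t=0.9s maps to the frame captured at t=1.0s.
frames = [(0.0, "f0"), (0.5, "f1"), (1.0, "f2"), (1.5, "f3")]
key_frame = select_key_frame(frames, key_stage_ts=0.9)
```

The selected key frame would then feed key-point extraction and skeletal-map generation downstream.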
-
Publication No.: US10803157B2
Publication Date: 2020-10-13
Application No.: US14911390
Filing Date: 2015-03-28
Applicant: Intel Corporation
Inventors: Wenlong Li, Xiaolu Shen, Lidan Zhang, Jose E. Lorenzo, Qiang Li, Steven Holmes, Xiaofeng Tong, Yangzhou Du, Mary Smiley, Alok Mishra
IPC Classes: G06F3/048, G06F21/32, H04L29/06, H04L9/32, H04W12/06, G06F21/30, G06F3/01, G06F21/36, G06K9/00, G06T19/00
Abstract: A mechanism is described to facilitate gesture matching according to one embodiment. A method of embodiments, as described herein, includes selecting a gesture from a database during an authentication phase, translating the selected gesture into an animated avatar, displaying the avatar, prompting a user to perform the selected gesture, capturing a real-time image of the user and comparing the gesture performed by the user in the captured image to the selected gesture to determine whether there is a match.
-
Publication No.: US10176619B2
Publication Date: 2019-01-08
Application No.: US15102200
Filing Date: 2015-07-30
Applicant: Intel Corporation
Inventors: Shaohui Jiao, Xiaolu Shen, Lidan Zhang, Qiang Li, Wenlong Li
Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
-
Publication No.: US10475225B2
Publication Date: 2019-11-12
Application No.: US15124811
Filing Date: 2015-12-18
Applicant: INTEL CORPORATION
Inventors: Minje Park, Tae-Hoon Kim, Myung-Ho Ju, Jihyeon Yi, Xiaolu Shen, Lidan Zhang, Qiang Li
Abstract: Avatar animation systems disclosed herein provide high quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high quality avatar animation is enabled at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor. The machine learning component may be trained using a set of training images that depict human facial expressions and avatar animation authored by professional animators to reflect facial expressions depicted in the set of training images.
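The multi-frame mapping described above, from the features of two or more temporally sequential frames to a single blend-shape vector, can be sketched with a linear map standing in for the learned machine-learning component. The function name, window size, and weight shapes are assumptions for illustration.

```python
import numpy as np

def multi_frame_regress(frame_features, weights, window=2):
    """Sketch of a multi-frame regressor: each blend-shape vector is
    predicted from the stacked features of `window` sequential frames.

    frame_features: array of shape (T, d), one d-dim feature row per frame.
    weights: learned linear map of shape (window * d, k) for k blend shapes.
    Returns an array of shape (T - window + 1, k).
    """
    outputs = []
    for t in range(window - 1, len(frame_features)):
        # Stack the current frame with its preceding frames in the window.
        stacked = frame_features[t - window + 1 : t + 1].reshape(-1)
        outputs.append(stacked @ weights)  # one blend-shape vector per step
    return np.array(outputs)
```

A trained system would replace the linear map with the learned regressor and could also feed back previously generated animation frames, as the abstract notes.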
-
Publication No.: US20180353836A1
Publication Date: 2018-12-13
Application No.: US15574109
Filing Date: 2016-12-30
Applicant: Intel Corporation
Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen
CPC Classes: A63B71/06, G06K9/00342, G06K9/00711, G06K9/00724, G06K9/44, G06K2009/00738, G06T7/20, G06T2207/30221, G06T2207/30241, G09B19/0038
Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
-
Publication No.: US11383144B2
Publication Date: 2022-07-12
Application No.: US17093215
Filing Date: 2020-11-09
Applicant: Intel Corporation
Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
-
Publication No.: US20210069571A1
Publication Date: 2021-03-11
Application No.: US17093215
Filing Date: 2020-11-09
Applicant: Intel Corporation
Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.