-
Publication Number: US20160014392A1
Publication Date: 2016-01-14
Application Number: US14553912
Application Date: 2014-11-25
Applicant: Microsoft Technology Licensing, LLC
Inventor: Lin Liang, Christian F. Huitema, Matthew Adam Simari, Sean Eron Anderson
CPC classification number: G06T11/60, G06K9/00201, G06K9/00234, G06K9/4619, G06T7/11, G06T7/162, G06T2207/10024, G06T2207/10028, G06T2207/30201, H04N7/18, H04N13/239, H04N13/271
Abstract: A method for operating an image processing device coupled to a color camera and a depth camera is provided. The method includes receiving a color image of a 3-dimensional scene from the color camera; receiving a depth map of the 3-dimensional scene from the depth camera; generating an aligned 3-dimensional face mesh from the depth map and from a plurality of color images, received from the color camera, that indicate movement of a subject's head within the 3-dimensional scene; determining a head region based on the depth map; segmenting the head region into a plurality of facial sections based on the color image, the depth map, and the aligned 3-dimensional face mesh; and overlaying the plurality of facial sections on the color image.
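As an illustration of the head-region and overlay steps above, the following is a minimal Python sketch, not taken from the patent: it assumes a metric depth map in which zero encodes a missing reading, approximates the head as the blob of pixels within a fixed band of the nearest valid depth, and alpha-blends the resulting mask onto the color image. All names (head_region_mask, overlay_mask, band_m) are illustrative assumptions.

import numpy as np

def head_region_mask(depth, band_m=0.35):
    # Depth of 0 commonly encodes "no reading"; keep only valid pixels.
    valid = depth > 0
    nearest = depth[valid].min()
    # Keep everything within band_m meters of the nearest surface.
    return valid & (depth <= nearest + band_m)

def overlay_mask(color, mask, tint=(0, 255, 0), alpha=0.4):
    # Alpha-blend a tint over the masked pixels of an RGB image.
    out = color.astype(np.float32)
    out[mask] = (1 - alpha) * out[mask] + alpha * np.array(tint, np.float32)
    return out.astype(np.uint8)

# Usage with synthetic data: background at 2 m, a "head" blob at 0.8 m.
depth = np.full((480, 640), 2.0, dtype=np.float32)
depth[100:300, 250:400] = 0.8
color = np.zeros((480, 640, 3), dtype=np.uint8)
blended = overlay_mask(color, head_region_mask(depth))

A real pipeline would use the face mesh and color cues to split this region into the claimed facial sections; the depth band here only isolates the head-sized foreground blob.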
-
Publication Number: US09959627B2
Publication Date: 2018-05-01
Application Number: US14705900
Application Date: 2015-05-06
Applicant: Microsoft Technology Licensing, LLC
Inventor: Nikolay Smolyanskiy, Christian F. Huitema, Cha Zhang, Lin Liang, Sean Eron Anderson, Zhengyou Zhang
CPC classification number: G06T7/20, G06K9/00, G06T7/579, G06T13/20, G06T17/00, G06T2200/08, G06T2207/10012, G06T2207/10016, G06T2207/30201
Abstract: A three-dimensional shape parameter computation system and method for computing three-dimensional human head shape parameters from two-dimensional facial feature points. A series of images containing a user's face is captured. Embodiments of the system and method deduce the 3D parameters of the user's head by examining a series of captured images of the user over time, in a variety of head poses and facial expressions, and then computing an average. An energy function is constructed over a batch of frames containing 2D face feature points obtained from the captured images, and the energy function is minimized to solve for the head shape parameters valid for the batch of frames. Head pose parameters and facial expression and animation parameters can vary over each captured image in the batch of frames. In some embodiments, this minimization is performed with a modified Gauss-Newton technique that uses a single iteration.
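To make the batch formulation concrete, below is a minimal Python sketch of one Gauss-Newton iteration over residuals stacked across a batch of frames. It assumes, purely for illustration, a linear map from shape parameters to 2D feature points (in which case a single iteration is exact); the names and dimensions are hypothetical, and the patent's modified Gauss-Newton technique is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
n_params, n_frames, n_points = 10, 30, 49

# Placeholder linear model: the 2D feature points of all frames, stacked
# into one vector, equal a mean term plus a basis times the shape parameters.
basis = rng.normal(size=(n_frames * n_points * 2, n_params))
mean_pts = rng.normal(size=n_frames * n_points * 2)
true_shape = rng.normal(size=n_params)
observed = mean_pts + basis @ true_shape        # synthetic observations

def predict_points(shape):
    return mean_pts + basis @ shape

def gauss_newton_step(shape):
    # One Gauss-Newton iteration: solve the normal equations
    # (J^T J) dx = J^T r for the parameter update dx.
    residual = observed - predict_points(shape)
    J = basis                                   # Jacobian of the linear model
    dx = np.linalg.solve(J.T @ J, J.T @ residual)
    return shape + dx

shape = gauss_newton_step(np.zeros(n_params))
assert np.allclose(shape, true_shape)           # exact after one step here

With a real, nonlinear projection model, the Jacobian would be recomputed from the current estimate, and per-frame pose and expression parameters would enter the residual as well.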
-
Publication Number: US10274737B2
Publication Date: 2019-04-30
Application Number: US15056804
Application Date: 2016-02-29
Applicant: Microsoft Technology Licensing, LLC
Inventor: Nikolai Smolyanskiy, Zhengyou Zhang, Sean Eron Anderson, Michael Hall
IPC: G09G5/36, G02B27/01, B60R11/04, G06T3/00, H04N5/232, H04N19/00, H04N7/18, H04N13/00, H04N21/414, H04N21/4223
Abstract: A vehicle camera system captures and transmits video to a user device, which includes a viewing device, such as virtual reality or augmented reality glasses, for playback of the captured video. A rendering map is generated that indicates which pixels of the video frame (as identified by particular coordinates of the video frame) correspond to which coordinates of a virtual sphere in which a portion of the video frame is rendered for display. When a video frame is received, the rendering map is used to determine the texture values (e.g., colors) for coordinates in the virtual sphere, which are then used to generate the display for the user. This technique reduces the rendering time when a user turns his or her head (e.g., while in virtual reality), and thus reduces the motion and/or virtual reality sickness induced by rendering lag.
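Below is a minimal Python sketch of the rendering-map idea, under the assumption (not stated in the abstract) that the video frame is an equirectangular panorama: the mapping from sphere coordinates to frame pixels is precomputed once, so each received frame needs only a cheap gather. The names build_rendering_map and apply_rendering_map are illustrative.

import numpy as np

def build_rendering_map(frame_w, frame_h, sphere_w, sphere_h):
    # For each sphere sample (longitude, latitude), precompute the frame
    # pixel it reads from, assuming an equirectangular frame layout.
    lon = np.linspace(-np.pi, np.pi, sphere_w, endpoint=False)
    lat = np.linspace(-np.pi / 2, np.pi / 2, sphere_h, endpoint=False)
    cols = np.clip(((lon / (2 * np.pi) + 0.5) * frame_w).astype(int), 0, frame_w - 1)
    rows = np.clip(((lat / np.pi + 0.5) * frame_h).astype(int), 0, frame_h - 1)
    return np.meshgrid(rows, cols, indexing="ij")

def apply_rendering_map(frame, rmap):
    # Per received frame, texture lookup is an indexed gather rather than
    # a per-pixel projection.
    rows, cols = rmap
    return frame[rows, cols]

rmap = build_rendering_map(1920, 960, 1024, 512)   # precomputed once
frame = np.zeros((960, 1920, 3), dtype=np.uint8)   # one received video frame
sphere_texture = apply_rendering_map(frame, rmap)  # shape (512, 1024, 3)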
-
Publication Number: US09767586B2
Publication Date: 2017-09-19
Application Number: US14553912
Application Date: 2014-11-25
Applicant: Microsoft Technology Licensing, LLC
Inventor: Lin Liang, Christian F. Huitema, Matthew Adam Simari, Sean Eron Anderson
CPC classification number: G06T11/60, G06K9/00201, G06K9/00234, G06K9/4619, G06T7/11, G06T7/162, G06T2207/10024, G06T2207/10028, G06T2207/30201, H04N7/18, H04N13/239, H04N13/271
Abstract: A method for operating an image processing device coupled to a color camera and a depth camera is provided. The method includes receiving a color image of a 3-dimensional scene from the color camera; receiving a depth map of the 3-dimensional scene from the depth camera; generating an aligned 3-dimensional face mesh from the depth map and from a plurality of color images, received from the color camera, that indicate movement of a subject's head within the 3-dimensional scene; determining a head region based on the depth map; segmenting the head region into a plurality of facial sections based on the color image, the depth map, and the aligned 3-dimensional face mesh; and overlaying the plurality of facial sections on the color image.
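This grant shares its abstract with application US20160014392A1 above; as a complement to the segmentation sketch there, here is a minimal Python sketch of the alignment facet: rigidly fitting face-mesh points to corresponding 3D points lifted from the depth map, using the Kabsch algorithm. The patent does not name this method, the correspondences are assumed given, and all identifiers are hypothetical.

import numpy as np

def kabsch_align(mesh_pts, scene_pts):
    # Find R, t minimizing ||(mesh_pts @ R.T + t) - scene_pts||^2 over
    # corresponding rows (the classical Kabsch solution).
    mu_m, mu_s = mesh_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (mesh_pts - mu_m).T @ (scene_pts - mu_s)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_s - R @ mu_m

# Usage with synthetic correspondences.
rng = np.random.default_rng(1)
mesh = rng.normal(size=(200, 3))
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                          # force a proper rotation
scene = mesh @ R_true.T + np.array([0.1, -0.2, 0.5])
R, t = kabsch_align(mesh, scene)                # recovers R_true and t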
-
Publication Number: US20170251176A1
Publication Date: 2017-08-31
Application Number: US15056804
Application Date: 2016-02-29
Applicant: Microsoft Technology Licensing, LLC
Inventor: Nikolai Smolyanskiy, Zhengyou Zhang, Sean Eron Anderson, Michael Hall
CPC classification number: G02B27/0179, B60R11/04, G02B27/017, G02B2027/0134, G02B2027/0138, G02B2027/014, G02B2027/0178, G02B2027/0187, G06T3/005, H04N5/23203, H04N5/23238, H04N7/183, H04N13/00, H04N13/111, H04N13/243, H04N13/344, H04N19/00, H04N21/41422, H04N21/4223
Abstract: A vehicle camera system captures and transmits video to a user device, which includes a viewing device, such as virtual reality or augmented reality glasses, for playback of the captured video. A rendering map is generated that indicates which pixels of the video frame (as identified by particular coordinates of the video frame) correspond to which coordinates of a virtual sphere in which a portion of the video frame is rendered for display. When a video frame is received, the rendering map is used to determine the texture values (e.g., colors) for coordinates in the virtual sphere, which are then used to generate the display for the user. This technique reduces the rendering time when a user turns his or her head (e.g., while in virtual reality), and thus reduces the motion and/or virtual reality sickness induced by rendering lag.
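This application shares its abstract with grant US10274737B2 above; complementing the rendering-map sketch there, here is a minimal Python sketch of why head turns become cheap once the sphere texture exists: a pure yaw rotation reduces to a horizontal shift of the equirectangular texture, with no per-frame reprojection. The name yaw_view and the equirectangular assumption are illustrative, not from the patent.

import numpy as np

def yaw_view(sphere_texture, yaw_rad):
    # Rotate the viewer's heading by shifting the longitude axis of the
    # equirectangular sphere texture.
    w = sphere_texture.shape[1]
    shift = int(round(yaw_rad / (2 * np.pi) * w)) % w
    return np.roll(sphere_texture, -shift, axis=1)

texture = np.zeros((512, 1024, 3), dtype=np.uint8)  # e.g., a gathered sphere texture
turned = yaw_view(texture, np.pi / 6)               # viewer turns 30 degrees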