-
Publication number: US10269177B2
Publication date: 2019-04-23
Application number: US15616634
Filing date: 2017-06-07
Applicant: Google Inc.
Inventor: Christian Frueh , Vivek Kwatra , Avneesh Sud
IPC: G06F3/01 , G06K9/00 , G06T7/73 , G02B27/01 , G06F17/30 , G06T15/04 , G06T15/40 , G06T17/00 , G06T17/20 , G06T19/00 , G06T19/20 , G06T7/246
Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
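The abstract describes projecting a 3-D face model into the camera frame using the estimated pose and compositing it over the HMD pixels. Below is a minimal numpy sketch of that compositing step, assuming a precomputed face model, known camera intrinsics, and an HMD mask; the helper names and constants are illustrative, not taken from the patent.

```python
import numpy as np

def project_points(vertices, R, t, K):
    """Project 3-D face-model vertices (N x 3) into pixel coordinates,
    given the face pose (R, t) in the camera coordinate system and the
    camera intrinsic matrix K."""
    cam = vertices @ R.T + t           # model -> camera coordinates
    uv = cam @ K.T                     # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

def composite_face(image, face_render, hmd_mask):
    """Replace the masked HMD pixels in the camera image with the
    rendered representation of the occluded part of the face."""
    out = image.copy()
    out[hmd_mask] = face_render[hmd_mask]
    return out

# Illustrative usage with synthetic stand-in data.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 1.5])
vertices = np.random.rand(100, 3) * 0.2             # stand-in face model
pixels = project_points(vertices, R, t, K)           # where the model lands

image = np.zeros((480, 640, 3), dtype=np.uint8)      # camera frame
face_render = np.full_like(image, 128)               # stand-in rendered face
hmd_mask = np.zeros((480, 640), dtype=bool)
hmd_mask[180:300, 220:420] = True                    # stand-in HMD region
result = composite_face(image, face_render, hmd_mask)
```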
-
Publication number: US20190013047A1
Publication date: 2019-01-10
Application number: US14675408
Filing date: 2015-03-31
Applicant: Google Inc.
Inventor: Arthur Wait , Krishna Bharat , Caroline Rebecca Pantofaru , Christian Frueh , Matthias Grundmann , Jay Yagnik , Ryan Michael Hickman
Abstract: A plurality of videos is analyzed (in real time or after the videos are generated) to identify interesting portions of the videos. The interesting portions are identified based on one or more of the people depicted in the videos, the objects depicted in the videos, the motion of objects and/or people in the videos, and the locations where people depicted in the videos are looking. The interesting portions are combined to generate a content item.
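A hedged sketch of how the cues named in the abstract (people, objects, motion, gaze targets) might be combined into per-segment interest scores and a final selection; the `Segment` fields and weights are assumptions made for illustration, not the patented method.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """Per-segment cues assumed to be extracted from a video."""
    start: float        # seconds
    end: float          # seconds
    num_people: int     # people detected in the segment
    num_objects: int    # salient objects detected
    motion: float       # aggregate motion magnitude, 0..1
    gaze_hits: int      # people looking at a common location

def interest_score(seg, w_people=1.0, w_objects=0.5, w_motion=2.0, w_gaze=1.5):
    """Combine the cues into one interest score (weights are illustrative)."""
    return (w_people * seg.num_people + w_objects * seg.num_objects
            + w_motion * seg.motion + w_gaze * seg.gaze_hits)

def select_interesting(segments, top_k=3):
    """Pick the top-k segments by score and return them in chronological
    order, ready to be concatenated into a combined content item."""
    best = sorted(segments, key=interest_score, reverse=True)[:top_k]
    return sorted(best, key=lambda s: s.start)

clips = [Segment(0, 5, 2, 1, 0.3, 0),
         Segment(5, 10, 4, 2, 0.8, 3),
         Segment(10, 15, 1, 0, 0.1, 0)]
highlights = select_interesting(clips, top_k=2)
```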
-
Publication number: US20180101984A1
Publication date: 2018-04-12
Application number: US15616604
Filing date: 2017-06-07
Applicant: Google Inc.
Inventor: Christian Frueh , Vivek Kwatra , Avneesh Sud
CPC classification number: G06T17/205 , G02B27/0172 , G06F3/013 , G06F16/51 , G06F16/5838 , G06K9/00255 , G06K9/00288 , G06K9/00604 , G06T7/248 , G06T7/74 , G06T15/04 , G06T15/40 , G06T17/00 , G06T19/006 , G06T19/20 , G06T2200/04 , G06T2200/08 , G06T2207/10028 , G06T2207/30201 , G06T2207/30204 , G06T2219/2004 , G06T2219/2021
Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
-
Publication number: US20150163402A1
Publication date: 2015-06-11
Application number: US14101606
Filing date: 2013-12-10
Applicant: Google Inc.
Inventor: Christian Frueh , Ken Conley , Sumit Jain
CPC classification number: G06T15/205 , G06T7/246 , G06T2207/10016 , H04N5/2627
Abstract: Methods and an apparatus for centering swivel views are disclosed. An example method involves a computing device identifying movement of a pixel location of a 3D object within a sequence of images. Each image of the sequence of images may correspond to a view of the 3D object from a different angular orientation. Based on the identified movement of the pixel location of the 3D object, the computing device may estimate movement parameters of at least one function that describes a location of the 3D object in an individual image. The computing device may also determine for one or more images of the sequence of images a respective modification to the image using the estimated parameters of the at least one function. And the computing device may adjust the pixel location of the 3D object within the one or more images based on the respective modification for the image.
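One way to realize the parametric-motion step described above is to model the object's pixel location as a sinusoid of view angle and solve for its parameters by least squares. The sketch below assumes that simple model and uses illustrative data; it is not the claimed algorithm.

```python
import numpy as np

def fit_center_trajectory(angles, x_centers):
    """Least-squares fit of x(theta) = a*sin(theta) + b*cos(theta) + c,
    a simple parametric model of how the object's pixel location drifts
    across a swivel (turntable) image sequence."""
    A = np.column_stack([np.sin(angles), np.cos(angles), np.ones_like(angles)])
    params, *_ = np.linalg.lstsq(A, x_centers, rcond=None)
    return params  # (a, b, c)

def centering_shifts(angles, params, target_x):
    """Per-image horizontal shift that moves the modeled object location
    onto the target column."""
    a, b, c = params
    modeled = a * np.sin(angles) + b * np.cos(angles) + c
    return target_x - modeled

# Illustrative usage: 36 views with noisy measured object centers.
angles = np.deg2rad(np.arange(0, 360, 10))
measured = 320 + 25 * np.sin(angles + 0.4) + np.random.normal(0, 2.0, angles.size)
params = fit_center_trajectory(angles, measured)
shifts = centering_shifts(angles, params, target_x=320.0)
# Each image would then be translated horizontally by its shift
# (for example with an affine warp) to keep the object centered.
```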
-
Publication number: US20180101227A1
Publication date: 2018-04-12
Application number: US15616634
Filing date: 2017-06-07
Applicant: Google Inc.
Inventor: Christian Frueh , Vivek Kwatra , Avneesh Sud
CPC classification number: G06T17/205 , G02B27/0172 , G06F3/013 , G06F17/3025 , G06F17/30256 , G06F17/3028 , G06K9/00255 , G06K9/00288 , G06K9/00604 , G06T7/248 , G06T7/74 , G06T15/04 , G06T15/40 , G06T17/00 , G06T19/006 , G06T19/20 , G06T2200/04 , G06T2200/08 , G06T2207/10028 , G06T2207/30201 , G06T2207/30204 , G06T2219/2004 , G06T2219/2021
Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
-
Publication number: US09870621B1
Publication date: 2018-01-16
Application number: US14644023
Filing date: 2015-03-10
Applicant: Google Inc.
Inventor: Christian Frueh , Caroline Rebecca Pantofaru
CPC classification number: G06T7/204 , G06K9/00335 , G06K9/6202 , G06K9/6215 , G06T7/0044 , G06T7/20 , G06T7/246 , G06T2207/10016 , G06T2207/20076 , G06T2207/30201
Abstract: A system and method are disclosed for identifying feature correspondences among a plurality of video clips of a dynamic scene. In one implementation, a computer system identifies a first feature in a first video clip of a dynamic scene that is captured by a first video camera, and a second feature in a second video clip of the dynamic scene that is captured by a second video camera. The computer system determines, based on motion in the first video clip and motion in the second video clip, that the first feature and the second feature do not correspond to a common entity.
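A minimal sketch of a motion-based correspondence test in the spirit of this abstract: compare the motion signatures of two feature tracks sampled at common points in time and declare non-correspondence when they disagree. The signature definition and threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

def motion_signature(track):
    """Frame-to-frame displacement magnitudes of a feature track, where
    track is an (N x 2) array of pixel positions over time."""
    return np.linalg.norm(np.diff(track, axis=0), axis=1)

def motions_correspond(track_a, track_b, threshold=0.7):
    """Decide whether two feature tracks, sampled at common points in
    time, move consistently enough to belong to a common entity, using
    normalized correlation of their motion signatures."""
    sig_a, sig_b = motion_signature(track_a), motion_signature(track_b)
    n = min(sig_a.size, sig_b.size)
    sig_a, sig_b = sig_a[:n], sig_b[:n]
    if sig_a.std() == 0 or sig_b.std() == 0:
        return False                       # no usable motion to compare
    corr = np.corrcoef(sig_a, sig_b)[0, 1]
    return corr >= threshold
```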
-
Publication number: US20180101989A1
Publication date: 2018-04-12
Application number: US15616619
Filing date: 2017-06-07
Applicant: Google Inc.
Inventor: Christian Frueh , Vivek Kwatra , Avneesh Sud
CPC classification number: G06T17/205 , G02B27/0172 , G06F3/013 , G06F16/51 , G06F16/5838 , G06K9/00255 , G06K9/00288 , G06K9/00604 , G06T7/248 , G06T7/74 , G06T15/04 , G06T15/40 , G06T17/00 , G06T19/006 , G06T19/20 , G06T2200/04 , G06T2200/08 , G06T2207/10028 , G06T2207/30201 , G06T2207/30204 , G06T2219/2004 , G06T2219/2021
Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
-
Publication number: US09449426B2
Publication date: 2016-09-20
Application number: US14101606
Filing date: 2013-12-10
Applicant: Google Inc.
Inventor: Christian Frueh , Ken Conley , Sumit Jain
CPC classification number: G06T15/205 , G06T7/246 , G06T2207/10016 , H04N5/2627
Abstract: Methods and an apparatus for centering swivel views are disclosed. An example method involves a computing device identifying movement of a pixel location of a 3D object within a sequence of images. Each image of the sequence of images may correspond to a view of the 3D object from a different angular orientation. Based on the identified movement of the pixel location of the 3D object, the computing device may estimate movement parameters of at least one function that describes a location of the 3D object in an individual image. The computing device may also determine for one or more images of the sequence of images a respective modification to the image using the estimated parameters of the at least one function. And the computing device may adjust the pixel location of the 3D object within the one or more images based on the respective modification for the image.
-
Publication number: US09445047B1
Publication date: 2016-09-13
Application number: US14220721
Filing date: 2014-03-20
Applicant: Google Inc.
Inventor: Christian Frueh , Krishna Bharat , Jay Yagnik
CPC classification number: H04N7/15 , G06K9/00597 , G06K9/3233
Abstract: A method and system include identifying, by a processing device, at least one media clip captured by at least one camera for an event, detecting at least one human object in the at least one media clip, and calculating, by the processing device, a region in the at least one media clip containing a focus of attention of the detected human object.
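A rough sketch of how a focus-of-attention region could be computed from detected people and their gaze directions by voting along gaze rays on a coarse grid; the voting scheme, grid size, and ray sampling are assumptions made for illustration only.

```python
import numpy as np

def attention_region(head_positions, gaze_directions, frame_shape, grid=32):
    """Vote along each person's gaze ray on a coarse grid and return the
    bounding box (x0, y0, x1, y1) of the most-voted cell, as a stand-in
    for the region containing the focus of attention."""
    h, w = frame_shape
    votes = np.zeros((grid, grid))
    for p, d in zip(head_positions, gaze_directions):
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        for step in np.linspace(0.0, max(h, w), 200):
            x, y = p[0] + step * d[0], p[1] + step * d[1]
            if 0 <= x < w and 0 <= y < h:
                votes[int(y / h * grid), int(x / w * grid)] += 1
    gy, gx = np.unravel_index(np.argmax(votes), votes.shape)
    cw, ch = w / grid, h / grid
    return (gx * cw, gy * ch, (gx + 1) * cw, (gy + 1) * ch)

# Two hypothetical viewers in a 1280x720 frame looking toward the same area.
box = attention_region(
    head_positions=[(100, 100), (1180, 120)],
    gaze_directions=[(1.0, 0.5), (-1.0, 0.5)],
    frame_shape=(720, 1280),
)
```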
-
Publication number: US10580145B1
Publication date: 2020-03-03
Application number: US15838166
Filing date: 2017-12-11
Applicant: Google Inc.
Inventor: Christian Frueh , Caroline Rebecca Pantofaru
Abstract: A system and method are disclosed for motion-based feature correspondence. A method may include detecting a first motion of a first feature across two or more first frames of a first video clip captured by a first video camera and a second motion of a second feature across two or more second frames of a second video clip captured by a second video camera. The method may further include determining, based on the first motion in the first video clip and the second motion in the second video clip, that the first feature and the second feature correspond to a common entity, the first motion in the first video clip and the second motion in the second video clip corresponding to one or more common points in time in the first video clip and the second video clip.
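This abstract ties correspondence to "common points in time", so a practical supporting step is temporally aligning the two clips before comparing feature motion. The sketch below estimates that alignment by cross-correlating per-frame motion energy; it is an illustrative approach, not the patented one.

```python
import numpy as np

def motion_energy(frames):
    """Per-frame motion energy: mean absolute difference between
    consecutive grayscale frames (frames is a T x H x W array)."""
    return np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))

def estimate_time_offset(frames_a, frames_b):
    """Estimate the frame offset aligning two clips by cross-correlating
    their motion-energy signals, so feature motion can be compared at
    common points in time. A positive offset means events in clip A
    appear that many frames after the same events in clip B."""
    ea, eb = motion_energy(frames_a), motion_energy(frames_b)
    ea = (ea - ea.mean()) / (ea.std() + 1e-9)
    eb = (eb - eb.mean()) / (eb.std() + 1e-9)
    corr = np.correlate(ea, eb, mode="full")
    return int(np.argmax(corr)) - (eb.size - 1)
```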
-