-
1.
Publication No.: US20200349772A1
Publication Date: 2020-11-05
Application No.: US16861530
Application Date: 2020-04-29
Applicant: Google LLC
Inventor: Anastasia Tkach , Ricardo Martin Brualla , Shahram Izadi , Shuoran Yang , Cem Keskin , Sean Ryan Francesco Fanello , Philip Davidson , Jonathan Taylor , Rohit Pandey , Andrea Tagliasacchi , Pavlo Pidlypenskyi
Abstract: A method includes receiving a first image including color data and depth data, determining a viewpoint associated with an augmented reality (AR) and/or virtual reality (VR) display displaying a second image, receiving at least one calibration image that includes the object from the first image in a pose different from the object's pose in the first image, and generating the second image based on the first image, the viewpoint, and the at least one calibration image.
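The claim language above is deliberately high-level, so the following is a minimal PyTorch sketch of one plausible reading, not the patented implementation: a hypothetical `ReRenderNet` (the abstract names no architecture) consumes the RGB-D first image together with calibration images of the same object in other poses, is conditioned on the display viewpoint, and emits the second image. The layer sizes, the four-image calibration set, and the 6-DoF viewpoint encoding are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ReRenderNet(nn.Module):
    # Hypothetical stand-in for the claimed image generator.
    def __init__(self, num_calib: int = 4):
        super().__init__()
        in_ch = 4 + 3 * num_calib  # RGB-D first image + RGB calibration images
        self.encoder = nn.Conv2d(in_ch, 64, kernel_size=3, padding=1)
        self.view_fc = nn.Linear(6, 64)  # assumed 6-DoF viewpoint -> feature bias
        self.decoder = nn.Conv2d(64, 3, kernel_size=3, padding=1)

    def forward(self, rgbd, viewpoint, calib):
        # rgbd: (B, 4, H, W); calib: (B, 3 * num_calib, H, W); viewpoint: (B, 6)
        x = self.encoder(torch.cat([rgbd, calib], dim=1))
        x = torch.relu(x + self.view_fc(viewpoint)[:, :, None, None])
        return torch.sigmoid(self.decoder(x))  # the generated second image

net = ReRenderNet()
rgbd = torch.rand(1, 4, 128, 128)           # first image: color + depth
calib = torch.rand(1, 12, 128, 128)         # four calibration images, other poses
viewpoint = torch.rand(1, 6)                # viewpoint of the AR/VR display
second_image = net(rgbd, viewpoint, calib)  # (1, 3, 128, 128)
```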
-
2.
Publication No.: US20230154051A1
Publication Date: 2023-05-18
Application No.: US17919460
Application Date: 2020-04-17
Applicant: Google LLC
Inventor: Danhang Tang , Saurabh Singh , Cem Keskin , Phillip Andrew Chou , Christian Haene , Mingsong Dou , Sean Ryan Francesco Fanello , Jonathan Taylor , Andrea Tagliasacchi , Philip Lindsley Davidson , Yinda Zhang , Onur Gonen Guleryuz , Shahram Izadi , Sofien Bouaziz
IPC: G06T9/00
Abstract: Systems and methods are directed to encoding and/or decoding of the textures/geometry of a three-dimensional volumetric representation. An encoding computing system can obtain voxel blocks from a three-dimensional volumetric representation of an object. The encoding computing system can encode the voxel blocks with a machine-learned voxel encoding model to obtain encoded voxel blocks. The encoding computing system can decode the encoded voxel blocks with a machine-learned voxel decoding model to obtain reconstructed voxel blocks. The encoding computing system can generate a reconstructed mesh representation of the object based at least in part on the reconstructed voxel blocks. The encoding computing system can encode textures associated with the voxel blocks according to an encoding scheme and based at least in part on the reconstructed mesh representation of the object to obtain encoded textures.
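As a rough illustration of the encode/decode round trip described above, here is a minimal PyTorch sketch built around hypothetical `VoxelEncoder` and `VoxelDecoder` networks; the abstract only requires some machine-learned encoding and decoding models, so the 16³ block size and 64-dimensional code are assumptions, and the texture-encoding and mesh-reconstruction steps are omitted.

```python
import torch
import torch.nn as nn

class VoxelEncoder(nn.Module):
    # Compresses one voxel block (assumed 16^3) to a small latent code.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 4, stride=2, padding=1), nn.ReLU(),   # 16^3 -> 8^3
            nn.Conv3d(8, 16, 4, stride=2, padding=1), nn.ReLU(),  # 8^3 -> 4^3
            nn.Flatten(),
            nn.Linear(16 * 4 ** 3, 64))  # 64-dim code per block (assumed)

    def forward(self, block):
        return self.net(block)

class VoxelDecoder(nn.Module):
    # Inverts the encoder to produce a reconstructed voxel block.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(64, 16 * 4 ** 3)
        self.net = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1))  # back to 16^3

    def forward(self, code):
        return self.net(self.fc(code).view(-1, 16, 4, 4, 4))

blocks = torch.rand(32, 1, 16, 16, 16)  # voxel blocks from the volume
codes = VoxelEncoder()(blocks)          # encoded voxel blocks, shape (32, 64)
reconstructed = VoxelDecoder()(codes)   # reconstructed blocks, (32, 1, 16, 16, 16)
```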
-
3.
Publication No.: US11328486B2
Publication Date: 2022-05-10
Application No.: US16861530
Application Date: 2020-04-29
Applicant: Google LLC
Inventor: Anastasia Tkach , Ricardo Martin Brualla , Shahram Izadi , Shuoran Yang , Cem Keskin , Sean Ryan Francesco Fanello , Philip Davidson , Jonathan Taylor , Rohit Pandey , Andrea Tagliasacchi , Pavlo Pidlypenskyi
Abstract: A method includes receiving a first image including color data and depth data, determining a viewpoint associated with an augmented reality (AR) and/or virtual reality (VR) display displaying a second image, receiving at least one calibration image that includes the object from the first image in a pose different from the object's pose in the first image, and generating the second image based on the first image, the viewpoint, and the at least one calibration image.
-
4.
Publication No.: US20200372284A1
Publication Date: 2020-11-26
Application No.: US16616235
Application Date: 2019-10-16
Applicant: Google LLC
Inventor: Christoph Rhemann , Abhimitra Meka , Matthew Whalen , Jessica Lynn Busch , Sofien Bouaziz , Geoffrey Douglas Harvey , Andrea Tagliasacchi , Jonathan Taylor , Paul Debevec , Peter Joseph Denny , Sean Ryan Francesco Fanello , Graham Fyffe , Jason Angelo Dourgarian , Xueming Yu , Adarsh Prakash Murthy Kowdle , Julien Pascal Christophe Valentin , Peter Christopher Lincoln , Rohit Kumar Pandey , Christian Häne , Shahram Izadi
Abstract: Methods, systems, and media for relighting images using predicted deep reflectance fields are provided. In some embodiments, the method comprises: identifying a group of training samples, wherein each training sample includes (i) a group of one-light-at-a-time (OLAT) images that have each been captured when one light of a plurality of lights arranged on a lighting structure has been activated, (ii) a group of spherical color gradient images that have each been captured when the plurality of lights arranged on the lighting structure have been activated to each emit a particular color, and (iii) a lighting direction, wherein each image in the group of OLAT images and each of the spherical color gradient images is an image of a subject, and wherein the lighting direction indicates a relative orientation of a light to the subject; training a convolutional neural network using the group of training samples, wherein training the convolutional neural network comprises: for each training iteration in a series of training iterations and for each training sample in the group of training samples: generating an output predicted image, wherein the output predicted image is a representation of the subject associated with the training sample with lighting from the lighting direction associated with the training sample; identifying a ground-truth OLAT image included in the group of OLAT images for the training sample that corresponds to the lighting direction for the training sample; calculating a loss that indicates a perceptual difference between the output predicted image and the identified ground-truth OLAT image; and updating parameters of the convolutional neural network based on the calculated loss; identifying a test sample that includes a second group of spherical color gradient images and a second lighting direction; and generating a relit image of the subject included in each of the second group of spherical color gradient images with lighting from the second lighting direction using the trained convolutional neural network.
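The training procedure described above maps onto a conventional supervised loop. Below is a minimal PyTorch sketch: a stand-in convolutional network takes the spherical color gradient images plus a per-pixel broadcast of the lighting direction and is penalized against the ground-truth OLAT image for that direction. The tiny architecture and the L1 stand-in for the perceptual loss are assumptions, not the network the application claims.

```python
import torch
import torch.nn as nn

relight_net = nn.Sequential(  # stand-in for the claimed convolutional network
    nn.Conv2d(2 * 3 + 3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1))
optimizer = torch.optim.Adam(relight_net.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # placeholder for the perceptual difference

def train_step(gradient_images, light_dir, olat_ground_truth):
    # gradient_images: (B, 6, H, W), two spherical color gradient images
    # light_dir: (B, 3) unit lighting direction, broadcast over all pixels
    B, _, H, W = gradient_images.shape
    dir_map = light_dir[:, :, None, None].expand(B, 3, H, W)
    predicted = relight_net(torch.cat([gradient_images, dir_map], dim=1))
    loss = loss_fn(predicted, olat_ground_truth)  # vs. ground-truth OLAT image
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

loss = train_step(torch.rand(1, 6, 64, 64),
                  torch.tensor([[0.0, 0.0, 1.0]]),
                  torch.rand(1, 3, 64, 64))
```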
-
5.
Publication No.: US20220014723A1
Publication Date: 2022-01-13
Application No.: US17309440
Application Date: 2019-12-02
Applicant: Google LLC
Inventor: Rohit Pandey , Jonathan Taylor , Ricardo Martin Brualla , Shuoran Yang , Pavlo Pidlypenskyi , Daniel Goldman , Sean Ryan Francesco Fanello
IPC: H04N13/111 , H04N13/161 , H04N13/366 , H04N13/243 , H04N13/332 , G06T7/174 , G06K9/32 , G06K9/00 , G06K9/46
Abstract: Three-dimensional (3D) performance capture and machine learning can be used to re-render high-quality novel viewpoints of a captured scene. A textured 3D reconstruction is first rendered to a novel viewpoint. Due to imperfections in geometry and low-resolution texture, the 2D rendered image contains artifacts and is of low quality. Accordingly, a deep learning technique is disclosed that takes these images as input and generates a visually enhanced re-rendering. The system is specifically designed for VR and AR headsets, and accounts for consistency between the two stereo views.
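A minimal sketch of the two-stage idea, assuming a hypothetical `rasterize` helper in place of the conventional textured-mesh renderer and a stand-in convolutional enhancer; running the left- and right-eye renders through the same network weights is one simple way to respect the stereo consistency the abstract calls out.

```python
import torch
import torch.nn as nn

enhancer = nn.Sequential(  # stand-in for the deep enhancement network
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1))

def rasterize(mesh, texture, view):
    # Placeholder for the classical textured-mesh render; its output is the
    # low-quality, artifact-laden 2D image the deep network must clean up.
    return torch.rand(1, 3, 256, 256)

def render_stereo(mesh, texture, left_view, right_view):
    left = enhancer(rasterize(mesh, texture, left_view))
    right = enhancer(rasterize(mesh, texture, right_view))  # shared weights
    return left, right

left, right = render_stereo(mesh=None, texture=None,
                            left_view=None, right_view=None)
```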
-
6.
Publication No.: US10997457B2
Publication Date: 2021-05-04
Application No.: US16616235
Application Date: 2019-10-16
Applicant: Google LLC
Inventor: Christoph Rhemann , Abhimitra Meka , Matthew Whalen , Jessica Lynn Busch , Sofien Bouaziz , Geoffrey Douglas Harvey , Andrea Tagliasacchi , Jonathan Taylor , Paul Debevec , Peter Joseph Denny , Sean Ryan Francesco Fanello , Graham Fyffe , Jason Angelo Dourgarian , Xueming Yu , Adarsh Prakash Murthy Kowdle , Julien Pascal Christophe Valentin , Peter Christopher Lincoln , Rohit Kumar Pandey , Christian Häne , Shahram Izadi
Abstract: Methods, systems, and media for relighting images using predicted deep reflectance fields are provided. In some embodiments, the method comprises: identifying a group of training samples, wherein each training sample includes (i) a group of one-light-at-a-time (OLAT) images that have each been captured when one light of a plurality of lights arranged on a lighting structure has been activated, (ii) a group of spherical color gradient images that have each been captured when the plurality of lights arranged on the lighting structure have been activated to each emit a particular color, and (iii) a lighting direction, wherein each image in the group of OLAT images and each of the spherical color gradient images is an image of a subject, and wherein the lighting direction indicates a relative orientation of a light to the subject; training a convolutional neural network using the group of training samples, wherein training the convolutional neural network comprises: for each training iteration in a series of training iterations and for each training sample in the group of training samples: generating an output predicted image, wherein the output predicted image is a representation of the subject associated with the training sample with lighting from the lighting direction associated with the training sample; identifying a ground-truth OLAT image included in the group of OLAT images for the training sample that corresponds to the lighting direction for the training sample; calculating a loss that indicates a perceptual difference between the output predicted image and the identified ground-truth OLAT image; and updating parameters of the convolutional neural network based on the calculated loss; identifying a test sample that includes a second group of spherical color gradient images and a second lighting direction; and generating a relit image of the subject included in each of the second group of spherical color gradient images with lighting from the second lighting direction using the trained convolutional neural network.