-
1.
Publication No.: US10997457B2
Publication Date: 2021-05-04
Application No.: US16616235
Filing Date: 2019-10-16
Applicant: Google LLC
Inventor: Christoph Rhemann , Abhimitra Meka , Matthew Whalen , Jessica Lynn Busch , Sofien Bouaziz , Geoffrey Douglas Harvey , Andrea Tagliasacchi , Jonathan Taylor , Paul Debevec , Peter Joseph Denny , Sean Ryan Francesco Fanello , Graham Fyffe , Jason Angelo Dourgarian , Xueming Yu , Adarsh Prakash Murthy Kowdle , Julien Pascal Christophe Valentin , Peter Christopher Lincoln , Rohit Kumar Pandey , Christian Häne , Shahram Izadi
Abstract: Methods, systems, and media for relighting images using predicted deep reflectance fields are provided. In some embodiments, the method comprises: identifying a group of training samples, wherein each training sample includes (i) a group of one-light-at-a-time (OLAT) images that have each been captured when one light of a plurality of lights arranged on a lighting structure has been activated, (ii) a group of spherical color gradient images that have each been captured when the plurality of lights arranged on the lighting structure have been activated to each emit a particular color, and (iii) a lighting direction, wherein each image in the group of OLAT images and each of the spherical color gradient images are an image of a subject, and wherein the lighting direction indicates a relative orientation of a light to the subject; training a convolutional neural network using the group of training samples, wherein training the convolutional neural network comprises: for each training iteration in a series of training iterations and for each training sample in the group of training samples: generating an output predicted image, wherein the output predicted image is a representation of the subject associated with the training sample with lighting from the lighting direction associated with the training sample; identifying a ground-truth OLAT image included in the group of OLAT images for the training sample that corresponds to the lighting direction for the training sample; calculating a loss that indicates a perceptual difference between the output predicted image and the identified ground-truth OLAT image; and updating parameters of the convolutional neural network based on the calculated loss; identifying a test sample that includes a second group of spherical color gradient images and a second lighting direction; and generating a relit image of the subject included in each of the second group of spherical color gradient images with lighting from the second lighting direction using the trained convolutional neural network.
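The training procedure the abstract walks through (predict an image for a given lighting direction from the spherical gradient images, compare it against the OLAT capture for that direction, and update the network from the loss) can be sketched at toy scale. The sketch below is illustrative only: a small linear model stands in for the patent's convolutional neural network, images are flattened random vectors, and a plain squared error replaces the perceptual loss; all names, dimensions, and the encoding of the lighting direction are hypothetical, not taken from the patent.

```python
import numpy as np

# Toy sketch of the training loop described in the abstract.
# A linear model stands in for the CNN; images are flattened vectors.
rng = np.random.default_rng(0)
n_pixels = 64            # flattened image size (toy scale)
n_lights = 8             # lights on the lighting structure

def make_training_sample():
    """One sample: OLAT images (one per light), two spherical color
    gradient images, and the index of the light used as the direction."""
    olat = rng.normal(size=(n_lights, n_pixels))
    gradients = rng.normal(size=(2, n_pixels))
    direction = int(rng.integers(n_lights))
    return olat, gradients, direction

# Model input: gradient images concatenated with a one-hot direction.
in_dim = 2 * n_pixels + n_lights
W = rng.normal(scale=0.01, size=(n_pixels, in_dim))

def predict(W, gradients, direction):
    one_hot = np.eye(n_lights)[direction]
    x = np.concatenate([gradients.ravel(), one_hot])
    return W @ x, x

samples = [make_training_sample() for _ in range(16)]
lr = 1e-3
for _ in range(200):                      # training iterations
    for olat, gradients, direction in samples:
        pred, x = predict(W, gradients, direction)
        target = olat[direction]          # ground-truth OLAT image
        err = pred - target
        loss = float(err @ err)           # L2 stand-in for perceptual loss
        W -= lr * np.outer(err, x)        # parameter update from the loss

# Test time: relight from new gradient images and a new direction.
test_grad = rng.normal(size=(2, n_pixels))
relit, _ = predict(W, test_grad, int(rng.integers(n_lights)))
print(relit.shape)  # (64,)
```

The structure mirrors the claim language: an inner loop over training samples inside an outer loop over iterations, a per-sample loss against the direction-matched OLAT image, and a final inference pass from a second set of gradient images and a second lighting direction.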
-
2.
Publication No.: US20210368157A1
Publication Date: 2021-11-25
Application No.: US17325818
Filing Date: 2021-05-20
Applicant: Google LLC
Inventor: Ryan Overbeck , Michael Joseph Broxton , John Flynn , Daniel William Erickson , Lars Peter Johannes Hedman , Matthew Nowicki DuVall , Jason Angelo Dourgarian , Jessica Lynn Busch , Matthew Stephen Whalen , Paul Debevec
IPC: H04N13/282 , H04N5/232 , H04N19/177 , H04N13/15 , H04N13/271 , H04N13/194 , H04N13/172 , H04N5/247 , H04N13/161
Abstract: Mechanisms for generating compressed images are provided. More particularly, methods, systems, and media for capturing, reconstructing, compressing, and rendering view-dependent immersive light field video with a layered mesh representation are provided.
-
3.
Publication No.: US20230206511A1
Publication Date: 2023-06-29
Application No.: US18117675
Filing Date: 2023-03-06
Applicant: Google LLC
Inventor: Ryan Overbeck , Michael Joseph Broxton , John Flynn , Daniel William Erickson , Lars Peter Johannes Hedman , Matthew Nowicki DuVall , Jason Angelo Dourgarian , Jessica Lynn Busch , Matthew Stephen Whalen , Paul Debevec
CPC classification number: G06T9/001 , G06T17/20 , G06T15/04 , G06T7/50 , G06T2207/20021 , G06T2207/10024
Abstract: Mechanisms for generating compressed images are provided. More particularly, methods, systems, and media for capturing, reconstructing, compressing, and rendering view-dependent immersive light field video with a layered mesh representation are provided.
-
4.
Publication No.: US11601636B2
Publication Date: 2023-03-07
Application No.: US17325818
Filing Date: 2021-05-20
Applicant: Google LLC
Inventor: Ryan Overbeck , Michael Joseph Broxton , John Flynn , Daniel William Erickson , Lars Peter Johannes Hedman , Matthew Nowicki DuVall , Jason Angelo Dourgarian , Jessica Lynn Busch , Matthew Stephen Whalen , Paul Debevec
IPC: G06T15/08 , H04N13/282 , H04N5/232 , H04N19/177 , H04N13/15 , H04N13/161 , H04N13/194 , H04N13/172 , H04N5/247 , H04N13/271 , G06T15/20
Abstract: Mechanisms for generating compressed images are provided. More particularly, methods, systems, and media for capturing, reconstructing, compressing, and rendering view-dependent immersive light field video with a layered mesh representation are provided.
-
5.
Publication No.: US20200372284A1
Publication Date: 2020-11-26
Application No.: US16616235
Filing Date: 2019-10-16
Applicant: Google LLC
Inventor: Christoph Rhemann , Abhimitra Meka , Matthew Whalen , Jessica Lynn Busch , Sofien Bouaziz , Geoffrey Douglas Harvey , Andrea Tagliasacchi , Jonathan Taylor , Paul Debevec , Peter Joseph Denny , Sean Ryan Francesco Fanello , Graham Fyffe , Jason Angelo Dourgarian , Xueming Yu , Adarsh Prakash Murthy Kowdle , Julien Pascal Christophe Valentin , Peter Christopher Lincoln , Rohit Kumar Pandey , Christian Häne , Shahram Izadi
Abstract: Methods, systems, and media for relighting images using predicted deep reflectance fields are provided. In some embodiments, the method comprises: identifying a group of training samples, wherein each training sample includes (i) a group of one-light-at-a-time (OLAT) images that have each been captured when one light of a plurality of lights arranged on a lighting structure has been activated, (ii) a group of spherical color gradient images that have each been captured when the plurality of lights arranged on the lighting structure have been activated to each emit a particular color, and (iii) a lighting direction, wherein each image in the group of OLAT images and each of the spherical color gradient images are an image of a subject, and wherein the lighting direction indicates a relative orientation of a light to the subject; training a convolutional neural network using the group of training samples, wherein training the convolutional neural network comprises: for each training iteration in a series of training iterations and for each training sample in the group of training samples: generating an output predicted image, wherein the output predicted image is a representation of the subject associated with the training sample with lighting from the lighting direction associated with the training sample; identifying a ground-truth OLAT image included in the group of OLAT images for the training sample that corresponds to the lighting direction for the training sample; calculating a loss that indicates a perceptual difference between the output predicted image and the identified ground-truth OLAT image; and updating parameters of the convolutional neural network based on the calculated loss; identifying a test sample that includes a second group of spherical color gradient images and a second lighting direction; and generating a relit image of the subject included in each of the second group of spherical color gradient images with lighting from the second lighting direction using the trained convolutional neural network.