Invention Application
- Patent Title: METHODS, SYSTEMS, AND MEDIA FOR RELIGHTING IMAGES USING PREDICTED DEEP REFLECTANCE FIELDS
- Application No.: US16616235; Application Date: 2019-10-16
- Publication No.: US20200372284A1; Publication Date: 2020-11-26
- Inventors: Christoph Rhemann, Abhimitra Meka, Matthew Whalen, Jessica Lynn Busch, Sofien Bouaziz, Geoffrey Douglas Harvey, Andrea Tagliasacchi, Jonathan Taylor, Paul Debevec, Peter Joseph Denny, Sean Ryan Francesco Fanello, Graham Fyffe, Jason Angelo Dourgarian, Xueming Yu, Adarsh Prakash Murthy Kowdle, Julien Pascal Christophe Valentin, Peter Christopher Lincoln, Rohit Kumar Pandey, Christian Häne, Shahram Izadi
- Applicant: Google LLC
- International Application: PCT/US2019/056532 (WO), 2019-10-16
- Main IPC: G06K9/46
- IPC: G06K9/46; G06T15/50; G06K9/62; G06N3/08; G06T15/20

Abstract:
Methods, systems, and media for relighting images using predicted deep reflectance fields are provided. In some embodiments, the method comprises identifying a group of training samples, where each training sample includes (i) a group of one-light-at-a-time (OLAT) images, each captured when one light of a plurality of lights arranged on a lighting structure has been activated, (ii) a group of spherical color gradient images, each captured when the plurality of lights arranged on the lighting structure have been activated to each emit a particular color, and (iii) a lighting direction, where each OLAT image and each spherical color gradient image is an image of a subject, and where the lighting direction indicates a relative orientation of a light to the subject. A convolutional neural network is trained on the group of training samples as follows: for each training iteration in a series of training iterations, and for each training sample in the group, the method generates an output predicted image representing the subject of the training sample with lighting from the training sample's lighting direction; identifies the ground-truth OLAT image, included in the training sample's group of OLAT images, that corresponds to that lighting direction; calculates a loss indicating a perceptual difference between the predicted image and the identified ground-truth OLAT image; and updates the parameters of the convolutional neural network based on the calculated loss. The method then identifies a test sample that includes a second group of spherical color gradient images and a second lighting direction, and uses the trained convolutional neural network to generate a relit image of the subject of the second group of spherical color gradient images with lighting from the second lighting direction.
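The abstract describes a per-sample training loop: predict a relit image from spherical color gradient images plus a lighting direction, compare against the matching ground-truth OLAT image, and update the network. The following is a minimal sketch of that loop, assuming a PyTorch environment; the network architecture (SimpleRelightNet), tensor shapes, and the plain L1 loss standing in for the patent's perceptual loss are illustrative assumptions, not the patent's actual implementation.

```python
# Hedged sketch of the training step described in the abstract (assumed PyTorch setup).
import torch
import torch.nn as nn

class SimpleRelightNet(nn.Module):
    """Toy CNN: maps stacked spherical color gradient images plus a
    lighting direction to a single predicted OLAT-style relit image.
    (Hypothetical architecture, for illustration only.)"""
    def __init__(self, n_gradient_images=2):
        super().__init__()
        # Gradient images are stacked along the channel axis; the 3-vector
        # lighting direction is broadcast to a 3-channel spatial map.
        in_channels = 3 * n_gradient_images + 3
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, gradient_images, light_dir):
        b, _, _, h, w = gradient_images.shape
        stacked = gradient_images.reshape(b, -1, h, w)            # (B, 3*N, H, W)
        dir_map = light_dir.view(b, 3, 1, 1).expand(b, 3, h, w)   # broadcast direction
        return self.net(torch.cat([stacked, dir_map], dim=1))

def train_step(model, optimizer, gradient_images, light_dir, olat_ground_truth):
    """One iteration: predict a relit image for the sample's lighting direction,
    compare it to the corresponding ground-truth OLAT image, and update parameters."""
    optimizer.zero_grad()
    predicted = model(gradient_images, light_dir)
    # The patent describes a perceptual loss; plain L1 is used here only as a
    # placeholder so the sketch stays self-contained.
    loss = nn.functional.l1_loss(predicted, olat_ground_truth)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = SimpleRelightNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Dummy sample: 2 spherical color gradient images, one unit lighting direction,
    # and the matching ground-truth OLAT image (all random placeholders).
    grads = torch.rand(1, 2, 3, 64, 64)
    direction = torch.nn.functional.normalize(torch.rand(1, 3), dim=1)
    olat = torch.rand(1, 3, 64, 64)
    print("loss:", train_step(model, optimizer, grads, direction, olat))
```

At inference, the same forward pass would be run on a test sample's spherical color gradient images with a new lighting direction to produce the relit image; only the training-specific loss and parameter update are omitted.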
Public/Granted literature
- US10997457B2: Methods, systems, and media for relighting images using predicted deep reflectance fields; granted 2021-05-04