-
Publication Number: US20230401681A1
Publication Date: 2023-12-14
Application Number: US18236583
Application Date: 2023-08-22
Applicant: Google LLC
Inventor: Tiancheng Sun , Yun-Ta Tsai , Jonathan Barron
IPC: G06T5/00
CPC classification number: G06T5/008 , G06T15/506
Abstract: Apparatus and methods related to applying lighting models to images of objects are provided. A neural network can be trained to apply a lighting model to an input image. The training of the neural network can utilize confidence learning that is based on light predictions and prediction confidence values associated with lighting of the input image. A computing device can receive an input image of an object and data about a particular lighting model to be applied to the input image. The computing device can determine an output image of the object by using the trained neural network to apply the particular lighting model to the input image of the object.
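The confidence learning described in this abstract can be sketched with a toy loss: per-pixel error on the light prediction, down-weighted where the network reports low confidence, plus a log-penalty that stops the confidence from collapsing to zero. This is a hypothetical numpy illustration; the function name and exact loss form are not taken from the patent.

```python
import numpy as np

def confidence_weighted_loss(pred_light, true_light, confidence, eps=1e-6):
    """Confidence-learning loss sketch (hypothetical form).

    Squared error on the predicted lighting is scaled by the per-pixel
    prediction confidence; -log(confidence) penalizes the trivial
    solution of declaring zero confidence everywhere.
    """
    confidence = np.clip(confidence, eps, 1.0)
    sq_err = (pred_light - true_light) ** 2
    return float(np.mean(confidence * sq_err - np.log(confidence)))
```

With a perfect prediction and full confidence the loss is exactly zero, since both the error term and the log-penalty vanish.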
-
Publication Number: US11776095B2
Publication Date: 2023-10-03
Application Number: US17260364
Application Date: 2019-04-01
Applicant: GOOGLE LLC
Inventor: Tiancheng Sun , Yun-ta Tsai , Jonathan Barron
CPC classification number: G06T5/008 , G06T15/506 , G06T2207/20081 , G06T2207/20084
Abstract: Apparatus and methods related to applying lighting models to images of objects are provided. A neural network can be trained to apply a lighting model to an input image. The training of the neural network can utilize confidence learning that is based on light predictions and prediction confidence values associated with lighting of the input image. A computing device can receive an input image of an object and data about a particular lighting model to be applied to the input image. The computing device can determine an output image of the object by using the trained neural network to apply the particular lighting model to the input image of the object.
-
Publication Number: US20210183089A1
Publication Date: 2021-06-17
Application Number: US16759808
Application Date: 2017-11-03
Applicant: Google LLC
Inventor: Neal Wadhwa , Jonathan Barron , Rahul Garg , Pratul Srinivasan
Abstract: Example embodiments allow for training of artificial neural networks (ANNs) to generate depth maps based on images. The ANNs are trained based on a plurality of sets of images, where each set of images represents a single scene and the images in such a set of images differ with respect to image aperture and/or focal distance. An untrained ANN generates a depth map based on one or more images in a set of images. This depth map is used to generate, using the image(s) in the set, a predicted image that corresponds, with respect to image aperture and/or focal distance, to one of the images in the set. Differences between the predicted image and the corresponding image are used to update the ANN. ANNs trained in this manner are especially suited for generating depth maps used to perform simulated image blur on small-aperture images.
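The self-supervised signal described above can be sketched as follows: a crude defocus renderer blurs each pixel of an all-in-focus image by an amount that grows with its distance from the focal plane, and the training loss compares that rendering against a real shallow-depth-of-field photo of the same scene. Both functions and their parameters are illustrative assumptions; the patent's rendering model is differentiable and far more sophisticated.

```python
import numpy as np

def synth_defocus(sharp, depth, focus_depth, max_blur=2):
    """Crude defocus renderer (illustrative only).

    Each pixel is averaged over a square window whose radius grows with
    |depth - focus_depth|, mimicking a wider-aperture capture.
    """
    h, w = sharp.shape
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            r = int(round(max_blur * abs(depth[y, x] - focus_depth)))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = sharp[y0:y1, x0:x1].mean()
    return out

def depth_training_loss(pred_depth, sharp, shallow_dof, focus_depth):
    """Render a defocused image from the predicted depth map and score
    it against the real shallow-depth-of-field image of the scene."""
    rendered = synth_defocus(sharp, pred_depth, focus_depth)
    return float(np.mean((rendered - shallow_dof) ** 2))
```

If the predicted depth places every pixel on the focal plane, the rendering degenerates to the sharp input and the loss against an identical target is zero, which is the sanity check one would expect of such a signal.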
-
Publication Number: US20200051225A1
Publication Date: 2020-02-13
Application Number: US16342911
Application Date: 2017-11-14
Applicant: Google LLC
Inventor: Jonathan Barron , Yun-Ta Tsai
Abstract: Methods for white-balancing images are provided. These methods include determining, for an input image, a chrominance histogram for the pixels of the input image. The determined histogram is a toroidal chrominance histogram, with an underlying toroidal chrominance space that corresponds to a wrapped version of a standard flat chrominance space. The toroidal chrominance histogram is then convolved with a filter to generate a two-dimensional heat map that is then used to determine an estimated chrominance of illumination present in the input image. This can include fitting a bivariate von Mises distribution, or some other circular and/or toroidal probability distribution, to the determined two-dimensional heat map. These methods for estimating illumination chrominance values for input images have reduced computational costs and increased speed relative to other methods for determining image illuminant chrominance values.
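The wrapped histogram at the core of this abstract can be sketched in a few lines: map each pixel to log-chrominance coordinates u = log(g/r), v = log(g/b), then bin the coordinates modulo the histogram extent so the space becomes a torus. The bin count and bin size below are illustrative assumptions, not the patent's values.

```python
import numpy as np

def toroidal_chroma_histogram(rgb, n_bins=32, bin_size=0.2):
    """Build a wrapped (toroidal) log-chrominance histogram.

    Pixels are projected to (u, v) log-chrominance space and binned with
    wrap-around indexing, so chrominances that fall off one edge of the
    histogram re-enter on the opposite edge.
    """
    eps = 1e-6
    r, g, b = rgb[..., 0] + eps, rgb[..., 1] + eps, rgb[..., 2] + eps
    u = np.log(g / r)
    v = np.log(g / b)
    ui = np.floor(u / bin_size).astype(int) % n_bins  # wrap onto the torus
    vi = np.floor(v / bin_size).astype(int) % n_bins
    hist = np.zeros((n_bins, n_bins))
    np.add.at(hist, (ui.ravel(), vi.ravel()), 1.0)
    return hist / hist.sum()
```

A perfectly grey image has u = v = 0 everywhere, so all of its mass lands in a single bin; convolving such a histogram with a filter and fitting a toroidal distribution, as the abstract describes, then amounts to locating that mode.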
-
Publication Number: US12136203B2
Publication Date: 2024-11-05
Application Number: US18236583
Application Date: 2023-08-22
Applicant: Google LLC
Inventor: Tiancheng Sun , Yun-Ta Tsai , Jonathan Barron
Abstract: Apparatus and methods related to applying lighting models to images of objects are provided. A neural network can be trained to apply a lighting model to an input image. The training of the neural network can utilize confidence learning that is based on light predictions and prediction confidence values associated with lighting of the input image. A computing device can receive an input image of an object and data about a particular lighting model to be applied to the input image. The computing device can determine an output image of the object by using the trained neural network to apply the particular lighting model to the input image of the object.
-
Publication Number: US11113832B2
Publication Date: 2021-09-07
Application Number: US16759808
Application Date: 2017-11-03
Applicant: Google LLC
Inventor: Neal Wadhwa , Jonathan Barron , Rahul Garg , Pratul Srinivasan
Abstract: Example embodiments allow for training of artificial neural networks (ANNs) to generate depth maps based on images. The ANNs are trained based on a plurality of sets of images, where each set of images represents a single scene and the images in such a set of images differ with respect to image aperture and/or focal distance. An untrained ANN generates a depth map based on one or more images in a set of images. This depth map is used to generate, using the image(s) in the set, a predicted image that corresponds, with respect to image aperture and/or focal distance, to one of the images in the set. Differences between the predicted image and the corresponding image are used to update the ANN. ANNs trained in this manner are especially suited for generating depth maps used to perform simulated image blur on small-aperture images.
-
Publication Number: US11039122B2
Publication Date: 2021-06-15
Application Number: US16120666
Application Date: 2018-09-04
Applicant: Google LLC
Inventor: Tianfan Xue , Jian Wang , Jiawen Chen , Jonathan Barron
IPC: H04N13/25 , H04N13/254 , H04N5/235
Abstract: Scenes can be imaged under low-light conditions using flash photography. However, the flash can be irritating to individuals being photographed, especially when those individuals' eyes have adapted to the dark. Additionally, portions of images generated using a flash can appear washed-out or otherwise negatively affected by the flash. These issues can be addressed by using a flash at an invisible wavelength, e.g., an infrared and/or ultraviolet flash. At the same time a scene is being imaged, at the invisible wavelength of the invisible flash, the scene can also be imaged at visible wavelengths. This can include simultaneously using both a standard RGB camera and a modified visible-plus-invisible-wavelengths camera (e.g., an “IR-G-UV” camera). The visible and invisible image data can then be combined to generate an improved visible-light image of the scene, e.g., that approximates a visible light image of the scene, had the scene been illuminated during daytime light conditions.
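The fusion idea in this abstract can be sketched with a toy combiner: keep the color ratios from the dark, noisy visible-light frame, but take per-pixel brightness from the clean infrared-flash frame. Real systems in this family learn the fusion; this numpy function, whose name and form are assumptions, only shows the principle of combining the two captures.

```python
import numpy as np

def fuse_flash(noisy_rgb, ir, eps=1e-6):
    """Toy visible/invisible-flash fusion sketch (hypothetical).

    The visible frame contributes chroma (per-pixel color ratios); the
    infrared-flash frame contributes luminance, which is clean because
    the invisible flash lit the scene without disturbing anyone.
    """
    luma = noisy_rgb.mean(axis=-1, keepdims=True)
    chroma = noisy_rgb / (luma + eps)      # color ratios, luma removed
    return np.clip(chroma * ir[..., None], 0.0, 1.0)
```

When the infrared frame happens to match the visible frame's luminance, the fusion reproduces the visible frame, which is the identity behavior one would want from such a combiner.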
-
Publication Number: US10949958B2
Publication Date: 2021-03-16
Application Number: US16342911
Application Date: 2017-11-14
Applicant: Google LLC
Inventor: Jonathan Barron , Yun-Ta Tsai
Abstract: Methods for white-balancing images are provided. These methods include determining, for an input image, a chrominance histogram for the pixels of the input image. The determined histogram is a toroidal chrominance histogram, with an underlying toroidal chrominance space that corresponds to a wrapped version of a standard flat chrominance space. The toroidal chrominance histogram is then convolved with a filter to generate a two-dimensional heat map that is then used to determine an estimated chrominance of illumination present in the input image. This can include fitting a bivariate von Mises distribution, or some other circular and/or toroidal probability distribution, to the determined two-dimensional heat map. These methods for estimating illumination chrominance values for input images have reduced computational costs and increased speed relative to other methods for determining image illuminant chrominance values.
-
Publication Number: US20190188535A1
Publication Date: 2019-06-20
Application Number: US15843345
Application Date: 2017-12-15
Applicant: Google LLC
Inventor: Jiawen Chen , Samuel Hasinoff , Michael Gharbi , Jonathan Barron
CPC classification number: G06K9/6262 , G06K9/66 , G06T3/0006 , G06T3/4046 , G06T5/00 , G06T5/001 , G06T2207/20081 , G06T2207/20084 , H04N5/23293
Abstract: Systems and methods described herein may relate to image transformation utilizing a plurality of deep neural networks. An example method includes receiving, at a mobile device, a plurality of image processing parameters. The method also includes causing an image sensor of the mobile device to capture an initial image and receiving, at a coefficient prediction neural network at the mobile device, an input image based on the initial image. The method further includes determining, using the coefficient prediction neural network, an image transformation model based on the input image and at least a portion of the plurality of image processing parameters. The method additionally includes receiving, at a rendering neural network at the mobile device, the initial image and the image transformation model. Yet further, the method includes generating, by the rendering neural network, a rendered image based on the initial image, according to the image transformation model.
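The two-stage split in this abstract, a coefficient-prediction step followed by a rendering step that applies the predicted transform at full resolution, can be sketched with simple stand-ins. Here the "coefficient prediction" is just a diagonal gain matrix that grey-balances a thumbnail, and the "rendering" applies it per pixel; both are hypothetical placeholders for the patent's learned networks.

```python
import numpy as np

def predict_coefficients(thumb):
    """Stand-in for the coefficient-prediction network: derive a 3x3
    diagonal gain matrix that grey-balances the (low-res) input. The
    patent's network predicts richer, learned transforms."""
    means = thumb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / (means + 1e-6)
    return np.diag(gains)

def render(full_res, coeffs):
    """Stand-in for the rendering network: apply the predicted color
    transform to every full-resolution pixel."""
    return np.clip(full_res @ coeffs.T, 0.0, 1.0)
```

The design point the sketch preserves is that the expensive analysis runs on a small input while the cheap per-pixel transform runs at full resolution, which is what makes this architecture attractive on mobile devices.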
-
Publication Number: US11490070B2
Publication Date: 2022-11-01
Application Number: US17322216
Application Date: 2021-05-17
Applicant: Google LLC
Inventor: Tianfan Xue , Jian Wang , Jiawen Chen , Jonathan Barron
IPC: H04N13/25 , H04N13/254 , H04N5/235
Abstract: Scenes can be imaged under low-light conditions using flash photography. However, the flash can be irritating to individuals being photographed, especially when those individuals' eyes have adapted to the dark. Additionally, portions of images generated using a flash can appear washed-out or otherwise negatively affected by the flash. These issues can be addressed by using a flash at an invisible wavelength, e.g., an infrared and/or ultraviolet flash. At the same time a scene is being imaged, at the invisible wavelength of the invisible flash, the scene can also be imaged at visible wavelengths. This can include simultaneously using both a standard RGB camera and a modified visible-plus-invisible-wavelengths camera (e.g., an “IR-G-UV” camera). The visible and invisible image data can then be combined to generate an improved visible-light image of the scene, e.g., that approximates a visible light image of the scene, had the scene been illuminated during daytime light conditions.