-
Publication Number: US10609286B2
Publication Date: 2020-03-31
Application Number: US15621444
Filing Date: 2017-06-13
Applicant: Adobe Inc.
IPC: G06T5/00 , H04N5/232 , G06K9/46 , G06T15/50 , G06N3/08 , H04N5/235 , G06K9/00 , G06K9/62 , G06N3/04
Abstract: The present disclosure is directed toward systems and methods for predicting lighting conditions. In particular, the systems and methods described herein analyze a single low-dynamic range digital image to estimate a set of high-dynamic range lighting conditions associated with that single low-dynamic range digital image. Additionally, the systems and methods described herein train a convolutional neural network to extrapolate lighting conditions from a digital image. The systems and methods also augment low-dynamic range information from the single low-dynamic range digital image by using a sky model algorithm to predict high-dynamic range lighting conditions.
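To make the idea concrete, here is a minimal numpy sketch of how sky-model parameters (stand-ins for values a trained network might regress, such as sun direction and intensity) can be expanded into an HDR environment map whose radiance exceeds the clipped LDR range. The parametrization and constants are illustrative assumptions, not the patented method.

```python
import numpy as np

def hdr_sky_from_parameters(sun_azimuth, sun_elevation, sun_intensity,
                            sky_radiance, width=64, height=32):
    """Build a toy latitude-longitude HDR environment map from sky-model
    parameters (hypothetical stand-ins for a CNN's predictions)."""
    # Direction of each pixel on the sphere (equirectangular layout).
    phi = (np.arange(width) + 0.5) / width * 2 * np.pi - np.pi   # azimuth
    theta = (np.arange(height) + 0.5) / height * np.pi           # polar angle
    phi, theta = np.meshgrid(phi, theta)
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.cos(theta),
                     np.sin(theta) * np.sin(phi)], axis=-1)
    sun = np.array([np.cos(sun_elevation) * np.cos(sun_azimuth),
                    np.sin(sun_elevation),
                    np.cos(sun_elevation) * np.sin(sun_azimuth)])
    # Sharp lobe around the sun plus a diffuse sky term: peak values far
    # above 1.0, i.e. high dynamic range rather than clipped LDR pixels.
    cos_angle = np.clip(dirs @ sun, 0.0, 1.0)
    env = sky_radiance + sun_intensity * cos_angle ** 200
    return env

env = hdr_sky_from_parameters(sun_azimuth=0.5, sun_elevation=1.0,
                              sun_intensity=5000.0, sky_radiance=0.8)
print(env.shape, env.max() > 1.0)
```

The key point the patent exploits is visible here: a handful of sky-model parameters suffice to reconstruct radiance values no single LDR photograph can store.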
-
Publication Number: US20190164312A1
Publication Date: 2019-05-30
Application Number: US15826331
Filing Date: 2017-11-29
Applicant: ADOBE INC.
Inventor: Kalyan K. Sunkavalli , Yannick Hold-Geoffroy , Sunil Hadap , Matthew David Fisher , Jonathan Eisenmann , Emiliano Gambaretto
CPC classification number: G06T7/80 , G06N3/0454 , G06N3/08 , G06T7/97 , G06T2207/20081 , G06T2207/20084
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters. Once trained, the convolutional neural network can receive a new digital image, and based on detected image characteristics thereof, estimate a corresponding set of camera calibration parameters with a calculated level of confidence.
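The training-data idea above can be sketched as follows: sample camera calibration parameters from plausible ranges, extract the corresponding perspective crop from an equirectangular panorama, and keep the (crop, parameters) pair as a labeled example. This is a rough numpy illustration with nearest-neighbour sampling; the parameter ranges and projection details are assumptions, not the patented procedure.

```python
import numpy as np

def extract_view(pano, fov_deg, pitch_deg, roll_deg, out_w=32, out_h=32):
    """Sample one perspective crop from an equirectangular panorama, given
    hypothetical calibration parameters (field of view, pitch, roll)."""
    h, w = pano.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)   # focal length (px)
    x = np.arange(out_w) - out_w / 2 + 0.5
    y = np.arange(out_h) - out_h / 2 + 0.5
    x, y = np.meshgrid(x, y)
    rays = np.stack([x, y, np.full_like(x, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    # Rotate rays by roll (about z) then pitch (about x).
    r, p = np.radians(roll_deg), np.radians(pitch_deg)
    rz = np.array([[np.cos(r), -np.sin(r), 0],
                   [np.sin(r),  np.cos(r), 0],
                   [0, 0, 1]])
    rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])
    rays = rays @ (rx @ rz).T
    # Ray direction -> equirectangular pixel coordinates.
    lon = np.arctan2(rays[..., 0], rays[..., 2])        # [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1, 1))       # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (w - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (h - 1)).astype(int)
    return pano[v, u]

rng = np.random.default_rng(0)
pano = rng.random((64, 128))                 # stand-in panorama
# Each crop is paired with the parameters that produced it -> training label.
fov, pitch, roll = rng.uniform(40, 90), rng.uniform(-20, 20), rng.uniform(-5, 5)
crop = extract_view(pano, fov, pitch, roll)
print(crop.shape)
```

Because the parameters are chosen before the crop is rendered, every extracted image comes with exact ground-truth labels for free, which is what makes a single panorama yield many supervised examples.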
-
Publication Number: US12211225B2
Publication Date: 2025-01-28
Application Number: US17231833
Filing Date: 2021-04-15
Applicant: ADOBE INC.
Inventor: Sai Bi , Zexiang Xu , Kalyan Krishna Sunkavalli , Miloš Hašan , Yannick Hold-Geoffroy , David Jay Kriegman , Ravi Ramamoorthi
Abstract: A scene reconstruction system renders images of a scene with high-quality geometry and appearance and supports view synthesis, relighting, and scene editing. Given a set of input images of a scene, the scene reconstruction system trains a network to learn a volume representation of the scene that includes separate geometry and reflectance parameters. Using the volume representation, the scene reconstruction system can render images of the scene under arbitrary viewing (view synthesis) and lighting (relighting) locations. Additionally, the scene reconstruction system can render images that change the reflectance of objects in the scene (scene editing).
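Rendering a learned volume representation typically reduces to accumulating per-sample density and color along each camera ray. The following is a generic numpy sketch of that quadrature (the standard emission-absorption model), included only to illustrate what "rendering images from a volume representation" means; the patent's actual network and reflectance factorization are not shown.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Accumulate radiance along one ray from per-sample densities and
    colors -- the standard volume-rendering quadrature through which a
    learned volume representation is rendered."""
    alphas = 1.0 - np.exp(-sigmas * deltas)               # opacity per sample
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

sigmas = np.array([0.0, 2.0, 5.0])            # empty space, then denser medium
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
deltas = np.array([0.5, 0.5, 0.5])            # sample spacing along the ray
radiance = composite_ray(sigmas, colors, deltas)
print(radiance)
```

Because every operation here is differentiable, the same formula lets gradients flow from rendered pixels back into the volume's geometry and reflectance parameters during training.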
-
Publication Number: US12147896B2
Publication Date: 2024-11-19
Application Number: US18296525
Filing Date: 2023-04-06
Applicant: Adobe Inc.
Inventor: Long Mai , Yannick Hold-Geoffroy , Naoto Inoue , Daichi Ito , Brian Lynn Price
Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the 2D image to adjust the contrast of the 2D image based on the geometric information in the 2D image. In embodiments, using a trained neural network, an AO map for a 2D image is automatically generated without any predefined 3D scene information. Optimizing the neural network to generate an estimated AO map for a 2D image requires training, testing, and validating the neural network using a synthetic dataset comprised of pairs of images and ground truth AO maps rendered from 3D scenes. By using an estimated AO map to adjust the contrast of a 2D image, the contrast of the image can be adjusted to make the image appear lifelike by modifying the shadows and shading in the image based on the ambient lighting present in the image.
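The final compositing step described above is straightforward once an AO map exists: multiply the image by the occlusion values so crevices darken while open areas stay untouched. A minimal sketch, assuming a single-channel image in [0, 1] and a `strength` blend parameter of my own invention (the patent does not specify this interface):

```python
import numpy as np

def apply_ao(image, ao_map, strength=1.0):
    """Darken a 2D image with an (estimated) ambient-occlusion map.
    ao_map is 1.0 where geometry is fully open and < 1.0 in crevices;
    strength blends between the original and fully occluded result."""
    ao = 1.0 - strength * (1.0 - ao_map)     # attenuate toward the AO map
    return np.clip(image * ao, 0.0, 1.0)

image = np.full((4, 4), 0.8)
ao_map = np.ones((4, 4))
ao_map[1:3, 1:3] = 0.5                       # a concave region the network found
shaded = apply_ao(image, ao_map)
print(shaded[0, 0], shaded[1, 1])            # open area unchanged, crevice darkened
```

In the patented pipeline the interesting part is upstream: the AO map is predicted by a network trained on synthetic (image, ground-truth AO) pairs, so no 3D scene is needed at inference time.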
-
Publication Number: US12008710B2
Publication Date: 2024-06-11
Application Number: US18062460
Filing Date: 2022-12-06
Applicant: Adobe Inc. , Université Laval
Inventor: Kalyan Sunkavalli , Yannick Hold-Geoffroy , Christian Gagne , Marc-Andre Gardner , Jean-Francois Lalonde
CPC classification number: G06T15/506 , G06N3/08 , G06T7/50 , G06T7/60 , G06T7/70 , G06T7/90 , G06T2200/24 , G06T2207/20081 , G06T2207/20084
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
-
Publication Number: US20240144586A1
Publication Date: 2024-05-02
Application Number: US18304179
Filing Date: 2023-04-20
Applicant: Adobe Inc.
Inventor: Yannick Hold-Geoffroy , Vojtech Krs , Radomir Mech , Nathan Carr , Matheus Gadelha
IPC: G06T15/60
CPC classification number: G06T15/60 , G06T2215/12
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilizes three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images. The disclosed systems further use three-dimensional representations of two-dimensional images to customize focal points for the two-dimensional images.
-
Publication Number: US11972534B2
Publication Date: 2024-04-30
Application Number: US17519841
Filing Date: 2021-11-05
Applicant: Adobe Inc.
IPC: G06T19/20 , G06F18/211 , G06F18/22 , G06N3/02 , G06T15/04
CPC classification number: G06T19/20 , G06F18/211 , G06F18/22 , G06N3/02 , G06T15/04 , G06T2219/2004 , G06T2219/2016
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a visual neural network to replace materials in a three-dimensional scene with visually similar materials from a source dataset. Specifically, the disclosed system utilizes the visual neural network to generate source deep visual features representing source texture maps from materials in a plurality of source materials. Additionally, the disclosed system utilizes the visual neural network to generate deep visual features representing texture maps from materials in a digital scene. The disclosed system then determines source texture maps that are visually similar to the texture maps of the digital scene based on visual similarity metrics that compare the source deep visual features and the deep visual features. Additionally, the disclosed system modifies the digital scene by replacing one or more of the texture maps in the digital scene with the visually similar source texture maps.
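The retrieval step at the core of this abstract is a nearest-neighbour search in deep feature space. A minimal sketch using cosine similarity over random stand-in vectors (a real system would use embeddings from the visual neural network, which is not reproduced here):

```python
import numpy as np

def most_similar_source(scene_feature, source_features):
    """Return the index of the source material whose deep visual feature
    is closest (by cosine similarity) to the scene material's feature."""
    a = scene_feature / np.linalg.norm(scene_feature)
    b = source_features / np.linalg.norm(source_features, axis=1,
                                         keepdims=True)
    return int(np.argmax(b @ a))            # highest cosine similarity wins

rng = np.random.default_rng(1)
source_features = rng.normal(size=(5, 8))   # 5 source materials, 8-D features
# A scene material whose feature nearly matches source material 3.
scene_feature = source_features[3] + 0.01 * rng.normal(size=8)
print(most_similar_source(scene_feature, source_features))
```

Once the closest source material is found, its texture maps replace the scene's, which is the modification step the abstract describes.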
-
Publication Number: US20240127402A1
Publication Date: 2024-04-18
Application Number: US18238290
Filing Date: 2023-08-25
Applicant: Adobe Inc.
Inventor: Mohammad Reza Karimi Dastjerdi , Yannick Hold-Geoffroy , Sai Bi , Jonathan Eisenmann , Jean-François Lalonde
CPC classification number: G06T5/50 , G06T5/002 , G06T15/506 , G06T2200/24 , G06T2207/20081 , G06T2207/20084 , G06T2207/20092 , G06T2207/20208
Abstract: In some examples, a computing system accesses a field of view (FOV) image that has a field of view less than 360 degrees and has low dynamic range (LDR) values. The computing system estimates lighting parameters from a scene depicted in the FOV image and generates a lighting image based on the lighting parameters. The computing system further generates lighting features from the lighting image and image features from the FOV image. These features are aggregated into aggregated features, and a machine learning model is applied to the image features and the aggregated features to generate a panorama image having high dynamic range (HDR) values.
-
Publication Number: US20230244940A1
Publication Date: 2023-08-03
Application Number: US18296525
Filing Date: 2023-04-06
Applicant: Adobe Inc.
Inventor: Long Mai , Yannick Hold-Geoffroy , Naoto Inoue , Daichi Ito , Brian Lynn Price
CPC classification number: G06N3/08 , G06T15/80 , G06T15/506 , G06T5/50 , G06T2207/20084 , G06T2207/10028 , G06T2207/20081
Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the 2D image to adjust the contrast of the 2D image based on the geometric information in the 2D image. In embodiments, using a trained neural network, an AO map for a 2D image is automatically generated without any predefined 3D scene information. Optimizing the neural network to generate an estimated AO map for a 2D image requires training, testing, and validating the neural network using a synthetic dataset comprised of pairs of images and ground truth AO maps rendered from 3D scenes. By using an estimated AO map to adjust the contrast of a 2D image, the contrast of the image can be adjusted to make the image appear lifelike by modifying the shadows and shading in the image based on the ambient lighting present in the image.
-
Publication Number: US20230141395A1
Publication Date: 2023-05-11
Application Number: US17519841
Filing Date: 2021-11-05
Applicant: Adobe Inc.
CPC classification number: G06T19/20 , G06K9/6215 , G06K9/6228 , G06N3/02 , G06T15/04 , G06T2219/2004 , G06T2219/2016
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a visual neural network to replace materials in a three-dimensional scene with visually similar materials from a source dataset. Specifically, the disclosed system utilizes the visual neural network to generate source deep visual features representing source texture maps from materials in a plurality of source materials. Additionally, the disclosed system utilizes the visual neural network to generate deep visual features representing texture maps from materials in a digital scene. The disclosed system then determines source texture maps that are visually similar to the texture maps of the digital scene based on visual similarity metrics that compare the source deep visual features and the deep visual features. Additionally, the disclosed system modifies the digital scene by replacing one or more of the texture maps in the digital scene with the visually similar source texture maps.
-