-
Publication No.: US10254913B2
Publication Date: 2019-04-09
Application No.: US15378218
Filing Date: 2016-12-14
Applicant: Adobe Inc.
Inventor: JaeGwang Lim , Byungmoon Kim , Sunil Hadap
IPC: G06F3/048 , G06F3/0481 , G06T7/13 , G06F3/0484 , G06F3/01
Abstract: Techniques are disclosed for selecting a targeted portion of a digital image. In one embodiment, a selection cursor having central and peripheral regions is provided. The central region is used to force a selection or a deselection, and therefore moving the central region over a portion of the image causes that portion of the image to be selected or deselected, respectively. The peripheral region of the cursor surrounds the central region and defines an area where a localized level set algorithm for boundary detection is performed. This provides more accurate boundary detection within the narrowly-focused peripheral region and eliminates the need to apply the level set algorithm across the entire image. Thus moving the peripheral region of the selection cursor over a boundary of the targeted portion of the image applies the level set algorithm in that boundary region and increases the likelihood that the boundary will be detected accurately.
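As a rough illustration of the two-region cursor described in this abstract, the sketch below force-selects pixels under a central disc and refines the selection only in the surrounding annulus. A simple intensity-similarity test stands in for the localized level-set evolution (it is not the patented boundary detection), and `tol` and the region radii are illustrative parameters, not values from the patent:

```python
import numpy as np

def apply_cursor(image, mask, cx, cy, r_in, r_out, tol=0.1):
    """Two-region selection cursor (illustrative sketch).

    Central disc (radius r_in): force-selects pixels.
    Peripheral annulus (r_in..r_out): locally refines the selection,
    here with an intensity-similarity test standing in for the
    localized level-set step; re-evaluated each time the cursor moves.
    """
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    d2 = (xx - cx) ** 2 + (yy - cy) ** 2
    central = d2 <= r_in ** 2
    annulus = (d2 > r_in ** 2) & (d2 <= r_out ** 2)

    mask = mask.copy()
    mask[central] = True               # central region forces selection
    ref = image[central].mean()        # local appearance reference
    # snap the boundary inside the annulus only (overwrites prior state there)
    mask[annulus] = np.abs(image[annulus] - ref) <= tol
    return mask
```

Because the refinement runs only inside the annulus, the cost per cursor move is bounded by the cursor size rather than the image size, mirroring the abstract's point about avoiding a whole-image level-set pass.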
-
Publication No.: US11158117B2
Publication Date: 2021-10-26
Application No.: US16877227
Filing Date: 2020-05-18
Applicant: ADOBE INC.
Inventor: Kalyan Sunkavalli , Sunil Hadap , Nathan Carr , Mathieu Garon
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene. As requests to render a virtual object come in real (or near real) time, the system can quickly generate different location-specific-lighting parameters that accurately reflect lighting conditions at different positions within a digital scene in response to render requests.
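The abstract does not fix a particular lighting parameterization; a common compact choice for location-specific lighting is a set of second-order spherical-harmonics (SH) coefficients. As an illustration (not the patented network), the sketch below evaluates the standard nine-coefficient SH irradiance formula for a surface normal, which is one way such parameters could be used to shade a virtual object at the designated position:

```python
import numpy as np

# Standard second-order SH irradiance constants
C1, C2, C3, C4, C5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def sh_irradiance(L, normal):
    """Irradiance at a surface normal from 9 SH lighting coefficients
    L = [L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22]."""
    x, y, z = np.asarray(normal, float) / np.linalg.norm(normal)
    return (C4 * L[0]
            + 2.0 * C2 * (L[3] * x + L[1] * y + L[2] * z)
            + C3 * L[6] * z * z - C5 * L[6]
            + C1 * L[8] * (x * x - y * y)
            + 2.0 * C1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z))
```

Under this parameterization, generating different coefficient vectors for different positions is exactly what "spatially varying" lighting means: the same normal shades differently depending on where in the scene the object is placed.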
-
Publication No.: US10964060B2
Publication Date: 2021-03-30
Application No.: US16675641
Filing Date: 2019-11-06
Applicant: ADOBE INC.
Inventor: Kalyan K. Sunkavalli , Yannick Hold-Geoffroy , Sunil Hadap , Matthew David Fisher , Jonathan Eisenmann , Emiliano Gambaretto
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters. Once trained, the convolutional neural network can receive a new digital image, and based on detected image characteristics thereof, estimate a corresponding set of camera calibration parameters with a calculated level of confidence.
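The training-data generation step described above can be sketched concretely: given an equirectangular panorama, sample camera parameters (field of view, pitch, roll) and re-project the panorama into a perspective crop, so each crop's ground-truth calibration is known by construction. This is a minimal nearest-neighbour version under assumed conventions (y-down image axes, z forward), not the patent's implementation:

```python
import numpy as np

def extract_crop(pano, fov_deg, pitch_deg, roll_deg, out_h, out_w):
    """Sample a perspective view with known camera parameters from an
    equirectangular panorama (nearest-neighbour lookup)."""
    ph, pw = pano.shape[:2]
    f = 0.5 * out_w / np.tan(0.5 * np.radians(fov_deg))  # focal length, px

    # ray through each output pixel, in camera coordinates
    xs = np.arange(out_w) - 0.5 * out_w
    ys = np.arange(out_h) - 0.5 * out_h
    u, v = np.meshgrid(xs, ys)
    dirs = np.stack([u, v, np.full_like(u, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # rotate rays by roll (about z) then pitch (about x)
    r, p = np.radians(roll_deg), np.radians(pitch_deg)
    Rz = np.array([[np.cos(r), -np.sin(r), 0],
                   [np.sin(r),  np.cos(r), 0],
                   [0,          0,         1]])
    Rx = np.array([[1, 0,          0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])
    dirs = dirs @ (Rx @ Rz).T

    # ray direction -> longitude/latitude -> panorama pixel
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])       # -pi..pi
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))  # -pi/2..pi/2
    px = ((lon / (2 * np.pi) + 0.5) * (pw - 1)).astype(int)
    py = ((lat / np.pi + 0.5) * (ph - 1)).astype(int)
    return pano[py, px]
```

Sampling `fov_deg`, `pitch_deg`, and `roll_deg` from a range of plausible values and recording them alongside each crop yields exactly the kind of labeled (image, calibration) pairs the abstract describes.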
-
Publication No.: US10692277B1
Publication Date: 2020-06-23
Application No.: US16360901
Filing Date: 2019-03-21
Applicant: Adobe Inc.
Inventor: Kalyan Sunkavalli , Sunil Hadap , Nathan Carr , Mathieu Garon
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene. As requests to render a virtual object come in real (or near real) time, the system can quickly generate different location-specific-lighting parameters that accurately reflect lighting conditions at different positions within a digital scene in response to render requests.
-
Publication No.: US20200151509A1
Publication Date: 2020-05-14
Application No.: US16188130
Filing Date: 2018-11-12
Applicant: ADOBE INC.
Inventor: Kalyan K. Sunkavalli , Sunil Hadap , Jonathan Eisenmann , Jinsong Zhang , Emiliano Gambaretto
Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate lighting parameters for input images, where the input images are synthetic and real low-dynamic range images. Such a neural network system can be trained using differences between a simple scene rendered using the estimated lighting parameters and the same simple scene rendered using known ground-truth lighting parameters. Such a neural network system can also be trained such that the synthetic and real low-dynamic range images are mapped into roughly the same distribution. Such a trained neural network system can receive an input low-dynamic range image and determine corresponding high-dynamic range lighting parameters.
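The render-based training signal described above can be sketched with a toy stand-in: here a Lambertian sphere lit by a single directional light plays the role of the "simple scene", and the loss is the mean squared difference between the render under the estimated lighting and the render under the ground truth. Both the scene and the lighting parameterization are assumptions for illustration, not the patent's choices:

```python
import numpy as np

def render_sphere(light_dir, n=32):
    """Lambertian shading of a unit sphere under one directional light,
    standing in for the 'simple scene' used by the render loss."""
    light_dir = np.asarray(light_dir, float)
    light_dir = light_dir / np.linalg.norm(light_dir)
    ys, xs = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    zz = 1.0 - xs ** 2 - ys ** 2
    inside = zz > 0                       # pixels covered by the sphere
    normals = np.stack([xs, ys, np.sqrt(np.where(inside, zz, 0.0))], axis=-1)
    shade = np.clip(normals @ light_dir, 0.0, None)   # n . l, clamped
    return np.where(inside, shade, 0.0)

def render_loss(est_light, gt_light):
    """Penalize the lighting estimate by how differently it lights the scene."""
    return float(np.mean((render_sphere(est_light) - render_sphere(gt_light)) ** 2))
```

The appeal of such a loss, as the abstract suggests, is that it compares lighting estimates by their visual effect rather than by raw parameter distance.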
-