-
Publication Number: US10950037B2
Publication Date: 2021-03-16
Application Number: US16510586
Filing Date: 2019-07-12
Applicant: ADOBE INC.
Inventor: Kalyan K. Sunkavalli , Zexiang Xu , Sunil Hadap
Abstract: Embodiments are generally directed to generating novel images of an object having a novel viewpoint and a novel lighting direction based on sparse images of the object. A neural network is trained with training images rendered from a 3D model. Utilizing the 3D model, training images, ground truth predictive images from particular viewpoint(s), and ground truth predictive depth maps of those images can be easily generated and fed back through the neural network for training. Once trained, the neural network can receive a sparse plurality of images of an object, a novel viewpoint, and a novel lighting direction. The neural network can generate a plane sweep volume based on the sparse plurality of images, and calculate depth probabilities for each pixel in the plane sweep volume. A predictive output image of the object, having the novel viewpoint and novel lighting direction, can be generated and output.
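A minimal sketch of how a plane sweep volume of the kind mentioned in this abstract might be assembled from calibrated sparse views. The camera conventions, depth sampling, and helper names are illustrative assumptions, not the patented implementation.

```python
import numpy as np
import cv2  # used only for the homography warp


def plane_sweep_volume(images, K_src, R_src, t_src, K_novel, depths, out_hw):
    """Warp each source image onto fronto-parallel depth planes of the novel view.

    images  : list of HxWx3 arrays (sparse source views)
    K_src, R_src, t_src : per-view intrinsics / rotation / translation,
                          expressed relative to the novel viewpoint
    K_novel : intrinsics of the novel viewpoint
    depths  : candidate depth values for the sweep
    out_hw  : (height, width) of the novel view
    Returns an array of shape (num_depths, num_views, H, W, 3).
    """
    h, w = out_hw
    n = np.array([[0.0, 0.0, 1.0]])  # fronto-parallel plane normal
    volume = np.zeros((len(depths), len(images), h, w, 3), dtype=np.float32)
    for di, d in enumerate(depths):
        for vi, img in enumerate(images):
            # Homography induced by the plane at depth d (one common convention);
            # it maps novel-view pixels to source-view pixels.
            H = K_src[vi] @ (R_src[vi] - t_src[vi].reshape(3, 1) @ n / d) @ np.linalg.inv(K_novel)
            volume[di, vi] = cv2.warpPerspective(
                img.astype(np.float32), H, (w, h),
                flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP,
            )
    return volume
```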
-
Publication Number: US20210073955A1
Publication Date: 2021-03-11
Application Number: US16564398
Filing Date: 2019-09-09
Applicant: ADOBE INC.
Inventor: Jinsong Zhang , Kalyan K. Sunkavalli , Yannick Hold-Geoffroy , Sunil Hadap , Jonathan Eisenmann , Jean-Francois Lalonde
IPC: G06T5/00
Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate high-dynamic range lighting parameters for input low-dynamic range images. The high-dynamic range lighting parameters can be based on sky color, sky turbidity, sun color, sun shape, and sun position. Such input low-dynamic range images can be low-dynamic range panorama images or low-dynamic range standard images. Such a neural network system can apply the estimated high-dynamic range lighting parameters to objects added to the low-dynamic range images.
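To make the parameterization concrete, here is a toy sketch of a sun-and-sky lighting representation with the parameters the abstract lists (sky color, turbidity, sun color, sun shape, sun position), baked into a lat-long HDR environment map. The dataclass fields, the exponential sun lobe, and the crude turbidity term are assumptions for demonstration, not the model the patented network actually regresses.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class SkyLightingParams:
    sky_color: np.ndarray   # (3,) mean sky RGB radiance
    turbidity: float        # haziness of the sky
    sun_color: np.ndarray   # (3,) HDR sun RGB radiance
    sun_shape: float        # angular sharpness of the sun lobe
    sun_dir: np.ndarray     # (3,) unit vector toward the sun


def render_env_map(params, height=64, width=128):
    """Bake the parameters into a simple lat-long HDR environment map."""
    v, u = np.meshgrid(np.linspace(0, np.pi, height),
                       np.linspace(-np.pi, np.pi, width), indexing="ij")
    dirs = np.stack([np.sin(v) * np.cos(u), np.cos(v), np.sin(v) * np.sin(u)], axis=-1)
    cos_angle = np.clip(dirs @ params.sun_dir, 0.0, 1.0)
    sun_lobe = np.exp(params.sun_shape * (cos_angle - 1.0))[..., None]
    # Crude zenith-to-horizon gradient modulated by turbidity; purely illustrative.
    sky = params.sky_color * (1.0 + 0.1 * params.turbidity * np.cos(v))[..., None]
    return sky + params.sun_color * sun_lobe
```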
-
Publication Number: US20210012561A1
Publication Date: 2021-01-14
Application Number: US16510586
Filing Date: 2019-07-12
Applicant: ADOBE INC.
Inventor: Kalyan K. Sunkavalli , Zexiang Xu , Sunil Hadap
Abstract: Embodiments are generally directed to generating novel images of an object having a novel viewpoint and a novel lighting direction based on sparse images of the object. A neural network is trained with training images rendered from a 3D model. Utilizing the 3D model, training images, ground truth predictive images from particular viewpoint(s), and ground truth predictive depth maps of those images can be easily generated and fed back through the neural network for training. Once trained, the neural network can receive a sparse plurality of images of an object, a novel viewpoint, and a novel lighting direction. The neural network can generate a plane sweep volume based on the sparse plurality of images, and calculate depth probabilities for each pixel in the plane sweep volume. A predictive output image of the object, having the novel viewpoint and novel lighting direction, can be generated and output.
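Complementing the volume-construction sketch above, the following sketch shows one way per-pixel depth probabilities over a plane sweep volume could be turned into a predicted image. The softmax over depth scores and the simple per-depth average over views are illustrative assumptions.

```python
import numpy as np


def expected_image(psv, depth_scores):
    """psv: (D, V, H, W, 3) plane sweep volume; depth_scores: (D, H, W) raw scores."""
    # Per-pixel depth probabilities via a softmax over the depth dimension.
    scores = depth_scores - depth_scores.max(axis=0, keepdims=True)
    probs = np.exp(scores)
    probs /= probs.sum(axis=0, keepdims=True)          # (D, H, W)
    per_depth = psv.mean(axis=1)                       # (D, H, W, 3) average over views
    return (probs[..., None] * per_depth).sum(axis=0)  # (H, W, 3) expected image
```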
-
Publication Number: US10818022B2
Publication Date: 2020-10-27
Application Number: US16229759
Filing Date: 2018-12-21
Applicant: ADOBE INC.
Inventor: Kalyan Krishna Sunkavalli , Sunil Hadap , Joon-Young Lee , Zhuo Hui
Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
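A hedged sketch of the reference-material step: compare observed reflectance against a material library, keep the closest references, and fit a non-negative blend of them. The distance metric and the least-squares blend are illustrative assumptions, not the patented material model.

```python
import numpy as np


def select_reference_materials(observed, library, k=3):
    """observed: (N,) reflectance samples; library: (M, N) per-material samples."""
    dists = np.linalg.norm(library - observed[None, :], axis=1)
    return np.argsort(dists)[:k]  # indices of the k closest reference materials


def fit_material_blend(observed, library, ref_idx):
    """Least-squares blend weights over the selected reference materials."""
    basis = library[ref_idx]                 # (k, N)
    weights, *_ = np.linalg.lstsq(basis.T, observed, rcond=None)
    weights = np.clip(weights, 0.0, None)    # keep the blend physically plausible
    return weights / max(weights.sum(), 1e-8)
```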
-
Publication Number: US10565758B2
Publication Date: 2020-02-18
Application Number: US15622711
Filing Date: 2017-06-14
Applicant: Adobe Inc.
Inventor: Sunil Hadap , Elya Shechtman , Zhixin Shu , Kalyan Sunkavalli , Mehmet Yumer
Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance, by disentanglement of a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age—aspects that cannot be represented using previous models.
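A toy sketch of the editing idea only: encode a face into a latent code and traverse a learned attribute direction before decoding. The encoder/decoder shapes, the flat latent space, and the edit direction are placeholders, not the patented disentangled architecture.

```python
import torch
import torch.nn as nn


class FaceAutoencoder(nn.Module):
    def __init__(self, image_dim=64 * 64 * 3, z_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(image_dim, 512), nn.ReLU(),
                                     nn.Linear(512, z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim, 512), nn.ReLU(),
                                     nn.Linear(512, image_dim))

    def edit(self, image, direction, strength):
        """Move the latent code along an attribute direction (e.g. lighting, age)."""
        z = self.encoder(image.flatten(1))
        z_edited = z + strength * direction   # traverse the learned manifold
        return self.decoder(z_edited).view_as(image)
```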
-
Publication Number: US20190340810A1
Publication Date: 2019-11-07
Application Number: US15970367
Filing Date: 2018-05-03
Applicant: Adobe Inc.
Inventor: Kalyan Sunkavalli , Zexiang Xu , Sunil Hadap
Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
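One simple way to combine per-direction relit images into an image under a composite lighting configuration, exploiting the linearity of light transport, is a weighted sum. The weights and the clipping below are illustrative; the abstract only states that images for different target directions are combined.

```python
import numpy as np


def combine_relit_images(relit_images, weights):
    """relit_images: (L, H, W, 3) images, one per target lighting direction.
    weights: (L,) non-negative contribution of each direction in the configuration."""
    weights = np.asarray(weights, dtype=np.float32)
    combined = np.tensordot(weights, relit_images.astype(np.float32), axes=1)  # (H, W, 3)
    return np.clip(combined, 0.0, 1.0)
```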
-
Publication Number: US11682126B2
Publication Date: 2023-06-20
Application Number: US17080812
Filing Date: 2020-10-26
Applicant: ADOBE INC.
Inventor: Kalyan Krishna Sunkavalli , Sunil Hadap , Joon-Young Lee , Zhuo Hui
CPC classification number: G06T7/49 , G01N21/55 , G06T3/0068 , G06T7/40 , G06T7/60 , G06T7/90 , G06T17/00 , G06T2200/08 , G06T2207/10016 , G06T2207/10152
Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
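As a companion to the material-selection sketch above, here is one standard way the "calibrate to align corresponding pixels" step could be realized, using ordinary feature matching and a RANSAC homography. The patented calibration may differ; treat this as a generic alignment recipe.

```python
import cv2
import numpy as np


def align_to_reference(reference, image):
    """Warp `image` onto `reference` using matched ORB features and a RANSAC homography."""
    gray_ref = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    gray_img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000)
    kp_ref, des_ref = orb.detectAndCompute(gray_ref, None)
    kp_img, des_img = orb.detectAndCompute(gray_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_img, des_ref), key=lambda m: m.distance)[:200]
    src = np.float32([kp_img[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))
```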
-
Publication Number: US10957026B1
Publication Date: 2021-03-23
Application Number: US16564398
Filing Date: 2019-09-09
Applicant: ADOBE INC.
Inventor: Jinsong Zhang , Kalyan K. Sunkavalli , Yannick Hold-Geoffroy , Sunil Hadap , Jonathan Eisenmann , Jean-Francois Lalonde
Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate high-dynamic range lighting parameters for input low-dynamic range images. The high-dynamic range lighting parameters can be based on sky color, sky turbidity, sun color, sun shape, and sun position. Such input low-dynamic range images can be low-dynamic range panorama images or low-dynamic range standard images. Such a neural network system can apply the estimated high-dynamic range lighting parameters to objects added to the low-dynamic range images.
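To illustrate the last sentence, the sketch below applies estimated sun/sky parameters to an inserted object with simple Lambertian shading: a directional sun term plus an ambient sky term. The shading model is an assumption for demonstration; the patent does not prescribe it.

```python
import numpy as np


def shade_object(normals, albedo, sun_dir, sun_color, sky_color):
    """normals, albedo: (H, W, 3); sun_dir: (3,) unit vector; colors: (3,) RGB."""
    n_dot_l = np.clip(normals @ sun_dir, 0.0, None)[..., None]  # (H, W, 1)
    direct = albedo * sun_color * n_dot_l                       # directional sun term
    ambient = albedo * sky_color                                # crude ambient sky term
    return np.clip(direct + ambient, 0.0, None)
```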
-
Publication Number: US10665011B1
Publication Date: 2020-05-26
Application Number: US16428482
Filing Date: 2019-05-31
Applicant: Adobe Inc. , Université Laval
Inventor: Kalyan Sunkavalli , Sunil Hadap , Nathan Carr , Jean-Francois Lalonde , Mathieu Garon
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to render a virtual object in a digital scene by analyzing both global and local features of the digital scene and generating location-specific-lighting parameters for a designated position within the digital scene. For example, the disclosed systems extract and combine such global and local features from a digital scene using global network layers and local network layers of the local-lighting-estimation-neural network. In certain implementations, the disclosed systems can generate location-specific-lighting parameters using a neural-network architecture that combines global and local feature vectors to spatially vary lighting for different positions within a digital scene.
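A toy sketch of the global/local split described in the abstract: one branch encodes the whole scene, another encodes a crop around the designated position, and the concatenated feature vectors are regressed to lighting parameters. Layer sizes and the parameter count are placeholder assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn


class LocalLightingEstimator(nn.Module):
    def __init__(self, num_lighting_params=36):
        super().__init__()

        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        self.global_net = encoder()  # sees the full digital scene
        self.local_net = encoder()   # sees a crop around the designated position
        self.head = nn.Sequential(nn.Linear(128, 128), nn.ReLU(),
                                  nn.Linear(128, num_lighting_params))

    def forward(self, scene, local_patch):
        feats = torch.cat([self.global_net(scene), self.local_net(local_patch)], dim=1)
        return self.head(feats)      # location-specific lighting parameters
```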
-
Publication Number: US10521892B2
Publication Date: 2019-12-31
Application Number: US15253655
Filing Date: 2016-08-31
Applicant: ADOBE INC.
Inventor: Kalyan K. Sunkavalli , Sunil Hadap , Elya Shechtman , Zhixin Shu
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed at relighting a target image based on a lighting effect from a reference image. In one embodiment, a target image and a reference image are received, where the reference image includes a lighting effect desired to be applied to the target image. A lighting transfer is performed using color data and geometrical data associated with the reference image and color data and geometrical data associated with the target image. The lighting transfer causes generation of a relit image that corresponds with the target image having a lighting effect of the reference image. The relit image is provided for display to a user via one or more output devices. Other embodiments may be described and/or claimed.
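A deliberately simplified stand-in for the lighting transfer step: match the luminance statistics of the target to those of the reference in Lab space. The patented method also uses geometrical data, which is omitted here, so treat this purely as a toy illustration of color-based lighting transfer.

```python
import cv2
import numpy as np


def transfer_lighting_stats(target_bgr, reference_bgr):
    """Shift and scale the target's L channel to match the reference mean/std (8-bit images)."""
    tgt = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    L_t, L_r = tgt[..., 0], ref[..., 0]
    L_new = (L_t - L_t.mean()) / (L_t.std() + 1e-6) * L_r.std() + L_r.mean()
    tgt[..., 0] = np.clip(L_new, 0, 255)
    return cv2.cvtColor(tgt.astype(np.uint8), cv2.COLOR_LAB2BGR)
```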