Deep novel view and lighting synthesis from sparse images

    Publication Number: US10950037B2

    Publication Date: 2021-03-16

    Application Number: US16510586

    Application Date: 2019-07-12

    Applicant: ADOBE INC.

    Abstract: Embodiments are generally directed to generating novel images of an object having a novel viewpoint and a novel lighting direction based on sparse images of the object. A neural network is trained with training images rendered from a 3D model. Utilizing the 3D model, training images, ground truth predictive images from particular viewpoint(s), and ground truth predictive depth maps of those images can be easily generated and fed back through the neural network for training. Once trained, the neural network can receive a sparse plurality of images of an object, a novel viewpoint, and a novel lighting direction. The neural network can generate a plane sweep volume based on the sparse plurality of images, and calculate depth probabilities for each pixel in the plane sweep volume. A predictive output image of the object, having the novel viewpoint and novel lighting direction, can be generated and output.

    DEEP NOVEL VIEW AND LIGHTING SYNTHESIS FROM SPARSE IMAGES

    Publication Number: US20210012561A1

    Publication Date: 2021-01-14

    Application Number: US16510586

    Application Date: 2019-07-12

    Applicant: ADOBE INC.

    Abstract: Embodiments are generally directed to generating novel images of an object having a novel viewpoint and a novel lighting direction based on sparse images of the object. A neural network is trained with training images rendered from a 3D model. Utilizing the 3D model, training images, ground truth predictive images from particular viewpoint(s), and ground truth predictive depth maps of those images can be easily generated and fed back through the neural network for training. Once trained, the neural network can receive a sparse plurality of images of an object, a novel viewpoint, and a novel lighting direction. The neural network can generate a plane sweep volume based on the sparse plurality of images, and calculate depth probabilities for each pixel in the plane sweep volume. A predictive output image of the object, having the novel viewpoint and novel lighting direction, can be generated and output.
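    The plane-sweep step described in both publications above can be sketched in a few lines of NumPy. This is a toy illustration, not the patented network: the homography warp into the novel view is replaced by a horizontal shift proportional to baseline/depth, and the learned per-pixel depth-probability estimate is replaced by a softmax over cross-view variance. All function names are illustrative.

    ```python
    import numpy as np

    def build_plane_sweep_volume(views, depths, baselines):
        """Warp each input view toward the novel view at a set of candidate
        depth planes (toy warp: horizontal shift = baseline / depth).
        Returns an array of shape (n_depths, n_views, H, W)."""
        h, w = views[0].shape
        volume = np.zeros((len(depths), len(views), h, w))
        for d_i, depth in enumerate(depths):
            for v_i, (img, b) in enumerate(zip(views, baselines)):
                shift = int(round(b / depth))
                volume[d_i, v_i] = np.roll(img, -shift, axis=1)
        return volume

    def depth_probabilities(volume, beta=50.0):
        """Per-pixel softmax over negative cross-view variance: depth planes
        where the warped views agree get high probability (the patent learns
        this mapping; variance is a hand-crafted stand-in)."""
        cost = volume.var(axis=1)                      # (n_depths, H, W)
        logits = -beta * cost
        logits -= logits.max(axis=0, keepdims=True)    # numerical stability
        p = np.exp(logits)
        return p / p.sum(axis=0, keepdims=True)

    def render_novel_view(volume, probs):
        """Probability-weighted blend of the warped views across depth planes,
        averaged over input views -- a crude stand-in for the network's
        predictive output image."""
        return (probs[:, None] * volume).sum(axis=0).mean(axis=0)
    ```

    For a synthetic scene whose true depth lies on one of the candidate planes, the cross-view variance vanishes on that plane, so the softmax concentrates its probability mass there.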

    Material capture using imaging
    Invention Grant

    Publication Number: US10818022B2

    Publication Date: 2020-10-27

    Application Number: US16229759

    Application Date: 2018-12-21

    Applicant: ADOBE INC.

    Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
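    The reference-material selection and model-fitting steps can be sketched as follows, assuming the calibrated reflectance of a material is summarized as a per-material feature vector (the calibration and pixel-alignment stages are omitted, and all names are illustrative, not the patent's actual model).

    ```python
    import numpy as np

    def select_reference_materials(observed, library, k=2):
        """Pick the k library materials whose reflectance feature vectors
        are closest (L2 distance) to the observed response."""
        dists = np.linalg.norm(library - observed, axis=1)
        return np.argsort(dists)[:k]

    def fit_material_model(observed, library, k=2):
        """Model the observed surface as a least-squares mixture of the
        selected reference materials (weights clipped non-negative and
        renormalised to sum to one)."""
        idx = select_reference_materials(observed, library, k)
        basis = library[idx]                             # (k, n_measurements)
        weights, *_ = np.linalg.lstsq(basis.T, observed, rcond=None)
        weights = np.clip(weights, 0.0, None)
        total = weights.sum()
        if total > 0:
            weights = weights / total
        return idx, weights
    ```

    When the observed material really is a convex mixture of two library entries, the fit recovers both the entries and the mixing weights exactly.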

    Neural face editing with intrinsic image disentangling

    Publication Number: US10565758B2

    Publication Date: 2020-02-18

    Application Number: US15622711

    Application Date: 2017-06-14

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance by disentangling a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations, including changes to viewpoint, lighting, expression, and even higher-level attributes like facial hair and age, aspects that cannot be represented using previous models.
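    The intrinsic model underlying this disentanglement is the classic Lambertian decomposition, image = albedo x shading. The patent's network learns to separate these layers; the sketch below only illustrates the model itself and why editing one factor (lighting) leaves the other (albedo, i.e. identity and texture) untouched. Function names are illustrative.

    ```python
    import numpy as np

    def render(albedo, shading):
        # Lambertian intrinsic-image model: image = albedo * shading,
        # applied per pixel (and per channel via broadcasting)
        return albedo * shading

    def edit_lighting(image, shading_old, shading_new, eps=1e-8):
        # recover the albedo layer from the current shading, then re-render
        # under the new shading; the albedo layer is untouched by the edit
        albedo = image / (shading_old + eps)
        return render(albedo, shading_new)
    ```

    Because the decomposition is multiplicative, a lighting edit is exact up to the small `eps` guard against division by zero.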

    UTILIZING AN OBJECT RELIGHTING NEURAL NETWORK TO GENERATE DIGITAL IMAGES ILLUMINATED FROM A TARGET LIGHTING DIRECTION

    Publication Number: US20190340810A1

    Publication Date: 2019-11-07

    Application Number: US15970367

    Application Date: 2018-05-03

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
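    The final combination step rests on the linearity of light transport: an image under several simultaneous light sources is the weighted sum of the images under each source alone. A minimal sketch (grayscale images of shape (H, W); the function name is illustrative):

    ```python
    import numpy as np

    def combine_lighting(relit_images, weights):
        """Blend single-direction relit images into one image under a
        combined lighting configuration.  Light transport is linear in the
        sources, so a weighted sum is physically correct.

        relit_images: sequence of (H, W) arrays, one per lighting direction
        weights: per-direction intensities of the target configuration
        """
        stack = np.asarray(relit_images, dtype=float)        # (n, H, W)
        w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
        return (stack * w).sum(axis=0)
    ```

    For color images of shape (n, H, W, 3) the weight reshape would need an extra trailing axis; the sketch keeps the grayscale case for brevity.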

    Image lighting transfer via multi-dimensional histogram matching

    Publication Number: US10521892B2

    Publication Date: 2019-12-31

    Application Number: US15253655

    Application Date: 2016-08-31

    Applicant: ADOBE INC.

    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed at relighting a target image based on a lighting effect from a reference image. In one embodiment, a target image and a reference image are received, where the reference image includes a lighting effect desired to be applied to the target image. A lighting transfer is performed using color data and geometrical data associated with the reference image and with the target image. The lighting transfer generates a relit image that corresponds to the target image having the lighting effect of the reference image. The relit image is provided for display to a user via one or more output devices. Other embodiments may be described and/or claimed.
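    The core mechanism of histogram matching can be sketched in its simplest per-channel form. Note the hedges: the patent matches multi-dimensional histograms that jointly combine color and geometrical data, whereas this sketch does classic 1D matching independently per channel, and it assumes the target and reference have the same pixel count so sorted values can be paired one-to-one. Names are illustrative.

    ```python
    import numpy as np

    def match_histogram(source, reference):
        """Monotone rank mapping that gives `source` the empirical
        distribution of `reference`: the i-th smallest reference value is
        placed at the position of the i-th smallest source value.
        Assumes source.size == reference.size."""
        matched = np.empty(source.size, dtype=float)
        matched[np.argsort(source, kind="stable")] = np.sort(reference)
        return matched

    def transfer_lighting(target, reference):
        """Apply the matching independently per colour channel of
        (H, W, C) images -- a per-channel simplification of the patent's
        multi-dimensional matching."""
        out = np.empty(target.shape, dtype=float)
        for c in range(target.shape[-1]):
            out[..., c] = match_histogram(
                target[..., c].ravel(), reference[..., c].ravel()
            ).reshape(target.shape[:-1])
        return out
    ```

    After the transfer, each channel of the output has exactly the reference's value distribution while preserving the target's per-channel rank order, which is what keeps the target's content recognizable under the new lighting.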
