Deep novel view and lighting synthesis from sparse images

    Publication Number: US10950037B2

    Publication Date: 2021-03-16

    Application Number: US16510586

    Application Date: 2019-07-12

    Applicant: ADOBE INC.

    Abstract: Embodiments are generally directed to generating novel images of an object having a novel viewpoint and a novel lighting direction based on sparse images of the object. A neural network is trained with training images rendered from a 3D model. Utilizing the 3D model, training images, ground truth predictive images from particular viewpoint(s), and ground truth predictive depth maps of the ground truth predictive images can be easily generated and fed back through the neural network for training. Once trained, the neural network can receive a sparse plurality of images of an object, a novel viewpoint, and a novel lighting direction. The neural network can generate a plane sweep volume based on the sparse plurality of images, and calculate depth probabilities for each pixel in the plane sweep volume. A predictive output image of the object, having the novel viewpoint and novel lighting direction, can be generated and output.
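
    The abstract above refers to building a plane sweep volume from the sparse input images. As a rough, hedged illustration only (not the patent's implementation), the sketch below warps each source image onto a set of fronto-parallel depth planes defined in the novel camera's frame; the shared intrinsics K, the source-from-novel poses (R, t), and the use of OpenCV's warpPerspective are assumptions made for the example.

    import cv2
    import numpy as np

    def plane_sweep_volume(src_images, src_poses, K, depths, out_hw):
        """Warp each sparse source image onto each depth plane of the novel view.

        src_images: list of HxWx3 arrays (the sparse input images)
        src_poses:  list of (R, t) pairs mapping novel-view coords to source-view coords
        K:          3x3 camera intrinsics (assumed shared by all views)
        depths:     1D array of candidate plane depths in the novel view's frame
        out_hw:     (height, width) of the novel view
        Returns an array of shape (num_sources, num_depths, H, W, 3).
        """
        h, w = out_hw
        n = np.array([0.0, 0.0, 1.0])  # fronto-parallel plane normal in the novel view
        K_inv = np.linalg.inv(K)
        volume = np.zeros((len(src_images), len(depths), h, w, 3), dtype=np.float32)
        for i, (img, (R, t)) in enumerate(zip(src_images, src_poses)):
            for j, d in enumerate(depths):
                # Homography taking a novel-view pixel to the source image for the
                # plane Z = d in the novel view's frame: K (R + t n^T / d) K^-1.
                H = K @ (R + np.outer(t, n) / d) @ K_inv
                volume[i, j] = cv2.warpPerspective(
                    img.astype(np.float32), H, (w, h),
                    flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        return volume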

    DEEP NOVEL VIEW AND LIGHTING SYNTHESIS FROM SPARSE IMAGES

    Publication Number: US20210012561A1

    Publication Date: 2021-01-14

    Application Number: US16510586

    Application Date: 2019-07-12

    Applicant: ADOBE INC.

    Abstract: Embodiments are generally directed to generating novel images of an object having a novel viewpoint and a novel lighting direction based on sparse images of the object. A neural network is trained with training images rendered from a 3D model. Utilizing the 3D model, training images, ground truth predictive images from particular viewpoint(s), and ground truth predictive depth maps of the ground truth predictive images can be easily generated and fed back through the neural network for training. Once trained, the neural network can receive a sparse plurality of images of an object, a novel viewpoint, and a novel lighting direction. The neural network can generate a plane sweep volume based on the sparse plurality of images, and calculate depth probabilities for each pixel in the plane sweep volume. A predictive output image of the object, having the novel viewpoint and novel lighting direction, can be generated and output.
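
    This pre-grant publication shares its abstract with the granted patent above, so the sketch below illustrates a different step mentioned there: turning per-pixel scores over the depth planes into depth probabilities and collapsing them into an expected depth map and a predictive output image. The softmax and probability-weighted-average formulation, and all names used here, are illustrative assumptions rather than the claimed method.

    import numpy as np

    def softmax(scores, axis=0):
        """Numerically stable softmax over the depth-plane axis."""
        e = np.exp(scores - scores.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def collapse_volume(plane_scores, plane_colors, depths):
        """Collapse per-plane predictions into a depth map and an output image.

        plane_scores: (D, H, W) raw per-pixel scores, one slice per depth plane
        plane_colors: (D, H, W, 3) per-plane color predictions (e.g. refined warps)
        depths:       (D,) depth value of each plane
        Returns (expected_depth of shape (H, W), output_image of shape (H, W, 3)).
        """
        probs = softmax(plane_scores, axis=0)                      # per-pixel depth probabilities
        expected_depth = np.tensordot(depths, probs, axes=(0, 0))  # probability-weighted depth
        output_image = (probs[..., None] * plane_colors).sum(axis=0)
        return expected_depth, output_image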

    UTILIZING AN OBJECT RELIGHTING NEURAL NETWORK TO GENERATE DIGITAL IMAGES ILLUMINATED FROM A TARGET LIGHTING DIRECTION

    Publication Number: US20190340810A1

    Publication Date: 2019-11-07

    Application Number: US15970367

    Application Date: 2018-05-03

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
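
    The last sentence of the abstract describes combining several target digital images, each lit from a single direction, into an image lit by a multi-light configuration. Below is a minimal sketch of one way such a combination could work, assuming linear light transport and per-light intensity weights; both the weighting scheme and the clipping are assumptions for illustration, not details from the patent.

    import numpy as np

    def combine_relit_images(relit_images, light_weights):
        """Blend single-direction relit images into one multi-light rendering.

        relit_images:  (N, H, W, 3) float array, one relit image per lighting direction
        light_weights: (N,) non-negative intensity assigned to each lighting direction
        Returns an (H, W, 3) image lit by the weighted combination of directions.
        """
        weights = np.asarray(light_weights, dtype=np.float32)
        blended = np.tensordot(weights, relit_images, axes=(0, 0))  # sum_i w_i * I_i
        return np.clip(blended, 0.0, 1.0)  # keep the result in displayable range

    # Example: three relit images (hypothetical) blended into a key/fill/rim setup.
    # combined = combine_relit_images(np.stack([img_key, img_fill, img_rim]),
    #                                 [0.6, 0.3, 0.1])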
