Many-to-Many Splatting-based Digital Image Synthesis

    Publication Number: US20230325968A1

    Publication Date: 2023-10-12

    Application Number: US17714356

    Application Date: 2022-04-06

    Applicant: Adobe Inc.

    Abstract: Digital image synthesis techniques are described that synthesize a digital image at a target time between a first digital image and a second digital image. To begin, an optical flow generation module generates optical flows. The digital images and optical flows are then received as input by a motion refinement system, which generates data describing many-to-many relationships mapped for pixels in the digital images along with reliability scores for those relationships. A synthesis module then uses the reliability scores to resolve overlaps of pixels that are mapped to the same location, generating a synthesized digital image.
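
    As a hedged illustration of the overlap-resolution step this abstract describes, the following numpy sketch splats several source pixels onto a shared target location and merges the collisions by reliability-weighted averaging. The function name, the integer target coordinates, and the averaging merge rule are assumptions of this sketch, not the patented implementation.

        import numpy as np

        def resolve_overlaps(colors, targets, reliability, h, w):
            """colors: (N, 3) source pixel colors; targets: (N, 2) integer
            (y, x) locations; reliability: (N,) correspondence scores."""
            acc = np.zeros((h, w, 3))
            weight = np.zeros((h, w))
            for color, (y, x), r in zip(colors, targets, reliability):
                if 0 <= y < h and 0 <= x < w:
                    acc[y, x] += r * color          # reliability-weighted splat
                    weight[y, x] += r
            out = np.zeros_like(acc)
            hit = weight > 0
            out[hit] = acc[hit] / weight[hit][:, None]  # resolve overlaps
            return out, hit

        # Two correspondences land on pixel (2, 3); the more reliable one dominates.
        colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        targets = np.array([[2, 3], [2, 3]])
        image, mask = resolve_overlaps(colors, targets, np.array([0.9, 0.1]), 4, 4)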

    UTILIZING MACHINE LEARNING MODELS TO GENERATE REFINED DEPTH MAPS WITH SEGMENTATION MASK GUIDANCE

    Publication Number: US20230326028A1

    Publication Date: 2023-10-12

    Application Number: US17658873

    Application Date: 2022-04-12

    Applicant: Adobe Inc.

    CPC classification number: G06T7/11 G06T2207/20084 G06T7/50 G06T7/215

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing machine learning models to generate refined depth maps of digital images utilizing digital segmentation masks. In particular, in one or more embodiments, the disclosed systems generate a depth map for a digital image utilizing a depth estimation machine learning model, determine a digital segmentation mask for the digital image, and generate a refined depth map from the depth map and the digital segmentation mask utilizing a depth refinement machine learning model. In some embodiments, the disclosed systems generate first and second intermediate depth maps using the digital segmentation mask and an inverse digital segmentation mask and merge the first and second intermediate depth maps to generate the refined depth map.
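
    The final merging step lends itself to a short sketch: one intermediate depth map refined under the segmentation mask and another refined under its inverse are blended back into a single refined depth map. Only the mask/inverse-mask merge below follows the abstract; the soft mask values and toy inputs are assumptions.

        import numpy as np

        def merge_intermediate_depths(depth_masked, depth_inverse, mask):
            """Blend two intermediate depth maps with a soft segmentation
            mask in [0, 1]; the inverse mask (1 - mask) weights the complement."""
            return mask * depth_masked + (1.0 - mask) * depth_inverse

        h, w = 4, 4
        mask = np.zeros((h, w))
        mask[:, :2] = 1.0                       # left half is the masked region
        refined = merge_intermediate_depths(np.full((h, w), 0.8),
                                            np.full((h, w), 2.0), mask)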

    VIEW SYNTHESIS OF A DYNAMIC SCENE

    Publication Number: US20220301252A1

    Publication Date: 2022-09-22

    Application Number: US17204571

    Application Date: 2021-03-17

    Applicant: ADOBE INC.

    Abstract: Embodiments of the technology described herein provide view and time synthesis of dynamic scenes captured by a camera. The technology described herein represents a dynamic scene as a continuous function of both space and time. The technology may parameterize this function with a deep neural network (a multi-layer perceptron (MLP)) and perform rendering using volume tracing. At a very high level, a dynamic scene depicted in the video may be used to train the MLP. Once trained, the MLP is able to synthesize a view of the scene at a time and/or camera pose not found in the video through prediction. As used herein, a dynamic scene comprises one or more moving objects.
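
    A minimal sketch of the "continuous function of both space and time" idea: a toy, untrained MLP that maps a spacetime point (x, y, z, t) to a color and a volume density. The layer sizes, activations, and random weights are placeholder assumptions; a real model would be trained on the video as the abstract describes.

        import numpy as np

        rng = np.random.default_rng(0)

        def init_mlp(sizes):
            # Random weights stand in for a trained network.
            return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
                    for m, n in zip(sizes[:-1], sizes[1:])]

        def radiance(params, point):
            # point: (4,) spacetime coordinate (x, y, z, t)
            h = point
            for W, b in params[:-1]:
                h = np.maximum(h @ W + b, 0.0)        # ReLU hidden layers
            W, b = params[-1]
            out = h @ W + b
            rgb = 1.0 / (1.0 + np.exp(-out[:3]))      # color in [0, 1]
            sigma = np.maximum(out[3], 0.0)           # non-negative density
            return rgb, sigma

        params = init_mlp([4, 64, 64, 4])
        rgb, sigma = radiance(params, np.array([0.1, -0.2, 0.5, 0.3]))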

    Splatting-based Digital Image Synthesis
    Invention Publication

    Publication Number: US20230326044A1

    Publication Date: 2023-10-12

    Application Number: US17714373

    Application Date: 2022-04-06

    Applicant: Adobe Inc.

    Abstract: Digital image synthesis techniques are described that leverage splatting, i.e., forward warping. In one example, a first digital image and a first optical flow are received by a digital image synthesis system. The digital image synthesis system constructs a first splat metric and a first merge metric that define a weighted map of respective pixels. From this, the digital image synthesis system produces a first warped optical flow and a first warp merge metric corresponding to an interpolation instant by forward warping the first optical flow based on the first splat metric and the first merge metric. A first warped digital image corresponding to the interpolation instant is then formed by the digital image synthesis system by backward warping the first digital image based on the first warped optical flow.
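
    A numpy sketch of the forward-warping step: the optical flow is scaled to the interpolation instant t and splatted to its target locations, with collisions merged by per-pixel weights, yielding the warped flow (and its merge metric) that would then drive a backward warp of the image. Nearest-pixel rounding is a simplification of this sketch, not the patent's exact splatting scheme.

        import numpy as np

        def forward_warp_flow(flow, merge_weight, t):
            """flow: (h, w, 2) optical flow as (dx, dy); merge_weight: (h, w)
            per-pixel weights; t: interpolation instant in [0, 1]."""
            h, w, _ = flow.shape
            warped = np.zeros_like(flow)
            acc = np.zeros((h, w))
            ys, xs = np.mgrid[0:h, 0:w]
            ty = np.rint(ys + t * flow[..., 1]).astype(int)
            tx = np.rint(xs + t * flow[..., 0]).astype(int)
            ok = (ty >= 0) & (ty < h) & (tx >= 0) & (tx < w)
            for y, x, yy, xx in zip(ys[ok], xs[ok], ty[ok], tx[ok]):
                warped[yy, xx] += merge_weight[y, x] * flow[y, x]  # splat flow
                acc[yy, xx] += merge_weight[y, x]
            hit = acc > 0
            warped[hit] /= acc[hit][:, None]      # normalize colliding splats
            return warped, acc                    # warped flow, merge metric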

    Reconstructing three-dimensional scenes portrayed in digital images utilizing point cloud machine-learning models

    Publication Number: US11443481B1

    Publication Date: 2022-09-13

    Application Number: US17186522

    Application Date: 2021-02-26

    Applicant: Adobe Inc.

    Abstract: This disclosure describes implementations of a three-dimensional (3D) scene recovery system that reconstructs a 3D scene representation of a scene portrayed in a single digital image. For instance, the 3D scene recovery system trains and utilizes a 3D point cloud model to recover accurate intrinsic camera parameters from a depth map of the digital image. Additionally, the 3D point cloud model may include multiple neural networks that target specific intrinsic camera parameters. For example, the 3D point cloud model may include a depth 3D point cloud neural network that recovers the depth shift as well as a focal length 3D point cloud neural network that recovers the camera focal length. Further, the 3D scene recovery system may utilize the recovered intrinsic camera parameters to transform the single digital image into an accurate and realistic 3D scene representation, such as a 3D point cloud.
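
    The role of the two recovered intrinsics is easy to illustrate: given a predicted depth map plus an estimated depth shift and focal length, a pinhole unprojection lifts every pixel into the 3D point cloud. The centered principal point and the simple additive shift are assumptions of this sketch.

        import numpy as np

        def unproject_depth(depth, focal, depth_shift):
            """depth: (h, w) predicted depth; focal and depth_shift stand in
            for the intrinsics recovered by the point cloud model."""
            h, w = depth.shape
            ys, xs = np.mgrid[0:h, 0:w]
            z = depth + depth_shift                  # undo the depth shift
            x = (xs - w / 2.0) * z / focal           # pinhole back-projection
            y = (ys - h / 2.0) * z / focal
            return np.stack([x, y, z], axis=-1).reshape(-1, 3)

        cloud = unproject_depth(np.ones((4, 4)), focal=2.0, depth_shift=0.5)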

    3D motion effect from a 2D image
    Invention Grant

    Publication Number: US11017586B2

    Publication Date: 2021-05-25

    Application Number: US16388187

    Application Date: 2019-04-18

    Applicant: ADOBE INC.

    Abstract: Systems and methods are described for generating a three-dimensional (3D) effect from a two-dimensional (2D) image. The methods may include generating a depth map based on a 2D image, identifying a camera path, generating one or more extremal views based on the 2D image and the camera path, generating a global point cloud by inpainting occlusion gaps in the one or more extremal views, generating one or more intermediate views based on the global point cloud and the camera path, and combining the one or more extremal views and the one or more intermediate views to produce a 3D motion effect.
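
    A small sketch of the "identifying a camera path" step: intermediate camera positions are interpolated between the source pose and an extremal pose, giving the poses at which intermediate views are rendered. Interpolating positions linearly is an assumption of this sketch; a real path may also vary rotation.

        import numpy as np

        def camera_path(start, end, steps):
            # Interpolate camera positions from start to end, inclusive.
            ts = np.linspace(0.0, 1.0, steps)[:, None]
            return (1.0 - ts) * start + ts * end

        path = camera_path(np.zeros(3), np.array([0.1, 0.0, -0.3]), steps=8)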

    View synthesis of a dynamic scene

    Publication Number: US12039657B2

    Publication Date: 2024-07-16

    Application Number: US17204571

    Application Date: 2021-03-17

    Applicant: ADOBE INC.

    Abstract: Embodiments of the technology described herein provide view and time synthesis of dynamic scenes captured by a camera. The technology described herein represents a dynamic scene as a continuous function of both space and time. The technology may parameterize this function with a deep neural network (a multi-layer perceptron (MLP)) and perform rendering using volume tracing. At a very high level, a dynamic scene depicted in the video may be used to train the MLP. Once trained, the MLP is able to synthesize a view of the scene at a time and/or camera pose not found in the video through prediction. As used herein, a dynamic scene comprises one or more moving objects.
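
    Complementing the spacetime-MLP sketch under the related application above, the "rendering using volume tracing" side reduces to the standard volume-rendering quadrature: sampled densities along a ray become alpha values, and colors are composited front to back. Precomputed per-sample colors, densities, and spacings are assumed here.

        import numpy as np

        def composite_ray(rgb, sigma, deltas):
            """rgb: (n, 3) sampled colors; sigma: (n,) densities;
            deltas: (n,) distances between consecutive samples."""
            alpha = 1.0 - np.exp(-sigma * deltas)            # per-sample opacity
            trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
            weights = trans * alpha                          # compositing weights
            return (weights[:, None] * rgb).sum(axis=0)      # final pixel color

        pixel = composite_ray(np.random.rand(16, 3),
                              np.random.rand(16), np.full(16, 0.1))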

    3D MOTION EFFECT FROM A 2D IMAGE
    Invention Application

    Publication Number: US20200334894A1

    Publication Date: 2020-10-22

    Application Number: US16388187

    Application Date: 2019-04-18

    Applicant: Adobe Inc.

    Abstract: Systems and methods are described for generating a three-dimensional (3D) effect from a two-dimensional (2D) image. The methods may include generating a depth map based on a 2D image, identifying a camera path, generating one or more extremal views based on the 2D image and the camera path, generating a global point cloud by inpainting occlusion gaps in the one or more extremal views, generating one or more intermediate views based on the global point cloud and the camera path, and combining the one or more extremal views and the one or more intermediate views to produce a 3D motion effect.
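
    Complementing the camera-path sketch under the granted version above, this sketch renders one intermediate view from the global point cloud: points are projected through a pinhole camera and a z-buffer keeps the nearest point per pixel. The axis-aligned camera looking down +z is a simplifying assumption.

        import numpy as np

        def render_view(points, colors, cam_pos, focal, h, w):
            """points: (N, 3) global point cloud; colors: (N, 3) per-point
            colors; cam_pos: (3,) camera position on the path."""
            rel = points - cam_pos
            z = rel[:, 2]
            front = z > 1e-6                     # keep points ahead of camera
            u = np.rint(rel[front, 0] / z[front] * focal + w / 2).astype(int)
            v = np.rint(rel[front, 1] / z[front] * focal + h / 2).astype(int)
            img = np.zeros((h, w, 3))
            zbuf = np.full((h, w), np.inf)
            for ui, vi, zi, ci in zip(u, v, z[front], colors[front]):
                if 0 <= ui < w and 0 <= vi < h and zi < zbuf[vi, ui]:
                    zbuf[vi, ui] = zi            # nearest point wins
                    img[vi, ui] = ci
            return img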
