DETERMINING CAMERA PARAMETERS FROM A SINGLE DIGITAL IMAGE

    Publication No.: US20210358170A1

    Publication Date: 2021-11-18

    Application No.: US17387207

    Filing Date: 2021-07-28

    Applicant: Adobe Inc.

    IPC Classification: G06T7/80 G06T7/12 G06T7/13

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
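
    The abstract leaves the geometric model unspecified; the following is a minimal illustrative sketch, not the claimed method, of one standard single-view relation: with square pixels and a known principal point, two vanishing points of orthogonal scene directions determine the focal length. The line endpoints below are hypothetical stand-ins for segments that a vanishing edge map might supply.

        # Illustrative sketch only: a standard single-view calibration relation,
        # not the patent's geometric model. Hypothetical line segments stand in
        # for the vanishing edge map; the principal point is assumed at center.
        import numpy as np

        def line_through(p, q):
            # Homogeneous line through two 2D points.
            return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

        def intersect(l1, l2):
            # Intersection (vanishing point) of two homogeneous lines.
            p = np.cross(l1, l2)
            return p[:2] / p[2]

        def focal_from_orthogonal_vps(v1, v2, c):
            # Pinhole model, square pixels: (v1 - c) . (v2 - c) + f^2 = 0.
            d = -np.dot(v1 - c, v2 - c)
            return float(np.sqrt(d)) if d > 0 else None

        c = np.array([320.0, 240.0])  # assumed principal point (image center)
        v_x = intersect(line_through((10, 200), (600, 260)),
                        line_through((20, 400), (610, 300)))
        v_y = intersect(line_through((300, 10), (280, 470)),
                        line_through((400, 15), (430, 465)))
        print("estimated focal length (px):", focal_from_orthogonal_vps(v_x, v_y, c))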

    VIDEO INPAINTING VIA MACHINE-LEARNING MODELS WITH MOTION CONSTRAINTS

    Publication No.: US20210287007A1

    Publication Date: 2021-09-16

    Application No.: US16817100

    Filing Date: 2020-03-12

    Applicant: Adobe Inc.

    IPC Classification: G06K9/00 G06N20/00 G06K9/46

    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference frame to other video frames depicting a scene. For example, a computing system accesses a set of video frames with annotations identifying a target region to be modified. The computing system determines a motion of the target region's boundary across the set of video frames, and also interpolates pixel motion within the target region across the set of video frames. The computing system also inserts, responsive to user input, a reference frame into the set of video frames. The reference frame can include reference color data from a user-specified modification to the target region. The computing system can use the reference color data and the interpolated motion to update color data in the target region across the set of video frames.
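
    As a rough illustration of the propagation step described above (a sketch under assumptions, not the patented system), the snippet below fills annotated target pixels of one frame by following a per-pixel motion field back to the user-provided reference frame and bilinearly sampling its colors; the flow_to_ref array is assumed to come from the interpolated-motion step.

        # Minimal sketch (not the patented system): fill target-region pixels of
        # one frame by following a per-pixel motion field into the reference
        # frame and bilinearly sampling its colors. `flow_to_ref` is a
        # hypothetical HxWx2 array produced by the interpolated-motion step.
        import numpy as np

        def bilinear_sample(image, x, y):
            h, w = image.shape[:2]
            x, y = np.clip(x, 0, w - 1), np.clip(y, 0, h - 1)
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
            ax, ay = x - x0, y - y0
            top = (1 - ax) * image[y0, x0] + ax * image[y0, x1]
            bot = (1 - ax) * image[y1, x0] + ax * image[y1, x1]
            return (1 - ay) * top + ay * bot

        def propagate_reference_colors(frame, target_mask, reference, flow_to_ref):
            # Replace only annotated (masked) pixels; leave the rest untouched.
            out = frame.astype(np.float64).copy()
            ref = reference.astype(np.float64)
            for y, x in zip(*np.nonzero(target_mask)):
                rx = x + flow_to_ref[y, x, 0]
                ry = y + flow_to_ref[y, x, 1]
                out[y, x] = bilinear_sample(ref, rx, ry)
            return out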

    CORRECTIVE LIGHTING FOR VIDEO INPAINTING

    Publication No.: US20220292649A1

    Publication Date: 2022-09-15

    Application No.: US17196581

    Filing Date: 2021-03-09

    Applicant: Adobe Inc.

    IPC Classification: G06T5/00 G06T7/269

    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference video frame to other video frames depicting a scene. One example method includes one or more processing devices performing operations that include accessing a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The operations also include computing a target motion of a target pixel that is subject to a motion constraint. The motion constraint is based on a three-dimensional model of the reference object. Further, the operations include determining color data of the target pixel to correspond to the target motion. The color data includes a color value and a gradient. The operations also include determining gradient constraints using gradient values of neighbor pixels. Additionally, the processing devices update the color data of the target pixel subject to the gradient constraints.
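
    The notion of per-pixel "color data" comprising a color value and a gradient can be pictured with a small sketch (assumptions only: integer-rounded motion, central-difference gradients, hypothetical inputs): each target pixel follows its computed motion into a source frame and records the color and finite-difference gradient found there, which a later constraint step can then adjust.

        # Illustrative sketch only: per-pixel "color data" as a color value plus
        # a central-difference gradient, read at the location the target motion
        # points to. `source`, `target_motion`, and `target_pixels` are
        # hypothetical inputs, not the patent's data structures.
        import numpy as np

        def color_and_gradient(source, x, y):
            h, w = source.shape[:2]
            x0, x1 = max(x - 1, 0), min(x + 1, w - 1)
            y0, y1 = max(y - 1, 0), min(y + 1, h - 1)
            img = source.astype(np.float64)
            gx = (img[y, x1] - img[y, x0]) / max(x1 - x0, 1)
            gy = (img[y1, x] - img[y0, x]) / max(y1 - y0, 1)
            return img[y, x], (gx, gy)

        def sample_color_data(source, target_motion, target_pixels):
            # Follow each pixel's (dx, dy) motion into `source` and record the
            # color value and gradient found there for the later constraint step.
            h, w = source.shape[:2]
            data = {}
            for y, x in target_pixels:
                dx, dy = target_motion[y, x]
                sx = int(np.clip(round(x + dx), 0, w - 1))
                sy = int(np.clip(round(y + dy), 0, h - 1))
                data[(y, x)] = color_and_gradient(source, sx, sy)
            return data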

    ADVERSARIALLY ROBUST VISUAL FINGERPRINTING AND IMAGE PROVENANCE MODELS

    Publication No.: US20230222762A1

    Publication Date: 2023-07-13

    Application No.: US17573041

    Filing Date: 2022-01-11

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that utilize a deep visual fingerprinting model with parameters learned from robust contrastive learning to identify matching digital images and image provenance information. For example, the disclosed systems utilize an efficient learning procedure that leverages training on bounded adversarial examples to more accurately identify digital images (including adversarial images) with a small computational overhead. To illustrate, the disclosed systems utilize a first objective function that iteratively identifies augmentations to increase contrastive loss. Moreover, the disclosed systems utilize a second objective function that iteratively learns parameters of a deep visual fingerprinting model to reduce the contrastive loss. With these learned parameters, the disclosed systems utilize the deep visual fingerprinting model to generate visual fingerprints for digital images, retrieve and match digital images, and provide digital image provenance information.
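
    The two objectives described above follow the general pattern of adversarial contrastive training. The sketch below is a generic PyTorch rendition of that pattern, not Adobe's model or hyperparameters: an inner PGD loop perturbs one augmented view within an L-infinity ball to increase an NT-Xent contrastive loss, and an outer step updates the encoder's parameters to decrease it.

        # Generic adversarial contrastive training sketch (PyTorch), not Adobe's
        # model or hyperparameters: the inner loop perturbs one augmented view
        # within an L-infinity ball to raise an NT-Xent loss; the outer step
        # updates the encoder to lower it on the perturbed views.
        import torch
        import torch.nn.functional as F

        def ntxent_loss(z1, z2, temperature=0.2):
            # Contrastive loss over a batch of paired embeddings.
            z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
            sim = z @ z.t() / temperature
            sim.fill_diagonal_(float("-inf"))  # exclude self-similarity
            n = z1.shape[0]
            targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
            return F.cross_entropy(sim, targets.to(z.device))

        def adversarial_contrastive_step(encoder, optimizer, x1, x2,
                                         epsilon=4 / 255, steps=3, step_size=1 / 255):
            # Inner maximization: find a bounded perturbation that increases the loss.
            delta = torch.zeros_like(x2, requires_grad=True)
            for _ in range(steps):
                loss = ntxent_loss(encoder(x1), encoder(x2 + delta))
                grad, = torch.autograd.grad(loss, delta)
                with torch.no_grad():
                    delta += step_size * grad.sign()
                    delta.clamp_(-epsilon, epsilon)
            # Outer minimization: update parameters on the perturbed views.
            optimizer.zero_grad()
            loss = ntxent_loss(encoder(x1), encoder(x2 + delta.detach()))
            loss.backward()
            optimizer.step()
            return loss.item()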

    Video processing architectures which provide looping video

    Publication No.: US10453491B2

    Publication Date: 2019-10-22

    Application No.: US16273981

    Filing Date: 2019-02-12

    Applicant: Adobe Inc.

    Abstract: Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated.
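
    A compact way to illustrate the transition-point idea (an approximation under assumptions, not the patented pipeline) is to downscale every frame, compute an edge map with OpenCV's Canny detector, and pick the start/end pair whose edge maps differ the least, which is what mitigates visible "teleporting" at the loop seam.

        # Minimal sketch of the transition-point idea, not Adobe's pipeline:
        # downscale frames, compute Canny edge maps, and choose the start/end
        # pair whose edge maps differ least. Thresholds and sizes are arbitrary.
        import cv2
        import numpy as np

        def edge_map(frame, size=(160, 90)):
            small = cv2.resize(frame, size, interpolation=cv2.INTER_AREA)
            gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
            return cv2.Canny(gray, 50, 150).astype(np.float32) / 255.0

        def best_loop_points(frames, min_loop_len=30):
            # Return (start, end) indices of the most edge-similar frame pair
            # that are at least `min_loop_len` frames apart.
            edges = [edge_map(f) for f in frames]
            best, best_pair = np.inf, (0, len(frames) - 1)
            for i in range(len(frames)):
                for j in range(i + min_loop_len, len(frames)):
                    diff = float(np.mean(np.abs(edges[i] - edges[j])))
                    if diff < best:
                        best, best_pair = diff, (i, j)
            return best_pair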

    Corrective Lighting for Video Inpainting

    Publication No.: US20240046430A1

    Publication Date: 2024-02-08

    Application No.: US18375187

    Filing Date: 2023-09-29

    Applicant: Adobe Inc.

    IPC Classification: G06T5/00 G06T7/269

    Abstract: One or more processing devices access a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The one or more processing devices determine that a target pixel corresponds to a sub-region within the target region that includes hallucinated content. The one or more processing devices determine gradient constraints using gradient values of neighboring pixels in the hallucinated content, the neighboring pixels being adjacent to the target pixel and corresponding to four cardinal directions. The one or more processing devices update color data of the target pixel subject to the determined gradient constraints.
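
    The four-cardinal-neighbor gradient constraints can be pictured as a small gradient-domain (Poisson-style) solve. The following Jacobi iteration is an illustrative stand-in rather than the claimed method, with hypothetical masks and constraint fields, and it assumes the masked sub-region does not touch the image border.

        # Illustrative gradient-domain (Poisson-style) sketch, not the claimed
        # method: Jacobi iterations nudge each masked pixel so its differences
        # to the four cardinal neighbors match the gradient constraints.
        # Assumes the masked sub-region does not touch the image border.
        import numpy as np

        def jacobi_gradient_update(color, mask, gx, gy, iterations=200):
            # color: HxW float image; gx, gy: desired forward differences
            # color[y, x+1] - color[y, x] and color[y+1, x] - color[y, x].
            out = color.astype(np.float64).copy()
            ys, xs = np.nonzero(mask)
            for _ in range(iterations):
                new = out.copy()
                for y, x in zip(ys, xs):
                    estimates = (
                        out[y, x - 1] + gx[y, x - 1],  # from the left neighbor
                        out[y, x + 1] - gx[y, x],      # from the right neighbor
                        out[y - 1, x] + gy[y - 1, x],  # from the neighbor above
                        out[y + 1, x] - gy[y, x],      # from the neighbor below
                    )
                    new[y, x] = np.mean(estimates)
                out = new
            return out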

    Corrective lighting for video inpainting

    Publication No.: US11823357B2

    Publication Date: 2023-11-21

    Application No.: US17196581

    Filing Date: 2021-03-09

    Applicant: Adobe Inc.

    IPC Classification: G06T5/00 G06T7/269

    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference video frame to other video frames depicting a scene. One example method includes one or more processing devices performing operations that include accessing a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The operations also include computing a target motion of a target pixel that is subject to a motion constraint. The motion constraint is based on a three-dimensional model of the reference object. Further, the operations include determining color data of the target pixel to correspond to the target motion. The color data includes a color value and a gradient. The operations also include determining gradient constraints using gradient values of neighbor pixels. Additionally, the processing devices update the color data of the target pixel subject to the gradient constraints.
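
    One way to picture a motion constraint that is "based on a three-dimensional model of the reference object" (a sketch under assumptions; the poses, intrinsics, and point set are placeholders, not the disclosed system) is to project the model's surface points under the camera poses of two frames and treat the displacement of each projection as the constrained motion at that image location.

        # Hedged sketch of a motion constraint from a 3D model, not the claimed
        # method: project the reference object's points under two camera poses
        # and use the displacement of each projection as the constrained motion
        # at that image location. Poses, intrinsics, and points are placeholders.
        import numpy as np

        def project(points_3d, K, R, t):
            # Pinhole projection of Nx3 world points to Nx2 pixel coordinates.
            cam = points_3d @ R.T + t
            uv = cam @ K.T
            return uv[:, :2] / uv[:, 2:3]

        def constrained_motion(points_3d, K, pose_a, pose_b):
            # Returns projected locations in frame A and their motion to frame B.
            (Ra, ta), (Rb, tb) = pose_a, pose_b
            uv_a = project(points_3d, K, Ra, ta)
            uv_b = project(points_3d, K, Rb, tb)
            return uv_a, uv_b - uv_a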

    Determining camera parameters from a single digital image

    Publication No.: US11810326B2

    Publication Date: 2023-11-07

    Application No.: US17387207

    Filing Date: 2021-07-28

    Applicant: Adobe Inc.

    IPC Classification: G06T7/80 G06T7/12 G06T7/13

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
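
    The ground-truth generation step can be illustrated with a standard projective-geometry shortcut (an assumption-laden sketch, not the disclosed procedure): when a training image's intrinsics K and rotation R are known, the vanishing point of a world axis d is K·R·d, and a detected line segment can be labeled a vanishing line when it points toward one of those vanishing points within a small angular threshold.

        # Sketch of ground-truth labeling under assumptions, not the disclosed
        # procedure: with known intrinsics K and rotation R, the vanishing point
        # of world axis d is K @ R @ d; a detected segment is labeled a
        # vanishing line if it points toward one of those vanishing points.
        import numpy as np

        def vanishing_points(K, R):
            # Homogeneous vanishing points of the three world axes.
            return [K @ R @ d for d in np.eye(3)]

        def is_vanishing_line(p0, p1, vp, max_angle_deg=2.0):
            p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
            if abs(vp[2]) < 1e-9:
                to_vp = vp[:2]                 # vanishing point at infinity
            else:
                to_vp = vp[:2] / vp[2] - p0
            seg = p1 - p0
            cosang = abs(seg @ to_vp) / (np.linalg.norm(seg) * np.linalg.norm(to_vp) + 1e-9)
            return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) <= max_angle_deg

        def label_segments(segments, K, R):
            # segments: list of ((x0, y0), (x1, y1)) from any line detector.
            vps = vanishing_points(K, R)
            return [any(is_vanishing_line(p0, p1, vp) for vp in vps)
                    for p0, p1 in segments]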