Video processing architectures which provide looping video

    Publication Number: US10453491B2

    Publication Date: 2019-10-22

    Application Number: US16273981

    Application Date: 2019-02-12

    Applicant: Adobe Inc.

    Abstract: Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated.
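
    A minimal NumPy sketch of the transition-point search described in the abstract: downscale each frame, compute a crude edge map, and pick the start/end pair whose edge maps match most closely. This is an illustrative assumption about the workflow, not Adobe's implementation; the downscale factor, edge threshold, and min_loop_len parameter are invented for the example.

    ```python
    import numpy as np

    def edge_map(gray, thresh=0.1):
        """Crude edge detector: gradient magnitude above a threshold."""
        gy, gx = np.gradient(gray.astype(np.float32))
        return (np.hypot(gx, gy) > thresh).astype(np.float32)

    def downscale(gray, factor=4):
        """Average-pool the frame to a lower resolution (crops to divisible dims)."""
        h, w = gray.shape
        return gray[: h - h % factor, : w - w % factor] \
            .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def find_loop_points(frames, min_loop_len=30):
        """Return (start, end) indices whose low-res edge maps are most similar."""
        edges = [edge_map(downscale(f)) for f in frames]
        best, best_cost = None, np.inf
        for i in range(len(frames)):
            for j in range(i + min_loop_len, len(frames)):
                cost = np.abs(edges[i] - edges[j]).mean()  # L1 edge-map distance
                if cost < best_cost:
                    best, best_cost = (i, j), cost
        return best

    # Toy usage: 60 random grayscale frames standing in for a fixed-length clip.
    clip = [np.random.rand(64, 64) for _ in range(60)]
    print(find_loop_points(clip))
    ```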

    GENERATING SPATIAL AUDIO USING A PREDICTIVE MODEL

    Publication Number: US20190306451A1

    Publication Date: 2019-10-03

    Application Number: US15937349

    Application Date: 2018-03-27

    Applicant: Adobe Inc.

    Abstract: Certain embodiments involve generating and providing spatial audio using a predictive model. For example, a system generates, using a predictive model, a visual representation of visual content providable to a user device by encoding the visual content into the visual representation that indicates a visual element in the visual content. The system generates, using the predictive model, an audio representation of audio associated with the visual content by encoding the audio into the audio representation that indicates an audio element in the audio. The system also generates, using the predictive model, spatial audio based at least in part on the audio element, and associates the spatial audio with the visual element. The system can also augment the visual content using the spatial audio by at least associating the spatial audio with the visual content.
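
    The encoder/decoder arrangement the abstract describes can be sketched as a small PyTorch model. This toy is an assumption about one possible realization, not the patented predictive model; the layer sizes, the first-order ambisonic (4-channel) output, and the gain-based spatialization are illustrative choices.

    ```python
    import torch
    import torch.nn as nn

    class SpatialAudioPredictor(nn.Module):
        def __init__(self, feat_dim=128):
            super().__init__()
            # Visual encoder: frames -> visual representation.
            self.visual_enc = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
            # Audio encoder: mono waveform -> audio representation.
            self.audio_enc = nn.Sequential(
                nn.Conv1d(1, 16, 9, stride=4, padding=4), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, feat_dim))
            # Decoder: joint representation -> gains for 4 ambisonic channels.
            self.decoder = nn.Linear(2 * feat_dim, 4)

        def forward(self, frames, mono):
            v = self.visual_enc(frames)                       # (B, feat_dim)
            a = self.audio_enc(mono)                          # (B, feat_dim)
            gains = self.decoder(torch.cat([v, a], dim=-1))   # (B, 4)
            # Spatialize: scale the mono signal by per-channel gains.
            return gains.unsqueeze(-1) * mono                 # (B, 4, samples)

    # Toy usage with one frame and one second of 16 kHz mono audio.
    model = SpatialAudioPredictor()
    spatial = model(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 16000))
    print(spatial.shape)  # torch.Size([1, 4, 16000])
    ```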

    HIGH-RESOLUTION IMAGE GENERATION
    Invention Publication

    Publication Number: US20240320789A1

    Publication Date: 2024-09-26

    Application Number: US18585957

    Application Date: 2024-02-23

    Applicant: ADOBE INC.

    CPC classification number: G06T3/4053 G06T3/4046 G06T11/00

    Abstract: A method, non-transitory computer readable medium, apparatus, and system for image generation include obtaining an input image having a first resolution, where the input image includes random noise, and generating a low-resolution image based on the input image, where the low-resolution image has the first resolution. The method, non-transitory computer readable medium, apparatus, and system further include generating a high-resolution image based on the low-resolution image, where the high-resolution image has a second resolution that is greater than the first resolution.
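
    A hedged sketch of the two-stage pipeline in the abstract: a noise image at the first resolution is mapped to a low-resolution image, which is then upsampled and refined into a high-resolution image. The networks, scale factor, and residual refinement are assumptions for illustration, not the claimed system.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LowResGenerator(nn.Module):
        """Maps a noise image to a low-resolution image at the same resolution."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

        def forward(self, noise):
            return self.net(noise)

    class Upscaler(nn.Module):
        """Upsamples the low-resolution image and refines it at the higher resolution."""
        def __init__(self, scale=4):
            super().__init__()
            self.scale = scale
            self.refine = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1))

        def forward(self, low_res):
            up = F.interpolate(low_res, scale_factor=self.scale, mode="bilinear",
                               align_corners=False)
            return up + self.refine(up)  # residual refinement at high resolution

    # Toy usage: 64x64 noise -> 64x64 low-res image -> 256x256 high-res image.
    noise = torch.randn(1, 3, 64, 64)
    low = LowResGenerator()(noise)
    high = Upscaler(scale=4)(low)
    print(low.shape, high.shape)
    ```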

    View synthesis of a dynamic scene
    Invention Grant

    Publication Number: US12039657B2

    Publication Date: 2024-07-16

    Application Number: US17204571

    Application Date: 2021-03-17

    Applicant: ADOBE INC.

    Abstract: Embodiments of the technology described herein provide view and time synthesis of dynamic scenes captured by a camera. The technology described herein represents a dynamic scene as a continuous function of both space and time. The technology may parameterize this function with a deep neural network (a multi-layer perceptron (MLP)) and perform rendering using volume tracing. At a very high level, a dynamic scene depicted in the video may be used to train the MLP. Once trained, the MLP is able to synthesize a view of the scene at a time and/or camera pose not found in the video through prediction. As used herein, a dynamic scene comprises one or more moving objects.
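
    The space-time representation and volume tracing described above can be sketched briefly. The PyTorch code below is an assumption about the general approach (a small MLP over (x, y, z, t) plus alpha compositing along a ray), not the patented method; the network size, sampling scheme, and absence of positional encoding are simplifications.

    ```python
    import torch
    import torch.nn as nn

    class SpaceTimeMLP(nn.Module):
        """Maps (x, y, z, t) to an RGB color and a volume density sigma."""
        def __init__(self, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(4, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 4))  # 3 color channels + 1 density

        def forward(self, xyzt):
            out = self.net(xyzt)
            rgb = torch.sigmoid(out[..., :3])
            sigma = torch.relu(out[..., 3])
            return rgb, sigma

    def render_ray(mlp, origin, direction, t, n_samples=64, near=0.0, far=4.0):
        """Volume-trace one ray at time t by alpha-compositing sampled points."""
        depths = torch.linspace(near, far, n_samples)
        points = origin + depths[:, None] * direction             # (n_samples, 3)
        xyzt = torch.cat([points, t.expand(n_samples, 1)], dim=-1)
        rgb, sigma = mlp(xyzt)
        delta = (far - near) / n_samples
        alpha = 1.0 - torch.exp(-sigma * delta)                    # per-sample opacity
        trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
        weights = alpha * trans                                    # compositing weights
        return (weights[:, None] * rgb).sum(dim=0)                 # final pixel color

    # Toy usage: render one pixel at a time value not tied to any training frame.
    mlp = SpaceTimeMLP()
    color = render_ray(mlp, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]),
                       torch.tensor([0.5]))
    print(color)
    ```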

    Refining image acquisition data through domain adaptation

    Publication Number: US11908036B2

    Publication Date: 2024-02-20

    Application Number: US17034467

    Application Date: 2020-09-28

    Applicant: Adobe Inc.

    Abstract: The technology described herein is directed to a cross-domain training framework that iteratively trains a domain adaptive refinement agent to refine low quality real-world image acquisition data, e.g., depth maps, when accompanied by corresponding conditional data from other modalities, such as the underlying images or video from which the image acquisition data is computed. The cross-domain training framework includes a shared cross-domain encoder and two conditional decoder branch networks, e.g., a synthetic conditional depth prediction branch network and a real conditional depth prediction branch network. The shared cross-domain encoder converts synthetic and real-world image acquisition data into synthetic and real compact feature representations, respectively. The synthetic and real conditional decoder branch networks convert the respective synthetic and real compact feature representations back to synthetic and real image acquisition data (refined versions) conditioned on data from the other modalities. The cross-domain training framework iteratively trains the domain adaptive refinement agent.
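
    The shared-encoder/two-branch layout can be sketched as follows. This PyTorch toy is an assumption about the architecture's shape, not the patented framework; the channel counts, single-convolution encoder and decoders, and conditioning by concatenating the RGB image are illustrative.

    ```python
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

    class CrossDomainRefiner(nn.Module):
        def __init__(self, feat_ch=32):
            super().__init__()
            self.shared_encoder = conv_block(1, feat_ch)          # depth -> compact features
            self.synthetic_decoder = conv_block(feat_ch + 3, 1)   # features + RGB -> refined depth
            self.real_decoder = conv_block(feat_ch + 3, 1)

        def forward(self, depth, rgb, domain):
            feats = self.shared_encoder(depth)
            conditioned = torch.cat([feats, rgb], dim=1)          # condition on the image
            decoder = self.synthetic_decoder if domain == "synthetic" else self.real_decoder
            return decoder(conditioned)

    # Toy usage: refine a noisy real-world depth map conditioned on its RGB frame.
    model = CrossDomainRefiner()
    refined = model(torch.rand(1, 1, 64, 64), torch.rand(1, 3, 64, 64), domain="real")
    print(refined.shape)
    ```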

    Corrective Lighting for Video Inpainting
    Invention Publication

    Publication Number: US20240046430A1

    Publication Date: 2024-02-08

    Application Number: US18375187

    Application Date: 2023-09-29

    Applicant: Adobe Inc.

    CPC classification number: G06T5/005 G06T7/269 G06T2207/10016

    Abstract: One or more processing devices access a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The one or more processing devices determine that a target pixel corresponds to a sub-region within the target region that includes hallucinated content. The one or more processing devices determine gradient constraints using gradient values of neighboring pixels in the hallucinated content, the neighboring pixels being adjacent to the target pixel and corresponding to four cardinal directions. The one or more processing devices update color data of the target pixel subject to the determined gradient constraints.
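
    The gradient-constrained update resembles gradient-domain (Poisson) blending, which the NumPy sketch below illustrates for a single-channel frame. It is an assumption about the general technique, not the claimed method; the Gauss-Seidel iteration count and the gradient conventions are invented for the example.

    ```python
    import numpy as np

    def update_region(color, mask, grad_x, grad_y, iters=200):
        """Gauss-Seidel sweep enforcing gradient constraints (single channel for brevity).

        grad_x[y, x] is the desired difference color[y, x+1] - color[y, x];
        grad_y[y, x] is the desired difference color[y+1, x] - color[y, x].
        """
        out = color.copy()
        ys, xs = np.nonzero(mask)
        for _ in range(iters):
            for y, x in zip(ys, xs):
                up, down = out[y - 1, x], out[y + 1, x]
                left, right = out[y, x - 1], out[y, x + 1]
                # Each cardinal neighbour contributes its color shifted by the desired
                # gradient toward the target pixel; average the four estimates.
                out[y, x] = ((up + grad_y[y - 1, x]) + (down - grad_y[y, x]) +
                             (left + grad_x[y, x - 1]) + (right - grad_x[y, x])) / 4.0
        return out

    # Toy usage: a 16x16 frame with a small "hallucinated" sub-region to correct.
    frame = np.random.rand(16, 16)
    mask = np.zeros_like(frame, dtype=bool)
    mask[6:10, 6:10] = True
    gx = np.zeros_like(frame)   # desired gradients; zeros give a smooth fill
    gy = np.zeros_like(frame)
    print(update_region(frame, mask, gx, gy)[6:10, 6:10])
    ```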

    Corrective lighting for video inpainting

    Publication Number: US11823357B2

    Publication Date: 2023-11-21

    Application Number: US17196581

    Application Date: 2021-03-09

    Applicant: Adobe Inc.

    CPC classification number: G06T5/005 G06T7/269 G06T2207/10016

    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference video frame to other video frames depicting a scene. One example method includes one or more processing devices that perform operations that include accessing a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The operations also include computing a target motion of a target pixel that is subject to a motion constraint. The motion constraint is based on a three-dimensional model of the reference object. Further, the operations include determining color data of the target pixel to correspond to the target motion. The color data includes a color value and a gradient. The operations also include determining gradient constraints using gradient values of neighboring pixels. Additionally, the processing devices update the color data of the target pixel subject to the gradient constraints.
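
    One step the abstract describes, propagating color from the reference frame into the target region along a per-pixel target motion, can be sketched in NumPy as below. It is an assumption for illustration, not the patented pipeline; the motion field is taken as an input here (the abstract derives it under a constraint from a 3D model of the reference object), and nearest-neighbour sampling stands in for a proper resampling.

    ```python
    import numpy as np

    def propagate_color(reference, motion, mask):
        """For each masked pixel, pull color from the reference frame at (y, x) + motion.

        reference: (H, W, 3) reference frame
        motion:    (H, W, 2) per-pixel (dy, dx) target motion (assumed given here)
        mask:      (H, W) boolean target region to fill
        """
        h, w, _ = reference.shape
        out = np.zeros_like(reference)
        ys, xs = np.nonzero(mask)
        src_y = np.clip(np.round(ys + motion[ys, xs, 0]).astype(int), 0, h - 1)
        src_x = np.clip(np.round(xs + motion[ys, xs, 1]).astype(int), 0, w - 1)
        out[ys, xs] = reference[src_y, src_x]   # nearest-neighbour sampling for brevity
        return out

    # Toy usage: fill a small target region with colors two pixels to its right.
    ref = np.random.rand(8, 8, 3)
    motion = np.zeros((8, 8, 2)); motion[..., 1] = 2.0
    mask = np.zeros((8, 8), dtype=bool); mask[3:5, 3:5] = True
    print(propagate_color(ref, motion, mask)[3:5, 3:5])
    ```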
