Video inpainting via confidence-weighted motion estimation

    Publication No.: US11081139B2

    Publication Date: 2021-08-03

    Application No.: US16378906

    Filing Date: 2019-04-09

    Applicant: Adobe Inc.

    Abstract: Certain aspects involve video inpainting via confidence-weighted motion estimation. For instance, a video editor accesses video content having a target region to be modified in one or more video frames. The video editor computes a motion for a boundary of the target region. The video editor interpolates, from the boundary motion, a target motion of a target pixel within the target region. In the interpolation, confidence values assigned to boundary pixels control how the motion of these pixels contributes to the interpolated target motion. A confidence value is computed based on a difference between forward and reverse motion with respect to a particular boundary pixel, a texture in a region that includes the particular boundary pixel, or a combination thereof. The video editor modifies the target region in the video by updating color data of the target pixel to correspond to the target motion interpolated from the boundary motion.
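The interpolation the abstract describes can be sketched in a few lines. This is an illustrative approximation, not the patented implementation: the confidence formula and the inverse-distance weighting below are assumptions chosen to match the abstract's description (forward/backward motion consistency feeding a confidence-weighted average of boundary motions).

```python
# Sketch: confidence-weighted interpolation of a target pixel's motion from
# boundary-pixel motions. Confidence is derived from forward/backward motion
# consistency: if warping forward then backward returns near the start point,
# the boundary motion is trusted.

from math import hypot

def confidence(forward, backward):
    """Confidence from forward/backward consistency.

    forward:  (dx, dy) motion from frame t to t+1 at a boundary pixel
    backward: (dx, dy) motion from frame t+1 back to frame t at the
              forward-warped location
    A consistent pair satisfies forward + backward ~ (0, 0).
    """
    err = hypot(forward[0] + backward[0], forward[1] + backward[1])
    return 1.0 / (1.0 + err)  # 1.0 for perfect consistency, falls toward 0 as error grows

def interpolate_target_motion(boundary, target):
    """Inverse-distance, confidence-weighted average of boundary motions.

    boundary: list of (position, motion, confidence) for boundary pixels,
              where position and motion are (x, y) tuples
    target:   (x, y) position of the pixel inside the target region
    """
    num_x = num_y = denom = 0.0
    for (px, py), (mx, my), conf in boundary:
        dist = hypot(px - target[0], py - target[1])
        w = conf / (dist + 1e-6)  # closer, more-trusted boundary pixels dominate
        num_x += w * mx
        num_y += w * my
        denom += w
    return (num_x / denom, num_y / denom)
```

With the interpolated motion in hand, the editor would propagate color into the target region by sampling along that motion, which is the final step the abstract describes.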

    Multimodal style-transfer network for applying style features from multi-resolution style exemplars to input images

    Publication No.: US10565757B2

    Publication Date: 2020-02-18

    Application No.: US15619114

    Filing Date: 2017-06-09

    Applicant: Adobe Inc.

    Abstract: A computing system transforms an input image into a stylized output image by applying first and second style features from a style exemplar. The input image is provided to a multimodal style-transfer network having a low-resolution-based stylization subnet and a high-resolution stylization subnet. The low-resolution-based stylization subnet is trained with low-resolution style exemplars to apply the first style feature. The high-resolution stylization subnet is trained with high-resolution style exemplars to apply the second style feature. The low-resolution-based stylization subnet generates an intermediate image by applying the first style feature from a low-resolution version of the style exemplar to first image data obtained from the input image. Second image data from the intermediate image is provided to the high-resolution stylization subnet. The high-resolution stylization subnet generates the stylized output image by applying the second style feature from a high-resolution version of the style exemplar to the second image data.
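The two-stage data flow in the abstract can be sketched as below. This is a hypothetical skeleton of the pipeline only: the two subnets are stand-in functions (a real system would use trained neural networks), and the downsample/upsample helpers are naive placeholders.

```python
# Sketch of the multimodal (two-resolution) style-transfer data flow:
# coarse stylization at low resolution, then fine stylization at full
# resolution on the upsampled intermediate image.

def downsample(image, factor):
    # naive subsampling of a 2-D grid of grayscale values (placeholder)
    return [[image[y * factor][x * factor] for x in range(len(image[0]) // factor)]
            for y in range(len(image) // factor)]

def upsample(image, factor):
    # nearest-neighbor upsampling (placeholder)
    return [[image[y // factor][x // factor] for x in range(len(image[0]) * factor)]
            for y in range(len(image) * factor)]

def low_res_subnet(image_lr, style_lr):
    # placeholder: a trained subnet would apply coarse style features here
    return [[0.5 * p + 0.5 * s for p, s in zip(row, srow)]
            for row, srow in zip(image_lr, style_lr)]

def high_res_subnet(image_hr, style_hr):
    # placeholder: a trained subnet would apply fine style features here
    return [[0.7 * p + 0.3 * s for p, s in zip(row, srow)]
            for row, srow in zip(image_hr, style_hr)]

def multimodal_style_transfer(image, style, factor=2):
    # 1. coarse stylization against a low-resolution version of the exemplar
    intermediate = low_res_subnet(downsample(image, factor),
                                  downsample(style, factor))
    # 2. fine stylization against the high-resolution exemplar
    return high_res_subnet(upsample(intermediate, factor), style)
```

The point of the split is that coarse color and composition are cheap to transfer at low resolution, while fine texture detail needs the high-resolution exemplar; the second subnet only has to refine, not restyle from scratch.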

    Video processing architectures which provide looping video

    Publication No.: US10204656B1

    Publication Date: 2019-02-12

    Application No.: US15661546

    Filing Date: 2017-07-27

    Applicant: Adobe Inc.

    Abstract: Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated.
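The candidate-pair search the abstract describes can be sketched as follows. This is an illustrative sketch under stated assumptions: frames are small 2-D grayscale grids, and the simple horizontal-gradient edge map stands in for whatever edge detector an actual embodiment would use.

```python
# Sketch: find a (start, end) loop-transition pair by comparing binary edge
# maps of (already low-resolution) frames; the most edge-similar pair makes
# the smoothest repeat point.

def edge_map(frame, threshold=0.1):
    """Binary horizontal-gradient edge map (stand-in for a real edge detector)."""
    return [[abs(row[x + 1] - row[x]) > threshold for x in range(len(row) - 1)]
            for row in frame]

def edge_similarity(a, b):
    """Fraction of edge-map cells on which two frames agree."""
    total = matches = 0
    for ra, rb in zip(a, b):
        for ea, eb in zip(ra, rb):
            total += 1
            matches += (ea == eb)
    return matches / total

def best_loop_points(frames, min_gap=2):
    """Return the (start, end) frame indices with the most similar edges."""
    maps = [edge_map(f) for f in frames]
    best, best_sim = (0, min_gap), -1.0
    for i in range(len(frames)):
        for j in range(i + min_gap, len(frames)):
            sim = edge_similarity(maps[i], maps[j])
            if sim > best_sim:
                best_sim, best = sim, (i, j)
    return best
```

Because the comparison runs on edge maps rather than raw pixels, a pair of frames in which the same objects sit in the same places scores highly even under lighting changes, which is what suppresses the "teleporting" artifact at the loop point.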
