-
Publication number: US11094083B2
Publication date: 2021-08-17
Application number: US16257495
Application date: 2019-01-25
Applicant: Adobe Inc.
Inventor: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
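The geometric step the abstract describes, recovering camera information from the vanishing lines in an edge map, rests on intersecting those lines to find a vanishing point. A minimal sketch of that intersection step follows; the function names are illustrative, not the patent's, and the learned edge-detection stage is assumed to have already produced the lines:

```python
import numpy as np

def line_through(p1, p2):
    # Homogeneous line through two image points (cross product of the points).
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def vanishing_point(lines):
    # Least-squares intersection of vanishing lines: find v minimizing
    # sum_i (l_i . v)^2 with ||v|| = 1, i.e. the smallest singular vector.
    A = np.stack(lines)
    _, _, vt = np.linalg.svd(A)
    v = vt[-1]
    return v[:2] / v[2]  # back to inhomogeneous image coordinates

# Two scene edges that converge toward the same point (4, 2)
l1 = line_through((0, 0), (2, 1))
l2 = line_through((0, 4), (2, 3))
vp = vanishing_point([l1, l2])
```

With more than two (noisy) lines, the same SVD gives the least-squares vanishing point, which is then fed to the geometric model to solve for parameters such as focal length and camera pitch.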
-
Publication number: US11081139B2
Publication date: 2021-08-03
Application number: US16378906
Application date: 2019-04-09
Applicant: Adobe Inc.
Inventor: Geoffrey Oxholm, Oliver Wang, Elya Shechtman, Michal Lukac, Ramiz Sheikh
Abstract: Certain aspects involve video inpainting via confidence-weighted motion estimation. For instance, a video editor accesses video content having a target region to be modified in one or more video frames. The video editor computes a motion for a boundary of the target region. The video editor interpolates, from the boundary motion, a target motion of a target pixel within the target region. In the interpolation, confidence values assigned to boundary pixels control how the motion of these pixels contributes to the interpolated target motion. A confidence value is computed based on a difference between forward and reverse motion with respect to a particular boundary pixel, a texture in a region that includes the particular boundary pixel, or a combination thereof. The video editor modifies the target region in the video by updating color data of the target pixel to correspond to the target motion interpolated from the boundary motion.
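The two ingredients the abstract names, a forward/backward consistency confidence and a confidence-weighted interpolation of boundary motion into the target region, can be sketched as follows. This is a simplified illustration with invented function names, not the patent's implementation (which also weighs local texture):

```python
import numpy as np

def fb_confidence(forward, backward, sigma=1.0):
    # Confidence from forward/backward consistency: if following the forward
    # motion and then the backward motion returns near the start, the
    # estimated motion is trusted (error near 0 gives confidence near 1).
    err = np.linalg.norm(np.asarray(forward) + np.asarray(backward))
    return float(np.exp(-(err ** 2) / (2 * sigma ** 2)))

def interpolate_motion(target_xy, boundary):
    # boundary: list of (pixel_xy, motion_vec, confidence) for boundary pixels.
    # Confidence- and distance-weighted average of boundary motions, so that
    # low-confidence boundary pixels contribute less to the target's motion.
    num, den = np.zeros(2), 0.0
    for xy, motion, conf in boundary:
        w = conf / (np.linalg.norm(np.subtract(target_xy, xy)) + 1e-6)
        num += w * np.asarray(motion, dtype=float)
        den += w
    return num / den
```

A usage example: a target pixel midway between two equally confident boundary pixels with motions (1, 0) and (3, 0) receives the average motion (2, 0), which the editor then uses to pull color data into the target region.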
-
Publication number: US20200242804A1
Publication date: 2020-07-30
Application number: US16257495
Application date: 2019-01-25
Applicant: Adobe Inc.
Inventor: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
-
Publication number: US10565757B2
Publication date: 2020-02-18
Application number: US15619114
Application date: 2017-06-09
Applicant: Adobe Inc.
Inventor: Geoffrey Oxholm, Xin Wang
Abstract: A computing system transforms an input image into a stylized output image by applying first and second style features from a style exemplar. The input image is provided to a multimodal style-transfer network having a low-resolution-based stylization subnet and a high-resolution stylization subnet. The low-resolution-based stylization subnet is trained with low-resolution style exemplars to apply the first style feature. The high-resolution stylization subnet is trained with high-resolution style exemplars to apply the second style feature. The low-resolution-based stylization subnet generates an intermediate image by applying the first style feature from a low-resolution version of the style exemplar to first image data obtained from the input image. Second image data from the intermediate image is provided to the high-resolution stylization subnet. The high-resolution stylization subnet generates the stylized output image by applying the second style feature from a high-resolution version of the style exemplar to the second image data.
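The coarse-to-fine pipeline the abstract describes, a low-resolution subnet applying global style followed by a high-resolution subnet applying fine style, can be sketched with simple stand-ins for the two subnets. All names here are illustrative; the stand-ins (color-statistics transfer, blending) only mimic the data flow, not the trained networks:

```python
import numpy as np

def downscale(img, factor=2):
    # Naive box downscale standing in for the low-resolution pathway.
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

def upscale(img, factor=2):
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def stylize_low(img, style):
    # Stand-in for the low-resolution subnet: transfer global color statistics,
    # a coarse "first style feature".
    return (img - img.mean((0, 1))) / (img.std((0, 1)) + 1e-6) * style.std((0, 1)) + style.mean((0, 1))

def stylize_high(img, style):
    # Stand-in for the high-resolution subnet: blend in fine style detail,
    # a "second style feature" applied at full resolution.
    return 0.8 * img + 0.2 * style

def multimodal_transfer(content, style_hi):
    style_lo = downscale(style_hi)
    intermediate = stylize_low(downscale(content), style_lo)   # stage 1: coarse style
    return stylize_high(upscale(intermediate), style_hi)        # stage 2: fine style
```

The design point the sketch preserves is that the intermediate image from the low-resolution stage, not the original input, is what the high-resolution stage refines.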
-
Publication number: US10204656B1
Publication date: 2019-02-12
Application number: US15661546
Application date: 2017-07-27
Applicant: Adobe Inc.
Inventor: Geoffrey Oxholm, Elya Shechtman, Oliver Wang
Abstract: Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated.
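The candidate-selection step the abstract describes, detecting edges in a lower-resolution version of the clip and picking the frame pair whose edges agree best as the loop's start and end, can be sketched as follows. This is a minimal illustration (gradient-threshold edges, IoU as the similarity score); the patent does not specify these particular choices:

```python
import numpy as np

def edge_map(frame, thresh=0.2):
    # Crude gradient-magnitude edge detector on a grayscale frame.
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy) > thresh

def best_loop_points(frames, min_gap=2):
    # Compare edge maps of all frame pairs at least `min_gap` frames apart
    # and return the (start, end) pair whose edges agree most (highest IoU).
    # Similar edges at the transition point mitigate objects "teleporting"
    # when the loop repeats.
    edges = [edge_map(f) for f in frames]
    best, best_score = None, -1.0
    for i in range(len(frames)):
        for j in range(i + min_gap, len(frames)):
            inter = np.logical_and(edges[i], edges[j]).sum()
            union = np.logical_or(edges[i], edges[j]).sum() or 1
            score = inter / union
            if score > best_score:
                best, best_score = (i, j), score
    return best
```

Running the search on downscaled frames, as the abstract suggests, keeps the pairwise comparison cheap while still capturing the edge structure that matters for a seamless transition.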
-