-
Publication Number: US10937237B1
Publication Date: 2021-03-02
Application Number: US16816080
Filing Date: 2020-03-11
Applicant: Adobe Inc.
Inventor: Vladimir Kim, Pierre-Alain Langlois, Matthew Fisher, Bryan Russell, Oliver Wang
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for reconstructing three-dimensional object meshes from two-dimensional images of objects using multi-view cycle projection. For example, the disclosed system can determine a multi-view cycle projection loss across a plurality of images of an object via an estimated three-dimensional object mesh of the object. Specifically, the disclosed system uses a pixel mapping neural network to project a sampled pixel location across a plurality of images of an object and via a three-dimensional mesh representing the object. The disclosed system determines a multi-view cycle consistency loss based on the difference between the sampled pixel location and its cycle projection, and uses the loss to update the pixel mapping neural network, a latent vector representing the object, or a shape generation neural network that uses the latent vector to generate the object mesh.
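To make the cycle-projection idea above concrete, here is a minimal Python sketch. The `pixel_to_surface` callable (standing in for the pixel mapping neural network) and the simple pinhole camera matrices are assumptions for illustration; this shows the general shape of a cycle-consistency loss, not the patented implementation.

```python
# Minimal sketch of a multi-view cycle-projection consistency loss.
# `pixel_to_surface` is a hypothetical callable that maps 2D pixel locations
# in one view to 3D points on the estimated object mesh.
import torch

def project(points_3d, camera):
    """Pinhole projection of (N, 3) points with a 3x4 camera matrix."""
    homog = torch.cat([points_3d, torch.ones_like(points_3d[..., :1])], dim=-1)
    proj = homog @ camera.T                  # (N, 3) homogeneous image coords
    return proj[..., :2] / proj[..., 2:3]    # perspective divide -> pixels

def cycle_projection_loss(pixels_a, pixel_to_surface, image_a, image_b,
                          camera_a, camera_b):
    # 1) Map sampled pixels in view A onto the 3D mesh surface.
    points_ab = pixel_to_surface(image_a, pixels_a)
    # 2) Project those surface points into view B.
    pixels_b = project(points_ab, camera_b)
    # 3) Map the projected pixels in view B back onto the mesh surface.
    points_ba = pixel_to_surface(image_b, pixels_b)
    # 4) Project back into view A and compare with the original samples.
    pixels_a_cycle = project(points_ba, camera_a)
    return torch.mean(torch.sum((pixels_a_cycle - pixels_a) ** 2, dim=-1))
```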
-
Publication Number: US20200382755A1
Publication Date: 2020-12-03
Application Number: US16428201
Filing Date: 2019-05-31
Applicant: Adobe Inc.
Inventor: Stephen DiVerdi, Seth Walker, Oliver Wang, Cuong Nguyen
IPC: H04N13/111, H04N13/383, H04N13/282
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate and dynamically change filter parameters for a frame of a 360-degree video based on detecting a field of view from a computing device. As a computing device rotates or otherwise changes orientation, for instance, the disclosed systems can detect a field of view and interpolate one or more filter parameters corresponding to nearby spatial keyframes of the 360-degree video to generate view-specific-filter parameters. By generating and storing filter parameters for spatial keyframes corresponding to different times and different view directions, the disclosed systems can dynamically adjust color grading or other visual effects using interpolated, view-specific-filter parameters to render a filtered version of the 360-degree video.
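As a rough illustration of the interpolation step described above, the sketch below blends per-keyframe filter parameters by angular distance between the current view direction and each spatial keyframe's direction. The keyframe layout, parameter names, and inverse-distance weighting are assumptions for illustration, not the disclosed method.

```python
# Minimal sketch of interpolating view-specific filter parameters between
# spatial keyframes of a 360-degree video.
import numpy as np

def angular_distance(dir_a, dir_b):
    """Angle in radians between two unit view-direction vectors."""
    return np.arccos(np.clip(np.dot(dir_a, dir_b), -1.0, 1.0))

def view_specific_params(view_dir, keyframes):
    """keyframes: list of (unit_direction, {"saturation": s, "contrast": c, ...})."""
    weights, params_list = [], []
    for kf_dir, kf_params in keyframes:
        d = angular_distance(view_dir, kf_dir)
        weights.append(1.0 / (d + 1e-6))        # inverse-distance weighting
        params_list.append(kf_params)
    weights = np.array(weights) / np.sum(weights)
    return {k: float(sum(w * p[k] for w, p in zip(weights, params_list)))
            for k in params_list[0]}

# Example: blend hypothetical color-grading parameters for the current view.
keyframes = [
    (np.array([1.0, 0.0, 0.0]), {"saturation": 1.2, "contrast": 1.0}),
    (np.array([0.0, 0.0, 1.0]), {"saturation": 0.9, "contrast": 1.1}),
]
print(view_specific_params(np.array([0.7071, 0.0, 0.7071]), keyframes))
```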
-
Publication Number: US10755173B2
Publication Date: 2020-08-25
Application Number: US16703368
Filing Date: 2019-12-04
Applicant: ADOBE INC.
Inventor: Oliver Wang, Jue Wang, Shuochen Su
Abstract: Methods and systems are provided for deblurring images. A neural network is trained by selecting a central training image from a sequence of blurred images. An earlier training image and a later training image are then selected, the earlier image preceding and the later image following the central training image in the sequence, based on their proximity to the central training image. A training output image is generated by the neural network from the central, earlier, and later training images. Similarity between the training output image and a reference image is evaluated, and the neural network is modified based on that similarity. The trained neural network is then used to generate a deblurred output image from a blurry input image.
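The triplet-selection step this abstract describes can be sketched in Python with PyTorch as follows; the frame offset and the toy network architecture are placeholders for illustration only, not the patented model.

```python
# Minimal sketch of assembling an (earlier, central, later) training triplet
# from a sequence of blurred frames and stacking it for a deblurring network.
import torch
import torch.nn as nn

def make_triplet(blurred_seq, center_idx, offset=1):
    """Pick the central frame plus its nearest earlier and later neighbours."""
    earlier = blurred_seq[max(center_idx - offset, 0)]
    later = blurred_seq[min(center_idx + offset, len(blurred_seq) - 1)]
    return earlier, blurred_seq[center_idx], later

class DeblurNet(nn.Module):
    """Toy stand-in: maps 3 stacked RGB frames (9 channels) to one RGB frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, earlier, central, later):
        # Each input is a batched tensor of shape (batch, 3, H, W).
        return self.net(torch.cat([earlier, central, later], dim=1))
```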
-
Publication Number: US10289951B2
Publication Date: 2019-05-14
Application Number: US15341875
Filing Date: 2016-11-02
Applicant: ADOBE INC.
Inventor: Oliver Wang, Jue Wang, Shuochen Su
Abstract: Methods and systems are provided for deblurring images. A neural network is trained by selecting a central training image from a sequence of blurred images. An earlier training image and a later training image are then selected, the earlier image preceding and the later image following the central training image in the sequence, based on their proximity to the central training image. A training output image is generated by the neural network from the central, earlier, and later training images. Similarity between the training output image and a reference image is evaluated, and the neural network is modified based on that similarity. The trained neural network is then used to generate a deblurred output image from a blurry input image.
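Complementing the sketch after the previous entry, the example below illustrates the training objective this abstract describes: the network output for a blurred triplet is compared against a sharp reference, and the network is updated from that similarity. The L1 loss and the optimizer interface are assumptions, not the patented choices.

```python
# Minimal sketch of the training step and inference described above.
import torch
import torch.nn.functional as F

def train_step(model, optimizer, earlier, central, later, reference):
    optimizer.zero_grad()
    output = model(earlier, central, later)     # deblurred estimate
    loss = F.l1_loss(output, reference)         # similarity to reference image
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def deblur(model, earlier, central, later):
    """At inference time, the trained network deblurs the central frame."""
    return model(earlier, central, later)
```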
-
Publication Number: US10204656B1
Publication Date: 2019-02-12
Application Number: US15661546
Filing Date: 2017-07-27
Applicant: ADOBE INC.
Inventor: Geoffrey Oxholm, Elya Shechtman, Oliver Wang
Abstract: Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated.
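A minimal sketch of the edge-based transition search described above, using OpenCV: downscale each frame, detect edges in the low-resolution version, and pick the pair of sufficiently separated frames whose edge maps are most similar. The frame size, Canny thresholds, and dissimilarity score are assumptions for illustration, not the patented criteria.

```python
# Minimal sketch of picking loop transition frames by edge-map similarity.
import cv2
import numpy as np

def edge_map(frame_bgr, size=(160, 90)):
    """Downscale a frame and detect edges in the low-resolution version."""
    small = cv2.resize(frame_bgr, size, interpolation=cv2.INTER_AREA)
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, 50, 150)

def best_loop_points(frames, min_gap=30):
    """Return (start, end) frame indices whose edge maps are most similar."""
    edges = [edge_map(f).astype(np.float32) / 255.0 for f in frames]
    best, best_score = (0, len(frames) - 1), np.inf
    for i in range(len(frames)):
        for j in range(i + min_gap, len(frames)):
            score = np.mean(np.abs(edges[i] - edges[j]))   # edge dissimilarity
            if score < best_score:
                best, best_score = (i, j), score
    return best
```

Working on low-resolution edge maps keeps the pairwise comparison cheap and focuses the match on object outlines rather than per-pixel noise, which is what mitigates visible "teleporting" at the loop point.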