Video inpainting with deep internal learning

    Publication number: US11055828B2

    Publication date: 2021-07-06

    Application number: US16407915

    Filing date: 2019-05-09

    Applicant: Adobe Inc.

    Abstract: Techniques of inpainting video content include training a neural network to perform an inpainting operation on a video using only content from that video. For example, upon receiving video content including a sequence of initial frames, a computer generates a sequence of inputs corresponding to at least some of the sequence of initial frames, each input including, for example, a uniform noise map. The computer then generates a convolutional neural network (CNN) using the sequence of inputs as the initial layer. The parameters of the CNN are adjusted according to a cost function, which has components including a flow generation loss component and a consistency loss component. The CNN then outputs, on a final layer, estimated image values in a sequence of final frames.
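    The cost function described above can be sketched as a weighted sum of a reconstruction term, a flow generation loss, and a temporal consistency loss. This is an illustrative NumPy sketch, not the patented implementation; the function name, weights, and the simplified consistency term (adjacent-frame smoothness instead of flow-based warping) are assumptions.

```python
import numpy as np

def inpainting_loss(pred_frames, target_frames, pred_flows, target_flows,
                    mask, flow_weight=1.0, consistency_weight=0.1):
    """Illustrative combined cost: reconstruction + flow generation loss
    + temporal consistency loss (all weights are hypothetical)."""
    # Reconstruction: match the known (unmasked) pixels of each frame.
    recon = np.mean((mask * (pred_frames - target_frames)) ** 2)
    # Flow generation loss: estimated flow should match flow computed
    # from the known regions of the video.
    flow_loss = np.mean((pred_flows - target_flows) ** 2)
    # Consistency loss (simplified stand-in): consecutive predicted
    # frames should agree; the patent couples this with the flow.
    consistency = np.mean((pred_frames[1:] - pred_frames[:-1]) ** 2)
    return recon + flow_weight * flow_loss + consistency_weight * consistency
```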

    INTERPRETABLE USER MODELING FROM UNSTRUCTURED USER DATA

    Publication number: US20200382612A1

    Publication date: 2020-12-03

    Application number: US16424949

    Filing date: 2019-05-29

    Applicant: ADOBE INC.

    Abstract: Methods and systems are provided for generating an interpretable user modeling system. The interpretable user modeling system can use an intent neural network to implement one or more tasks. The intent neural network can bridge a semantic gap between log data and human language by leveraging tutorial data to understand user logs in a semantically meaningful way. A memory unit of the intent neural network can capture information from the tutorial data. Such a memory unit can be queried to identify human-readable sentences related to actions received by the intent neural network. The human-readable sentences can be used to interpret the user log data in a semantically meaningful way.
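    The memory query described above can be sketched as attention over tutorial-derived memory keys: an action embedding is scored against each key, and the best-matching human-readable sentence is returned. This is a minimal sketch under assumed shapes; the function name and dot-product scoring are illustrative, not the patented mechanism.

```python
import numpy as np

def query_memory(action_embedding, memory_keys, memory_sentences):
    """Attend over a memory of tutorial sentences and return the one
    whose key best matches the action embedding (illustrative)."""
    scores = memory_keys @ action_embedding        # dot-product relevance
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax attention
    best = int(np.argmax(weights))
    return memory_sentences[best], weights
```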

    High resolution style transfer
    Invention grant

    Publication number: US10650495B2

    Publication date: 2020-05-12

    Application number: US15997386

    Filing date: 2018-06-04

    Applicant: Adobe Inc.

    Abstract: High resolution style transfer techniques and systems are described that overcome the challenges of transferring high resolution style features from one image to another image, and of the limited availability of training data to perform high resolution style transfer. In an example, a neural network is trained using high resolution style features which are extracted from a style image and are used in conjunction with an input image to apply the style features to the input image to generate a version of the input image transformed using the high resolution style features.
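    Style features in neural style transfer are commonly summarized by channel-correlation (Gram) matrices of the network's feature maps; the abstract does not prescribe this statistic, so the sketch below is an illustrative assumption about how style features from a style image might be compared against those of an input image.

```python
import numpy as np

def gram_matrix(features):
    """Channel-correlation (Gram) matrix over a (channels, h, w) feature
    map -- a common style summary, assumed here for illustration."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.T) / (h * w)

def style_loss(input_feats, style_feats):
    """Mean squared difference between the two Gram matrices."""
    return np.mean((gram_matrix(input_feats) - gram_matrix(style_feats)) ** 2)
```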

    Deep high-resolution style synthesis

    Publication number: US10482639B2

    Publication date: 2019-11-19

    Application number: US15438147

    Filing date: 2017-02-21

    Applicant: Adobe Inc.

    Abstract: In some embodiments, techniques for synthesizing an image style based on a plurality of neural networks are described. A computer system selects a style image based on user input that identifies the style image. The computer system generates an image based on a generator neural network and a loss neural network. The generator neural network outputs the synthesized image based on a noise vector and the style image and is trained based on style features generated from the loss neural network. The loss neural network outputs the style features based on a training image. The training image and the style image have a same resolution. The style features are generated at different resolutions of the training image. The computer system provides the synthesized image to a user device in response to the user input.

    Image alignment for burst mode images

    Publication number: US10453204B2

    Publication date: 2019-10-22

    Application number: US15676903

    Filing date: 2017-08-14

    Applicant: Adobe Inc.

    Abstract: The present disclosure is directed towards systems and methods for generating a new aligned image from a plurality of burst images. The systems and methods subdivide a reference image into a plurality of local regions and a subsequent image into a plurality of corresponding local regions. Additionally, the systems and methods detect a plurality of feature points in each of the reference image and the subsequent image and determine matching feature point pairs between the reference image and the subsequent image. Based on the matching feature point pairs, the systems and methods determine at least one homography of the reference image to the subsequent image. Based on the homography, the systems and methods generate a new aligned image that is pixel-wise aligned to the reference image. Furthermore, the systems and methods refine boundaries between local regions of the new aligned image.
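    Estimating a homography from matched feature-point pairs is classically done with the direct linear transform (DLT): each pair contributes two linear equations, and the 3x3 matrix is recovered from the null space via SVD. This is a standard-technique sketch, not the patented per-region alignment pipeline.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Direct linear transform: solve for the 3x3 homography H mapping
    src_pts to dst_pts from >= 4 matched feature-point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Two equations per correspondence, linear in the entries of H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # The homography is the right singular vector of the smallest
    # singular value, reshaped to 3x3 and scale-normalized.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

    In practice a RANSAC loop around this solver rejects mismatched pairs before the final estimate.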

    Transferring motion between consecutive frames to a digital image

    Publication number: US10445921B1

    Publication date: 2019-10-15

    Application number: US16007898

    Filing date: 2018-06-13

    Applicant: Adobe Inc.

    Abstract: Transferring motion between consecutive frames to a digital image is leveraged in a digital medium environment. A digital image and at least a portion of a digital video are exposed to a motion transfer model. The portion of the digital video includes at least a first digital video frame and a second digital video frame that is consecutive to the first digital video frame. Flow data between the first digital video frame and the second digital video frame is extracted, and the flow data is then processed to generate motion features representing motion between the first digital video frame and the second digital video frame. The digital image is processed to generate image features of the digital image. Motion of the digital video is then transferred to the digital image by combining the motion features with the image features to generate a next digital image frame for the digital image.

    Oil painting stroke simulation using neural network

    Publication number: US10424086B2

    Publication date: 2019-09-24

    Application number: US15814751

    Filing date: 2017-11-16

    Applicant: Adobe Inc.

    Abstract: Oil painting simulation techniques are disclosed which simulate painting brush strokes using a trained neural network. In some examples, a method may include inferring a new height map of existing paint on a canvas after a new painting brush stroke is applied based on a bristle trajectory map that represents the new painting brush stroke and a height map of existing paint on the canvas prior to the application of the new painting brush stroke, and generating a rendering of the new painting brush stroke based on the new height map of existing paint on the canvas after the new painting brush stroke is applied to the canvas and a color map.
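    The inference step above takes a bristle trajectory map and the current height map and produces a new height map. The trained network is the patented component; the crude stand-in below only illustrates the input/output contract by depositing paint wherever the bristle map indicates contact (function name and deposit model are hypothetical).

```python
import numpy as np

def apply_stroke(height_map, bristle_map, deposit=1.0):
    """Stand-in for the trained network: return the new paint height map
    after a stroke, naively adding paint where bristles touched."""
    return height_map + deposit * bristle_map
```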

    Font replacement based on visual similarity

    Publication number: US10380462B2

    Publication date: 2019-08-13

    Application number: US16013791

    Filing date: 2018-06-20

    Applicant: Adobe Inc.

    Abstract: Font replacement based on visual similarity is described. In one or more embodiments, a font descriptor includes multiple font features derived from a visual appearance of a font by a font visual similarity model. The font visual similarity model can be trained using a machine learning system that recognizes similarity between visual appearances of two different fonts. A source computing device embeds a font descriptor in a document, which is transmitted to a destination computing device. The destination compares the embedded font descriptor to font descriptors corresponding to local fonts. Based on distances between the embedded and the local font descriptors, at least one matching font descriptor is determined. The local font corresponding to the matching font descriptor is deemed similar to the original font. The destination computing device controls presentations of the document using the similar local font. Computation of font descriptors can be outsourced to a remote location.
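    The matching step described above, selecting a local font by distance between descriptors, can be sketched as a nearest-neighbor lookup. This is a minimal sketch assuming Euclidean distance and dense descriptor vectors; the abstract does not specify the metric.

```python
import numpy as np

def match_font(embedded_descriptor, local_descriptors, local_names):
    """Return the local font whose descriptor is nearest (Euclidean
    distance assumed) to the descriptor embedded in the document."""
    dists = np.linalg.norm(local_descriptors - embedded_descriptor, axis=1)
    best = int(np.argmin(dists))
    return local_names[best], float(dists[best])
```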
