Enhanced video shot matching using generative adversarial networks

    Publication Number: US11158090B2

    Publication Date: 2021-10-26

    Application Number: US16692503

    Application Date: 2019-11-22

    Applicant: Adobe Inc.

    Abstract: This disclosure involves training generative adversarial networks to shot-match two unmatched images in a context-sensitive manner. For example, aspects of the present disclosure include accessing a trained generative adversarial network including a trained generator model and a trained discriminator model. A source image and a reference image may be inputted into the generator model to generate a modified source image. The modified source image and the reference image may be inputted into the discriminator model to determine a likelihood that the modified source image is color-matched with the reference image. The modified source image may be outputted as a shot-match with the reference image in response to determining, using the discriminator model, that the modified source image and the reference image are color-matched.
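
    The inference flow described above lends itself to a short sketch. The PyTorch snippet below is an illustrative approximation only: the layer choices, the class names (ShotMatchGenerator, ColorMatchDiscriminator), and the MATCH_THRESHOLD cutoff are assumptions for demonstration, not the patented architecture.

```python
# Minimal sketch of the described inference flow; architectures and names are assumed.
import torch
import torch.nn as nn

class ShotMatchGenerator(nn.Module):
    """Maps a (source, reference) pair to a color-graded source image."""
    def __init__(self):
        super().__init__()
        # Source and reference are concatenated on the channel axis (3 + 3 = 6).
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, source, reference):
        return self.net(torch.cat([source, reference], dim=1))

class ColorMatchDiscriminator(nn.Module):
    """Scores the likelihood that two images are color-matched."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, candidate, reference):
        return self.net(torch.cat([candidate, reference], dim=1))

# Output the modified source only if the discriminator deems it color-matched.
MATCH_THRESHOLD = 0.5  # assumed cutoff
generator, discriminator = ShotMatchGenerator(), ColorMatchDiscriminator()
source = torch.rand(1, 3, 256, 256)
reference = torch.rand(1, 3, 256, 256)
modified = generator(source, reference)
if discriminator(modified, reference).item() > MATCH_THRESHOLD:
    shot_matched = modified
```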

    Automatic digital parameter adjustment including tone and color correction

    Publication Number: US20210160466A1

    Publication Date: 2021-05-27

    Application Number: US16696160

    Application Date: 2019-11-26

    Applicant: Adobe Inc.

    Abstract: Systems and techniques for automatic digital parameter adjustment are described that leverage insights learned from an image set to automatically predict parameter values for an input item of digital visual content. To do so, the automatic digital parameter adjustment techniques described herein capture visual and contextual features of digital visual content to determine balanced visual output in a range of visual scenes and settings. The visual and contextual features are used to train, through machine learning techniques, a parameter adjustment model that captures feature patterns and interactions. The parameter adjustment model exploits these feature interactions to determine visually pleasing parameter values for an input item of digital visual content. The predicted parameter values are output, allowing the user to adjust them further.
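
    As a rough illustration of a model that maps visual and contextual features to parameter values, the sketch below uses an assumed parameter set (exposure, contrast, saturation, temperature), assumed feature dimensions, and a generic regressor head; none of these specifics come from the patent itself.

```python
# Illustrative regressor from concatenated visual/contextual features to parameter values.
import torch
import torch.nn as nn

PARAMETERS = ["exposure", "contrast", "saturation", "temperature"]  # assumed set

class ParameterAdjustmentModel(nn.Module):
    def __init__(self, visual_dim=512, context_dim=32):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(visual_dim + context_dim, 256), nn.ReLU(),
            nn.Linear(256, len(PARAMETERS)), nn.Tanh(),  # slider values in [-1, 1]
        )

    def forward(self, visual_features, context_features):
        return self.head(torch.cat([visual_features, context_features], dim=-1))

model = ParameterAdjustmentModel()
predicted = model(torch.rand(1, 512), torch.rand(1, 32))
# The predicted values seed the editing controls and remain user-adjustable.
print(dict(zip(PARAMETERS, predicted.squeeze(0).tolist())))
```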

    Object animation using generative neural networks

    Publication Number: US10977549B2

    Publication Date: 2021-04-13

    Application Number: US16276559

    Application Date: 2019-02-14

    Applicant: Adobe Inc.

    Abstract: In implementations of object animation using generative neural networks, one or more computing devices of a system implement an animation system for reproducing animation of an object in a digital video. A mesh of the object is obtained from a first frame of the digital video and a second frame of the digital video having the object is selected. Features of the object from the second frame are mapped to vertices of the mesh, and the mesh is warped based on the mapping. The warped mesh is rendered as an image by a neural renderer and compared to the object from the second frame to train a neural network. The rendered image is then refined by a generator of a generative adversarial network which includes a discriminator. The discriminator trains the generator to reproduce the object from the second frame as the refined image.
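
    The pipeline above (warp the frame-1 mesh toward frame-2 features, render the warped mesh with a neural renderer, refine the rendering with a GAN generator) can be outlined roughly as follows. Every module here is a simplified placeholder with assumed shapes and layer choices; the training of the networks and the feature-to-vertex mapping are omitted.

```python
# High-level placeholder sketch of the warp -> render -> refine pipeline.
import torch
import torch.nn as nn

class NeuralRenderer(nn.Module):
    """Maps flattened 2D mesh vertices to an image (stand-in for the neural renderer)."""
    def __init__(self, num_vertices, image_size=64):
        super().__init__()
        self.image_size = image_size
        self.fc = nn.Linear(num_vertices * 2, 3 * image_size * image_size)

    def forward(self, vertices):                     # vertices: (B, V, 2)
        out = self.fc(vertices.flatten(1))
        return out.view(-1, 3, self.image_size, self.image_size)

class RefinementGenerator(nn.Module):
    """Generator that cleans up the rendered image (trained adversarially in the abstract)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, rendered):
        return self.net(rendered)

def animate(mesh_vertices, per_vertex_offsets, renderer, refiner):
    """Warp the frame-1 mesh toward frame-2 features, render, then refine."""
    warped = mesh_vertices + per_vertex_offsets      # warp step
    rendered = renderer(warped)                      # differentiable rendering
    return refiner(rendered)                         # refinement by the generator

V = 128
renderer, refiner = NeuralRenderer(V), RefinementGenerator()
frame = animate(torch.rand(1, V, 2), 0.05 * torch.randn(1, V, 2), renderer, refiner)
```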

    Controlling smoothness of a transition between images

    Publication Number: US10915991B2

    Publication Date: 2021-02-09

    Application Number: US16509263

    Application Date: 2019-07-11

    Applicant: Adobe Inc.

    Abstract: Embodiments described herein are directed to methods and systems for facilitating control of the smoothness of transitions between images. In embodiments, differences in color values between pixels of a foreground image and a background image are identified along a boundary associated with the location at which the foreground image is to be pasted relative to the background image. Thereafter, recursive downsampling of a region of pixels within the boundary by a sampling factor is performed to produce a plurality of downsampled images having color difference indicators associated with each pixel of the downsampled images. These color difference indicators indicate whether a color value difference exists for the corresponding pixel. To effect a seamless transition, the color difference indicators are normalized in association with each recursively downsampled image.
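
    A rough NumPy sketch of the pyramid construction follows, assuming a sampling factor of 2, simple block averaging, and a normalization that divides each downsampled difference by the fraction of boundary pixels that contributed to it; the patented filter and normalization may differ.

```python
# Sketch: boundary color differences, recursive downsampling, per-level normalization.
import numpy as np

def boundary_color_difference(foreground, background, boundary_mask):
    """Per-pixel color difference, kept only on the paste boundary."""
    diff = foreground.astype(np.float64) - background.astype(np.float64)
    return diff * boundary_mask[..., None]

def recursive_downsample(diff, indicator, factor=2, levels=4):
    """Build a pyramid of (difference, indicator) pairs and normalize each level."""
    pyramid = []
    for _ in range(levels):
        h, w = diff.shape[0] // factor, diff.shape[1] // factor
        diff = diff[:h * factor, :w * factor].reshape(h, factor, w, factor, -1).mean((1, 3))
        indicator = indicator[:h * factor, :w * factor].reshape(h, factor, w, factor).mean((1, 3))
        # Normalize: where any boundary difference contributed, rescale by its coverage.
        normalized = np.divide(diff, indicator[..., None],
                               out=np.zeros_like(diff), where=indicator[..., None] > 0)
        pyramid.append((normalized, indicator))
    return pyramid

fg = np.random.rand(256, 256, 3)
bg = np.random.rand(256, 256, 3)
boundary = np.zeros((256, 256)); boundary[64, 64:192] = 1   # assumed paste boundary
diff0 = boundary_color_difference(fg, bg, boundary)
pyramid = recursive_downsample(diff0, boundary)
```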

    Texture interpolation using neural networks

    Publication Number: US10818043B1

    Publication Date: 2020-10-27

    Application Number: US16392968

    Application Date: 2019-04-24

    Applicant: Adobe Inc.

    Abstract: An example method for neural network based interpolation of image textures includes training a global encoder network to generate global latent vectors based on training texture images, and training a local encoder network to generate local latent tensors based on the training texture images. The example method further includes interpolating between the global latent vectors associated with each set of training images, and interpolating between the local latent tensors associated with each set of training images. The example method further includes training a decoder network to generate reconstructions of the training texture images and to generate an interpolated texture based on the interpolated global latent vectors and the interpolated local latent tensors. The training of the encoder and decoder networks is based on a minimization of a loss function of the reconstructions and a minimization of a loss function of the interpolated texture.
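
    The encoder/decoder layout reads roughly as in the sketch below; the layer sizes, latent dimensions, and fixed interpolation weight alpha are assumptions, and the loss terms used during training are omitted.

```python
# Sketch: global vector + local tensor encoders, latent interpolation, shared decoder.
import torch
import torch.nn as nn

class GlobalEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )
    def forward(self, x):
        return self.net(x)                      # (B, dim) global latent vector

class LocalEncoder(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)                      # (B, C, H, W) local latent tensor

class Decoder(nn.Module):
    def __init__(self, dim=128, channels=32):
        super().__init__()
        self.net = nn.Conv2d(channels + dim, 3, 3, padding=1)
    def forward(self, global_z, local_z):
        g = global_z[:, :, None, None].expand(-1, -1, *local_z.shape[2:])
        return torch.sigmoid(self.net(torch.cat([local_z, g], dim=1)))

g_enc, l_enc, dec = GlobalEncoder(), LocalEncoder(), Decoder()
tex_a, tex_b = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
alpha = 0.5                                     # assumed interpolation weight
global_z = alpha * g_enc(tex_a) + (1 - alpha) * g_enc(tex_b)
local_z = alpha * l_enc(tex_a) + (1 - alpha) * l_enc(tex_b)
interpolated_texture = dec(global_z, local_z)
```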

    Image composites using a generative neural network

    Publication Number: US20200302251A1

    Publication Date: 2020-09-24

    Application Number: US16897068

    Application Date: 2020-06-09

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to an image composite system that employs a generative adversarial network to generate realistic composite images. For example, in one or more embodiments, the image composite system trains a geometric prediction neural network using an adversarial discrimination neural network to learn warp parameters that provide correct geometric alignment of foreground objects with respect to a background image. Once trained, the determined warp parameters provide realistic geometric corrections to foreground objects such that the warped foreground objects appear to blend into background images naturally when composited together.
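
    As a loose illustration, the snippet below predicts an affine warp (a simpler stand-in for the learned warp parameterization) from a foreground/background pair and composites the warped foreground over the background; the adversarial training loop is not reproduced, and the network layout is assumed.

```python
# Sketch: a geometric-prediction network initialized to the identity warp,
# applied with a spatial-transformer-style grid sample before compositing.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometricPredictor(nn.Module):
    """Predicts 2x3 affine warp parameters for the foreground object."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(7, 32, 4, stride=2, padding=1), nn.ReLU(),   # fg RGBA + bg RGB
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(32, 6)
        # Start from the identity warp so training begins with "no correction".
        self.fc.weight.data.zero_()
        self.fc.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, foreground_rgba, background_rgb):
        theta = self.fc(self.features(torch.cat([foreground_rgba, background_rgb], 1)))
        return theta.view(-1, 2, 3)

def composite(foreground_rgba, background_rgb, predictor):
    theta = predictor(foreground_rgba, background_rgb)
    grid = F.affine_grid(theta, foreground_rgba.shape, align_corners=False)
    warped = F.grid_sample(foreground_rgba, grid, align_corners=False)
    rgb, alpha = warped[:, :3], warped[:, 3:]
    return alpha * rgb + (1 - alpha) * background_rgb

fg = torch.rand(1, 4, 128, 128)   # foreground with alpha mask
bg = torch.rand(1, 3, 128, 128)
image = composite(fg, bg, GeometricPredictor())
```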

    Interactive system for automatically synthesizing a content-aware fill

    Publication Number: US10706509B2

    Publication Date: 2020-07-07

    Application Number: US15921447

    Application Date: 2018-03-14

    Applicant: Adobe Inc.

    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for automatically synthesizing a content-aware fill using similarity transformed patches. A user interface receives a user-specified hole and a user-specified sampling region, both of which may be stored in a constraint mask. A brush tool can be used to interactively brush the sampling region and modify the constraint mask. The mask is passed to a patch-based synthesizer configured to synthesize the fill using similarity transformed patches sampled from the sampling region. Fill properties such as similarity transform parameters can be set to control the manner in which the fill is synthesized. A live preview can be provided with gradual updates of the synthesized fill prior to completion. Once a fill has been synthesized, the user interface presents the original image, replacing the hole with the synthesized fill.
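
    The snippet below is a heavily simplified stand-in for this workflow: a constraint mask labels the hole and the brushed sampling region, and fill properties bound the similarity transform (rotation and uniform scale) applied to sampled patches. The mask labels, parameter ranges, and per-pixel copy strategy are assumptions; the real synthesizer performs iterative patch-based optimization, which is omitted.

```python
# Simplified sketch: fill hole pixels from similarity-transformed patches
# sampled inside the user-brushed sampling region of a constraint mask.
import numpy as np
from scipy.ndimage import rotate, zoom

HOLE, SAMPLE = 1, 2                         # assumed constraint-mask labels

def synthesize_fill(image, constraint_mask, patch=17,
                    max_rotation_deg=45.0, scale_range=(0.9, 1.1), seed=0):
    rng = np.random.default_rng(seed)
    out = image.copy()
    hole_ys, hole_xs = np.where(constraint_mask == HOLE)
    src_ys, src_xs = np.where(constraint_mask == SAMPLE)
    half = patch // 2
    for y, x in zip(hole_ys, hole_xs):
        i = rng.integers(len(src_ys))
        sy, sx = src_ys[i], src_xs[i]
        block = image[max(sy - half, 0): sy + half + 1,
                      max(sx - half, 0): sx + half + 1]
        # Similarity transform: random rotation and uniform scale within bounds.
        block = rotate(block, rng.uniform(-max_rotation_deg, max_rotation_deg),
                       reshape=False, mode="reflect")
        block = zoom(block, (rng.uniform(*scale_range),) * 2 + (1,), mode="reflect")
        out[y, x] = block[block.shape[0] // 2, block.shape[1] // 2]
    return out

img = np.random.rand(64, 64, 3)
mask = np.zeros((64, 64), dtype=int)
mask[24:40, 24:40] = HOLE                   # user-specified hole
mask[:16, :] = SAMPLE                       # user-brushed sampling region
filled = synthesize_fill(img, mask)
```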

    Environmental map generation from a digital image

    Publication Number: US10650490B2

    Publication Date: 2020-05-12

    Application Number: US16379496

    Application Date: 2019-04-09

    Applicant: Adobe Inc.

    Abstract: Environmental map generation techniques and systems are described. A digital image is scaled to achieve a target aspect ratio using a content aware scaling technique. A canvas is generated that is dimensionally larger than the scaled digital image and the scaled digital image is inserted within the canvas thereby resulting in an unfilled portion of the canvas. An initially filled canvas is then generated by filling the unfilled portion using a content aware fill technique based on the inserted digital image. A plurality of polar coordinate canvases is formed by transforming original coordinates of the canvas into polar coordinates. The unfilled portions of the polar coordinate canvases are filled using a content-aware fill technique that is initialized based on the initially filled canvas. An environmental map of the digital image is generated by combining a plurality of original coordinate canvas portions formed from the polar coordinate canvases.
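
    Of the stages above, the polar-coordinate remapping is the most self-contained, and the sketch below illustrates only that step, assuming an equirectangular 2:1 canvas and nearest-neighbour sampling; the content-aware scaling and fill stages are represented by comments only.

```python
# Sketch: remap an equirectangular canvas to polar coordinates and flag the
# unfilled corner region that a content-aware fill would complete.
import numpy as np

def to_polar(canvas):
    """Remap an equirectangular canvas so one pole sits at the image center."""
    h, w = canvas.shape[:2]
    size = min(h, w)
    ys, xs = np.mgrid[0:size, 0:size]
    dx, dy = xs - size / 2.0, ys - size / 2.0
    radius = np.sqrt(dx ** 2 + dy ** 2) / (size / 2.0)       # 0 at pole, 1 at horizon
    angle = (np.arctan2(dy, dx) + np.pi) / (2 * np.pi)        # 0..1 around the pole
    src_x = np.clip((angle * (w - 1)).astype(int), 0, w - 1)
    src_y = np.clip((radius * (h - 1)).astype(int), 0, h - 1)
    polar = canvas[src_y, src_x]
    unfilled = radius > 1.0                                   # corners still need filling
    return polar, unfilled

canvas = np.random.rand(256, 512, 3)        # scaled image already pasted and pre-filled
polar_view, needs_fill = to_polar(canvas)   # pass to a content-aware fill initialized
                                            # from the pre-filled canvas, then invert
                                            # the mapping and combine the results
```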

    Image modification using detected symmetry

    Publication Number: US10573040B2

    Publication Date: 2020-02-25

    Application Number: US15346638

    Application Date: 2016-11-08

    Applicant: Adobe Inc.

    Abstract: Image modification using detected symmetry is described. In example implementations, an image modification module detects multiple local symmetries in an original image by discovering repeated correspondences that are each related by a transformation. The transformation can include a translation, a rotation, a reflection, a scaling, or a combination thereof. Each repeated correspondence includes three patches that are similar to one another and are respectively defined by three pixels of the original image. The image modification module generates a global symmetry of the original image by analyzing an applicability to the multiple local symmetries of multiple candidate homographies contributed by the multiple local symmetries. The image modification module associates individual pixels of the original image with a global symmetry indicator to produce a global symmetry association map. The image modification module produces a manipulated image by manipulating the original image under global symmetry constraints imposed by the global symmetry association map.
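
    The sketch below illustrates just one building block of such a system: scoring how well a candidate transformation (here a pure translation, one of the transformation types named above) maps image patches onto similar patches. The three-patch correspondences, homography voting, and the global symmetry association map are not reproduced, and the candidate offsets are assumed.

```python
# Sketch: score candidate translations by average patch self-similarity.
import numpy as np

def patch(image, y, x, half=3):
    return image[y - half: y + half + 1, x - half: x + half + 1]

def translation_symmetry_score(image, dy, dx, stride=8, half=3):
    """Average patch similarity between pixels and their translated counterparts."""
    h, w = image.shape[:2]
    scores = []
    for y in range(half, h - half - abs(dy), stride):
        for x in range(half, w - half - abs(dx), stride):
            a, b = patch(image, y, x, half), patch(image, y + dy, x + dx, half)
            scores.append(-np.mean((a - b) ** 2))   # higher is more similar
    return float(np.mean(scores))

image = np.random.rand(128, 128, 3)
candidates = [(0, 16), (0, 32), (16, 0)]            # assumed candidate offsets
best = max(candidates, key=lambda t: translation_symmetry_score(image, *t))
```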

    Deep patch feature prediction for image inpainting

    Publication Number: US20190295227A1

    Publication Date: 2019-09-26

    Application Number: US15935994

    Application Date: 2018-03-26

    Applicant: Adobe Inc.

    Abstract: Techniques for using deep learning to facilitate patch-based image inpainting are described. In an example, a computer system hosts a neural network trained to generate, from an image, code vectors including features learned by the neural network and descriptive of patches. The image is received and contains a region of interest (e.g., a hole missing content). The computer system inputs it to the network and, in response, receives the code vectors. Each code vector is associated with a pixel in the image. Rather than comparing RGB values between patches, the computer system compares the code vector of a pixel inside the region to code vectors of pixels outside the region to find the best match based on a feature similarity measure (e.g., a cosine similarity). The pixel value of the pixel inside the region is set based on the pixel value of the matched pixel outside this region.
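
    The matching step is straightforward to sketch: per-pixel code vectors are compared by cosine similarity, and the best match outside the region supplies the pixel value. In the snippet below the feature network is assumed given (random tensors stand in for its output), and the function name is illustrative.

```python
# Sketch: fill hole pixels from the exterior pixels whose code vectors are
# most similar under cosine similarity.
import torch
import torch.nn.functional as F

def inpaint_by_feature_match(image, code_vectors, hole_mask):
    """image: (3, H, W), code_vectors: (C, H, W), hole_mask: (H, W) bool."""
    c, h, w = code_vectors.shape
    codes = F.normalize(code_vectors.reshape(c, -1), dim=0)   # unit-length features
    hole = hole_mask.reshape(-1)
    inside, outside = codes[:, hole], codes[:, ~hole]
    # Cosine similarity of every hole pixel against every known pixel.
    similarity = inside.t() @ outside                          # (n_in, n_out)
    best = similarity.argmax(dim=1)
    out = image.clone().reshape(3, -1)
    out[:, hole] = out[:, ~hole][:, best]
    return out.reshape(3, h, w)

image = torch.rand(3, 64, 64)
code_vectors = torch.rand(32, 64, 64)        # would come from the trained network
hole_mask = torch.zeros(64, 64, dtype=torch.bool)
hole_mask[20:30, 20:30] = True
filled = inpaint_by_feature_match(image, code_vectors, hole_mask)
```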
