Digital Image Completion Using Deep Learning
    Publication type: Invention application

    Publication No.: US20200184610A1

    Publication Date: 2020-06-11

    Application No.: US16791939

    Filing Date: 2020-02-14

    Applicant: Adobe Inc.

    Abstract: Digital image completion using deep learning is described. Initially, a digital image having at least one hole is received. This holey digital image is provided as input to an image completer formed with a framework that combines generative and discriminative neural networks based on learning architecture of the generative adversarial networks. From the holey digital image, the generative neural network generates a filled digital image having hole-filling content in place of holes. The discriminative neural networks detect whether the filled digital image and the hole-filling digital content correspond to or include computer-generated content or are photo-realistic. The generating and detecting are iteratively continued until the discriminative neural networks fail to detect computer-generated content for the filled digital image and hole-filling content or until detection surpasses a threshold difficulty. Responsive to this, the image completer outputs the filled digital image with hole-filling content in place of the holey digital image's holes.
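    The abstract's generate-and-detect loop can be sketched as follows. This is a minimal illustration, not the patented method: the toy generator and discriminator below are hypothetical stand-ins for the trained generative and discriminative neural networks.

```python
import numpy as np

def complete_image(holey, mask, generator, discriminator,
                   max_iters=20, threshold=0.1):
    """Iteratively generate hole-filling content (mask == True marks
    holes) until the discriminator can no longer detect
    computer-generated content, then output the filled image."""
    filled = holey.copy()
    for _ in range(max_iters):
        proposal = generator(filled, mask)
        filled = np.where(mask, proposal, holey)  # fill only the holes
        if discriminator(filled, mask) < threshold:
            break  # detection failed / surpassed the difficulty bar
    return filled

# Toy stand-ins for the neural networks (illustration only).
def toy_generator(img, mask):
    # Propose the mean of the known pixels everywhere.
    return np.full_like(img, img[~mask].mean())

def toy_discriminator(img, mask):
    # "Computer-generated" score: deviation of hole content from the
    # statistics of the known region.
    return abs(img[mask].mean() - img[~mask].mean())

holey = np.array([[0.4, 0.5], [0.0, 0.6]])
mask = np.array([[False, False], [True, False]])
filled = complete_image(holey, mask, toy_generator, toy_discriminator)
```

    The composite step (`np.where(mask, proposal, holey)`) guarantees known pixels are never altered; only hole content is iterated on.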

    Editing digital images utilizing a neural network with an in-network rendering layer

    Publication No.: US10430978B2

    Publication Date: 2019-10-01

    Application No.: US15448206

    Filing Date: 2017-03-02

    Applicant: Adobe Inc.

    Abstract: The present disclosure includes methods and systems for generating modified digital images utilizing a neural network that includes a rendering layer. In particular, the disclosed systems and methods can train a neural network to decompose an input digital image into intrinsic physical properties (e.g., material, illumination, and shape). Moreover, the systems and methods can substitute one of the intrinsic physical properties for a target property (e.g., a modified material, illumination, or shape). The systems and methods can utilize a rendering layer trained to synthesize a digital image to generate a modified digital image based on the target property and the remaining (unsubstituted) intrinsic physical properties. The systems and methods can increase the accuracy of modified digital images by generating modified digital images that realistically reflect a confluence of intrinsic physical properties of an input digital image and target (i.e., modified) properties.
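    The decompose-substitute-render idea can be sketched with a closed-form Lambertian shading model standing in for the trained in-network rendering layer (the real layer is learned; this stand-in and the property names `material`/`shape`/`light` are assumptions for illustration):

```python
import numpy as np

def rendering_layer(material, normals, light_dir):
    """Stand-in rendering layer: Lambertian shading, i.e. per-pixel
    albedo times the clamped dot product of surface normal and light
    direction."""
    light = np.asarray(light_dir, dtype=float)
    light = light / np.linalg.norm(light)
    shading = np.clip(normals @ light, 0.0, None)
    return material * shading

def edit_image(intrinsics, which, target):
    """Substitute one decomposed intrinsic property (material,
    illumination, or shape) for a target property, then re-render."""
    props = dict(intrinsics)
    props[which] = target
    return rendering_layer(props["material"], props["shape"], props["light"])

# Four pixels of a flat, upward-facing surface, lit from above.
intrinsics = {
    "material": np.full(4, 0.8),
    "shape": np.tile([0.0, 0.0, 1.0], (4, 1)),
    "light": [0.0, 0.0, 1.0],
}
original = rendering_layer(intrinsics["material"], intrinsics["shape"],
                           intrinsics["light"])
relit = edit_image(intrinsics, "light", [1.0, 0.0, 0.0])  # side light
```

    Because the other intrinsic properties are held fixed, the edit changes only the targeted property's contribution to the rendered result.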

    GENERATING THREE-DIMENSIONAL LOOPING ANIMATIONS FROM STILL IMAGES

    Publication No.: US20240428491A1

    Publication Date: 2024-12-26

    Application No.: US18340445

    Filing Date: 2023-06-23

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to a system that utilizes neural networks to generate looping animations from still images. The system fits a 3D model to a pose of a person in a digital image. The system receives a 3D animation sequence that transitions between a starting pose and an ending pose. The system generates, utilizing an animation transition neural network, first and second 3D animation transition sequences that respectively transition between the pose of the person and the starting pose and between the ending pose and the pose of the person. The system modifies each of the 3D animation sequence, the first 3D animation transition sequence, and the second 3D animation transition sequence by applying a texture map. The system generates a looping 3D animation by combining the modified 3D animation sequence, the modified first 3D animation transition sequence, and the modified second 3D animation transition sequence.
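    The sequence-combination step can be sketched as follows; linear interpolation stands in for the animation transition neural network (an assumption for illustration), and poses are reduced to 2-D vectors:

```python
import numpy as np

def transition(pose_a, pose_b, frames):
    """Stand-in for the animation transition network: linearly
    interpolate between two poses over a number of frames."""
    t = np.linspace(0.0, 1.0, frames)[:, None]
    return (1 - t) * pose_a + t * pose_b

def looping_animation(person_pose, animation):
    """Combine the entry transition (person pose -> starting pose), the
    animation sequence itself, and the exit transition (ending pose ->
    person pose) so the result starts and ends at the person's pose."""
    intro = transition(person_pose, animation[0], frames=5)
    outro = transition(animation[-1], person_pose, frames=5)
    return np.concatenate([intro, animation, outro])

person = np.array([0.0, 0.0])
anim = transition(np.array([1.0, 0.0]), np.array([1.0, 1.0]), frames=8)
loop = looping_animation(person, anim)
```

    Since `loop[0]` and `loop[-1]` both equal the person's original pose, repeating the sequence plays back seamlessly.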

    Digital Object Animation
    Publication type: Invention publication

    Publication No.: US20230186544A1

    Publication Date: 2023-06-15

    Application No.: US17550432

    Filing Date: 2021-12-14

    Applicant: Adobe Inc.

    CPC classification number: G06T13/80 G06F3/017 G06T7/33 G06F3/012

    Abstract: Digital object animation techniques are described. In a first example, translation-based animation of the digital object operates using control points of the digital object. In another example, the animation system is configured to minimize the number of feature positions that are used to generate the animation. In a further example, an input pose is normalized through use of a global scale factor to address changes in a z-position of a subject in different digital images. Yet further, a body tracking module is used to compute initial feature positions. The initial feature positions are then used to initialize a face tracker module to generate feature positions of the face. The animation system also supports a plurality of modes used to generate the digital object, techniques to define a base of the digital object, and a friction term limiting movement of feature positions based on contact with a ground plane.
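    The global-scale-factor normalization can be sketched as follows. Which pair of features serves as the measuring stick (e.g. the shoulders) is an assumption for illustration:

```python
import numpy as np

def normalize_pose(feature_positions, reference_span, anchor=(0, 1)):
    """Normalize an input pose by a single global scale factor so that
    changes in the subject's z-position (apparent size) across digital
    images do not change the animation. The span between two anchor
    features is used as the measuring stick."""
    a, b = anchor
    observed = np.linalg.norm(feature_positions[a] - feature_positions[b])
    scale = reference_span / observed
    return feature_positions * scale

# The same pose captured close to the camera (twice as large in the
# image) and far from it normalizes to identical feature positions.
far = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 2.0]])
near = far * 2.0
norm_far = normalize_pose(far, reference_span=1.0)
norm_near = normalize_pose(near, reference_span=1.0)
```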

    INSERTING THREE-DIMENSIONAL OBJECTS INTO DIGITAL IMAGES WITH CONSISTENT LIGHTING VIA GLOBAL AND LOCAL LIGHTING INFORMATION

    Publication No.: US20230037591A1

    Publication Date: 2023-02-09

    Application No.: US17383294

    Filing Date: 2021-07-22

    Applicant: Adobe Inc.

    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map. Additionally, the disclosed system generates a modified digital image with the three-dimensional object inserted into the digital image with consistent lighting of the three-dimensional object and the digital image.
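    The self-occlusion map construction can be sketched with a sphere standing in for the digital 3D object (an assumption for illustration; any ray-object intersection test would take its place):

```python
import numpy as np

def ray_hits_sphere(origin, direction, center, radius, eps=1e-6):
    """True if the ray origin + t*direction (t > eps, unit direction)
    intersects the sphere."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return False
    root = np.sqrt(disc)
    return (-b - root) / 2.0 > eps or (-b + root) / 2.0 > eps

def self_occlusion_map(points, ray_dirs, center, radius):
    """For each visible surface point, the fraction of a fixed set of
    uniformly sampled rays that re-intersect the object."""
    return np.array([
        sum(ray_hits_sphere(p, d, center, radius) for d in ray_dirs)
        / len(ray_dirs)
        for p in points
    ])

# Top of a unit sphere: the downward ray re-enters the object, the
# upward ray escapes, so half of this 2-ray set is occluded.
points = np.array([[0.0, 0.0, 1.0]])
rays = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
occ = self_occlusion_map(points, rays, center=np.zeros(3), radius=1.0)
```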

    End-to-end relighting of a foreground object of an image

    Publication No.: US11380023B2

    Publication Date: 2022-07-05

    Application No.: US16823092

    Filing Date: 2020-03-18

    Applicant: Adobe Inc.

    Abstract: Introduced here are techniques for relighting an image by automatically segmenting a human object in an image. The segmented image is input to an encoder that transforms it into a feature space. The feature space is concatenated with coefficients of a target illumination for the image and input to an albedo decoder and a light transport decoder to predict an albedo map and a light transport matrix, respectively. In addition, the output of the encoder is concatenated with outputs of residual parts of each decoder and fed to a light coefficients block, which predicts coefficients of the illumination for the image. The light transport matrix and predicted illumination coefficients are multiplied to obtain a shading map that can sharpen details of the image. The resulting image is scaled by the albedo map to produce the relit image, which can then be refined to reduce noise.
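    The final two steps reduce to matrix arithmetic; a minimal sketch with toy shapes and numbers (the real transport matrix and coefficients come from the networks described above):

```python
import numpy as np

def relight(albedo, light_transport, light_coeffs):
    """Multiply the light transport matrix by the illumination
    coefficients to obtain a per-pixel shading map, then scale by the
    albedo map to produce the relit image (T has shape
    pixels x coefficients)."""
    shading = light_transport @ light_coeffs
    return albedo * shading

# Two pixels, two illumination coefficients (toy numbers).
albedo = np.array([0.5, 1.0])
transport = np.array([[1.0, 0.0],
                      [0.0, 2.0]])
coeffs = np.array([1.0, 1.0])
relit = relight(albedo, transport, coeffs)
```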

    Hierarchical scale matching and patch estimation for image style transfer with arbitrary resolution

    Publication No.: US11232547B2

    Publication Date: 2022-01-25

    Application No.: US16930736

    Filing Date: 2020-07-16

    Applicant: Adobe Inc.

    Abstract: A style of a digital image is transferred to another digital image of arbitrary resolution. A high-resolution (HR) content image is segmented into several low-resolution (LR) patches. The resolution of a style image is matched to have the same resolution as the LR content image patches. Style transfer is then performed on a patch-by-patch basis using, for example, a pair of feature transforms—whitening and coloring. The patch-by-patch style transfer process is then repeated at several increasing resolutions, or scale levels, of both the content and style images. The results of the style transfer at each scale level are incorporated into successive scale levels up to and including the original HR scale. As a result, style transfer can be performed with images having arbitrary resolutions to produce visually pleasing results with good spatial consistency.
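    The whitening and coloring feature transforms mentioned in the abstract can be sketched on raw (channels × samples) feature matrices; in the patented pipeline they would run per patch, per scale level, on encoder features:

```python
import numpy as np

def wct(content_feat, style_feat, eps=1e-8):
    """Whitening-coloring transform: whiten the content features to
    unit covariance, then color them with the style covariance so the
    transferred patch matches the style's second-order feature
    statistics."""
    cf = content_feat - content_feat.mean(axis=1, keepdims=True)
    sf = style_feat - style_feat.mean(axis=1, keepdims=True)

    def cov_power(x, power):
        # Symmetric matrix power of the covariance via eigendecomposition.
        cov = x @ x.T / (x.shape[1] - 1)
        w, v = np.linalg.eigh(cov)
        return v @ np.diag(np.clip(w, eps, None) ** power) @ v.T

    whitened = cov_power(cf, -0.5) @ cf      # identity covariance
    colored = cov_power(sf, 0.5) @ whitened  # style covariance
    return colored + style_feat.mean(axis=1, keepdims=True)

rng = np.random.default_rng(0)
content = rng.normal(size=(3, 200))
style = 2.0 * rng.normal(size=(3, 200)) + 1.0
out = wct(content, style)
```

    After the transform, the output's feature covariance matches the style's, which is what makes the patch render in the style image's look.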

    Motion Retargeting with Kinematic Constraints

    Publication No.: US20220020199A1

    Publication Date: 2022-01-20

    Application No.: US17486269

    Filing Date: 2021-09-27

    Applicant: Adobe Inc.

    Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine-tunes the conformance of retargeted motion of a target object to the identified kinematic constraints.
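    One simple instance of such an iterative optimization is alternating projection: pull the retargeted motion toward the source, then project onto the constraint set. The ground-plane foot constraint below is a hypothetical example, not the patent's specific constraint formulation:

```python
import numpy as np

def retarget_heights(source_heights, ground=0.0, iters=50, step=0.5):
    """Iterative optimization sketch: alternate between conforming the
    retargeted motion to the source motion and projecting it onto a
    kinematic constraint identified from the environment (here: foot
    height must not penetrate the ground plane)."""
    y = np.asarray(source_heights, dtype=float).copy()
    for _ in range(iters):
        y = y + step * (source_heights - y)  # conform to source motion
        y = np.maximum(y, ground)            # enforce the constraint
    return y

# A frame where the source foot dips below the ground is lifted onto
# the plane; unconstrained frames are preserved exactly.
heights = np.array([0.2, -0.1, 0.3])
retargeted = retarget_heights(heights)
```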

    Motion retargeting with kinematic constraints

    Publication No.: US11170551B1

    Publication Date: 2021-11-09

    Application No.: US16864724

    Filing Date: 2020-05-01

    Applicant: Adobe Inc.

    Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine-tunes the conformance of retargeted motion of a target object to the identified kinematic constraints.

    Generating novel views of a three-dimensional object based on a single two-dimensional image

    Publication No.: US11115645B2

    Publication Date: 2021-09-07

    Application No.: US16230872

    Filing Date: 2018-12-21

    Applicant: Adobe Inc.

    Abstract: Embodiments are directed towards providing a target view, from a target viewpoint, of a 3D object. A source image, from a source viewpoint and including a common portion of the object, is encoded in 2D data. An intermediate image that includes an intermediate view of the object is generated based on the data. The intermediate view is from the target viewpoint and includes the common portion of the object and a disoccluded portion of the object not visible in the source image. The intermediate image includes a common region and a disoccluded region corresponding to the disoccluded portion of the object. The disoccluded region is updated to include a visual representation of a prediction of the disoccluded portion of the object. The prediction is based on a trained image completion model. The target view is based on the common region and the updated disoccluded region of the intermediate image.
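    The warp-then-complete pipeline can be sketched in one dimension: reprojection to the target viewpoint leaves a disoccluded region with no source correspondence, which an image completion model then fills. The pixel-shift warp and mean-filling completer are toy stand-ins for illustration:

```python
import numpy as np

def warp_to_target(source_row, shift):
    """Toy 1-D stand-in for reprojecting the source view to the target
    viewpoint: pixels shift by a disparity, and positions with no
    source correspondence become the disoccluded region (NaN)."""
    out = np.full(len(source_row), np.nan)
    out[shift:] = source_row[:len(source_row) - shift]
    return out, np.isnan(out)

def complete_target_view(intermediate, disoccluded, completer):
    """Update the disoccluded region with the completion model's
    prediction; the common region keeps the reprojected source pixels."""
    return np.where(disoccluded, completer(intermediate, disoccluded),
                    intermediate)

row = np.array([1.0, 2.0, 3.0, 4.0])
intermediate, disoccluded = warp_to_target(row, shift=1)
# Toy completion model: predict the mean of the visible pixels.
target = complete_target_view(
    intermediate, disoccluded,
    lambda img, m: np.full_like(img, img[~m].mean()))
```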
