INPAINTING DIGITAL IMAGES USING A HYBRID WIRE REMOVAL PIPELINE

    Publication No.: US20240303787A1

    Publication Date: 2024-09-12

    Application No.: US18179855

    Application Date: 2023-03-07

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for inpainting a digital image using a hybrid wire removal pipeline. For example, the disclosed systems use a hybrid wire removal pipeline that integrates multiple machine learning models, such as a wire segmentation model, a hole separation model, a mask dilation model, a patch-based inpainting model, and a deep inpainting model. Using the hybrid wire removal pipeline, in some embodiments, the disclosed systems generate a wire segmentation from a digital image depicting one or more wires. The disclosed systems also utilize the hybrid wire removal pipeline to extract or identify portions of the wire segmentation that indicate specific wires or portions of wires. In certain embodiments, the disclosed systems further inpaint pixels of the digital image corresponding to the wires indicated by the wire segmentation mask using the patch-based inpainting model and/or the deep inpainting model.
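
    The abstract describes a staged pipeline rather than an implementation. The sketch below is hypothetical: every learned model is replaced by a trivial stub, and the names segment_wires, separate_holes, dilate, and inpaint_hole are illustrative, not Adobe's components. It only shows how a segmentation / hole-separation / dilation / inpainting flow could be composed.

        # Sketch of a staged wire-removal pipeline (hypothetical interfaces,
        # not the patented implementation). Each stage stands in for a model.
        import numpy as np
        from scipy import ndimage

        def segment_wires(image: np.ndarray) -> np.ndarray:
            """Stub wire-segmentation model: binary mask of wire pixels."""
            # Placeholder heuristic: mark very dark pixels as "wire".
            return (image.mean(axis=-1) < 0.1).astype(np.uint8)

        def separate_holes(mask: np.ndarray) -> list[np.ndarray]:
            """Split the wire mask into per-wire connected components."""
            labels, n = ndimage.label(mask)
            return [(labels == i).astype(np.uint8) for i in range(1, n + 1)]

        def dilate(mask: np.ndarray, radius: int = 2) -> np.ndarray:
            """Grow each hole slightly so inpainting covers wire halos."""
            return ndimage.binary_dilation(mask, iterations=radius).astype(np.uint8)

        def inpaint_hole(image: np.ndarray, hole: np.ndarray) -> np.ndarray:
            """Stub inpainting stage. A real pipeline would route small holes
            to a patch-based model and large holes to a deep model; here both
            branches just blur surrounding pixels into the hole."""
            filled = image.copy()
            blurred = ndimage.gaussian_filter(image, sigma=(5, 5, 0))
            filled[hole.astype(bool)] = blurred[hole.astype(bool)]
            return filled

        def remove_wires(image: np.ndarray) -> np.ndarray:
            mask = segment_wires(image)
            for hole in separate_holes(mask):
                image = inpaint_hole(image, dilate(hole))
            return image

        if __name__ == "__main__":
            demo = np.random.rand(64, 64, 3)
            demo[30:32, :, :] = 0.0      # draw a dark "wire" across the frame
            print(remove_wires(demo).shape)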

    IMAGE GENERATION USING A DIFFUSION MODEL

    Publication No.: US20240135610A1

    Publication Date: 2024-04-25

    Application No.: US18169444

    Application Date: 2023-02-15

    Applicant: ADOBE INC.

    CPC classification number: G06T11/60 G06T5/002 G06T2200/24

    Abstract: Systems and methods for image generation are provided. An aspect of the systems and methods for image generation includes obtaining an original image depicting an element and a target prompt describing a modification to the element. The system may then compute a first output and a second output using a diffusion model. The first output is based on a description of the element and the second output is based on the target prompt. The system then computes a difference between the first output and the second output, and generates a modified image including the modification to the element of the original image based on the difference.
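
    A minimal sketch of the edit-by-difference idea, assuming a stub noise predictor eps_model in place of a trained diffusion network; the update rule and guidance weight are illustrative simplifications, not the patented method.

        # Toy sketch: drive the edit with the difference between two
        # diffusion-model outputs (stub predictor, illustrative update rule).
        import numpy as np

        def eps_model(x: np.ndarray, t: float, prompt: str) -> np.ndarray:
            """Stub noise predictor; a real system would use a trained U-Net."""
            rng = np.random.default_rng(abs(hash((prompt, round(t, 3)))) % (2**32))
            return rng.standard_normal(x.shape) * 0.01

        def edit_image(original: np.ndarray, source_desc: str, target_prompt: str,
                       steps: int = 50, guidance: float = 1.5) -> np.ndarray:
            x = original.copy()
            for i in range(steps, 0, -1):
                t = i / steps
                eps_src = eps_model(x, t, source_desc)    # first output: element description
                eps_tgt = eps_model(x, t, target_prompt)  # second output: target prompt
                delta = eps_tgt - eps_src                 # difference encoding the edit
                x = x - guidance * delta                  # nudge the image toward the edit
            return x

        if __name__ == "__main__":
            img = np.random.rand(32, 32, 3)
            print(edit_image(img, "a red car", "a blue car").shape)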

    DEFORMABLE NEURAL RADIANCE FIELD FOR EDITING FACIAL POSE AND FACIAL EXPRESSION IN NEURAL 3D SCENES

    Publication No.: US20240062495A1

    Publication Date: 2024-02-22

    Application No.: US17892097

    Application Date: 2022-08-21

    Applicant: Adobe Inc.

    CPC classification number: G06T19/20 G06T17/00 G06T2200/08 G06T2219/2021

    Abstract: A scene modeling system receives a video including a plurality of frames corresponding to views of an object and a request to display an editable three-dimensional (3D) scene that corresponds to a particular frame of the plurality of frames. The scene modeling system applies a scene representation model to the particular frame. The scene representation model includes a deformation model configured to generate, for each pixel of the particular frame based on a pose and an expression of the object, a deformation point using a 3D morphable model (3DMM) guided deformation field. The scene representation model also includes a color model configured to determine, for the deformation point, color and volume density values. The scene modeling system receives a modification to one or more of the pose or the expression of the object, including a modification to a location of the deformation point, and renders an updated video based on the received modification.
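
    A rough sketch of the deform-then-query structure the abstract describes, with stub functions standing in for the 3DMM-guided deformation field and the color/density network; the volume-rendering compositing step is standard NeRF practice rather than anything specific to this filing.

        # Sketch: deform ray samples by pose/expression, then query a stub
        # color/density field and composite (illustrative only).
        import numpy as np

        def deformation_field(points: np.ndarray, pose: np.ndarray,
                              expression: np.ndarray) -> np.ndarray:
            """Map observed-space points to canonical space. A real model is
            guided by 3D morphable model (3DMM) correspondences."""
            offset = 0.01 * (np.tanh(pose).sum() + np.tanh(expression).sum())
            return points + offset

        def color_model(canonical: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
            """Stub radiance field: RGB color and volume density per point."""
            rgb = 0.5 + 0.5 * np.sin(canonical)
            density = np.exp(-np.linalg.norm(canonical, axis=-1, keepdims=True))
            return rgb, density

        def render_pixel(samples: np.ndarray, pose, expression) -> np.ndarray:
            """Volume-render one ray: deform samples, query the field, composite."""
            rgb, density = color_model(deformation_field(samples, pose, expression))
            alpha = 1.0 - np.exp(-density)                 # per-sample opacity
            trans = np.cumprod(np.vstack([[1.0], 1.0 - alpha[:-1] + 1e-10]), axis=0)
            weights = trans * alpha
            return (weights * rgb).sum(axis=0)

        if __name__ == "__main__":
            ray_samples = np.linspace([0, 0, 0], [0, 0, 4], 64)  # 64 points along a ray
            print(render_pixel(ray_samples, np.zeros(6), np.zeros(10)))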

    Corrective Lighting for Video Inpainting

    Publication No.: US20240046430A1

    Publication Date: 2024-02-08

    Application No.: US18375187

    Application Date: 2023-09-29

    Applicant: Adobe Inc.

    CPC classification number: G06T5/005 G06T7/269 G06T2207/10016

    Abstract: One or more processing devices access a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The one or more processing devices determine that a target pixel corresponds to a sub-region within the target region that includes hallucinated content. The one or more processing devices determine gradient constraints using gradient values of neighboring pixels in the hallucinated content, the neighboring pixels being adjacent to the target pixel and corresponding to four cardinal directions. The one or more processing devices update color data of the target pixel subject to the determined gradient constraints.
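
    Updating a pixel's color subject to gradient constraints from its four cardinal neighbors can be illustrated with a plain Poisson-style relaxation step; this is a generic sketch, and the patented constraints may be formulated differently.

        # Sketch of a gradient-constrained color update for one target pixel
        # (an ordinary Gauss-Seidel/Poisson step, for illustration).
        import numpy as np

        def update_target_pixel(color: np.ndarray, grad_x: np.ndarray,
                                grad_y: np.ndarray, y: int, x: int) -> None:
            """Revise color[y, x] so its finite differences to the four cardinal
            neighbors match the gradient field of the hallucinated content."""
            neighbors = (color[y - 1, x] + color[y + 1, x] +
                         color[y, x - 1] + color[y, x + 1])
            # Divergence of the target gradient field at (y, x).
            div = (grad_x[y, x] - grad_x[y, x - 1]) + (grad_y[y, x] - grad_y[y - 1, x])
            color[y, x] = (neighbors - div) / 4.0   # one relaxation step

        if __name__ == "__main__":
            img = np.random.rand(16, 16, 3)
            gx = np.zeros_like(img)   # target x-gradients from hallucinated content
            gy = np.zeros_like(img)   # target y-gradients from hallucinated content
            for _ in range(100):      # iterate toward convergence at one pixel
                update_target_pixel(img, gx, gy, 8, 8)
            print(img[8, 8])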

    Few-shot image generation via self-adaptation

    Publication No.: US11880957B2

    Publication Date: 2024-01-23

    Application No.: US17013332

    Application Date: 2020-09-04

    Applicant: Adobe Inc.

    CPC classification number: G06T3/0056 G06N20/00 G06T11/00 G06T2207/20081

    Abstract: One example method involves operations for receiving a request to transform an input image into a target image. Operations further include providing the input image to a machine learning model trained to adapt images. Training the machine learning model includes accessing training data having a source domain of images and a target domain of images with a target style. Training further includes using a pre-trained generative model to generate an adapted source domain of adapted images having the target style. The adapted source domain is generated by determining a rate of change for parameters of the target style, generating weighted parameters by applying a weight to each of the parameters based on their respective rate of change, and applying the weighted parameters to the source domain. Additionally, operations include using the machine learning model to generate the target image by modifying parameters of the input image using the target style.
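
    A toy sketch of weighting parameters by their rate of change during target-style fine-tuning; the function name and the blending rule are assumptions for illustration, not the claimed training procedure.

        # Sketch: blend pre-trained and fine-tuned parameters, weighting each
        # by how quickly it moved toward the target style (illustrative only).
        import numpy as np

        def adapt_parameters(source_params: dict, target_params: dict,
                             steps_taken: int) -> dict:
            adapted = {}
            for name, src in source_params.items():
                tgt = target_params[name]
                rate = np.abs(tgt - src) / max(steps_taken, 1)  # per-parameter rate of change
                weight = rate / (rate.max() + 1e-12)            # normalize to [0, 1]
                adapted[name] = (1.0 - weight) * src + weight * tgt
            return adapted

        if __name__ == "__main__":
            src = {"conv1.weight": np.random.randn(8, 3, 3, 3)}
            tgt = {"conv1.weight": src["conv1.weight"] + 0.1 * np.random.randn(8, 3, 3, 3)}
            print(adapt_parameters(src, tgt, steps_taken=500)["conv1.weight"].shape)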

    GENERATING MODIFIED DIGITAL IMAGES VIA IMAGE INPAINTING USING MULTI-GUIDED PATCH MATCH AND INTELLIGENT CURATION

    Publication No.: US20230385992A1

    Publication Date: 2023-11-30

    Application No.: US17664991

    Application Date: 2022-05-25

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that implement an inpainting framework having computer-implemented machine learning models to generate high-resolution inpainting results. For instance, in one or more embodiments, the disclosed systems utilize a deep inpainting neural network to generate an inpainted digital image from a digital image having a replacement region. The disclosed systems further generate, utilizing a visual guide algorithm, at least one deep visual guide from the inpainted digital image. Using a patch match model and the at least one deep visual guide, the disclosed systems generate a plurality of modified digital images from the digital image by replacing the pixels of the replacement region with replacement pixels. Additionally, the disclosed systems select, utilizing an inpainting curation model, a modified digital image from the plurality of modified digital images to provide to a client device.
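
    A sketch of the guide, patch match, and curation stages chained together; every stage here is a stub (mean fill, grayscale guide, random patch copy, smoothness-based curation) standing in for the neural and patch-based models named in the abstract.

        # Sketch of the rough-fill -> guide -> patch match -> curation flow
        # (all stages are illustrative stubs).
        import numpy as np

        def deep_inpaint(image: np.ndarray, hole: np.ndarray) -> np.ndarray:
            """Stub deep inpainting network: fill the hole with the image mean."""
            out = image.copy()
            out[hole.astype(bool)] = image[~hole.astype(bool)].mean(axis=0)
            return out

        def visual_guide(inpainted: np.ndarray) -> np.ndarray:
            """Stub guide (e.g. a structure map) derived from the rough fill."""
            return inpainted.mean(axis=-1, keepdims=True)

        def patch_match_fill(image, hole, guide, seed: int) -> np.ndarray:
            """Stub patch match: copy randomly chosen valid pixels into the hole.
            A real implementation would pick sources by patch + guide similarity."""
            rng = np.random.default_rng(seed)
            out = image.copy()
            ys, xs = np.nonzero(~hole.astype(bool))
            for y, x in zip(*np.nonzero(hole.astype(bool))):
                j = rng.integers(len(ys))
                out[y, x] = image[ys[j], xs[j]]
            return out

        def curate(candidates: list) -> np.ndarray:
            """Stub curation model: pick the smoothest candidate."""
            return min(candidates, key=lambda c: np.abs(np.diff(c, axis=0)).mean())

        def inpaint(image: np.ndarray, hole: np.ndarray) -> np.ndarray:
            guide = visual_guide(deep_inpaint(image, hole))
            return curate([patch_match_fill(image, hole, guide, seed=s) for s in range(4)])

        if __name__ == "__main__":
            img = np.random.rand(32, 32, 3)
            mask = np.zeros((32, 32), dtype=np.uint8)
            mask[10:14, 10:14] = 1
            print(inpaint(img, mask).shape)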

    Corrective lighting for video inpainting

    Publication No.: US11823357B2

    Publication Date: 2023-11-21

    Application No.: US17196581

    Application Date: 2021-03-09

    Applicant: Adobe Inc.

    CPC classification number: G06T5/005 G06T7/269 G06T2207/10016

    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference video frame to other video frames depicting a scene. One example method includes one or more processing devices that perform operations that include accessing a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The operations also include computing a target motion of a target pixel that is subject to a motion constraint. The motion constraint is based on a three-dimensional model of the reference object. Further, the operations include determining color data of the target pixel to correspond to the target motion. The color data includes a color value and a gradient. The operations also include determining gradient constraints using gradient values of neighboring pixels. Additionally, the processing devices update the color data of the target pixel subject to the gradient constraints.
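
    A minimal sketch of pulling color for a target pixel along a motion field that is constrained by a 3D model of the reference object; both the flow blend and the nearest-neighbor lookup are illustrative simplifications rather than the claimed method.

        # Sketch: blend estimated motion with model-predicted motion, then
        # propagate reference-frame color along the constrained motion.
        import numpy as np

        def constrained_motion(flow: np.ndarray, model_flow: np.ndarray,
                               weight: float = 0.5) -> np.ndarray:
            """Blend estimated optical flow with motion implied by a 3D model
            of the reference object, acting as the motion constraint."""
            return (1.0 - weight) * flow + weight * model_flow

        def propagate_color(reference: np.ndarray, flow: np.ndarray,
                            y: int, x: int) -> np.ndarray:
            """Pull color for target pixel (y, x) from where the motion says
            it came from in the reference frame (nearest-neighbor lookup)."""
            src_y = np.clip(int(round(y + flow[y, x, 1])), 0, reference.shape[0] - 1)
            src_x = np.clip(int(round(x + flow[y, x, 0])), 0, reference.shape[1] - 1)
            return reference[src_y, src_x]

        if __name__ == "__main__":
            ref = np.random.rand(24, 24, 3)
            est_flow = np.random.randn(24, 24, 2)   # estimated pixel motion
            model_flow = np.zeros((24, 24, 2))      # motion implied by the 3D model
            flow = constrained_motion(est_flow, model_flow)
            print(propagate_color(ref, flow, 12, 12))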

    Determining camera parameters from a single digital image

    Publication No.: US11810326B2

    Publication Date: 2023-11-07

    Application No.: US17387207

    Application Date: 2021-07-28

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then determine camera parameters more accurately and efficiently by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
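
    The geometric step can be illustrated with standard single-view geometry: intersect bundles of vanishing lines to obtain vanishing points, then recover the focal length from two orthogonal vanishing points. The helper names below are illustrative, and the patented system obtains its vanishing lines from a critical edge detection network rather than from hand-built lines.

        # Sketch: vanishing points from line bundles, then focal length from
        # two orthogonal vanishing points (standard single-view geometry).
        import numpy as np

        def line_through(p, q):
            """Homogeneous line (a, b, c) through two 2D points."""
            return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

        def intersect_lines(lines: np.ndarray) -> np.ndarray:
            """Least-squares intersection (vanishing point) of lines given as
            rows (a, b, c) with a*x + b*y + c = 0."""
            A, b = lines[:, :2], -lines[:, 2]
            vp, *_ = np.linalg.lstsq(A, b, rcond=None)
            return vp

        def focal_from_vanishing_points(v1, v2, principal_point) -> float:
            """With principal point c, orthogonal vanishing points satisfy
            (v1 - c) . (v2 - c) + f^2 = 0."""
            d = np.dot(v1 - principal_point, v2 - principal_point)
            return float(np.sqrt(max(-d, 0.0)))

        if __name__ == "__main__":
            c = np.array([320.0, 240.0])                      # image center
            v1_true = c + np.array([600.0, 100.0])
            v2_true = c + np.array([-433.3333, 100.0])        # so (v1-c).(v2-c) = -500^2
            lines_1 = np.array([line_through(v1_true, (0, 0)), line_through(v1_true, (0, 100))])
            lines_2 = np.array([line_through(v2_true, (0, 0)), line_through(v2_true, (50, 0))])
            v1, v2 = intersect_lines(lines_1), intersect_lines(lines_2)
            print(focal_from_vanishing_points(v1, v2, c))     # ~500.0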
