Generating procedural materials from digital images

    Publication number: US11189060B2

    Publication date: 2021-11-30

    Application number: US16863540

    Filing date: 2020-04-30

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
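    The optimization loop described in this abstract can be illustrated with a deliberately tiny stand-in: a two-parameter "procedural material" (offset and contrast over a fixed pattern), mean-squared error in place of the style loss, and numerical gradients in place of a differentiable rendering pipeline. All names and the material model here are hypothetical, not the patented system.

    ```python
    import numpy as np

    # Fixed base pattern (stand-in for the output of a procedural node graph).
    base = np.indices((8, 8)).sum(axis=0) % 2  # checkerboard of 0s and 1s

    def render(params):
        # Toy "procedural material": contrast and offset over the base pattern.
        offset, contrast = params
        return contrast * base + offset

    def loss(params, target):
        # MSE stands in for the style/appearance loss of the abstract.
        return np.mean((render(params) - target) ** 2)

    def grad(params, target, eps=1e-5):
        # Central-difference gradient stands in for automatic differentiation.
        g = np.zeros_like(params)
        for i in range(params.size):
            hi, lo = params.copy(), params.copy()
            hi[i] += eps
            lo[i] -= eps
            g[i] = (loss(hi, target) - loss(lo, target)) / (2 * eps)
        return g

    # "Photograph" of the target physical material, synthesized from known params.
    true_params = np.array([0.2, 0.6])
    target = render(true_params)

    params = np.array([0.0, 1.0])  # base procedural material's initial parameters
    for _ in range(500):
        params -= 0.3 * grad(params, target)

    print(np.round(params, 3))  # converges toward [0.2, 0.6]
    ```

    Because the toy loss is a convex quadratic in the parameters, plain gradient descent recovers the target parameters; the patent's pipeline applies the same idea to a genuinely differentiable procedural material with many parameters.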

    Extracting material properties from a single image

    Publication number: US10810469B2

    Publication date: 2020-10-20

    Application number: US15975329

    Filing date: 2018-05-09

    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for extracting material properties from a single digital image portraying one or more materials by utilizing a neural network encoder, a neural network material classifier, and one or more neural network material property decoders. In particular, in one or more embodiments, the disclosed systems and methods train the neural network encoder, the neural network material classifier, and one or more neural network material property decoders to accurately extract material properties from a single digital image portraying one or more materials. Furthermore, in one or more embodiments, the disclosed systems and methods train and utilize a rendering layer to generate model images from the extracted material properties.
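    The architecture named in this abstract — one shared encoder feeding a material classifier and several per-property decoders — can be sketched as a forward pass. The layer sizes, property names, and random weights below are illustrative stand-ins, not the trained networks of the patent.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def dense(x, w, b):
        # One fully connected layer with ReLU (stand-in for conv/MLP stacks).
        return np.maximum(x @ w + b, 0.0)

    # Shared encoder: flattened image patch -> latent material features.
    enc_w, enc_b = rng.normal(scale=0.1, size=(64, 16)), np.zeros(16)

    # Material classifier head over hypothetical classes (wood/metal/fabric).
    cls_w, cls_b = rng.normal(scale=0.1, size=(16, 3)), np.zeros(3)

    # One decoder head per material property; the property names are illustrative.
    heads = {name: (rng.normal(scale=0.1, size=(16, 4)), np.zeros(4))
             for name in ("albedo", "normal", "roughness")}

    x = rng.normal(size=(1, 64))       # stand-in for a single image patch
    z = dense(x, enc_w, enc_b)         # shared latent features

    logits = z @ cls_w + cls_b         # material class scores
    props = {name: dense(z, w, b) for name, (w, b) in heads.items()}
    print(logits.shape, {k: v.shape for k, v in props.items()})
    ```

    The key design point is the shared latent code: classification and every property decoder read the same features, which is what lets a single image drive all the outputs. The rendering layer mentioned in the abstract would sit downstream, re-synthesizing an image from the decoded properties for a self-supervised loss.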

    Harmonizing composite images utilizing a transformer neural network

    Publication number: US12165284B2

    Publication date: 2024-12-10

    Application number: US17655663

    Filing date: 2022-03-21

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement a dual-branched neural network architecture to harmonize composite images. For example, in one or more implementations, the transformer-based harmonization system uses a convolutional branch and a transformer branch to generate a harmonized composite image based on an input composite image and a corresponding segmentation mask. More particularly, the convolutional branch comprises a series of convolutional neural network layers followed by a style normalization layer to extract localized information from the input composite image. Further, the transformer branch comprises a series of transformer neural network layers to extract global information based on different resolutions of the input composite image. Utilizing a decoder, the transformer-based harmonization system combines the local information and the global information from the corresponding convolutional branch and transformer branch to generate a harmonized composite image.
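    The dual-branch idea — a convolutional branch for local statistics, a transformer branch for global context, and a decoder fusing both under a segmentation mask — can be mimicked with minimal ingredients: a box filter for the conv branch, one self-attention layer over pixels-as-tokens for the transformer branch, and a masked blend for the decoder. Everything below is a toy stand-in, not the patented network.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d = 4  # attention feature dimension (arbitrary choice)
    wq, wk, wv = (rng.normal(scale=0.5, size=(1, d)) for _ in range(3))

    def conv_branch(img):
        # A 3x3 box filter stands in for stacked conv layers + style
        # normalization extracting localized information.
        p = np.pad(img, 1, mode="edge")
        h, w = img.shape
        return sum(p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

    def transformer_branch(img):
        # One self-attention layer over pixels-as-tokens: every pixel attends
        # to every other pixel, i.e. global information.
        tok = img.reshape(-1, 1)
        q, k, v = tok @ wq, tok @ wk, tok @ wv
        att = np.exp((q @ k.T) / np.sqrt(d))
        att /= att.sum(axis=1, keepdims=True)
        return ((att @ v) @ wv.T).reshape(img.shape)

    def decode(local, global_, mask, original):
        # Decoder stand-in: fuse the branches, then blend with the original so
        # only the masked (pasted) region is harmonized.
        fused = 0.5 * local + 0.5 * global_
        return mask * fused + (1 - mask) * original

    composite = rng.uniform(size=(8, 8))            # input composite image
    mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1.0   # pasted-region mask
    out = decode(conv_branch(composite), transformer_branch(composite),
                 mask, composite)
    print(out.shape)
    ```

    The blend in `decode` keeps background pixels untouched, which mirrors why a segmentation mask accompanies the composite: only the foreground region should change appearance.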

    DEFORMABLE NEURAL RADIANCE FIELD FOR EDITING FACIAL POSE AND FACIAL EXPRESSION IN NEURAL 3D SCENES

    Publication number: US20240062495A1

    Publication date: 2024-02-22

    Application number: US17892097

    Filing date: 2022-08-21

    Applicant: Adobe Inc.

    CPC classification number: G06T19/20 G06T17/00 G06T2200/08 G06T2219/2021

    Abstract: A scene modeling system receives a video including a plurality of frames corresponding to views of an object and a request to display an editable three-dimensional (3D) scene that corresponds to a particular frame of the plurality of frames. The scene modeling system applies a scene representation model to the particular frame. The scene representation model includes a deformation model configured to generate, for each pixel of the particular frame based on a pose and an expression of the object, a deformation point using a 3D morphable model (3DMM) guided deformation field. The scene representation model also includes a color model configured to determine, for the deformation point, color and volume density values. The scene modeling system receives a modification to one or more of the pose or the expression of the object including a modification to a location of the deformation point and renders an updated video based on the received modification.
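    The two-stage structure of the abstract — a deformation field that moves each sample point as a function of pose and expression, followed by a color model returning color and volume density at the deformed point — can be sketched with toy functions. A real 3DMM blends learned shape and expression bases; the rotation-plus-translation below is a hypothetical stand-in.

    ```python
    import numpy as np

    def deform(point, pose, expression):
        # 3DMM-guided deformation field, reduced to a toy: a rotation driven
        # by a scalar "pose" plus a translation driven by "expression"
        # coefficients (stand-ins for learned 3DMM bases).
        c, s = np.cos(pose), np.sin(pose)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        return rot @ point + 0.1 * expression

    def color_model(point):
        # Radiance-field stand-in: RGB color and volume density as smooth
        # functions of the (deformed) 3D point.
        rgb = 0.5 * (np.sin(point) + 1.0)            # values in [0, 1]
        density = float(np.exp(-np.dot(point, point)))
        return rgb, density

    p = np.array([0.3, -0.2, 0.5])    # a sample point along a camera ray
    expr = np.array([0.0, 1.0, 0.0])  # hypothetical expression coefficients

    rgb_a, dens_a = color_model(deform(p, pose=0.0, expression=expr))
    # Editing the pose moves the deformation point, so the same ray sample
    # now reads color/density from a different location in the canonical field.
    rgb_b, dens_b = color_model(deform(p, pose=0.4, expression=expr))
    print(rgb_a, dens_a)
    ```

    This separation is the point of the design: edits to pose or expression only change where points land in the canonical field, so the color model itself never needs retraining for an edit.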

    Automatic object replacement in an image

    Publication number: US11042990B2

    Publication date: 2021-06-22

    Application number: US16176917

    Filing date: 2018-10-31

    Applicant: Adobe Inc.

    Abstract: Systems and techniques for automatic object replacement in an image include receiving an original image and a preferred image. The original image is automatically segmented into an original image foreground region and an original image object region. The preferred image is automatically segmented into a preferred image foreground region and a preferred image object region. A composite image is automatically composed by replacing the original image object region with the preferred image object region such that the composite image includes the original image foreground region and the preferred image object region. An attribute of the composite image is automatically adjusted.
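    The compose-then-adjust sequence in this abstract reduces to masked blending plus a statistics match. The sketch below assumes the segmentation masks are already given (the patent derives them automatically) and uses a simple brightness match as a stand-in for the attribute adjustment step.

    ```python
    import numpy as np

    original = np.full((6, 6), 0.4)    # original image (flat gray stand-in)
    preferred = np.full((6, 6), 0.9)   # preferred image (flat stand-in)
    obj_mask = np.zeros((6, 6))
    obj_mask[1:5, 1:5] = 1.0           # object-region segmentation mask

    # Compose: keep the original outside the object region; take the
    # preferred image's object region inside it.
    composite = (1 - obj_mask) * original + obj_mask * preferred

    # Adjust an attribute of the composite: shift the pasted region's mean
    # brightness to match the surrounding original pixels (a simple stand-in
    # for the automatic adjustment described in the abstract).
    bg_mean = original[obj_mask == 0].mean()
    fg = composite[obj_mask == 1]
    composite[obj_mask == 1] = fg + (bg_mean - fg.mean())
    print(round(float(composite.mean()), 3))
    ```

    Matching first-order statistics is the simplest form of harmonization; without some adjustment step, the pasted region's lighting and color typically betray the composite.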

    RELIGHTING DIGITAL IMAGES ILLUMINATED FROM A TARGET LIGHTING DIRECTION

    Publication number: US20200273237A1

    Publication date: 2020-08-27

    Application number: US15930925

    Filing date: 2020-05-13

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
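    The last step of the abstract — combining several single-direction target images into one image lit by a combined lighting configuration — exploits the fact that light transport is linear in illumination. Given per-direction relit images (here random stand-ins for the network's outputs), a mixed lighting configuration is just their weighted sum; the direction names and weights are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Target digital images of one object, each under a single target lighting
    # direction (stand-ins for the relighting network's outputs).
    relit = {"left": rng.uniform(size=(4, 4)),
             "top": rng.uniform(size=(4, 4)),
             "right": rng.uniform(size=(4, 4))}

    # A target lighting configuration mixing the directions. Because light
    # transport is linear in illumination, the image under the mix is the
    # same weighted sum of the per-direction images.
    weights = {"left": 0.5, "top": 0.3, "right": 0.2}
    modified = sum(w * relit[name] for name, w in weights.items())
    print(modified.shape)
    ```

    This linearity is why the system only needs to synthesize a basis of single-direction images: any lighting environment expressible as a combination of those directions comes for free.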

    Neural face editing with intrinsic image disentangling

    Publication number: US10692265B2

    Publication date: 2020-06-23

    Application number: US16676733

    Filing date: 2019-11-07

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance, by disentanglement of a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age—aspects that cannot be represented using previous models.
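    "Facial edits by traversing paths of such manifold(s)" has a compact mechanical reading: move a latent code along an attribute direction and decode the result. The linear decoder, the latent size, and the "age" direction below are all hypothetical stand-ins for the learned, disentangled manifold of the patent.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear "decoder" from a disentangled latent code to an image vector
    # (stand-in for the facial rendering and generation network).
    dec = rng.normal(size=(8, 64))

    def decode(z):
        return z @ dec

    z = rng.normal(size=8)              # latent code of an input face
    age_dir = np.zeros(8)
    age_dir[3] = 1.0                    # hypothetical "age" manifold direction

    # Editing = traversing the manifold: move the code along the attribute
    # direction, then decode the shifted code.
    edited = decode(z + 1.5 * age_dir)
    print(edited.shape)
    ```

    Disentanglement is what makes this usable: if each intrinsic property (lighting, expression, age) occupies its own direction or subspace, moving along one direction changes that property without dragging the others along.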
