NEURAL FACE EDITING WITH INTRINSIC IMAGE DISENTANGLING

    Publication Number: US20200090389A1

    Publication Date: 2020-03-19

    Application Number: US16676733

    Application Date: 2019-11-07

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network learns one or more compact, meaningful manifolds of facial appearance by disentangling a facial image into intrinsic facial properties, and enables facial edits by traversing paths of those manifolds. The network handles a much wider range of manipulations than previous models, including changes to viewpoint, lighting, and expression, and even higher-level attributes such as facial hair and age, aspects that previous models cannot represent.
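
    Below is a minimal PyTorch sketch, not the patented architecture, of the core idea: an autoencoder that disentangles a face image into intrinsic components (albedo, normals, and lighting here) and re-renders them, so that an edit such as relighting becomes a move in the corresponding latent space. All module names, latent sizes, and the toy spherical-harmonic shading are illustrative assumptions.

        import torch
        import torch.nn as nn

        class DisentangledFaceAutoencoder(nn.Module):
            def __init__(self, latent_dim=128):
                super().__init__()
                # Shared encoder maps a 64x64 face image to separate latent codes
                # for albedo, surface normals, and lighting.
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                    nn.Flatten(),
                    nn.Linear(64 * 16 * 16, 3 * latent_dim),
                )
                # Independent decoders reconstruct each intrinsic property.
                self.albedo_dec = nn.Sequential(nn.Linear(latent_dim, 3 * 64 * 64), nn.Sigmoid())
                self.normal_dec = nn.Sequential(nn.Linear(latent_dim, 3 * 64 * 64), nn.Tanh())
                self.light_dec = nn.Linear(latent_dim, 9)  # spherical-harmonic lighting coefficients

            def forward(self, img, light_edit=None):
                z_albedo, z_normal, z_light = self.encoder(img).chunk(3, dim=1)
                if light_edit is not None:
                    # An edit is a move along the lighting manifold in latent space.
                    z_light = z_light + light_edit
                albedo = self.albedo_dec(z_albedo).view(-1, 3, 64, 64)
                normals = self.normal_dec(z_normal).view(-1, 3, 64, 64)
                sh = self.light_dec(z_light)
                # Toy Lambertian shading from the first two SH bands (4 of the 9 coefficients).
                shading = sh[:, :1, None, None] + (sh[:, 1:4, None, None] * normals).sum(1, keepdim=True)
                return albedo * shading  # re-rendered face image

        face = torch.rand(1, 3, 64, 64)
        relit = DisentangledFaceAutoencoder()(face, light_edit=0.1 * torch.randn(1, 128))
        print(relit.shape)  # torch.Size([1, 3, 64, 64])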

    Image patch matching using probabilistic sampling based on an oracle

    Publication Number: US10546212B2

    Publication Date: 2020-01-28

    Application Number: US16148166

    Application Date: 2018-10-01

    Applicant: Adobe Inc.

    Abstract: The present disclosure is directed toward systems and methods for image patch matching. In particular, the systems and methods described herein sample image patches to identify those image patches that match a target image patch. The systems and methods described herein probabilistically accept image patch proposals as potential matches based on an oracle. The oracle is computationally inexpensive to evaluate but more approximate than similarity heuristics. The systems and methods use the oracle to quickly guide the search to areas of the search space more likely to have a match. Once areas are identified that likely include a match, the systems and methods use a more accurate similarity function to identify patch matches.
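
    A minimal NumPy sketch of the search strategy described above: random patch proposals are filtered probabilistically by a cheap oracle (mean-color distance here, an illustrative stand-in), and only the accepted proposals are scored with the more expensive exact similarity. The acceptance rule, temperature, and function names are assumptions, not the patented method.

        import numpy as np

        def oracle_score(patch, target):
            # Cheap, approximate oracle: distance between mean colors.
            return np.linalg.norm(patch.mean(axis=(0, 1)) - target.mean(axis=(0, 1)))

        def exact_similarity(patch, target):
            # Accurate but expensive: sum of squared differences over all pixels.
            return np.sum((patch - target) ** 2)

        def match_patch(image, target, patch_size=8, n_proposals=500, temperature=0.25, seed=0):
            rng = np.random.default_rng(seed)
            h, w, _ = image.shape
            best, best_score = None, np.inf
            for _ in range(n_proposals):
                y = rng.integers(0, h - patch_size)
                x = rng.integers(0, w - patch_size)
                patch = image[y:y + patch_size, x:x + patch_size]
                # Probabilistic acceptance: the worse the oracle score, the less likely
                # the proposal is evaluated with the exact similarity function.
                if rng.random() > np.exp(-oracle_score(patch, target) / temperature):
                    continue
                score = exact_similarity(patch, target)
                if score < best_score:
                    best, best_score = (y, x), score
            return best, best_score

        image = np.random.rand(128, 128, 3)
        target = image[40:48, 60:68].copy()
        print(match_patch(image, target))  # the best match found and its SSD score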

    Texture modeling of image data

    Publication Number: US10467777B2

    Publication Date: 2019-11-05

    Application Number: US15934629

    Application Date: 2018-03-23

    Applicant: Adobe Inc.

    Abstract: Texture modeling techniques for image data are described. In one or more implementations, texels in image data are discovered by one or more computing devices, each texel representing an element that repeats to form a texture pattern in the image data. Regularity of the texels in the image data is modeled by the one or more computing devices to define translations and at least one other transformation of texels in relation to each other.
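
    One way to picture the regularity-modeling step is sketched below: an FFT-based autocorrelation estimates the fundamental translation at which a texel repeats. This is an illustrative assumption about one sub-step, not the patented technique, and it ignores the non-translational transformations the abstract also covers.

        import numpy as np

        def estimate_repeat_translation(gray, min_offset=4):
            """Return the (dy, dx) translation at which the texture best repeats."""
            g = gray - gray.mean()
            spectrum = np.fft.fft2(g)
            autocorr = np.fft.fftshift(np.fft.ifft2(spectrum * np.conj(spectrum)).real)
            cy, cx = np.array(autocorr.shape) // 2
            # Suppress the trivial zero-offset peak and its immediate neighbourhood.
            autocorr[cy - min_offset:cy + min_offset, cx - min_offset:cx + min_offset] = -np.inf
            # Among near-maximal peaks, keep the one closest to zero offset:
            # the fundamental translation of the texel lattice.
            ys, xs = np.where(autocorr > 0.9 * autocorr.max())
            nearest = np.argmin((ys - cy) ** 2 + (xs - cx) ** 2)
            return ys[nearest] - cy, xs[nearest] - cx

        # Synthetic texture: a dot that repeats every 16 pixels in each direction.
        y, x = np.mgrid[0:128, 0:128]
        texture = ((y % 16 < 2) & (x % 16 < 2)).astype(float)
        print(estimate_repeat_translation(texture))  # a repeat offset of magnitude 16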

    Generating differentiable procedural materials

    Publication Number: US12198231B2

    Publication Date: 2025-01-14

    Application Number: US18341618

    Application Date: 2023-06-26

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can then generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
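
    The optimization loop the abstract describes can be sketched as follows in PyTorch, with a toy differentiable "material" (two parameters scaling and offsetting a fixed stripe pattern) standing in for a full procedural node graph, and a plain MSE image loss standing in for the style loss. Everything in the sketch is an illustrative assumption rather than the patented pipeline.

        import torch

        def procedural_material(params, size=64):
            # Toy differentiable material: a fixed stripe pattern scaled by
            # params[0] (contrast) and shifted by params[1] (base brightness).
            x = torch.linspace(0, 1, size)
            pattern = 0.5 + 0.5 * torch.sin(8 * torch.pi * x)
            return params[0] * pattern.expand(size, size) + params[1]

        # Stand-in for the photograph of the target physical material (rendered
        # here with known parameters only so the example is self-contained).
        with torch.no_grad():
            target_image = procedural_material(torch.tensor([0.6, 0.2]))

        params = torch.tensor([1.0, 0.0], requires_grad=True)  # base material guess
        optimizer = torch.optim.Adam([params], lr=0.05)

        for step in range(300):
            optimizer.zero_grad()
            rendered = procedural_material(params)
            loss = torch.nn.functional.mse_loss(rendered, target_image)
            loss.backward()   # gradients flow end-to-end through the material
            optimizer.step()

        print(params.detach())  # converges toward the target parameters [0.6, 0.2]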

    POINT-BASED NEURAL RADIANCE FIELD FOR THREE DIMENSIONAL SCENE REPRESENTATION

    Publication Number: US20240013477A1

    Publication Date: 2024-01-11

    Application Number: US17861199

    Application Date: 2022-07-09

    Applicant: Adobe Inc.

    CPC classification number: G06T15/205 G06T15/80 G06T15/06 G06T2207/10028

    Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.
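
    A minimal PyTorch sketch of point-based volume rendering in the spirit of the abstract: each ray sample aggregates features from its nearest neural points by inverse-distance weighting (an assumed aggregation rule), a small MLP maps the aggregated feature to density and color, and the samples are alpha-composited into a pixel color. Sizes are illustrative, and the learnable point cloud below is a placeholder; in the described system the neural point cloud comes from a point cloud generation model driven by the input 2D images.

        import torch
        import torch.nn as nn

        class PointRadianceField(nn.Module):
            def __init__(self, num_points=2048, feat_dim=16, k=8):
                super().__init__()
                self.k = k
                # Neural point cloud: positions plus learnable per-point features.
                self.positions = nn.Parameter(torch.rand(num_points, 3))
                self.features = nn.Parameter(0.1 * torch.randn(num_points, feat_dim))
                self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 4))

            def query(self, samples):
                # samples: (S, 3) positions along a ray.
                dist, idx = torch.cdist(samples, self.positions).topk(self.k, dim=1, largest=False)
                w = 1.0 / (dist + 1e-6)
                w = w / w.sum(dim=1, keepdim=True)             # inverse-distance weights
                feat = (self.features[idx] * w.unsqueeze(-1)).sum(dim=1)
                out = self.mlp(feat)                           # -> (density, r, g, b)
                return torch.relu(out[:, 0]), torch.sigmoid(out[:, 1:])

            def render_ray(self, origin, direction, n_samples=32, near=0.0, far=1.7):
                t = torch.linspace(near, far, n_samples)
                density, color = self.query(origin + t[:, None] * direction)
                alpha = 1.0 - torch.exp(-density * (far - near) / n_samples)
                # Transmittance and compositing weights (standard volume rendering).
                trans = torch.cumprod(torch.cat([alpha.new_ones(1), 1 - alpha + 1e-10])[:-1], dim=0)
                weights = alpha * trans
                return (weights[:, None] * color).sum(dim=0)   # output pixel color

        model = PointRadianceField()
        pixel = model.render_ray(torch.zeros(3), torch.tensor([0.577, 0.577, 0.577]))
        print(pixel)  # RGB color of one rendered pixel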

    GENERATING THREE-DIMENSIONAL REPRESENTATIONS FOR DIGITAL OBJECTS UTILIZING MESH-BASED THIN VOLUMES

    Publication Number: US20230360327A1

    Publication Date: 2023-11-09

    Application Number: US17661878

    Application Date: 2022-05-03

    Applicant: Adobe Inc.

    CPC classification number: G06T17/205 G06T13/20 G06T2210/21

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate three-dimensional hybrid mesh-volumetric representations for digital objects. For instance, in one or more embodiments, the disclosed systems generate a mesh for a digital object from a plurality of digital images that portray the digital object using a multi-view stereo model. Additionally, the disclosed systems determine a set of sample points for a thin volume around the mesh. Using a neural network, the disclosed systems further generate a three-dimensional hybrid mesh-volumetric representation for the digital object utilizing the set of sample points for the thin volume and the mesh.
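
    The thin-volume sampling step can be pictured with the sketch below: random points on the mesh surface are offset along interpolated vertex normals within a narrow band around the mesh. The uniform triangle sampling and band half-width are assumptions; the described system then feeds such samples, together with the mesh, to a neural network to build the hybrid mesh-volumetric representation.

        import torch
        import torch.nn.functional as F

        def sample_thin_volume(vertices, faces, normals, n_samples=1024, half_width=0.02):
            """vertices: (V, 3), faces: (T, 3) integer indices, normals: (V, 3) unit vertex normals."""
            tri = faces[torch.randint(0, faces.shape[0], (n_samples,))]   # random triangles
            # Uniform barycentric coordinates on each triangle.
            u, v = torch.rand(n_samples), torch.rand(n_samples)
            flip = (u + v) > 1
            u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
            bary = torch.stack([1 - u - v, u, v], dim=1)                  # (S, 3)
            surface = (vertices[tri] * bary[..., None]).sum(dim=1)        # points on the mesh
            normal = F.normalize((normals[tri] * bary[..., None]).sum(dim=1), dim=1)
            # Offset each surface point inside the thin band around the mesh.
            offsets = (2 * torch.rand(n_samples, 1) - 1) * half_width
            return surface + offsets * normal

        # Toy mesh: a unit square made of two triangles in the z = 0 plane.
        verts = torch.tensor([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
        faces = torch.tensor([[0, 1, 2], [0, 2, 3]])
        norms = torch.tensor([[0., 0., 1.]]).repeat(4, 1)
        points = sample_thin_volume(verts, faces, norms)
        print(points.shape, points[:, 2].abs().max())  # all samples stay within |z| <= 0.02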

    Kernel prediction with kernel dictionary in image denoising

    Publication Number: US11783184B2

    Publication Date: 2023-10-10

    Application Number: US17590995

    Application Date: 2022-02-02

    Applicant: Adobe Inc.

    CPC classification number: G06N3/08 G06N20/10 G06T5/002 G06T15/50

    Abstract: Certain embodiments involve techniques for efficiently estimating denoising kernels for generating denoised images. For instance, a neural network receives a noisy reference image to denoise. Using a kernel dictionary of base kernels, the neural network generates, for each pixel in the reference image, a coefficient vector containing one coefficient value per base kernel; each coefficient value indicates the contribution of its base kernel to that pixel's denoising kernel. The neural network calculates the denoising kernel for a given pixel by applying the coefficient vector for that pixel to the kernel dictionary, combining the base kernels. The neural network applies each denoising kernel to its respective pixel to generate a denoised output image.
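
    A minimal PyTorch sketch of the kernel-dictionary idea: a small convolutional network predicts a per-pixel coefficient vector, each pixel's denoising kernel is the coefficient-weighted combination of shared base kernels, and that kernel is applied to the pixel's neighborhood. Layer sizes, the number of base kernels, and the softmax normalization are illustrative assumptions.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class DictionaryKernelDenoiser(nn.Module):
            def __init__(self, n_bases=8, kernel_size=5):
                super().__init__()
                self.k = kernel_size
                # Shared dictionary of base kernels, learned jointly with the network.
                self.bases = nn.Parameter(0.1 * torch.randn(n_bases, kernel_size * kernel_size))
                # Per-pixel coefficient predictor (one coefficient per base kernel).
                self.coeff_net = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, n_bases, 3, padding=1),
                )

            def forward(self, noisy):
                b, c, h, w = noisy.shape
                coeffs = self.coeff_net(noisy)                          # (B, n_bases, H, W)
                # Per-pixel kernels = coefficient vectors applied to the dictionary.
                kernels = torch.einsum('bnhw,nk->bkhw', coeffs, self.bases)
                kernels = torch.softmax(kernels, dim=1)                 # normalize kernel weights
                # Gather each pixel's k x k neighborhood and apply its kernel.
                patches = F.unfold(noisy, self.k, padding=self.k // 2)  # (B, C*k*k, H*W)
                patches = patches.view(b, c, self.k * self.k, h * w)
                kernels = kernels.view(b, 1, self.k * self.k, h * w)
                return (patches * kernels).sum(dim=2).view(b, c, h, w)

        noisy = torch.rand(1, 3, 32, 32)
        print(DictionaryKernelDenoiser()(noisy).shape)                  # torch.Size([1, 3, 32, 32])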

    Generating differentiable procedural materials

    Publication Number: US11688109B2

    Publication Date: 2023-06-27

    Application Number: US17513747

    Application Date: 2021-10-28

    Applicant: Adobe Inc.

    CPC classification number: G06T11/001 G06N3/084 G06T11/40 G06T15/04

    Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can then generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
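
    This abstract (shared with the related patent earlier in the list, whose entry sketches the outer optimization loop) singles out a style loss that compares visual appearance. The sketch below shows such a loss built from Gram matrices of feature maps; the frozen random convolutional features stand in for the pretrained feature extractor a style loss normally uses, and the whole block is an illustrative assumption rather than the patented pipeline.

        import torch
        import torch.nn as nn

        class GramStyleLoss(nn.Module):
            def __init__(self, channels=16):
                super().__init__()
                # Stand-in feature extractor (a pretrained network such as VGG is the usual choice).
                self.features = nn.Conv2d(3, channels, 3, padding=1)
                for p in self.features.parameters():
                    p.requires_grad_(False)

            @staticmethod
            def gram(feat):
                b, c, h, w = feat.shape
                f = feat.view(b, c, h * w)
                # The Gram matrix captures feature correlations, i.e. texture statistics,
                # independent of the exact spatial layout.
                return f @ f.transpose(1, 2) / (c * h * w)

            def forward(self, rendered, target_photo):
                return torch.mean(
                    (self.gram(self.features(rendered)) -
                     self.gram(self.features(target_photo))) ** 2)

        loss_fn = GramStyleLoss()
        rendered = torch.rand(1, 3, 64, 64, requires_grad=True)
        target = torch.rand(1, 3, 64, 64)
        loss = loss_fn(rendered, target)
        loss.backward()      # gradients w.r.t. the rendered image (and, upstream,
                             # the procedural material parameters that produced it)
        print(loss.item())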
