CONTROLLABLE DIFFUSION MODEL
    Type: Invention Application

    Publication No.: US20250078349A1

    Publication Date: 2025-03-06

    Application No.: US18459526

    Filing Date: 2023-09-01

    Applicant: ADOBE INC.

    Abstract: A method, apparatus, and non-transitory computer readable medium for image generation are described. Embodiments of the present disclosure obtain a content input and a style input via a user interface or from a database. The content input includes a target spatial layout and the style input includes a target style. A content encoder of an image processing apparatus encodes the content input to obtain a spatial layout mask representing the target spatial layout. A style encoder of the image processing apparatus encodes the style input to obtain a style embedding representing the target style. An image generation model of the image processing apparatus generates an image based on the spatial layout mask and the style embedding, where the image includes the target spatial layout and the target style.
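
    The conditioning pathway described in this abstract can be illustrated with a small PyTorch sketch. The module names (ContentEncoder, StyleEncoder, ConditionalGenerator), layer sizes, and FiLM-style injection of the style embedding are assumptions for illustration only, not the architecture claimed in the application.

    # Illustrative sketch only; names and shapes are assumptions.
    import torch
    import torch.nn as nn

    class ContentEncoder(nn.Module):
        """Maps a content image to a single-channel spatial layout mask."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
            )
        def forward(self, content):          # (B, 3, H, W) -> (B, 1, H, W)
            return self.net(content)

    class StyleEncoder(nn.Module):
        """Maps a style image to a global style embedding."""
        def __init__(self, dim=128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.proj = nn.Linear(32, dim)
        def forward(self, style):            # (B, 3, H, W) -> (B, dim)
            return self.proj(self.conv(style).flatten(1))

    class ConditionalGenerator(nn.Module):
        """Toy denoiser conditioned on the layout mask (concatenated spatially)
        and the style embedding (injected as a per-channel bias)."""
        def __init__(self, dim=128):
            super().__init__()
            self.film = nn.Linear(dim, 32)
            self.body = nn.Sequential(nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU())
            self.out = nn.Conv2d(32, 3, 3, padding=1)
        def forward(self, noisy, mask, style_emb):
            h = self.body(torch.cat([noisy, mask], dim=1))
            h = h + self.film(style_emb)[:, :, None, None]
            return self.out(h)

    content = torch.randn(1, 3, 64, 64)       # content input (target spatial layout)
    style = torch.randn(1, 3, 64, 64)         # style input (target style)
    noisy = torch.randn(1, 3, 64, 64)         # current diffusion latent

    mask = ContentEncoder()(content)
    style_emb = StyleEncoder()(style)
    pred = ConditionalGenerator()(noisy, mask, style_emb)
    print(pred.shape)                          # torch.Size([1, 3, 64, 64])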

    ENCODING IMAGE VALUES THROUGH ATTRIBUTE CONDITIONING

    Publication No.: US20250117970A1

    Publication Date: 2025-04-10

    Application No.: US18637654

    Filing Date: 2024-04-17

    Applicant: ADOBE INC.

    Abstract: A method, apparatus, non-transitory computer readable medium, and system for image generation include obtaining a text prompt and a conditioning attribute. The text prompt is encoded to obtain a text embedding. The conditioning attribute is encoded to obtain an attribute embedding. Then a synthesized image is generated using an image generation model based on the text embedding and the attribute embedding. The synthesized image has the conditioning attribute and depicts an element of the text prompt.
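
    As a rough illustration of conditioning on both a text embedding and an attribute embedding, the sketch below uses a hashed bag-of-words in place of a real text encoder and embeds a scalar attribute (for example, an aesthetic score) with a small MLP. All names, shapes, and the example attribute value are hypothetical.

    # Sketch under assumptions; not the filing's actual encoders or generator.
    import torch
    import torch.nn as nn

    class ToyTextEncoder(nn.Module):
        def __init__(self, vocab=4096, dim=64):
            super().__init__()
            self.table = nn.EmbeddingBag(vocab, dim)   # mean-pools token embeddings
            self.vocab = vocab
        def forward(self, prompt: str):
            ids = torch.tensor([[hash(w) % self.vocab for w in prompt.split()]])
            return self.table(ids)                     # (1, dim) text embedding

    class AttributeEncoder(nn.Module):
        def __init__(self, dim=64):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(1, dim), nn.ReLU(), nn.Linear(dim, dim))
        def forward(self, value: float):
            return self.mlp(torch.tensor([[value]]))   # (1, dim) attribute embedding

    class ToyGenerator(nn.Module):
        """Decodes the concatenated text + attribute conditioning into an image."""
        def __init__(self, dim=64):
            super().__init__()
            self.fc = nn.Linear(2 * dim, 3 * 32 * 32)
        def forward(self, text_emb, attr_emb):
            cond = torch.cat([text_emb, attr_emb], dim=-1)
            return self.fc(cond).view(-1, 3, 32, 32)

    text_emb = ToyTextEncoder()("a red bicycle leaning on a fence")
    attr_emb = AttributeEncoder()(0.8)                 # hypothetical attribute value
    image = ToyGenerator()(text_emb, attr_emb)
    print(image.shape)                                  # torch.Size([1, 3, 32, 32])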

    SELECTIVELY CONDITIONING LAYERS OF A NEURAL NETWORK WITH STYLIZATION PROMPTS FOR DIGITAL IMAGE GENERATION

    Publication No.: US20250077842A1

    Publication Date: 2025-03-06

    Application No.: US18459186

    Filing Date: 2023-08-31

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for selectively conditioning layers of a neural network and utilizing the neural network to generate a digital image. In particular, in some embodiments, the disclosed systems condition an upsampling layer of a neural network with an image vector representation of an image prompt. Additionally, in some embodiments, the disclosed systems condition an additional upsampling layer of the neural network with a text vector representation of a text prompt without the image vector representation of the image prompt. Moreover, in some embodiments, the disclosed systems generate, utilizing the neural network, a digital image from the image vector representation and the text vector representation.
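
    A minimal sketch of selectively conditioning individual upsampling layers, assuming FiLM-style modulation on a two-stage toy decoder: the first upsampling block receives only the image vector, the second only the text vector. The class names and dimensions are illustrative, not the disclosed network.

    # Illustrative only; layer choices and sizes are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class UpBlock(nn.Module):
        """2x upsampling block modulated by a single conditioning vector."""
        def __init__(self, c_in, c_out, cond_dim):
            super().__init__()
            self.conv = nn.Conv2d(c_in, c_out, 3, padding=1)
            self.scale = nn.Linear(cond_dim, c_out)
            self.shift = nn.Linear(cond_dim, c_out)
        def forward(self, x, cond):
            x = F.interpolate(x, scale_factor=2, mode="nearest")
            h = F.relu(self.conv(x))
            s = self.scale(cond)[:, :, None, None]
            b = self.shift(cond)[:, :, None, None]
            return h * (1 + s) + b

    class SelectivelyConditionedDecoder(nn.Module):
        def __init__(self, cond_dim=64):
            super().__init__()
            self.up1 = UpBlock(16, 16, cond_dim)   # conditioned on the image prompt
            self.up2 = UpBlock(16, 8, cond_dim)    # conditioned on the text prompt only
            self.out = nn.Conv2d(8, 3, 3, padding=1)
        def forward(self, z, image_vec, text_vec):
            h = self.up1(z, image_vec)             # image vector injected here
            h = self.up2(h, text_vec)              # text vector only, no image vector
            return self.out(h)

    z = torch.randn(1, 16, 8, 8)                   # latent feature map
    image_vec = torch.randn(1, 64)                 # vector representation of image prompt
    text_vec = torch.randn(1, 64)                  # vector representation of text prompt
    img = SelectivelyConditionedDecoder()(z, image_vec, text_vec)
    print(img.shape)                               # torch.Size([1, 3, 32, 32])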

    SEMANTIC MIXING AND STYLE TRANSFER UTILIZING A COMPOSABLE DIFFUSION NEURAL NETWORK

    Publication No.: US20250095114A1

    Publication Date: 2025-03-20

    Application No.: US18470240

    Filing Date: 2023-09-19

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating digital images by conditioning a diffusion neural network with input prompts. In particular, in one or more embodiments, the disclosed systems generate, utilizing a reverse diffusion model, an image noise representation from a first image prompt. Additionally, in some embodiments, the disclosed systems generate, utilizing a diffusion neural network conditioned with a first vector representation of the first image prompt, a first denoised image representation from the image noise representation. Moreover, in some embodiments, the disclosed systems generate, utilizing the diffusion neural network conditioned with a second vector representation of a second image prompt, a second denoised image representation from the image noise representation. Furthermore, in some embodiments, the disclosed systems combine the first denoised image representation and the second denoised image representation to generate a digital image.
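
    The composition idea can be sketched as follows: both denoising passes start from the same inverted noise representation, each pass is conditioned on a different prompt vector, and the two denoised representations are blended. The denoiser, the fixed-step update rule, and the equal blend weights below are toy stand-ins, not the composable diffusion neural network of the filing.

    # Toy sketch of semantic mixing by combining two conditioned denoising passes.
    import torch
    import torch.nn as nn

    class ToyDenoiser(nn.Module):
        def __init__(self, cond_dim=32):
            super().__init__()
            self.cond_proj = nn.Linear(cond_dim, 4)
            self.conv = nn.Conv2d(4, 4, 3, padding=1)
        def forward(self, x, cond):
            return self.conv(x) + self.cond_proj(cond)[:, :, None, None]

    def denoise(denoiser, noise_rep, cond, steps=10):
        x = noise_rep
        for _ in range(steps):
            # Each step nudges the sample toward the denoiser's prediction.
            x = x + 0.1 * (denoiser(x, cond) - x)
        return x

    denoiser = ToyDenoiser()
    noise_rep = torch.randn(1, 4, 16, 16)      # "image noise representation" from inversion
    first_vec = torch.randn(1, 32)             # vector representation of first image prompt
    second_vec = torch.randn(1, 32)            # vector representation of second image prompt

    first_denoised = denoise(denoiser, noise_rep, first_vec)
    second_denoised = denoise(denoiser, noise_rep, second_vec)
    mixed = 0.5 * first_denoised + 0.5 * second_denoised   # combine the two representations
    print(mixed.shape)                          # torch.Size([1, 4, 16, 16])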

    UTILIZING A DIFFUSION PRIOR NEURAL NETWORK FOR TEXT GUIDED DIGITAL IMAGE EDITING

    Publication No.: US20240362842A1

    Publication Date: 2024-10-31

    Application No.: US18308017

    Filing Date: 2023-04-27

    Applicant: Adobe Inc.

    CPC classification number: G06T11/60 G06T5/70 G06T2200/24 G06T2207/20084

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for utilizing a diffusion prior neural network for text guided digital image editing. For example, in one or more embodiments the disclosed systems utilize a text-image encoder to generate a base image embedding from a base digital image and an edit text embedding from edit text. Moreover, the disclosed systems utilize a diffusion prior neural network to generate a text-edited image embedding. In particular, the disclosed systems inject the base image embedding at a conceptual editing step of the diffusion prior neural network and condition a set of steps of the diffusion prior neural network after the conceptual editing step utilizing the edit text embedding. Furthermore, the disclosed systems utilize a diffusion neural network to create a modified digital image from the text-edited image embedding and the base image embedding.
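
    A hypothetical sketch of the step scheduling: a prior network runs for a fixed number of refinement steps, the base image embedding is injected at one "conceptual editing" step, and every later step is conditioned on the edit text embedding. The toy MLP prior, the update rule, and the step constants are assumptions, not the actual diffusion prior neural network.

    # Hypothetical sketch of embedding injection and step-wise conditioning.
    import torch
    import torch.nn as nn

    class ToyPrior(nn.Module):
        def __init__(self, dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        def forward(self, x, cond):
            return self.net(torch.cat([x, cond], dim=-1))

    def edit_prior(prior, base_image_emb, edit_text_emb, steps=8, edit_step=3):
        dim = base_image_emb.shape[-1]
        x = torch.randn(1, dim)                       # start from noise in embedding space
        null_cond = torch.zeros(1, dim)               # unconditional before the edit step
        for t in range(steps):
            if t == edit_step:
                x = base_image_emb                    # inject base image embedding here
            cond = edit_text_emb if t > edit_step else null_cond
            x = x + 0.1 * prior(x, cond)              # later steps follow the edit text
        return x                                      # text-edited image embedding

    base_image_emb = torch.randn(1, 64)               # from a text-image encoder
    edit_text_emb = torch.randn(1, 64)                # from the edit text
    edited_emb = edit_prior(ToyPrior(), base_image_emb, edit_text_emb)
    print(edited_emb.shape)                           # torch.Size([1, 64])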

    STYLE-BASED IMAGE GENERATION
    Type: Invention Application

    Publication No.: US20250117973A1

    Publication Date: 2025-04-10

    Application No.: US18903151

    Filing Date: 2024-10-01

    Applicant: ADOBE INC.

    Abstract: A method, apparatus, non-transitory computer readable medium, and system for media processing include obtaining a text prompt and a style input, where the text prompt describes image content and the style input describes an image style; generating a text embedding based on the text prompt, where the text embedding represents the image content; generating a style embedding based on the style input, where the style embedding represents the image style; and generating, using an image generation model, a synthetic image based on the text embedding and the style embedding, where the text embedding is provided to the image generation model at a first step and the style embedding is provided to the image generation model at a second step after the first step.
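
    The first-step/second-step scheduling can be sketched with a toy sampling loop in which early iterations are conditioned on the text embedding and later iterations on the style embedding. The denoiser, step count, and switch point below are placeholders, not the disclosed image generation model.

    # Placeholder sketch of switching conditioning partway through sampling.
    import torch
    import torch.nn as nn

    class ToyImageModel(nn.Module):
        def __init__(self, cond_dim=48):
            super().__init__()
            self.cond_proj = nn.Linear(cond_dim, 3)
            self.conv = nn.Conv2d(3, 3, 3, padding=1)
        def forward(self, x, cond):
            return self.conv(x) + self.cond_proj(cond)[:, :, None, None]

    def sample(model, text_emb, style_emb, steps=12, style_start=6):
        x = torch.randn(1, 3, 32, 32)
        for t in range(steps):
            # Text embedding drives content early; style embedding takes over later.
            cond = text_emb if t < style_start else style_emb
            x = x + 0.1 * (model(x, cond) - x)
        return x

    text_emb = torch.randn(1, 48)      # represents the described image content
    style_emb = torch.randn(1, 48)     # represents the described image style
    synthetic = sample(ToyImageModel(), text_emb, style_emb)
    print(synthetic.shape)             # torch.Size([1, 3, 32, 32])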

    ABSTRACT BACKGROUND GENERATION
    Type: Invention Application

    Publication No.: US20240371048A1

    Publication Date: 2024-11-07

    Application No.: US18312246

    Filing Date: 2023-05-04

    Applicant: ADOBE INC.

    Abstract: Systems and methods for generating abstract backgrounds are described. Embodiments are configured to obtain an input prompt, encode the input prompt to obtain a prompt embedding, and generate a latent vector based on the prompt embedding and a noise vector. Embodiments include a multimodal encoder configured to generate the prompt embedding, which is an intermediate representation of the prompt. In some cases, the prompt includes or indicates an “abstract background” type image. The latent vector is generated using a mapping network of a generative adversarial network (GAN). Embodiments are further configured to generate an image based on the latent vector using the GAN.
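
    A minimal StyleGAN-flavored sketch of the described pipeline, with made-up sizes: the multimodal prompt embedding is concatenated with a noise vector, a mapping network produces the latent vector, and a toy generator decodes it into an image. This is not the trained GAN referenced in the abstract.

    # Minimal sketch; dimensions and architecture are assumptions.
    import torch
    import torch.nn as nn

    class MappingNetwork(nn.Module):
        def __init__(self, prompt_dim=64, noise_dim=64, latent_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(prompt_dim + noise_dim, latent_dim), nn.LeakyReLU(0.2),
                nn.Linear(latent_dim, latent_dim), nn.LeakyReLU(0.2),
            )
        def forward(self, prompt_emb, noise):
            return self.net(torch.cat([prompt_emb, noise], dim=-1))   # latent vector

    class ToyGANGenerator(nn.Module):
        def __init__(self, latent_dim=128):
            super().__init__()
            self.fc = nn.Linear(latent_dim, 64 * 8 * 8)
            self.up = nn.Sequential(
                nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.Upsample(scale_factor=2), nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
            )
        def forward(self, latent):
            h = self.fc(latent).view(-1, 64, 8, 8)
            return self.up(h)

    prompt_emb = torch.randn(1, 64)    # multimodal encoding of an "abstract background" prompt
    noise = torch.randn(1, 64)
    latent = MappingNetwork()(prompt_emb, noise)
    image = ToyGANGenerator()(latent)
    print(image.shape)                  # torch.Size([1, 3, 32, 32])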
