USER-GUIDED IMAGE GENERATION
    Invention Publication

    Publication Number: US20230274535A1

    Publication Date: 2023-08-31

    Application Number: US17680906

    Application Date: 2022-02-25

    Applicant: ADOBE INC.

    CPC classification number: G06V10/7747 G06F3/04842

    Abstract: An image generation system enables user input during the process of training a generative model to influence the model's ability to generate new images with desired visual features. A source generative model for a source domain is fine-tuned using training images in a target domain to provide an adapted generative model for the target domain. Interpretable factors are determined for the source generative model and the adapted generative model. A user interface is provided that enables a user to select one or more interpretable factors. The user-selected interpretable factor(s) are used to generate a user-adapted generative model, for instance, by using a loss function based on the user-selected interpretable factor(s). The user-adapted generative model can be used to create new images in the target domain.
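
    A minimal PyTorch-style sketch of the idea of adapting a generator while respecting user-selected interpretable factors. The function name, the generator callables, and the direction-preservation L1 term are illustrative assumptions; the patent abstract does not specify this exact loss.

```python
# Hedged sketch: preserve the effect of user-selected interpretable factors
# (latent directions) while adapting a generator to a target domain.
# source_g, adapted_g, and selected_dirs are illustrative assumptions.
import torch.nn.functional as F

def factor_preservation_loss(source_g, adapted_g, z, selected_dirs, step=1.0):
    """z: (B, z_dim) latent batch; selected_dirs: list of (z_dim,) directions."""
    loss = 0.0
    for d in selected_dirs:
        z_shift = z + step * d
        # how the factor changes the output in the (frozen) source model
        delta_src = (source_g(z_shift) - source_g(z)).detach()
        # how the same factor changes the output in the adapted model
        delta_adp = adapted_g(z_shift) - adapted_g(z)
        loss = loss + F.l1_loss(delta_adp, delta_src)
    return loss / len(selected_dirs)
```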

    GENERATING COLLAGE DIGITAL IMAGES BY COMBINING SCENE LAYOUTS AND PIXEL COLORS UTILIZING GENERATIVE NEURAL NETWORKS

    Publication Number: US20230260175A1

    Publication Date: 2023-08-17

    Application Number: US17650957

    Application Date: 2022-02-14

    Applicant: Adobe Inc.

    CPC classification number: G06T11/60 G06T7/90 G06T2207/20084 G06T2207/20212

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating digital images depicting photorealistic scenes utilizing a digital image collaging neural network. For example, the disclosed systems utilize a digital image collaging neural network having a particular architecture for disentangling generation of scene layouts and pixel colors for different regions of a digital image. In some cases, the disclosed systems break down the process of generating a collage digital image into generating images representing different regions, such as a background and a foreground, to be collaged into a final result. For example, utilizing the digital image collaging neural network, the disclosed systems determine scene layouts and pixel colors for both foreground digital images and background digital images to ultimately collage the foreground and background together into a collage digital image depicting a real-world scene.
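
    The final collaging step can be illustrated with a simple per-pixel blend. The function below is a hedged sketch that assumes the collaging network outputs an RGB background, an RGB foreground, and a soft foreground mask; the actual network architecture is not described here.

```python
# Minimal compositing sketch: blend an assumed foreground and background using a
# soft foreground mask predicted by the collaging network (interfaces assumed).
import torch

def collage(background, foreground, fg_mask):
    """background, foreground: (B, 3, H, W) RGB; fg_mask: (B, 1, H, W) in [0, 1]."""
    return fg_mask * foreground + (1.0 - fg_mask) * background

# usage with placeholder tensors
bg, fg = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
mask = torch.rand(1, 1, 256, 256)
result = collage(bg, fg, mask)          # (1, 3, 256, 256) collaged image
```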

    Generating modified digital images incorporating scene layout utilizing a swapping autoencoder

    Publication Number: US11625875B2

    Publication Date: 2023-04-11

    Application Number: US17091416

    Application Date: 2020-11-06

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly generating modified digital images utilizing a novel swapping autoencoder that incorporates scene layout. In particular, the disclosed systems can receive a scene layout map that indicates or defines locations for displaying specific digital content within a digital image. In addition, the disclosed systems can utilize the scene layout map to guide combining portions of digital image latent code to generate a modified digital image with a particular textural appearance and a particular geometric structure defined by the scene layout map. Additionally, the disclosed systems can utilize a scene layout map that defines a portion of a digital image to modify by, for instance, adding new digital content to the digital image, and can generate a modified digital image depicting the new digital content.
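
    A hedged sketch of the latent-code swapping idea: one image supplies the geometric structure code, another supplies the texture code, and a scene layout map guides the decoder. The encoder and decoder interfaces are placeholders and assumptions, not the patented architecture.

```python
# Hedged sketch of layout-guided latent swapping. Encoder and decoder are
# placeholder modules with assumed interfaces, not the patented architecture.
import torch.nn as nn

class LayoutGuidedSwap(nn.Module):
    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder   # image -> (structure_code, texture_code)
        self.decoder = decoder   # (structure_code, texture_code, layout_map) -> image

    def forward(self, structure_image, texture_image, layout_map):
        structure_code, _ = self.encoder(structure_image)  # geometric structure source
        _, texture_code = self.encoder(texture_image)      # textural appearance source
        # the scene layout map guides where each kind of content appears
        return self.decoder(structure_code, texture_code, layout_map)
```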

    GENERATING MODIFIED DIGITAL IMAGES INCORPORATING SCENE LAYOUT UTILIZING A SWAPPING AUTOENCODER

    Publication Number: US20220148241A1

    Publication Date: 2022-05-12

    Application Number: US17091416

    Application Date: 2020-11-06

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly generating modified digital images utilizing a novel swapping autoencoder that incorporates scene layout. In particular, the disclosed systems can receive a scene layout map that indicates or defines locations for displaying specific digital content within a digital image. In addition, the disclosed systems can utilize the scene layout map to guide combining portions of digital image latent code to generate a modified digital image with a particular textural appearance and a particular geometric structure defined by the scene layout map. Additionally, the disclosed systems can utilize a scene layout map that defines a portion of a digital image to modify by, for instance, adding new digital content to the digital image, and can generate a modified digital image depicting the new digital content.
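
    Complementing the sketch above, the layout map can also be read as marking a region of the image to modify. The helper below is an assumed illustration that blends a new content code into the existing spatial latent code only inside that region; the names and the nearest-neighbor mask resizing are assumptions.

```python
# Assumed illustration: restrict an edit to the region marked by the layout map
# by blending latent codes only inside that region.
import torch.nn.functional as F

def edit_region(structure_code, new_content_code, layout_mask):
    """structure_code, new_content_code: (B, C, h, w) spatial latents;
    layout_mask: (B, 1, H, W) with 1 inside the region to modify."""
    # resize the pixel-space layout mask to the latent resolution (assumption)
    mask = F.interpolate(layout_mask, size=structure_code.shape[-2:], mode="nearest")
    return mask * new_content_code + (1.0 - mask) * structure_code
```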

    DATA ATTRIBUTION FOR DIFFUSION MODELS

    Publication Number: US20250104399A1

    Publication Date: 2025-03-27

    Application Number: US18473603

    Application Date: 2023-09-25

    Applicant: ADOBE INC.

    Abstract: Embodiments of the present disclosure perform training attribution by identifying a synthesized image and a training image, where the synthesized image was generated by an image generation model that was trained with the training image. A machine learning model computes first attribution features for the synthesized image using a first mapping layer and second attribution features for the training image using a second mapping layer that is different from the first mapping layer. Then, an attribution score is generated based on the first attribution features and the second attribution features, where the attribution score indicates a degree of influence for the training image on generating the synthesized image.
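
    A minimal sketch of the two-branch attribution scorer described in the abstract: distinct mapping layers embed the synthesized image and the training image, and the cosine similarity of the embeddings serves as the attribution score. The shared backbone, the linear layer type, and the dimensions are assumptions.

```python
# Sketch of a two-branch attribution scorer. The shared backbone, linear mapping
# layers, and cosine-similarity score are assumptions consistent with the abstract.
import torch.nn as nn
import torch.nn.functional as F

class AttributionScorer(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, embed_dim: int = 256):
        super().__init__()
        self.backbone = backbone                         # shared feature extractor
        self.map_synth = nn.Linear(feat_dim, embed_dim)  # first mapping layer
        self.map_train = nn.Linear(feat_dim, embed_dim)  # second, distinct mapping layer

    def forward(self, synthesized, training_image):
        feats_s = self.map_synth(self.backbone(synthesized))     # synthesized-image features
        feats_t = self.map_train(self.backbone(training_image))  # training-image features
        # higher score -> greater influence of the training image on the synthesis
        return F.cosine_similarity(feats_s, feats_t, dim=-1)
```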

    FEW-SHOT DIGITAL IMAGE GENERATION USING GAN-TO-GAN TRANSLATION

    Publication Number: US20220254071A1

    Publication Date: 2022-08-11

    Application Number: US17163284

    Application Date: 2021-01-29

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and efficiently modifying a generative adversarial neural network using few-shot adaptation to generate digital images corresponding to a target domain while maintaining diversity of a source domain and realism of the target domain. In particular, the disclosed systems utilize a generative adversarial neural network with parameters learned from a large source domain. The disclosed systems preserve relative similarities and differences between digital images in the source domain using a cross-domain distance consistency loss. In addition, the disclosed systems utilize an anchor-based strategy to encourage different levels or measures of realism over digital images generated from latent vectors in different regions of a latent space.
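
    The cross-domain distance consistency idea can be sketched as matching each generated image's relative-similarity profile between the source and adapted generators. The feature choice (flattened pixels) and the KL-based formulation below are simplifying assumptions, not the exact loss claimed in the patent.

```python
# Simplified sketch of a cross-domain distance consistency loss: each image's
# relative-similarity profile (against the rest of the batch) should match
# between the frozen source generator and the adapted generator.
import torch
import torch.nn.functional as F

def distance_consistency_loss(source_g, adapted_g, z_batch):
    def log_similarity_profile(images):
        flat = images.flatten(start_dim=1)                                   # (N, D)
        sim = F.cosine_similarity(flat.unsqueeze(1), flat.unsqueeze(0), dim=-1)
        n = sim.size(0)
        off_diag = sim[~torch.eye(n, dtype=torch.bool, device=sim.device)].view(n, n - 1)
        return F.log_softmax(off_diag, dim=-1)

    with torch.no_grad():                                # source model stays frozen
        profile_src = log_similarity_profile(source_g(z_batch))
    profile_adp = log_similarity_profile(adapted_g(z_batch))
    # KL divergence between the source and adapted similarity profiles
    return F.kl_div(profile_adp, profile_src, log_target=True, reduction="batchmean")
```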

    User-guided image generation
    Invention Grant

    Publication Number: US12230014B2

    Publication Date: 2025-02-18

    Application Number: US17680906

    Application Date: 2022-02-25

    Applicant: ADOBE INC.

    Abstract: An image generation system enables user input during the process of training a generative model to influence the model's ability to generate new images with desired visual features. A source generative model for a source domain is fine-tuned using training images in a target domain to provide an adapted generative model for the target domain. Interpretable factors are determined for the source generative model and the adapted generative model. A user interface is provided that enables a user to select one or more interpretable factors. The user-selected interpretable factor(s) are used to generate a user-adapted generative model, for instance, by using a loss function based on the user-selected interpretable factor(s). The user-adapted generative model can be used to create new images in the target domain.
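
    As a complement to the earlier loss sketch, one common way to obtain candidate interpretable factors for a generator is to take principal directions of sampled intermediate latent codes, in the spirit of PCA-based methods such as GANSpace. This is offered only as an illustrative assumption; `mapping_network` is a hypothetical StyleGAN-like component and the patent does not prescribe this procedure.

```python
# Illustrative only: PCA over sampled intermediate latent codes (in the spirit of
# GANSpace) as one way to obtain candidate interpretable factors. mapping_network
# is a hypothetical StyleGAN-like component; the patent does not prescribe this.
import torch

@torch.no_grad()
def candidate_factors(mapping_network, num_samples=10000, z_dim=512, k=10):
    z = torch.randn(num_samples, z_dim)
    w = mapping_network(z)               # intermediate latent codes, shape (N, D)
    # top-k principal directions of w serve as candidate interpretable factors
    _, _, v = torch.pca_lowrank(w, q=k)  # pca_lowrank centers the data by default
    return v.T                           # (k, D) latent directions
```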

    Generating collage digital images by combining scene layouts and pixel colors utilizing generative neural networks

    Publication Number: US12136151B2

    Publication Date: 2024-11-05

    Application Number: US17650957

    Application Date: 2022-02-14

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating digital images depicting photorealistic scenes utilizing a digital image collaging neural network. For example, the disclosed systems utilize a digital image collaging neural network having a particular architecture for disentangling generation of scene layouts and pixel colors for different regions of a digital image. In some cases, the disclosed systems break down the process of generating a collage digital image into generating images representing different regions, such as a background and a foreground, to be collaged into a final result. For example, utilizing the digital image collaging neural network, the disclosed systems determine scene layouts and pixel colors for both foreground digital images and background digital images to ultimately collage the foreground and background together into a collage digital image depicting a real-world scene.
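
    To complement the compositing sketch shown earlier, the sketch below illustrates the disentangled per-region generation described in the abstract: one assumed module predicts a region's scene layout as a soft mask and another fills in its pixel colors, after which regions can be blended into the final collage. All module interfaces are assumptions.

```python
# Complementary sketch of disentangled per-region generation: one assumed module
# predicts a region's scene layout (soft mask), another predicts its pixel colors;
# regions can then be blended into the final collage as in the earlier sketch.
import torch
import torch.nn as nn

class RegionGenerator(nn.Module):
    def __init__(self, layout_net: nn.Module, color_net: nn.Module):
        super().__init__()
        self.layout_net = layout_net   # latent -> layout logits, (B, 1, H, W)
        self.color_net = color_net     # (latent, mask) -> RGB content, (B, 3, H, W)

    def forward(self, z):
        layout = torch.sigmoid(self.layout_net(z))   # where the region appears
        colors = self.color_net(z, layout)           # what the region looks like
        return layout, colors
```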
