IMAGE GENERATION WITH ADJUSTABLE COMPLEXITY

    Publication Number: US20250095226A1

    Publication Date: 2025-03-20

    Application Number: US18884787

    Application Date: 2024-09-13

    Applicant: ADOBE INC.

    Abstract: A method, apparatus, non-transitory computer readable medium, and system for generating images with an adjustable level of complexity includes obtaining a content prompt, a style prompt, and a complexity value. The content prompt describes an image element, the style prompt indicates an image style, and the complexity value indicates a level of influence of the style prompt. Embodiments then generate, using an image generation model, an output image based on the content prompt, the style prompt, and the complexity value, wherein the output image includes the image element with a level of the image style based on the complexity value.
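
    As a rough illustration of the claimed flow (not the disclosed implementation), the sketch below blends a content-prompt embedding with a style-prompt embedding weighted by the complexity value before handing the combined conditioning to a generator. The embed_text and generate_image functions are hypothetical Python stand-ins, not Adobe's model.

        import numpy as np

        def embed_text(prompt, dim=8):
            """Stand-in text encoder: a pseudo-embedding derived from the prompt."""
            rng = np.random.default_rng(abs(hash(prompt)) % (2 ** 32))
            return rng.standard_normal(dim)

        def generate_image(conditioning):
            """Stand-in image generation model: returns a dummy 'image' array."""
            rng = np.random.default_rng(0)
            return rng.standard_normal((4, 4)) + conditioning.mean()

        def generate_with_complexity(content_prompt, style_prompt, complexity):
            # complexity in [0, 1]: 0 ignores the style prompt, 1 applies it fully.
            conditioning = embed_text(content_prompt) + complexity * embed_text(style_prompt)
            return generate_image(conditioning)

        if __name__ == "__main__":
            image = generate_with_complexity("a lighthouse at dusk", "flat vector art", 0.6)
            print(image.shape)  # (4, 4) dummy output; a real model would return pixels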

    TEXT-GUIDED VECTOR IMAGE SYNTHESIS

    Publication Number: US20250095227A1

    Publication Date: 2025-03-20

    Application Number: US18886452

    Application Date: 2024-09-16

    Applicant: ADOBE INC.

    Abstract: A method, apparatus, non-transitory computer readable medium, and system for training a text-guided vector image synthesis model includes obtaining training data including a vectorizable image and a caption describing the vectorizable image, and generating, using an image generation model, a predicted image with a first level of high frequency detail. Then, the training data and the predicted image are used to tune the image generation model to generate a synthetic vectorizable image based on the caption, where the synthetic vectorizable image has a second level of high frequency detail that is lower than the first level of high frequency detail of the predicted image.
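
    One purely illustrative reading of "level of high frequency detail" is the fraction of spectral energy above a frequency cutoff; the sketch below measures that quantity and folds it into a tuning loss that favors smoother, vectorizable outputs. The FFT-based metric and the loss weighting are assumptions, not the patented training procedure.

        import numpy as np

        def high_freq_energy(image, cutoff=0.25):
            """Fraction of spectral energy above a normalized frequency cutoff."""
            spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
            h, w = image.shape
            yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
            radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
            return spectrum[radius > cutoff].sum() / spectrum.sum()

        def tuning_loss(predicted, vectorizable_target, hf_weight=0.5):
            """Reconstruction term plus a penalty on excess high-frequency detail."""
            reconstruction = np.mean((predicted - vectorizable_target) ** 2)
            excess_detail = max(0.0, high_freq_energy(predicted) - high_freq_energy(vectorizable_target))
            return reconstruction + hf_weight * excess_detail

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            noisy_prediction = rng.standard_normal((64, 64))   # lots of high-frequency detail
            flat_target = np.zeros((64, 64))                   # vector-like: flat regions, hard edges
            flat_target[16:48, 16:48] = 1.0
            print(round(tuning_loss(noisy_prediction, flat_target), 4))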

    DESIGN COMPOSITING USING IMAGE HARMONIZATION

    Publication Number: US20240420394A1

    Publication Date: 2024-12-19

    Application Number: US18334610

    Application Date: 2023-06-14

    Applicant: ADOBE INC.

    Abstract: Systems and methods are provided for image editing, and more particularly, for harmonizing background images with text. Embodiments of the present disclosure obtain an image including text and a region overlapping the text. In some aspects, the text includes a first color. Embodiments then select a second color that contrasts with the first color, and generate a modified image including the text and a modified region using a machine learning model that takes the image and the second color as input. The modified image is generated conditionally, so as to include the second color in a region corresponding to the text.
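
    The color-selection step can be illustrated with a standard relative-luminance contrast computation: the sketch below picks the candidate region color that contrasts most with the text color before it would be passed, together with the image, to the conditional harmonization model. The candidate palette and the WCAG-style contrast heuristic are illustrative assumptions, not the disclosed machine learning model.

        def relative_luminance(rgb):
            """Approximate relative luminance of an sRGB color given as 0-255 values."""
            def channel(c):
                c = c / 255.0
                return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
            r, g, b = (channel(c) for c in rgb)
            return 0.2126 * r + 0.7152 * g + 0.0722 * b

        def contrast_ratio(color_a, color_b):
            lighter, darker = sorted((relative_luminance(color_a), relative_luminance(color_b)), reverse=True)
            return (lighter + 0.05) / (darker + 0.05)

        def select_contrasting_color(text_color, candidates):
            """Pick the candidate region color with the highest contrast against the text."""
            return max(candidates, key=lambda c: contrast_ratio(text_color, c))

        if __name__ == "__main__":
            text_color = (250, 250, 245)                          # near-white headline text
            palette = [(240, 240, 230), (128, 128, 128), (30, 40, 60)]
            second_color = select_contrasting_color(text_color, palette)
            print(second_color)  # (30, 40, 60): this color and the image would then
                                 # condition the model that regenerates the region behind the text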

    EVALUATING BIAS IN GENERATIVE MODELS

    Publication Number: US20240386707A1

    Publication Date: 2024-11-21

    Application Number: US18319731

    Application Date: 2023-05-18

    Applicant: Adobe Inc.

    Abstract: In implementations of systems for evaluating bias in generative models, a computing device implements a bias system to generate a modified digital image by processing an input digital image using a first machine learning model trained on training data to generate modified digital images based on input digital images. The bias system computes a first latent representation of the input digital image and a second latent representation of the modified digital image using a second machine learning model trained on training data to compute latent representations of digital images. A bias score is determined for a visual attribute based on the first latent representation and the second latent representation. The bias system generates an indication of the bias score for the visual attribute for display in a user interface.
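
    A minimal sketch of the scoring idea, under the assumption that bias can be read off as a shift along an attribute direction in latent space: embed the input and the modified image, then project the difference onto that axis. The random latents and the attribute axis below are placeholders, not the patented machine learning models.

        import numpy as np

        def attribute_shift_score(input_latent, modified_latent, attribute_direction):
            """Signed projection of the edit-induced change onto an attribute axis."""
            delta = modified_latent - input_latent
            direction = attribute_direction / np.linalg.norm(attribute_direction)
            return float(delta @ direction)

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            z_input = rng.standard_normal(16)         # latent of the input digital image
            attribute_axis = rng.standard_normal(16)  # axis associated with a visual attribute
            # Simulate an editing model that systematically pushes outputs along the axis.
            z_modified = (z_input
                          + 0.8 * attribute_axis / np.linalg.norm(attribute_axis)
                          + 0.05 * rng.standard_normal(16))
            print(f"bias score: {attribute_shift_score(z_input, z_modified, attribute_axis):.3f}")
            # A score near zero suggests the edit is neutral for this attribute;
            # consistently large values across many images would indicate bias.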

    GENERATING ARTISTIC CONTENT FROM A TEXT PROMPT OR A STYLE IMAGE UTILIZING A NEURAL NETWORK MODEL

    Publication Number: US20230267652A1

    Publication Date: 2023-08-24

    Application Number: US17652390

    Application Date: 2022-02-24

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize an iterative neural network framework for generating artistic visual content. For instance, in one or more embodiments, the disclosed systems receive style parameters in the form of a style image and/or a text prompt. In some cases, the disclosed systems further receive a content image having content to include in the artistic visual content. Accordingly, in one or more embodiments, the disclosed systems utilize a neural network to generate the artistic visual content by iteratively generating an image, comparing the image to the style parameters, and updating parameters for generating the next image based on the comparison. In some instances, the disclosed systems incorporate a superzoom network into the neural network for increasing the resolution of the final image and adding art details that are associated with a physical art medium (e.g., brush strokes).
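
    The iterative loop (generate, compare to the style parameters, update, repeat) can be illustrated with a toy optimization in which the image's own pixels are the parameters and the "style" is just a target mean intensity and contrast; this self-contained sketch stands in for backpropagating through a real style network and is not the disclosed framework.

        import numpy as np

        def style_loss(image, target_mean, target_std):
            """Toy style comparison: distance of the image's global statistics from targets."""
            return (image.mean() - target_mean) ** 2 + (image.std() - target_std) ** 2

        def iterative_stylize(target_mean=0.7, target_std=0.2, steps=200, lr=0.05):
            rng = np.random.default_rng(0)
            image = rng.random((32, 32))          # the image itself is the free parameter
            for _ in range(steps):
                # Analytic gradient of the comparison w.r.t. each pixel, standing in
                # for backpropagation through a real style network.
                grad = 2 * (image.mean() - target_mean) / image.size
                grad = grad + 2 * (image.std() - target_std) * (image - image.mean()) / (image.size * image.std())
                image = image - lr * image.size * grad   # update, then regenerate next iteration
                if style_loss(image, target_mean, target_std) < 1e-6:
                    break
            return image

        if __name__ == "__main__":
            out = iterative_stylize()
            print(round(out.mean(), 3), round(out.std(), 3))  # converges toward 0.7 and 0.2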

    GENERATING GRAPHIC DESIGNS BY EXPLOITING CONTRAST THROUGH GENERATIVE EDITING

    Publication Number: US20250148670A1

    Publication Date: 2025-05-08

    Application Number: US18502778

    Application Date: 2023-11-06

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating digital designs utilizing a diffusion neural network to preserve readability and design composition while modifying image content such as background images and design assets. In some embodiments, the disclosed systems access a text prompt defining visual attributes of a digital design. Furthermore, the disclosed systems generate a modified text prompt by replacing chromatic information within the text prompt. Additionally, the disclosed systems determine an adaptive strength for a diffusion neural network from the text prompt. Also, the disclosed systems generate a modified digital design utilizing the diffusion neural network to process the modified text prompt according to the adaptive strength.
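
    A hedged sketch of the two prompt-side steps: replace chromatic words in the text prompt and derive an adaptive strength from how chromatic the prompt is. The color-swap table and the strength heuristic below are illustrative assumptions rather than the disclosed method.

        import re

        COLOR_SWAPS = {            # hypothetical contrast-oriented replacements
            "white": "charcoal", "black": "ivory",
            "yellow": "navy", "red": "teal", "blue": "amber",
        }

        def replace_chromatic_info(prompt):
            """Swap chromatic terms in the prompt for contrasting alternatives."""
            pattern = re.compile(r"\b(" + "|".join(COLOR_SWAPS) + r")\b", re.IGNORECASE)
            return pattern.sub(lambda m: COLOR_SWAPS[m.group(0).lower()], prompt)

        def adaptive_strength(prompt, base=0.3, per_color=0.15, cap=0.9):
            """More chromatic terms in the prompt -> stronger generative edit."""
            hits = sum(prompt.lower().count(word) for word in COLOR_SWAPS)
            return min(cap, base + per_color * hits)

        if __name__ == "__main__":
            prompt = "A summer sale banner with white headline text on a yellow background"
            print(replace_chromatic_info(prompt))  # ...charcoal headline text on a navy background
            print(adaptive_strength(prompt))       # 0.6, passed to the diffusion model as its strength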

    IN-CONTEXT IMAGE GENERATION USING STYLE IMAGES

    Publication Number: US20250095256A1

    Publication Date: 2025-03-20

    Application Number: US18890203

    Application Date: 2024-09-19

    Applicant: ADOBE INC.

    Abstract: A method, apparatus, non-transitory computer readable medium, and system for generating images with a particular style that fit coherently into a scene includes obtaining a text prompt and a preliminary style image. The text prompt describes an image element, and the preliminary style image includes a region with a target style. Embodiments then extract the region with the target style from the preliminary style image to obtain a style image. Embodiments subsequently generate, using an image generation model, a synthetic image based on the text prompt and the style image. The synthetic image depicts the image element with the target style.
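
    The extraction step can be sketched as a simple crop of the styled region, which is then paired with the text prompt as conditioning; the bounding box and the generator stub below are hypothetical placeholders, not the patented components.

        import numpy as np

        def extract_style_region(preliminary_style, bbox):
            """Crop the (top, left, height, width) region that carries the target style."""
            top, left, height, width = bbox
            return preliminary_style[top:top + height, left:left + width].copy()

        def generate_in_context(text_prompt, style_image):
            """Stand-in generator: a dummy image biased by the style crop's statistics."""
            rng = np.random.default_rng(abs(hash(text_prompt)) % (2 ** 32))
            return style_image.mean() + 0.1 * rng.standard_normal((64, 64, 3))

        if __name__ == "__main__":
            preliminary_style = np.random.default_rng(0).random((256, 256, 3))
            style_crop = extract_style_region(preliminary_style, bbox=(32, 48, 96, 96))
            synthetic = generate_in_context("a ceramic teapot on a wooden table", style_crop)
            print(style_crop.shape, synthetic.shape)  # (96, 96, 3) (64, 64, 3)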
