USER-GUIDED IMAGE GENERATION
    Invention Publication

    Publication No.: US20230274535A1

    Publication Date: 2023-08-31

    Application No.: US17680906

    Filing Date: 2022-02-25

    Applicant: ADOBE INC.

    CPC classification number: G06V10/7747 G06F3/04842

    Abstract: An image generation system enables user input during the process of training a generative model to influence the model's ability to generate new images with desired visual features. A source generative model for a source domain is fine-tuned using training images in a target domain to provide an adapted generative model for the target domain. Interpretable factors are determined for the source generative model and the adapted generative model. A user interface is provided that enables a user to select one or more interpretable factors. The user-selected interpretable factor(s) are used to generate a user-adapted generative model, for instance, by using a loss function based on the user-selected interpretable factor(s). The user-adapted generative model can be used to create new images in the target domain.
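    As a rough Python illustration of the workflow in the abstract above (not the patented implementation): interpretable factors are approximated here by an SVD of a generator layer's weights, a SeFa-style heuristic chosen for this sketch, and a loss penalizes drift of the user-selected factors between the source and adapted generators. All function names are hypothetical.

```python
# Hedged sketch only; not the patented method.
import numpy as np

def interpretable_factors(weight: np.ndarray, k: int = 5) -> np.ndarray:
    """Approximate interpretable latent directions as the top right-singular
    vectors of a generator layer's weight matrix (SeFa-style heuristic)."""
    _, _, vt = np.linalg.svd(weight, full_matrices=False)
    return vt[:k]                      # shape (k, latent_dim)

def factor_preservation_loss(w_source: np.ndarray,
                             w_adapted: np.ndarray,
                             selected: list[int]) -> float:
    """Penalize drift of the user-selected factors between the source and
    adapted generators, so those visual features survive fine-tuning."""
    f_src = interpretable_factors(w_source)
    f_adp = interpretable_factors(w_adapted)
    loss = 0.0
    for i in selected:
        # cosine distance between matching factor directions
        cos = np.dot(f_src[i], f_adp[i]) / (
            np.linalg.norm(f_src[i]) * np.linalg.norm(f_adp[i]) + 1e-8)
        loss += 1.0 - abs(cos)
    return loss / max(len(selected), 1)

# toy usage: two random "generator layers" and a user selecting factors 0 and 2
rng = np.random.default_rng(0)
w_src = rng.normal(size=(256, 512))
w_adp = w_src + 0.05 * rng.normal(size=(256, 512))
print(factor_preservation_loss(w_src, w_adp, selected=[0, 2]))
```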

    GENERATING SYNTHESIZED DIGITAL IMAGES UTILIZING CLASS-SPECIFIC MACHINE-LEARNING MODELS

    Publication No.: US20230051749A1

    Publication Date: 2023-02-16

    Application No.: US17400474

    Filing Date: 2021-08-12

    Applicant: Adobe Inc.

    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate synthesized digital images using class-specific generators for objects of different classes. The disclosed system modifies a synthesized digital image by utilizing a plurality of class-specific generator neural networks to generate a plurality of synthesized objects according to object classes identified in the synthesized digital image. The disclosed system determines object classes in the synthesized digital image, such as via a semantic label map corresponding to the synthesized digital image. The disclosed system selects class-specific generator neural networks corresponding to the classes of objects in the synthesized digital image. The disclosed system also generates a plurality of synthesized objects utilizing the class-specific generator neural networks based on contextual data associated with the identified objects. The disclosed system generates a modified synthesized digital image by replacing the identified objects in the synthesized digital image with the synthesized objects.
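    A minimal sketch of the class-specific replacement idea, assuming a per-class generator interface; ClassGenerator and refine_image are illustrative stand-ins, and the placeholder "generator" simply averages the masked region rather than running a GAN.

```python
# Illustrative sketch only; names such as ClassGenerator are hypothetical.
import numpy as np

class ClassGenerator:
    """Stand-in for a per-class generator network (e.g. 'person', 'car')."""
    def __init__(self, class_id: int):
        self.class_id = class_id

    def generate(self, context: np.ndarray, mask: np.ndarray) -> np.ndarray:
        # Real system: run a class-specific GAN conditioned on the context.
        # Here we just fill the masked region with its mean color.
        out = context.copy()
        out[mask] = context[mask].mean(axis=0)
        return out

def refine_image(image: np.ndarray,
                 label_map: np.ndarray,
                 generators: dict[int, ClassGenerator]) -> np.ndarray:
    """Replace each labeled object with output from its class-specific generator."""
    result = image.copy()
    for class_id in np.unique(label_map):
        gen = generators.get(int(class_id))
        if gen is None:
            continue                      # no specialist generator for this class
        mask = label_map == class_id
        synthesized = gen.generate(image, mask)
        result[mask] = synthesized[mask]
    return result

# toy usage on a random "image" with one labeled object class
img = np.random.rand(64, 64, 3)
labels = np.zeros((64, 64), dtype=int)
labels[10:30, 10:30] = 1
out = refine_image(img, labels, {1: ClassGenerator(1)})
```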

    GENERATING MODIFIED DIGITAL IMAGES UTILIZING NEAREST NEIGHBOR FIELDS FROM PATCH MATCHING OPERATIONS OF ALTERNATE DIGITAL IMAGES

    Publication No.: US20220398712A1

    Publication Date: 2022-12-15

    Application No.: US17820649

    Filing Date: 2022-08-18

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating modified digital images by utilizing a patch match algorithm to generate nearest neighbor fields for a second digital image based on a nearest neighbor field associated with a first digital image. For example, the disclosed systems can identify a nearest neighbor field associated with a first digital image of a first resolution. Based on the nearest neighbor field of the first digital image, the disclosed systems can utilize a patch match algorithm to generate a nearest neighbor field for a second digital image of a second resolution larger than the first resolution. The disclosed systems can further generate a modified digital image by filling a target region of the second digital image utilizing the generated nearest neighbor field.
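    A simplified sketch of how a low-resolution nearest neighbor field might seed a higher-resolution fill; the full PatchMatch propagation and random-search iterations are omitted, and the (dy, dx) offset convention is an assumption of this sketch.

```python
# Hedged sketch of NNF upsampling and hole filling; not the disclosed system.
import numpy as np

def upsample_nnf(nnf_low: np.ndarray, scale: int) -> np.ndarray:
    """Scale a low-resolution nearest-neighbor field (H, W, 2) of (dy, dx)
    offsets up to a higher resolution so it can seed patch match there."""
    nnf_high = np.repeat(np.repeat(nnf_low, scale, axis=0), scale, axis=1)
    return nnf_high * scale            # offsets grow with resolution

def fill_target(image: np.ndarray, target_mask: np.ndarray,
                nnf: np.ndarray) -> np.ndarray:
    """Copy source pixels into the target region using the NNF offsets."""
    out = image.copy()
    ys, xs = np.nonzero(target_mask)
    for y, x in zip(ys, xs):
        dy, dx = nnf[y, x]
        sy = np.clip(y + dy, 0, image.shape[0] - 1)
        sx = np.clip(x + dx, 0, image.shape[1] - 1)
        out[y, x] = image[sy, sx]
    return out

# toy usage: 2x upsampling of a random NNF, then filling a square hole
low = np.random.randint(-4, 5, size=(32, 32, 2))
high = upsample_nnf(low, scale=2)
img = np.random.rand(64, 64, 3)
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:30] = True
filled = fill_target(img, mask, high)
```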

    GENERATING MODIFIED DIGITAL IMAGES USING DEEP VISUAL GUIDED PATCH MATCH MODELS FOR IMAGE INPAINTING

    Publication No.: US20220292650A1

    Publication Date: 2022-09-15

    Application No.: US17202019

    Filing Date: 2021-03-15

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating modified digital images utilizing a guided inpainting approach that implements a patch match model informed by a deep visual guide. In particular, the disclosed systems can utilize a visual guide algorithm to automatically generate guidance maps to help identify replacement pixels for inpainting regions of digital images utilizing a patch match model. For example, the disclosed systems can generate guidance maps in the form of structure maps, depth maps, or segmentation maps that respectively indicate the structure, depth, or segmentation of different portions of digital images. Additionally, the disclosed systems can implement a patch match model to identify replacement pixels for filling regions of digital images according to the structure, depth, and/or segmentation of the digital images.
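    A small sketch of a guidance-aware patch distance, assuming the guidance map (structure, depth, or segmentation) has already been predicted by a separate model; the weighting scheme and function name are illustrative, not the disclosed algorithm.

```python
# Minimal sketch: patch match distance that also respects a deep visual guide.
import numpy as np

def guided_patch_distance(image: np.ndarray, guide: np.ndarray,
                          p: tuple, q: tuple, size: int = 7,
                          guide_weight: float = 0.5) -> float:
    """Sum-of-squared-differences between patches at p and q, computed over
    both image colors and guidance values so matches respect structure."""
    r = size // 2
    (py, px), (qy, qx) = p, q
    patch_a = image[py - r:py + r + 1, px - r:px + r + 1]
    patch_b = image[qy - r:qy + r + 1, qx - r:qx + r + 1]
    guide_a = guide[py - r:py + r + 1, px - r:px + r + 1]
    guide_b = guide[qy - r:qy + r + 1, qx - r:qx + r + 1]
    color_term = float(np.sum((patch_a - patch_b) ** 2))
    guide_term = float(np.sum((guide_a - guide_b) ** 2))
    return color_term + guide_weight * guide_term

# toy usage with a random image and a stand-in depth map as the guide
img = np.random.rand(64, 64, 3)
depth = np.random.rand(64, 64)
print(guided_patch_distance(img, depth, p=(20, 20), q=(40, 40)))
```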

    Style-aware audio-driven talking head animation from a single image

    Publication No.: US11417041B2

    Publication Date: 2022-08-16

    Application No.: US16788551

    Filing Date: 2020-02-12

    Applicant: ADOBE INC.

    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for generating an animation of a talking head from an input audio signal of speech and a representation (such as a static image) of a head to animate. Generally, a neural network can learn to predict a set of 3D facial landmarks that can be used to drive the animation. In some embodiments, the neural network can learn to detect different speaking styles in the input speech and account for the different speaking styles when predicting the 3D facial landmarks. Generally, template 3D facial landmarks can be identified or extracted from the input image or other representation of the head, and the template 3D facial landmarks can be used with successive windows of audio from the input speech to predict 3D facial landmarks and generate a corresponding animation with plausible 3D effects.
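    A rough PyTorch sketch of the landmark-prediction step, assuming an MLP that maps a window of audio features plus template 3D landmarks to per-frame landmark offsets; the layer sizes, feature dimensions, and landmark count are assumptions, and the style-detection component is omitted.

```python
# Hedged sketch; not the patented network architecture.
import torch
import torch.nn as nn

NUM_LANDMARKS = 68          # common 3D facial landmark count (assumption)
AUDIO_DIM = 80              # e.g. mel-spectrogram bins per frame (assumption)
WINDOW = 16                 # audio frames per prediction window (assumption)

class LandmarkPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        in_dim = WINDOW * AUDIO_DIM + NUM_LANDMARKS * 3
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, NUM_LANDMARKS * 3),
        )

    def forward(self, audio_window: torch.Tensor,
                template: torch.Tensor) -> torch.Tensor:
        """audio_window: (B, WINDOW, AUDIO_DIM); template: (B, NUM_LANDMARKS, 3).
        Returns animated landmarks = template + predicted offsets."""
        x = torch.cat([audio_window.flatten(1), template.flatten(1)], dim=1)
        offsets = self.net(x).view(-1, NUM_LANDMARKS, 3)
        return template + offsets

# toy usage: one window of audio drives one frame of 3D landmarks
model = LandmarkPredictor()
audio = torch.randn(1, WINDOW, AUDIO_DIM)
tmpl = torch.randn(1, NUM_LANDMARKS, 3)
frame_landmarks = model(audio, tmpl)
```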

    IMAGE INPAINTING WITH GEOMETRIC AND PHOTOMETRIC TRANSFORMATIONS

    Publication No.: US20220172331A1

    Publication Date: 2022-06-02

    Application No.: US17651435

    Filing Date: 2022-02-17

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for filling or otherwise replacing a target region of a primary image with a corresponding region of an auxiliary image. The filling or replacing can be done with an overlay (no subtractive process need be run on the primary image). Because the primary and auxiliary images may not be aligned, both geometric and photometric transformations are applied to the primary and/or auxiliary images. For instance, a geometric transformation of the auxiliary image is performed, to better align features of the auxiliary image with corresponding features of the primary image. Also, a photometric transformation of the auxiliary image is performed, to better match the color of one or more pixels of the auxiliary image with the color of the corresponding pixels of the primary image. The corresponding region of the transformed auxiliary image is then copied and overlaid on the target region of the primary image.
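    A simplified numpy sketch of the two transformations, assuming point correspondences between the images are already known: a least-squares affine fit stands in for the geometric alignment, and a per-channel gain/bias fit stands in for the photometric correction before the overlay. All names are illustrative.

```python
# Hedged sketch; not the disclosed technique.
import numpy as np

def fit_affine(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
    """Least-squares 2x3 affine mapping auxiliary points to primary points."""
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])             # (n, 3)
    params, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)   # (3, 2)
    return params.T                                        # (2, 3)

def fit_gain_bias(aux: np.ndarray, primary: np.ndarray) -> tuple:
    """Per-channel gain and bias so auxiliary colors match the primary image."""
    gain = primary.std(axis=(0, 1)) / (aux.std(axis=(0, 1)) + 1e-8)
    bias = primary.mean(axis=(0, 1)) - gain * aux.mean(axis=(0, 1))
    return gain, bias

def overlay_region(primary, aux_aligned, mask, gain, bias):
    """Overlay the color-corrected auxiliary pixels onto the target region."""
    out = primary.copy()
    out[mask] = np.clip(aux_aligned[mask] * gain + bias, 0.0, 1.0)
    return out

# toy usage: fit an affine from translated correspondences, then color-correct
src = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)
dst = src + np.array([2.0, 3.0])
M = fit_affine(src, dst)

primary = np.random.rand(64, 64, 3)
aux = np.clip(primary * 0.8 + 0.1, 0, 1)   # darker, color-shifted copy
mask = np.zeros((64, 64), dtype=bool)
mask[24:40, 24:40] = True
gain, bias = fit_gain_bias(aux, primary)
result = overlay_region(primary, aux, mask, gain, bias)
```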
