Identity Preserved Controllable Facial Image Manipulation

    Publication number: US20230316591A1

    Publication date: 2023-10-05

    Application number: US17709895

    Application date: 2022-03-31

    Applicant: Adobe Inc.

    CPC classification number: G06T11/00 G06V10/40 G06V10/7747

    Abstract: Techniques for identity preserved controllable facial image manipulation are described that support generation of a manipulated digital image based on a facial image and a render image. For instance, a facial image depicting a facial representation of an individual is received as input. A feature space including an identity parameter and at least one other visual parameter is extracted from the facial image. An editing module edits one or more of the visual parameters and preserves the identity parameter. A renderer generates a render image depicting a morphable model reconstruction of the facial image based on the edit. The render image and facial image are encoded, and a generator of a neural network is implemented to generate a manipulated digital image based on the encoded facial image and the encoded render image.
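
    The abstract's pipeline can be caricatured in a toy numpy sketch. All names, the parameter split, and the blending "generator" are illustrative assumptions, not the patented models; the point is the flow: split off an identity parameter, edit only the other visual parameters, render from the edit, then generate from both encodings.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(face):
    # Hypothetical split into an identity parameter and editable visual parameters.
    return {"identity": face[:4], "expression": face[4:8], "pose": face[8:12]}

def edit_features(feats, expression_delta):
    # Edit a visual parameter while leaving the identity parameter untouched.
    edited = dict(feats)
    edited["expression"] = feats["expression"] + expression_delta
    return edited

def render(feats):
    # Stand-in for the morphable-model renderer: a deterministic
    # function of the (edited) parameters.
    return np.concatenate([feats["identity"], feats["expression"], feats["pose"]])

def generate(face, render_img):
    # Stand-in for the neural generator: combines the encoded facial
    # image with the encoded render image.
    return 0.5 * face + 0.5 * render_img

face = rng.standard_normal(12)
feats = extract_features(face)
edited = edit_features(feats, expression_delta=np.ones(4))
render_img = render(edited)
out = generate(face, render_img)

# The identity parameter survives the edit unchanged.
assert np.allclose(edited["identity"], feats["identity"])
```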

    USER-GUIDED IMAGE GENERATION

    Publication number: US20230274535A1

    Publication date: 2023-08-31

    Application number: US17680906

    Application date: 2022-02-25

    Applicant: ADOBE INC.

    CPC classification number: G06V10/7747 G06F3/04842

    Abstract: An image generation system enables user input during the process of training a generative model to influence the model's ability to generate new images with desired visual features. A source generative model for a source domain is fine-tuned using training images in a target domain to provide an adapted generative model for the target domain. Interpretable factors are determined for the source generative model and the adapted generative model. A user interface is provided that enables a user to select one or more interpretable factors. The user-selected interpretable factor(s) are used to generate a user-adapted generative model, for instance, by using a loss function based on the user-selected interpretable factor(s). The user-adapted generative model can be used to create new images in the target domain.
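
    One plausible reading of "a loss function based on the user-selected interpretable factor(s)" is that fine-tuning updates are constrained to the selected factor directions. The projection scheme below is a minimal sketch under that assumption, not the patented loss:

```python
import numpy as np

def adapt_weights(weights, grad, selected_factors, lr=0.1):
    # Restrict the fine-tuning update to the user-selected interpretable
    # directions by projecting the gradient onto those factor vectors.
    update = np.zeros_like(weights)
    for factor in selected_factors:
        update += np.dot(grad, factor) * factor
    return weights - lr * update

weights = np.zeros(3)
factor = np.array([1.0, 0.0, 0.0])    # hypothetical user-selected factor
grad = np.array([2.0, 5.0, -1.0])     # gradient from the target-domain loss
adapted = adapt_weights(weights, grad, [factor])
```

Components of the gradient outside the selected factor are discarded, so only the chosen visual feature is adapted.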

    GENERATING ARTISTIC CONTENT FROM A TEXT PROMPT OR A STYLE IMAGE UTILIZING A NEURAL NETWORK MODEL

    Publication number: US20230267652A1

    Publication date: 2023-08-24

    Application number: US17652390

    Application date: 2022-02-24

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize an iterative neural network framework for generating artistic visual content. For instance, in one or more embodiments, the disclosed systems receive style parameters in the form of a style image and/or a text prompt. In some cases, the disclosed systems further receive a content image having content to include in the artistic visual content. Accordingly, in one or more embodiments, the disclosed systems utilize a neural network to generate the artistic visual content by iteratively generating an image, comparing the image to the style parameters, and updating parameters for generating the next image based on the comparison. In some instances, the disclosed systems incorporate a superzoom network into the neural network for increasing the resolution of the final image and adding art details that are associated with a physical art medium (e.g., brush strokes).
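
    The iterative generate-compare-update loop can be sketched as a toy optimization. The squared-error "style comparison" stands in for a learned embedding comparison (a real system would likely compare CLIP or VGG features); none of this is the patented network:

```python
import numpy as np

def style_loss(image, style_target):
    # Stand-in for comparing a generated image against the style parameters.
    return float(np.mean((image - style_target) ** 2))

def generate_artistic(params, style_target, steps=50, lr=0.1):
    image = params.copy()
    for _ in range(steps):
        grad = 2.0 * (image - style_target)   # the comparison drives the update
        image = image - lr * grad             # parameters for the next image
    return image

style_target = np.ones(4)
result = generate_artistic(np.zeros(4), style_target)
```

Each iteration generates an image, scores it against the style target, and updates the generation parameters from that comparison, so the loss shrinks geometrically.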

    GENERATING SYNTHESIZED DIGITAL IMAGES UTILIZING CLASS-SPECIFIC MACHINE-LEARNING MODELS

    Publication number: US20230051749A1

    Publication date: 2023-02-16

    Application number: US17400474

    Application date: 2021-08-12

    Applicant: Adobe Inc.

    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate synthesized digital images using class-specific generators for objects of different classes. The disclosed system modifies a synthesized digital image by utilizing a plurality of class-specific generator neural networks to generate a plurality of synthesized objects according to object classes identified in the synthesized digital image. The disclosed system determines object classes in the synthesized digital image, such as via a semantic label map corresponding to the synthesized digital image. The disclosed system selects class-specific generator neural networks corresponding to the classes of objects in the synthesized digital image. The disclosed system also generates a plurality of synthesized objects utilizing the class-specific generator neural networks based on contextual data associated with the identified objects. The disclosed system generates a modified synthesized digital image by replacing the identified objects in the synthesized digital image with the synthesized objects.
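
    The select-and-replace mechanics can be shown with constant-valued stand-ins for the class-specific generator neural networks (the class ids, generators, and "context" here are illustrative assumptions):

```python
import numpy as np

# Hypothetical class-specific generators, keyed by semantic class id.
generators = {
    1: lambda region: np.full_like(region, 10.0),  # stand-in "sky" generator
    2: lambda region: np.full_like(region, 20.0),  # stand-in "tree" generator
}

def refine(image, label_map):
    # Replace each identified object with the output of its class-specific
    # generator, conditioned here only on the masked region as "context".
    out = image.copy()
    for cls, gen in generators.items():
        mask = label_map == cls
        if mask.any():
            out[mask] = gen(image[mask])
    return out

image = np.zeros((2, 2))
labels = np.array([[1, 1], [2, 0]])   # toy semantic label map; 0 = unchanged
result = refine(image, labels)
```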

    AUTOMATIC MAKEUP TRANSFER USING SEMI-SUPERVISED LEARNING

    Publication number: US20210295045A1

    Publication date: 2021-09-23

    Application number: US16822878

    Application date: 2020-03-18

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, computer-implemented methods, and non-transitory computer readable media for automatically transferring makeup from a reference face image to a target face image using a neural network trained using semi-supervised learning. For example, the disclosed systems can receive, at a neural network, a target face image and a reference face image, where the target face image is selected by a user via a graphical user interface (GUI) and the reference face image has makeup. The systems transfer, by the neural network, the makeup from the reference face image to the target face image, where the neural network is trained to transfer the makeup from the reference face image to the target face image using semi-supervised learning. The systems output for display the makeup on the target face image.
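
    At the interface level, the transfer amounts to decomposing each face into a makeup-free component and a makeup layer, then recombining across images. The floor-based "decomposition" below is purely a toy assumption standing in for what the trained network learns:

```python
import numpy as np

def decompose(face):
    # Toy stand-in for a learned decomposition into a makeup-free
    # base and a residual makeup layer.
    base = np.floor(face)
    return base, face - base

def transfer_makeup(target, reference):
    # Keep the target's base; apply the reference's makeup layer.
    target_base, _ = decompose(target)
    _, ref_makeup = decompose(reference)
    return target_base + ref_makeup

target = np.array([1.2, 2.7])      # toy "target face"
reference = np.array([3.5, 4.1])   # toy "reference face" with makeup
out = transfer_makeup(target, reference)
```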

    Adapting generative neural networks using a cross domain translation network

    Publication number: US12249132B2

    Publication date: 2025-03-11

    Application number: US17815451

    Application date: 2022-07-27

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for adapting generative neural networks to target domains utilizing an image translation neural network. In particular, in one or more embodiments, the disclosed systems utilize an image translation neural network to translate target results to a source domain for input in target neural network adaptation. For instance, in some embodiments, the disclosed systems compare a translated target result with a source result from a pretrained source generative neural network to adjust parameters of a target generative neural network to produce results corresponding in features to source results and corresponding in style to the target domain.
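
    The core comparison can be sketched by modeling the translation network as a fixed style shift (a gross simplification of the actual image translation neural network; the offset and loss are illustrative assumptions):

```python
import numpy as np

STYLE_OFFSET = 3.0   # hypothetical constant shift standing in for target-domain style

def translate_to_source(target_result):
    # Stand-in for the cross-domain image translation neural network.
    return target_result - STYLE_OFFSET

def adaptation_loss(target_result, source_result):
    # Translate the target result to the source domain, then compare it with
    # the pretrained source result; minimizing this aligns generator features.
    return float(np.mean((translate_to_source(target_result) - source_result) ** 2))

source_result = np.array([1.0, 2.0])
matched = source_result + STYLE_OFFSET   # target output whose features match the source
mismatched = matched + 1.0               # target output with drifted features
```

A target generator whose (translated) output matches the source result incurs zero loss, so adaptation preserves source features while the untranslated output keeps the target style.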

    Image inversion using multiple latent spaces

    Publication number: US12159413B2

    Publication date: 2024-12-03

    Application number: US17693618

    Application date: 2022-03-14

    Applicant: Adobe Inc.

    Abstract: In implementations of systems for image inversion using multiple latent spaces, a computing device implements an inversion system to generate a segment map that segments an input digital image into a first image region and a second image region and assigns the first image region to a first latent space and the second image region to a second latent space that corresponds to a layer of a convolutional neural network. An inverted latent representation of the input digital image is computed using a binary mask for the second image region. The inversion system modifies the inverted latent representation of the input digital image using an edit direction vector that corresponds to a visual feature. An output digital image is generated that depicts a reconstruction of the input digital image having the visual feature based on the modified inverted latent representation of the input digital image.
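
    A toy version of the two-latent-space split: one region is summarized by a single global code, the other is kept as a mask-gated spatial code (standing in for a convolutional-layer feature), and the edit-direction vector moves only the globally encoded region. All of this is an illustrative sketch, not the patented inversion system:

```python
import numpy as np

def invert(image, segment_map):
    # Region 1 maps to a global latent; region 2 maps to a spatial latent
    # gated by a binary mask for that region.
    mask = (segment_map == 2).astype(image.dtype)
    w_latent = float(image[segment_map == 1].mean())   # global code
    f_latent = image * mask                            # spatial code
    return w_latent, f_latent, mask

def reconstruct(w_latent, f_latent, mask, edit_direction=0.0):
    # The edit-direction vector shifts only the globally encoded region.
    return (w_latent + edit_direction) * (1 - mask) + f_latent

image = np.array([[1.0, 1.0], [5.0, 7.0]])
seg = np.array([[1, 1], [2, 2]])        # toy segment map: two regions
w, f, m = invert(image, seg)
out = reconstruct(w, f, m, edit_direction=2.0)
```

The spatially encoded region is reproduced exactly while the edit applies a uniform visual change to the other region.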

    WAVELET-DRIVEN IMAGE SYNTHESIS WITH DIFFUSION MODELS

    Publication number: US20240169488A1

    Publication date: 2024-05-23

    Application number: US18056405

    Application date: 2022-11-17

    Applicant: ADOBE INC.

    Abstract: Systems and methods for synthesizing images with increased high-frequency detail are described. Embodiments are configured to identify an input image including a noise level and encode the input image to obtain image features. A diffusion model reduces a resolution of the image features at an intermediate stage of the model using a wavelet transform to obtain reduced image features at a reduced resolution, and generates an output image based on the reduced image features using the diffusion model. In some cases, the output image comprises a version of the input image that has a reduced noise level compared to the noise level of the input image.
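
    The resolution-reducing wavelet step can be illustrated with a one-level 2D Haar transform: it halves the spatial resolution of a feature map into four sub-bands and is exactly invertible, so high-frequency detail survives the downsampling. This is a generic Haar sketch, not the patented diffusion architecture:

```python
import numpy as np

def haar_down(x):
    # One-level 2D Haar transform: four half-resolution sub-bands
    # (low-low, low-high, high-low, high-high).
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def haar_up(ll, lh, hl, hh):
    # Inverse transform restores the original resolution exactly.
    h, w = ll.shape
    x = np.zeros((2 * h, 2 * w))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

features = np.arange(16.0).reshape(4, 4)   # toy intermediate feature map
ll, lh, hl, hh = haar_down(features)       # reduced-resolution sub-bands
restored = haar_up(ll, lh, hl, hh)
```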
