FACE REENACTMENT
    Invention publication (under examination; published)

    Publication No.: US20240078838A1

    Publication Date: 2024-03-07

    Application No.: US18509502

    Filing Date: 2023-11-15

    Applicant: Snap Inc.

    Abstract: Provided are systems and methods for face reenactment. An example method includes receiving a target video that includes at least one target frame, where the at least one target frame includes a target face, receiving a scenario including a series of source facial expressions, determining, based on the target face, a target facial expression of the target face, synthesizing, based on a parametric face model and a texture model, an output face including the target face, where the target facial expression of the target face is modified to imitate a source facial expression of the series of source facial expressions, and generating, based on the output face, a frame of an output video. The parametric face model includes a template mesh pre-generated based on historical images of faces of a plurality of individuals, where the template mesh includes a pre-determined number of vertices.
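
    As a rough illustration of the parametric face model mentioned in the abstract, the sketch below builds a linear blendshape-style model: a template mesh with a fixed number of vertices is deformed by identity and expression coefficients, and reenactment keeps the target's identity while swapping in the source actor's expression. The vertex count, coefficient dimensions, and random bases are illustrative assumptions, not values taken from the patent.

```python
# Minimal sketch of a linear parametric face model (blendshape-style).
# All sizes and bases below are illustrative assumptions.
import numpy as np

N_VERTICES = 5000        # assumed pre-determined vertex count of the template mesh
N_ID, N_EXPR = 80, 64    # assumed numbers of identity / expression coefficients

# Placeholder template mesh (in practice pre-generated from many face images).
template = np.zeros((N_VERTICES, 3))
# Linear bases mapping coefficients to per-vertex offsets (random placeholders here).
identity_basis = np.random.randn(N_VERTICES * 3, N_ID) * 1e-3
expression_basis = np.random.randn(N_VERTICES * 3, N_EXPR) * 1e-3

def synthesize_mesh(id_coeffs: np.ndarray, expr_coeffs: np.ndarray) -> np.ndarray:
    """Deform the template mesh with identity and expression coefficients."""
    offsets = identity_basis @ id_coeffs + expression_basis @ expr_coeffs
    return template + offsets.reshape(N_VERTICES, 3)

# Reenactment idea: keep the target's identity coefficients but drive the mesh
# with the source actor's expression coefficients for each frame.
target_identity = np.zeros(N_ID)
source_expression = np.zeros(N_EXPR)
reenacted_mesh = synthesize_mesh(target_identity, source_expression)
```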

    Realistic head turns and face animation synthesis on mobile device

    Publication No.: US11915355B2

    Publication Date: 2024-02-27

    Application No.: US17881947

    Filing Date: 2022-08-05

    Applicant: Snap Inc.

    Abstract: Provided are systems and methods for realistic head turns and face animation synthesis. An example method includes receiving a source frame of a source video, where the source frame includes a head and a face of a source actor, generating source pose parameters corresponding to a pose of the head and a facial expression of the source actor; receiving a target image including a target head and a target face of a target person, determining target identity information associated with the target head and the target face of the target person, replacing source identity information in the source pose parameters with the target identity information to obtain further source pose parameters, and generating an output frame of an output video that includes a modified image of the target face and the target head adopting the pose of the head and the facial expression of the source actor.
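
    The identity-replacement step described above can be pictured as keeping the source actor's head pose and expression parameters while substituting the target person's identity component. The split of the parameters into identity, pose, and expression fields below is an assumption made for illustration, not the patent's actual parameterization.

```python
# Minimal sketch of the identity-swap step, assuming the pose parameters can be
# separated into identity, head-pose, and expression components.
from dataclasses import dataclass
import numpy as np

@dataclass
class PoseParameters:
    identity: np.ndarray    # face-shape / identity embedding
    pose: np.ndarray        # head rotation and translation
    expression: np.ndarray  # facial expression coefficients

def swap_identity(source: PoseParameters, target_identity: np.ndarray) -> PoseParameters:
    """Keep the source pose and expression, but adopt the target identity."""
    return PoseParameters(identity=target_identity.copy(),
                          pose=source.pose.copy(),
                          expression=source.expression.copy())

# The resulting parameters would then drive rendering of the target head and face
# in the source actor's pose and expression.
```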

    PHOTOREALISTIC REAL-TIME PORTRAIT ANIMATION

    Publication No.: US20230110916A1

    Publication Date: 2023-04-13

    Application No.: US18080779

    Filing Date: 2022-12-14

    Applicant: Snap Inc.

    Abstract: Provided are systems and methods for portrait animation. An example method includes receiving, by a computing device, a scenario video, where the scenario video includes information concerning a first face, receiving, by the computing device, a target image, where the target image includes a second face, determining, by the computing device and based on the target image and the information concerning the first face, two-dimensional (2D) deformations of the second face in the target image, and applying, by the computing device, the 2D deformations to the target image to obtain at least one output frame of an output video.
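
    The 2D-deformation step can be read as a dense backward warp of the target image. The sketch below assumes an OpenCV-style remap driven by a precomputed flow field; how the deformations are actually represented and estimated is not specified by the abstract.

```python
# Minimal sketch of applying dense 2D deformations to a target image via a
# backward warp. The flow field here is a toy placeholder; in practice it would
# be derived from the driving (scenario) frame and the target face.
import numpy as np
import cv2

def apply_2d_deformation(target: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Warp `target` (H x W x 3, uint8) by a backward flow field (H x W x 2)."""
    h, w = target.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(target, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Toy usage: a zero flow field leaves the image unchanged.
image = np.zeros((256, 256, 3), dtype=np.uint8)
flow = np.zeros((256, 256, 2), dtype=np.float32)
output_frame = apply_2d_deformation(image, flow)
```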

    Face reenactment
    Invention grant

    Publication No.: US11410457B2

    Publication Date: 2022-08-09

    Application No.: US17034029

    Filing Date: 2020-09-28

    Applicant: Snap Inc.

    Abstract: Provided are systems and a method for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a scenario including a series of source facial expressions, determining, based on the target face, one or more target facial expressions, and synthesizing, using the parametric face model, an output face. The output face includes the target face. The one or more target facial expressions are modified to imitate the source facial expressions. The method further includes generating, based on a deep neural network, a mouth region and an eyes region, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video.
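
    The final step, combining the synthesized output face with separately generated mouth and eyes regions, might look like the alpha-blending sketch below. The soft masks and the blending order are assumptions for illustration; the abstract does not describe how the regions are merged.

```python
# Minimal sketch of compositing network-generated mouth and eyes regions onto the
# synthesized output face using soft (0..1) masks. Images are H x W x 3 arrays;
# masks are H x W arrays.
import numpy as np

def composite_frame(face, mouth, mouth_mask, eyes, eyes_mask):
    """Alpha-blend the mouth and eyes regions onto the output face."""
    frame = face.astype(np.float32)
    frame = mouth_mask[..., None] * mouth + (1.0 - mouth_mask[..., None]) * frame
    frame = eyes_mask[..., None] * eyes + (1.0 - eyes_mask[..., None]) * frame
    return np.clip(frame, 0, 255).astype(np.uint8)
```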

    Photorealistic real-time portrait animation

    Publication No.: US11049310B2

    Publication Date: 2021-06-29

    Application No.: US16251472

    Filing Date: 2019-01-18

    Applicant: SNAP INC.

    Abstract: Provided are systems and methods for photorealistic real-time portrait animation. An example method includes receiving a scenario video with at least one input frame. The input frame includes a first face. The method further includes receiving a target image with a second face. The method further includes determining, based on the at least one input frame and the target image, two-dimensional (2D) deformations, wherein the 2D deformations, when applied to the second face, modify the second face to imitate at least a facial expression and a head orientation of the first face. The method further includes applying, by the computing device, the 2D deformations to the target image to obtain at least one output frame of an output video.

    SYSTEMS AND METHODS FOR PHOTOREALISTIC REAL-TIME PORTRAIT ANIMATION

    Publication No.: US20200234482A1

    Publication Date: 2020-07-23

    Application No.: US16251472

    Filing Date: 2019-01-18

    Applicant: SNAP INC.

    Abstract: Provided are systems and methods for photorealistic real-time portrait animation. An example method includes receiving a scenario video with at least one input frame. The input frame includes a first face. The method further includes receiving a target image with a second face. The method further includes determining, based on the at least one input frame and the target image, two-dimensional (2D) deformations, wherein the 2D deformations, when applied to the second face, modify the second face to imitate at least a facial expression and a head orientation of the first face. The method further includes applying, by the computing device, the 2D deformations to the target image to obtain at least one output frame of an output video.

    Prompt modification for automated image generation

    Publication No.: US12169626B2

    Publication Date: 2024-12-17

    Application No.: US18116003

    Filing Date: 2023-03-01

    Applicant: Snap Inc.

    Abstract: Examples disclosed herein describe prompt modification techniques for automated image generation. An image generation request comprising a base prompt is received from a user device. A plurality of prompt modifiers is identified. A processor-implemented scoring engine determines, for each prompt modifier, a modifier score. The modifier score for each prompt modifier is associated with the base prompt. One or more of the prompt modifiers are automatically selected based on the modifier scores. A modified prompt is generated. The modified prompt is based on the base prompt and the one or more selected prompt modifiers. The modified prompt is provided as input to an automated image generator to generate an image, and the image is caused to be presented on the user device.
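
    The scoring-and-selection flow can be pictured as: score every candidate modifier against the base prompt, keep the top-scoring ones, append them to the base prompt, and pass the result to the image generator. The word-overlap score and the top-k cut-off below are placeholder assumptions, not the scoring engine described in the patent.

```python
# Toy sketch of prompt-modifier scoring and selection. The scoring function and
# the candidate modifiers are illustrative placeholders.
from typing import List, Tuple

def score_modifier(base_prompt: str, modifier: str) -> float:
    """Toy relevance score: fraction of modifier words that also occur in the prompt."""
    base_words = set(base_prompt.lower().split())
    modifier_words = set(modifier.lower().split())
    return len(base_words & modifier_words) / max(len(modifier_words), 1)

def build_modified_prompt(base_prompt: str, modifiers: List[str], top_k: int = 2) -> str:
    """Select the top-scoring modifiers and append them to the base prompt."""
    scored: List[Tuple[float, str]] = sorted(
        ((score_modifier(base_prompt, m), m) for m in modifiers), reverse=True)
    selected = [m for score, m in scored[:top_k] if score > 0]
    return ", ".join([base_prompt] + selected)

modified = build_modified_prompt(
    "a portrait of a cat in the rain",
    ["cinematic portrait lighting", "watercolor style", "dramatic rain lighting"])
# `modified` would then be supplied to an automated image generator.
```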

    Face animation synthesis
    Invention grant

    Publication No.: US12125147B2

    Publication Date: 2024-10-22

    Application No.: US17663176

    Filing Date: 2022-05-12

    Applicant: Snap Inc.

    Abstract: A methodology for training a machine learning model to generate color-neutral input face images is described. For each training face image from a training dataset that is used for training the model, the training system generates an input face image, which has the color and lighting of a randomly selected image from the set of color source images, and which has facial features and expression of a face object from the training face image. Because, during training, the machine learning model is “confused” by changing the color and lighting of a training face image to a randomly selected different color and lighting, the trained machine learning model generates a color neutral embedding representing facial features from the training face image.
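
    The color/lighting "confusion" step can be approximated with a simple per-channel statistics transfer: each training face is recolored to match a randomly selected color-source image before being passed to the model, so the only stable signal left for the embedding is geometry and expression. Reinhard-style mean/std matching is an assumption here; the abstract does not name a specific transfer method.

```python
# Minimal sketch of recoloring a training face with the color statistics of a
# randomly chosen color-source image. The transfer method is an assumption.
import random
import numpy as np

def transfer_color(face: np.ndarray, color_source: np.ndarray) -> np.ndarray:
    """Match the per-channel mean/std of `face` (H x W x 3) to `color_source`."""
    out = face.astype(np.float32).copy()
    src = color_source.astype(np.float32)
    for c in range(3):
        f_mean, f_std = out[..., c].mean(), out[..., c].std() + 1e-6
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        out[..., c] = (out[..., c] - f_mean) / f_std * s_std + s_mean
    return np.clip(out, 0, 255).astype(np.uint8)

def make_training_input(face: np.ndarray, color_sources: list) -> np.ndarray:
    """Generate a model input: the training face with randomly sampled color/lighting."""
    return transfer_color(face, random.choice(color_sources))
```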

    GENERATING VIRTUAL HAIRSTYLE USING LATENT SPACE PROJECTORS

    Publication No.: US20240221259A1

    Publication Date: 2024-07-04

    Application No.: US18149007

    Filing Date: 2022-12-30

    Applicant: Snap Inc.

    CPC classification number: G06T13/40 G06N3/094 G06T19/006

    Abstract: The subject technology generates a first image of a face using a GAN model. The subject technology applies 3D virtual hair on the first image to generate a second image with 3D virtual hair. The subject technology projects the second image with 3D virtual hair into a GAN latent space to generate a third image with realistic virtual hair. The subject technology performs a blend of the realistic virtual hair with the first image of the face to generate a new image with new realistic hair that corresponds to the 3D virtual hair. The subject technology trains a neural network that receives the second image with the 3D virtual hair and provides an output image with realistic virtual hair. The subject technology generates using the trained neural network, a particular output image with realistic hair based on a particular input image with 3D virtual hair.
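
    Projecting an image into a GAN latent space is commonly done by optimizing a latent code until the generator reproduces the image. The sketch below uses a tiny placeholder generator and a plain pixel L2 loss; both are stand-ins, since the abstract does not specify the GAN architecture or the projector's loss.

```python
# Minimal sketch of GAN latent-space projection by latent-code optimization.
# TinyGenerator is a placeholder for a pretrained GAN generator.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder generator: latent (B x 64) -> image (B x 3 x 32 x 32)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(64, 3 * 32 * 32), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

def project(generator: nn.Module, image: torch.Tensor, steps: int = 200) -> torch.Tensor:
    """Optimize a latent code so the generated image approximates `image`."""
    z = torch.zeros(1, 64, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=0.05)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = torch.mean((generator(z) - image) ** 2)
        loss.backward()
        optimizer.step()
    return z.detach()

generator = TinyGenerator()
# Stand-in for the face image composited with 3D virtual hair.
image_with_3d_hair = torch.rand(1, 3, 32, 32) * 2 - 1
latent = project(generator, image_with_3d_hair)
projected = generator(latent)  # image regenerated from the latent space
```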
