FACE ANIMATION SYNTHESIS
    11.
    Patent Application

    Publication Number: US20220172438A1

    Publication Date: 2022-06-02

    Application Number: US17107410

    Filing Date: 2020-11-30

    Applicant: Snap Inc.

    Abstract: In some embodiments, users' experience of engaging with augmented reality technology is enhanced by providing a process, referred to as face animation synthesis, that replaces an actor's face in the frames of a video with a user's face from the user's portrait image. The resulting face in the frames of the video retains the facial expressions, as well as color and lighting, of the actor's face but, at the same time, has the likeness of the user's face. An example face animation synthesis experience can be made available to users of a messaging system by providing a face animation synthesis augmented reality component.
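
    The abstract above describes the pipeline only at a high level. The sketch below illustrates that flow under assumed interfaces: a user portrait is encoded once into an identity embedding, and each actor frame drives a generator that is meant to keep the actor's expression, color, and lighting. The IdentityEncoder and FaceGenerator classes are tiny random-weight stand-ins invented for illustration, not the patented implementation.

        # Hypothetical sketch of a face-animation-synthesis loop (not Snap's method).
        import torch
        import torch.nn as nn

        class IdentityEncoder(nn.Module):
            """Hypothetical: maps a portrait image to an identity embedding."""
            def __init__(self, dim=128):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim))
            def forward(self, portrait):
                return self.net(portrait)

        class FaceGenerator(nn.Module):
            """Hypothetical: renders the user's likeness while borrowing the
            actor's expression, color and lighting from the current frame."""
            def __init__(self, dim=128):
                super().__init__()
                self.proj = nn.Linear(dim, 3 * 64 * 64)
            def forward(self, identity, actor_frame):
                face = self.proj(identity).view(-1, 3, 64, 64)
                # Crude stand-in for transferring expression, color and lighting:
                # blend the generated likeness with the actor frame.
                return 0.5 * torch.sigmoid(face) + 0.5 * actor_frame

        encoder, generator = IdentityEncoder(), FaceGenerator()
        portrait = torch.rand(1, 3, 64, 64)         # the user's portrait image
        actor_video = torch.rand(10, 1, 3, 64, 64)  # 10 frames of the actor video

        identity = encoder(portrait)                # computed once per user
        output_frames = [generator(identity, frame) for frame in actor_video]
        print(len(output_frames), output_frames[0].shape)  # 10 frames, (1, 3, 64, 64)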

    PHOTOREALISTIC REAL-TIME PORTRAIT ANIMATION
    12.
    Patent Publication

    Publication Number: US20240296614A1

    Publication Date: 2024-09-05

    Application Number: US18641472

    Filing Date: 2024-04-22

    Applicant: Snap Inc.

    CPC classification number: G06T13/80 G06T7/174 G06V40/167 G06T2207/20084

    Abstract: Provided are systems and methods for portrait animation. An example method includes receiving, by a computing device, scenario data including information concerning movements of a first head, receiving, by the computing device, a target image including a second head and a background, determining, by the computing device and based on the target image and the information concerning the movements of the first head, two-dimensional (2D) deformations of the second head in the target image, applying, by the computing device, the 2D deformations to the target image to obtain at least one output frame of an output video, the at least one output frame including the second head displaced according to the movements of the first head, and filling, by the computing device and using a background prediction neural network, a portion of the background in gaps between the displaced second head and the background.
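
    The following is a minimal per-frame sketch of the idea in the abstract, under assumed interfaces: a dense 2D deformation field (here a fixed horizontal shift standing in for the real scenario-driven field) displaces the second head, and a small hypothetical BackgroundPredictor network fills the pixels the displaced head uncovers. It is an illustration of the warp-and-fill step, not the patented system.

        # Sketch: apply a 2D deformation to the target image, then fill gaps.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class BackgroundPredictor(nn.Module):
            """Hypothetical inpainting net: predicts background pixels for gaps."""
            def __init__(self):
                super().__init__()
                self.net = nn.Conv2d(3, 3, 3, padding=1)
            def forward(self, image):
                return torch.sigmoid(self.net(image))

        def identity_grid(h, w):
            """Sampling grid in the [-1, 1] convention used by grid_sample."""
            ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                    torch.linspace(-1, 1, w), indexing="ij")
            return torch.stack((xs, ys), dim=-1).unsqueeze(0)  # (1, H, W, 2)

        target = torch.rand(1, 3, 128, 128)      # target image: second head + background
        head_mask = torch.zeros(1, 1, 128, 128)  # 1 where the second head is
        head_mask[:, :, 32:96, 40:88] = 1.0

        # Stand-in for the scenario-driven 2D deformation: shift the head region.
        deformation = identity_grid(128, 128).clone()
        deformation[..., 0] = deformation[..., 0] - 0.05 * head_mask.squeeze(1)

        # Apply the 2D deformation to obtain one output frame with the displaced head.
        warped = F.grid_sample(target, deformation, mode="bilinear",
                               padding_mode="border", align_corners=True)
        warped_mask = F.grid_sample(head_mask, deformation, mode="bilinear",
                                    padding_mode="zeros", align_corners=True)

        # Gap: pixels the head used to cover but no longer does after displacement.
        gap = (head_mask - warped_mask).clamp(min=0)
        background = BackgroundPredictor()(target)
        output_frame = warped * (1 - gap) + background * gap
        print(output_frame.shape)                # (1, 3, 128, 128)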

    Face reenactment
    13.
    Granted Patent

    Publication Number: US11861936B2

    Publication Date: 2024-01-02

    Application Number: US17869794

    Filing Date: 2022-07-21

    Applicant: Snap Inc.

    Abstract: Provided are systems and methods for face reenactment. An example method includes receiving visual data including a visible portion of a source face, determining, based on the visible portion of the source face, a first portion of source face parameters associated with a parametric face model, where the first portion corresponds to the visible portion, predicting, based partially on the visible portion of the source face, a second portion of the source face parameters, where the second portion corresponds to the rest of the source face, receiving a target video that includes a target face, determining, based on the target video, target face parameters associated with the parametric face model and corresponding to the target face, and synthesizing, using the parametric face model, based on the source face parameters and the target face parameters, an output face that includes the source face imitating a facial expression of the target face.
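
    A minimal 3DMM-style sketch of the parametric idea above: identity (shape) parameters come from the source face, partly estimated from the visible region and partly predicted for the occluded rest, while expression parameters come from the target video. The bases, the split between fitted and predicted parameters, and the synthesize function are all hypothetical stand-ins used only to show how the two parameter sets combine; they are not the patented model.

        # Sketch: combine source identity parameters with target expression parameters.
        import numpy as np

        rng = np.random.default_rng(0)
        N_VERTS, N_SHAPE, N_EXPR = 500, 40, 20

        mean_face = rng.standard_normal((N_VERTS, 3))
        shape_basis = rng.standard_normal((N_VERTS, 3, N_SHAPE))
        expr_basis = rng.standard_normal((N_VERTS, 3, N_EXPR))

        def synthesize(shape_params, expr_params):
            """Parametric face model: mean face + shape and expression deformations."""
            return mean_face + shape_basis @ shape_params + expr_basis @ expr_params

        # Source face parameters: first portion fitted to the visible region,
        # second portion predicted for the hidden region (both stubbed here).
        visible_shape = rng.standard_normal(N_SHAPE // 2)               # fitted
        predicted_shape = rng.standard_normal(N_SHAPE - N_SHAPE // 2)   # predicted
        source_shape = np.concatenate([visible_shape, predicted_shape])

        # Target face parameters estimated from a frame of the target video.
        target_expr = rng.standard_normal(N_EXPR)

        # Output: the source identity imitating the target's facial expression.
        output_vertices = synthesize(source_shape, target_expr)
        print(output_vertices.shape)   # (500, 3)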

    Photorealistic real-time portrait animation
    14.

    Publication Number: US11568589B2

    Publication Date: 2023-01-31

    Application Number: US17751796

    Filing Date: 2022-05-24

    Applicant: Snap Inc.

    Abstract: Disclosed are systems and methods for portrait animation. An example method includes receiving, by a computing device, a scenario video, where the scenario video includes at least one input frame and the at least one input frame includes a first face, receiving, by the computing device, a target image, where the target image includes a second face, determining, by the computing device and based on the at least one input frame and the target image, two-dimensional (2D) deformations of the second face in the target image, where the 2D deformations, when applied to the second face, modify the second face to imitate at least a facial expression of the first face, and applying, by the computing device, the 2D deformations to the target image to obtain at least one output frame of an output video.
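
    One plausible way to realize the 2D deformation described above is to measure sparse landmark motion between the first face and the second face and interpolate it into a dense field before warping. The sketch below does exactly that with random stand-in landmarks and offsets in place of a real landmark detector; it illustrates the warping step only, not the patented method.

        # Sketch: sparse landmark motion -> dense 2D deformation -> warped frame.
        import numpy as np
        from scipy.interpolate import griddata

        H, W = 128, 128
        rng = np.random.default_rng(1)

        # Stand-ins for landmarks on the target (second) face and the per-landmark
        # motion implied by the driving (first) face in the scenario video.
        target_landmarks = rng.uniform(20.0, 108.0, size=(68, 2))
        driving_offsets = rng.normal(0.0, 2.0, size=(68, 2))

        # Interpolate sparse landmark displacements into a dense deformation field.
        grid_y, grid_x = np.mgrid[0:H, 0:W]
        dense_dx = griddata(target_landmarks, driving_offsets[:, 0],
                            (grid_x, grid_y), method="linear", fill_value=0.0)
        dense_dy = griddata(target_landmarks, driving_offsets[:, 1],
                            (grid_x, grid_y), method="linear", fill_value=0.0)

        # Apply the 2D deformation to the target image by backward warping.
        target_image = rng.uniform(0.0, 1.0, size=(H, W, 3))
        src_x = np.clip(grid_x - dense_dx, 0, W - 1).astype(int)
        src_y = np.clip(grid_y - dense_dy, 0, H - 1).astype(int)
        output_frame = target_image[src_y, src_x]   # one frame of the output video
        print(output_frame.shape)                   # (128, 128, 3)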

    FACE REENACTMENT
    15.
    Patent Application

    Publication Number: US20220358784A1

    Publication Date: 2022-11-10

    Application Number: US17869794

    Filing Date: 2022-07-21

    Applicant: Snap Inc.

    Abstract: Provided are systems and methods for face reenactment. An example method includes receiving visual data including a visible portion of a source face, determining, based on the visible portion of the source face, a first portion of source face parameters associated with a parametric face model, where the first portion corresponds to the visible portion, predicting, based partially on the visible portion of the source face, a second portion of the source face parameters, where the second portion corresponds to the rest of the source face, receiving a target video that includes a target face, determining, based on the target video, target face parameters associated with the parametric face model and corresponding to the target face, and synthesizing, using the parametric face model, based on the source face parameters and the target face parameters, an output face that includes the source face imitating a facial expression of the target face.

    Text and audio-based real-time face reenactment
    16.

    Publication Number: US11114086B2

    Publication Date: 2021-09-07

    Application Number: US16509370

    Filing Date: 2019-07-11

    Applicant: SNAP INC.

    Abstract: Provided are systems and methods for text and audio-based real-time face reenactment. An example method includes receiving an input text and a target image, the target image including a target face; generating, based on the input text, a sequence of sets of acoustic features representing the input text; determining, based on the sequence of sets of acoustic features, a sequence of sets of scenario data indicating modifications of the target face for pronouncing the input text; generating, based on the sequence of sets of scenario data, a sequence of frames, wherein each of the frames includes the target face modified based on at least one of the sets of scenario data; generating, based on the sequence of frames, an output video; and synthesizing, based on the sequence of sets of acoustic features, audio data and adding the audio data to the output video.
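
    The pipeline above chains four stages: text to acoustic features, acoustic features to per-frame scenario data, scenario data to rendered frames, and acoustic features to synthesized audio. The sketch below shows only that data flow; every stage (text_to_acoustic_features, acoustic_to_scenario, render_frame, synthesize_audio) is a hypothetical stub with made-up shapes, standing in for real TTS, vocoder, and rendering models.

        # Data-flow sketch of text- and audio-driven face reenactment (all stages stubbed).
        import numpy as np

        MEL_BINS, SCENARIO_DIM = 80, 10
        rng = np.random.default_rng(2)

        def text_to_acoustic_features(text):
            """Stub TTS front end: one mel-spectrogram frame per character."""
            return rng.standard_normal((len(text), MEL_BINS))

        def acoustic_to_scenario(acoustic):
            """Stub: map acoustic frames to face-modification parameters."""
            return acoustic[:, :SCENARIO_DIM]

        def render_frame(target_face, scenario_row):
            """Stub renderer: 'modify' the target face with one set of scenario data."""
            return target_face + 0.01 * scenario_row.mean()

        def synthesize_audio(acoustic):
            """Stub vocoder: acoustic features -> waveform samples."""
            return np.tanh(acoustic @ rng.standard_normal(MEL_BINS))

        text = "hello world"
        target_face = rng.uniform(0.0, 1.0, size=(128, 128, 3))  # from the target image

        acoustic = text_to_acoustic_features(text)
        scenario = acoustic_to_scenario(acoustic)
        frames = [render_frame(target_face, row) for row in scenario]  # output video
        audio = synthesize_audio(acoustic)                             # added to the video
        print(len(frames), audio.shape)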

    TEXT AND AUDIO-BASED REAL-TIME FACE REENACTMENT
    17.

    Publication Number: US20200234690A1

    Publication Date: 2020-07-23

    Application Number: US16509370

    Filing Date: 2019-07-11

    Applicant: SNAP INC.

    Abstract: Provided are systems and methods for text and audio-based real-time face reenactment. An example method includes receiving an input text and a target image, the target image including a target face; generating, based on the input text, a sequence of sets of acoustic features representing the input text; determining, based on the sequence of sets of acoustic features, a sequence of sets of scenario data indicating modifications of the target face for pronouncing the input text; generating, based on the sequence of sets of scenario data, a sequence of frames, wherein each of the frames includes the target face modified based on at least one of the sets of scenario data; generating, based on the sequence of frames, an output video; and synthesizing, based on the sequence of sets of acoustic features, audio data and adding the audio data to the output video.

    PHOTO-REALISTIC TEMPORALLY STABLE HAIRSTYLE CHANGE IN REAL-TIME
    18.

    Publication Number: US20250022264A1

    Publication Date: 2025-01-16

    Application Number: US18221241

    Filing Date: 2023-07-12

    Applicant: Snap Inc.

    Abstract: The subject technology trains a neural network based on a training process. The subject technology selects a frame from an input video, the selected frame comprising image data including a representation of a face and hair, the representation of the hair being masked. The subject technology determines a previous predicted frame. The subject technology concatenates the selected frame and the previous predicted frame to generate a concatenated frame, the concatenated frame being provided to the neural network. The subject technology generates, using the neural network, a set of outputs including an output tensor, a warping field, and a soft mask. The subject technology performs, using the warping field, a warp of the selected frame and the output tensor. The subject technology generates a prediction corresponding to a corrected texture rendering of the selected frame.
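
    The following PyTorch sketch mirrors the data flow of the step described above: the current frame (with the hair masked) is concatenated with the previous prediction, a network emits the three outputs named in the abstract (an output tensor, a warping field, and a soft mask), and the warp plus mask combine into a corrected texture rendering. The HairNet architecture and the mask-based compositing rule are hypothetical stand-ins, not the actual trained model.

        # Sketch of one forward pass of the hairstyle-change training step.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class HairNet(nn.Module):
            def __init__(self):
                super().__init__()
                # Input: current frame (3 ch, hair masked) + previous prediction (3 ch).
                self.backbone = nn.Conv2d(6, 8, 3, padding=1)
                self.to_tensor = nn.Conv2d(8, 3, 3, padding=1)  # output tensor
                self.to_flow = nn.Conv2d(8, 2, 3, padding=1)    # warping field
                self.to_mask = nn.Conv2d(8, 1, 3, padding=1)    # soft mask
            def forward(self, x):
                h = torch.relu(self.backbone(x))
                return (self.to_tensor(h),
                        torch.tanh(self.to_flow(h)),
                        torch.sigmoid(self.to_mask(h)))

        def warp(image, flow):
            """Backward-warp `image` with a normalized flow field via grid_sample."""
            n, _, h, w = image.shape
            ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                    torch.linspace(-1, 1, w), indexing="ij")
            base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
            grid = base + flow.permute(0, 2, 3, 1)
            return F.grid_sample(image, grid, mode="bilinear", align_corners=True)

        net = HairNet()
        frame = torch.rand(1, 3, 64, 64)       # selected frame, hair region masked
        prev_pred = torch.zeros(1, 3, 64, 64)  # previous predicted frame

        concatenated = torch.cat([frame, prev_pred], dim=1)
        out_tensor, flow, soft_mask = net(concatenated)

        warped_frame = warp(frame, flow)
        warped_tensor = warp(out_tensor, flow)
        prediction = soft_mask * warped_tensor + (1 - soft_mask) * warped_frame
        print(prediction.shape)                # corrected texture rendering (1, 3, 64, 64)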
