- Patent Title: Style-aware audio-driven talking head animation from a single image
- Application No.: US17887685
- Application Date: 2022-08-15
- Publication No.: US11776188B2
- Publication Date: 2023-10-03
- Inventors: Dingzeyu Li, Yang Zhou, Jose Ignacio Echevarria Vallespi, Elya Shechtman
- Applicant: Adobe Inc.
- Applicant Address: San Jose, CA, US
- Assignee: ADOBE INC.
- Current Assignee: ADOBE INC.
- Current Assignee Address: San Jose, CA, US
- Agency: SHOOK, HARDY & BACON L.L.P.
- Main IPC: G06T13/20
- IPC: G06T13/20; G06T17/20; G06T13/40

Abstract:
Embodiments of the present invention provide systems, methods, and computer storage media for generating an animation of a talking head from an input audio signal of speech and a representation (such as a static image) of a head to animate. Generally, a neural network can learn to predict a set of 3D facial landmarks that can be used to drive the animation. In some embodiments, the neural network can learn to detect different speaking styles in the input speech and account for the different speaking styles when predicting the 3D facial landmarks. Generally, template 3D facial landmarks can be identified or extracted from the input image or other representation of the head, and the template 3D facial landmarks can be used with successive windows of audio from the input speech to predict 3D facial landmarks and generate a corresponding animation with plausible 3D effects.
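The abstract describes a windowed pipeline: template 3D facial landmarks are taken from the input image, and successive windows of the input speech are fed to a trained network that predicts per-frame 3D landmarks, optionally conditioned on a detected speaking style. The sketch below is only an illustration of that data flow, not the patented implementation; the window and hop sizes, landmark count, style-embedding dimension, and the placeholder `predict_displacements` function are all assumptions introduced here.

```python
# Hypothetical sketch of the sliding-window landmark-prediction loop described
# in the abstract. All names and constants are illustrative assumptions, not
# the patent's actual method.
import numpy as np

NUM_LANDMARKS = 68   # assumed number of 3D facial landmarks
WINDOW_SIZE = 3200   # assumed audio samples per window (0.2 s at 16 kHz)
HOP_SIZE = 533       # assumed hop length (~30 fps at 16 kHz)
STYLE_DIM = 16       # assumed size of the learned speaking-style embedding


def predict_displacements(audio_window: np.ndarray,
                          style_embedding: np.ndarray,
                          template: np.ndarray) -> np.ndarray:
    """Stand-in for the trained network: maps one audio window plus a style
    embedding to per-landmark 3D displacements relative to the template."""
    # A real model would be a neural network; zeros keep the sketch runnable.
    return np.zeros_like(template)


def animate(audio: np.ndarray,
            template_landmarks: np.ndarray,
            style_embedding: np.ndarray) -> np.ndarray:
    """Slide a window over the speech signal and predict one set of 3D
    landmarks per window, as the abstract outlines."""
    frames = []
    for start in range(0, len(audio) - WINDOW_SIZE + 1, HOP_SIZE):
        window = audio[start:start + WINDOW_SIZE]
        offsets = predict_displacements(window, style_embedding,
                                        template_landmarks)
        frames.append(template_landmarks + offsets)
    return np.stack(frames)  # shape: (num_frames, NUM_LANDMARKS, 3)


if __name__ == "__main__":
    audio = np.zeros(16000 * 2)              # 2 s of silent 16 kHz audio
    template = np.zeros((NUM_LANDMARKS, 3))  # landmarks from the input image
    style = np.zeros(STYLE_DIM)              # detected speaking-style embedding
    print(animate(audio, template, style).shape)
```

The resulting per-frame landmark sequence would then drive the animation of the head in the input image, which is the final step the abstract refers to.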
Public/Granted literature:
- US20220392131A1, STYLE-AWARE AUDIO-DRIVEN TALKING HEAD ANIMATION FROM A SINGLE IMAGE, publication date: 2022-12-08