END-TO-END VIRTUAL HUMAN SPEECH AND MOVEMENT SYNTHESIZATION
Abstract:
Synthesizing speech and movement of a virtual human includes capturing supplemental data generated by a transducer. The supplemental data specifies one or more attributes of a user. The capturing is performed in substantially real-time with the user providing input to a conversational platform. A behavior determiner generates behavioral data based on the supplemental data and an audio response generated by the conversational platform in response to the input to the conversational platform. Based on the behavioral data and the audio response, a rendering network generates a video rendering of a virtual human engaging in a conversation with the user, the video rendering synchronized with the audio response.
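The pipeline the abstract describes (transducer-captured user attributes feeding a behavior determiner, whose output drives a rendering network synchronized with the platform's audio response) can be sketched minimally as follows. All class, function, and field names here are illustrative assumptions, not terms defined by the patent, and each stage is a stand-in for the real component.

```python
from dataclasses import dataclass

# Hypothetical data containers; field names are illustrative only.
@dataclass
class SupplementalData:
    attributes: dict  # e.g. user emotion or posture captured by a transducer

@dataclass
class AudioResponse:
    text: str         # response text from the conversational platform
    duration_s: float # audio duration, used for synchronization

def determine_behavior(supplemental: SupplementalData, audio: AudioResponse) -> dict:
    """Stand-in behavior determiner: derive behavioral cues from the
    user's attributes and the platform's audio response."""
    mood = supplemental.attributes.get("emotion", "neutral")
    words_per_second = len(audio.text.split()) / max(audio.duration_s, 1e-6)
    return {"expression": mood, "gesture_rate": words_per_second}

def render_virtual_human(behavior: dict, audio: AudioResponse) -> list:
    """Stand-in rendering network: emit one frame descriptor per 100 ms
    of audio so the video stays synchronized with the audio response."""
    n_frames = int(audio.duration_s * 10)
    return [{"t": i / 10.0, **behavior} for i in range(n_frames)]

# End-to-end flow: user input -> audio response -> behavior -> synchronized frames
supplemental = SupplementalData(attributes={"emotion": "happy"})
audio = AudioResponse(text="Hello there, how can I help?", duration_s=2.0)
behavior = determine_behavior(supplemental, audio)
frames = render_virtual_human(behavior, audio)
```

In this sketch, synchronization is reduced to deriving the frame count from the audio duration; a real rendering network would instead generate video conditioned on both the behavioral data and the audio signal itself.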