Advanced systems and methods for automatically generating an animatable object from various types of user input
Abstract:
Dynamically customized animatable 3D models of virtual characters ("avatars") are generated in real time from multiple inputs captured by one or more devices having various sensors. Each input may comprise a point cloud associated with a user's face/head. An example method comprises receiving inputs from sensor data from multiple sensors of the device(s) in real time, and pre-processing the inputs to determine the orientation of the point clouds. The method may include registering the point clouds to align them to a common reference; automatically detecting features of the point clouds; deforming a template geometry based on the features to automatically generate a custom geometry; determining a texture of the inputs and transferring the texture to the custom geometry; deforming a template control structure based on the features to automatically generate a custom control structure; and generating an animatable object having the custom geometry, the transferred texture, and the custom control structure.
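Two of the steps above lend themselves to a concrete sketch: registering point clouds to a common reference, and deforming a template geometry so that its landmarks meet the detected features. The code below is a minimal illustration, not the patented implementation; it assumes known point correspondences for registration (Kabsch/Procrustes alignment) and uses Gaussian radial-basis weights for the deformation, and all function and parameter names are hypothetical.

```python
import numpy as np


def register_point_cloud(source, target):
    """Rigidly align `source` (n, 3) to `target` (n, 3) with known
    correspondences via the Kabsch/Procrustes algorithm -- one way to
    'register the point clouds to align them to a common reference'.
    Returns the aligned cloud, rotation R, and translation t."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return source @ R.T + t, R, t


def deform_template(template, template_landmarks, detected_landmarks,
                    sigma=0.1):
    """Deform template vertices (n, 3) so its landmarks (k, 3) move to
    the detected feature positions, spreading each landmark's
    displacement over nearby vertices with normalized Gaussian
    weights -- one plausible form of 'deforming a template geometry
    based on the features'."""
    offsets = detected_landmarks - template_landmarks          # (k, 3)
    d2 = ((template[:, None, :]
           - template_landmarks[None, :, :]) ** 2).sum(axis=-1)  # (n, k)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True) + 1e-12
    return template + w @ offsets
```

A vertex that coincides with a template landmark receives that landmark's full displacement, while distant vertices are barely moved; `sigma` controls how far each feature's influence spreads. A production system would replace the known-correspondence assumption with an iterative scheme such as ICP and would likely use a more principled deformation model.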