-
Publication Number: US20250131631A1
Publication Date: 2025-04-24
Application Number: US18914741
Application Date: 2024-10-14
Applicant: Meta Platforms Technologies, LLC
Inventor: Alexander Richard , Michael Zollhoefer , Fernando De la Torre , Yaser Sheikh
Abstract: A method for training a three-dimensional face animation model from speech is provided. The method includes determining a first correlation value for a facial feature based on an audio waveform from a first subject, generating a first mesh for a lower portion of a human face based on the facial feature and the first correlation value, updating the first correlation value when a difference between the first mesh and a ground truth image of the first subject is greater than a pre-selected threshold, and providing a three-dimensional model of the human face animated by speech to an immersive reality application accessed by a client device, based on the difference between the first mesh and the ground truth image of the first subject. A non-transitory, computer-readable medium storing instructions to cause a system to perform the above method, and the system, are also provided.
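The training loop outlined in this abstract (audio waveform → facial feature/correlation value → lower-face mesh → threshold-gated update) can be illustrated with a short PyTorch sketch. The `AudioEncoder` and `MeshDecoder` modules, all layer sizes, and the MSE-plus-threshold logic below are assumptions for illustration only, not the patent's actual architecture.

```python
# Minimal sketch of a speech-to-mesh training step; names and shapes are illustrative.
import torch
import torch.nn as nn

class AudioEncoder(nn.Module):
    """Maps a raw audio waveform to a per-clip facial-feature / correlation vector."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(64, feat_dim, kernel_size=9, stride=4, padding=4), nn.ReLU(),
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) -> features: (batch, feat_dim)
        return self.conv(waveform.unsqueeze(1)).mean(dim=-1)

class MeshDecoder(nn.Module):
    """Predicts vertex positions for the lower portion of a template face mesh."""
    def __init__(self, feat_dim: int = 128, num_vertices: int = 5000):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_vertices * 3),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.mlp(features).view(features.shape[0], -1, 3)

def training_step(encoder, decoder, optimizer, waveform, gt_lower_face, threshold=1e-3):
    """Update the audio-to-mesh mapping only when the mesh error exceeds the threshold."""
    features = encoder(waveform)                 # stands in for the "first correlation value"
    pred_mesh = decoder(features)                # "first mesh" for the lower face
    diff = torch.mean((pred_mesh - gt_lower_face) ** 2)
    if diff.item() > threshold:                  # update only above the pre-selected threshold
        optimizer.zero_grad()
        diff.backward()
        optimizer.step()
    return diff.item()

# Example usage with random data, sized arbitrarily for the sketch.
encoder, decoder = AudioEncoder(), MeshDecoder()
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
waveform = torch.randn(2, 16000)                 # roughly one second of 16 kHz audio per item
gt_lower_face = torch.randn(2, 5000, 3)          # ground-truth lower-face vertices
error = training_step(encoder, decoder, optimizer, waveform, gt_lower_face)
```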
-
Publication Number: US20240177419A1
Publication Date: 2024-05-30
Application Number: US18522763
Application Date: 2023-11-29
Applicant: Meta Platforms Technologies, LLC
Inventor: Shunsuke Saito , Junxuan Li , Tomas Simon Kreuz , Jason Saragih , Shun Iwase , Timur Bagautdinov , Rohan Joshi , Fabian Andres Prada Nino , Takaaki Shiratori , Yaser Sheikh , Stephen Anthony Lombardi
CPC classification number: G06T17/10 , G06V10/40 , G06T2207/30201
Abstract: Methods, systems, and storage media for modeling subjects in a virtual environment are disclosed. Exemplary implementations may include: receiving, from a client device, image data including at least one subject; extracting, from the image data, a face of the at least one subject and an object interacting with the face, wherein the object may be glasses worn by the subject; generating a set of face primitives based on the face, the set of face primitives comprising geometry and appearance information; generating a set of object primitives based on a set of latent codes for the object; generating an appearance model of photometric interactions between the face and the object; and rendering an avatar in the virtual environment based on the appearance model, the set of face primitives, and the set of object primitives.
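As a rough illustration of the decomposition this abstract names (face primitives, object primitives decoded from latent codes, and a photometric interaction term), the sketch below assumes a toy `GlassesAvatarModel` with made-up layer sizes; the real networks and renderer are not described at this level of detail in the abstract.

```python
# Toy face/object primitive pipeline; every name and dimension is an assumption.
import torch
import torch.nn as nn

class GlassesAvatarModel(nn.Module):
    """Face primitives + object (glasses) primitives + a photometric interaction term."""
    def __init__(self, num_face_prims=256, num_obj_prims=64, prim_dim=32, latent_dim=64):
        super().__init__()
        self.face_encoder = nn.Sequential(                 # face crop -> primitive parameters
            nn.Flatten(), nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
            nn.Linear(512, num_face_prims * (9 + prim_dim)),
        )
        self.object_decoder = nn.Linear(                   # latent code -> object primitives
            latent_dim, num_obj_prims * (9 + prim_dim))
        self.interaction = nn.Sequential(                  # joint appearance of face and object
            nn.Linear(num_face_prims * (9 + prim_dim) + num_obj_prims * (9 + prim_dim), 256),
            nn.ReLU(),
            nn.Linear(256, num_face_prims * prim_dim),
        )

    def forward(self, face_crop: torch.Tensor, object_latent: torch.Tensor):
        face_prims = self.face_encoder(face_crop)          # geometry + appearance per primitive
        object_prims = self.object_decoder(object_latent)  # glasses primitives from latent codes
        appearance = self.interaction(                     # photometric face/object interaction
            torch.cat([face_prims, object_prims], dim=-1))
        return face_prims, object_prims, appearance

# Shapes only; a real system would rasterize or ray-march the primitives into an avatar.
model = GlassesAvatarModel()
face_crop = torch.randn(1, 3, 64, 64)
object_latent = torch.randn(1, 64)
face_prims, object_prims, appearance = model(face_crop, object_latent)
```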
-
Publication Number: US11989846B2
Publication Date: 2024-05-21
Application Number: US17554992
Application Date: 2021-12-17
Applicant: Meta Platforms Technologies, LLC
Inventor: Stephen Anthony Lombardi , Tomas Simon Kreuz , Jason Saragih , Gabriel Bailowitz Schwartz , Michael Zollhoefer , Yaser Sheikh
CPC classification number: G06T19/20 , G06T15/06 , G06T17/20 , G06T2219/2012
Abstract: A method for training a real-time model for animating an avatar of a subject is provided. The method includes collecting multiple images of a subject. The method also includes selecting a plurality of vertex positions in a guide mesh indicative of a volumetric primitive enveloping the subject, determining a geometric attribute for the volumetric primitive including a position, a rotation, and a scale factor of the volumetric primitive, determining a payload attribute for the volumetric primitive, the payload attribute including a color value and an opacity value for each voxel in a voxel grid defining the volumetric primitive, determining a loss factor for each point in the volumetric primitive based on the geometric attribute, the payload attribute, and a ground truth value, and updating a three-dimensional model for the subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
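A minimal sketch of the attributes listed in this abstract, assuming a toy `VolumetricPrimitives` container with invented shapes: per-primitive geometry (position, rotation, scale) anchored to guide-mesh vertices, an RGBA voxel payload, and a loss against a ground-truth target. The "rendering" here is a stand-in mean over the payload, not the ray-marching a real system would use.

```python
# Toy volumetric-primitive attributes and training step; shapes are illustrative.
import torch
import torch.nn as nn

class VolumetricPrimitives(nn.Module):
    """Per-primitive geometry (position, rotation, scale) plus a color/opacity voxel payload."""
    def __init__(self, num_prims=128, voxel_res=8):
        super().__init__()
        self.position = nn.Parameter(torch.zeros(num_prims, 3))   # offsets from guide-mesh vertices
        self.rotation = nn.Parameter(torch.zeros(num_prims, 3))   # axis-angle rotation
        self.scale = nn.Parameter(torch.ones(num_prims, 3))
        # Payload: color (3 channels) + opacity (1 channel) per voxel of each primitive's grid.
        self.payload = nn.Parameter(torch.rand(num_prims, 4, voxel_res, voxel_res, voxel_res))

def loss_for_points(prims, guide_vertices, gt_rgba):
    """Placeholder loss comparing a crude payload summary against ground-truth RGBA targets."""
    anchored = guide_vertices + prims.position        # primitives anchored to guide-mesh vertices
    rendered = prims.payload.mean(dim=(2, 3, 4))      # (num_prims, 4) stand-in for real rendering
    geometry_reg = (anchored - guide_vertices).pow(2).mean()  # keep offsets small
    return torch.mean((rendered - gt_rgba) ** 2) + 1e-3 * geometry_reg

prims = VolumetricPrimitives()
guide_vertices = torch.randn(128, 3)                  # vertex positions selected from the guide mesh
gt_rgba = torch.rand(128, 4)                          # ground-truth color/opacity targets
optimizer = torch.optim.Adam(prims.parameters(), lr=1e-2)
loss = loss_for_points(prims, guide_vertices, gt_rgba)
loss.backward()
optimizer.step()                                      # updates the three-dimensional model
```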
-
Publication Number: US12190428B2
Publication Date: 2025-01-07
Application Number: US18333647
Application Date: 2023-06-13
Applicant: Meta Platforms Technologies, LLC
Inventor: Jason Saragih , Stephen Anthony Lombardi , Shunsuke Saito , Tomas Simon Kreuz , Shih-En Wei , Kevyn Alex Anthony McPhail , Yaser Sheikh , Sai Bi
Abstract: A method for providing a relightable avatar of a subject to a virtual reality application is provided. The method includes retrieving multiple images including multiple views of a subject and generating an expression-dependent texture map and a view-dependent texture map for the subject, based on the images. The method also includes generating, based on the expression-dependent texture map and the view-dependent texture map, a view of the subject illuminated by a light source selected from an environment in an immersive reality application, and providing the view of the subject to an immersive reality application running on a client device. A non-transitory, computer-readable medium storing instructions and a system that executes the instructions to perform the above method are also provided.
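The relighting step described here can be sketched with a hypothetical `RelightableTextures` module: an expression-dependent and a view-dependent texture map are decoded, combined, and shaded by a single light selected from the environment. The Lambertian-style shading term and all layer sizes below are assumptions, not the patent's method.

```python
# Toy expression- and view-dependent texture decoders with single-light relighting.
import torch
import torch.nn as nn

class RelightableTextures(nn.Module):
    """Decodes two texture maps and relights their combination under one environment light."""
    def __init__(self, expr_dim=32, tex_res=64):
        super().__init__()
        self.expr_tex = nn.Linear(expr_dim, 3 * tex_res * tex_res)   # expression-dependent texture
        self.view_tex = nn.Linear(3, 3 * tex_res * tex_res)          # view-dependent texture
        self.tex_res = tex_res

    def forward(self, expression, view_dir, light_dir, light_rgb):
        tex_e = self.expr_tex(expression).view(-1, 3, self.tex_res, self.tex_res)
        tex_v = self.view_tex(view_dir).view(-1, 3, self.tex_res, self.tex_res)
        albedo = torch.sigmoid(tex_e + tex_v)
        # Crude shading by a single light selected from the environment; the dot product
        # here is a constant stand-in for a per-texel normal/light term.
        shading = torch.clamp((view_dir * light_dir).sum(-1, keepdim=True), min=0.0)
        return albedo * shading.view(-1, 1, 1, 1) * light_rgb.view(-1, 3, 1, 1)

model = RelightableTextures()
expression = torch.randn(1, 32)
view_dir = torch.tensor([[0.0, 0.0, 1.0]])
light_dir = torch.tensor([[0.5, 0.5, 0.707]])
light_rgb = torch.tensor([[1.0, 0.9, 0.8]])
relit_view = model(expression, view_dir, light_dir, light_rgb)   # (1, 3, 64, 64) relit texture
```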
-
Publication Number: US12159339B2
Publication Date: 2024-12-03
Application Number: US18462310
Application Date: 2023-09-06
Applicant: Meta Platforms Technologies, LLC
Inventor: Alexander Richard , Michael Zollhoefer , Fernando De la Torre , Yaser Sheikh
Abstract: A method for training a three-dimensional face animation model from speech is provided. The method includes determining a first correlation value for a facial feature based on an audio waveform from a first subject, generating a first mesh for a lower portion of a human face based on the facial feature and the first correlation value, updating the first correlation value when a difference between the first mesh and a ground truth image of the first subject is greater than a pre-selected threshold, and providing a three-dimensional model of the human face animated by speech to an immersive reality application accessed by a client device, based on the difference between the first mesh and the ground truth image of the first subject. A non-transitory, computer-readable medium storing instructions to cause a system to perform the above method, and the system, are also provided.
-
Publication Number: US11734888B2
Publication Date: 2023-08-22
Application Number: US17396534
Application Date: 2021-08-06
Applicant: Meta Platforms Technologies, LLC
Inventor: Chen Cao , Vasu Agrawal , Fernando De la Torre , Lele Chen , Jason Saragih , Tomas Simon Kreuz , Yaser Sheikh
CPC classification number: G06T17/20 , G06N3/045 , G06T7/586 , G06T7/73 , G06T13/40 , G06T15/04 , G06V40/165 , G06V40/176
Abstract: A method for providing real-time three-dimensional facial animation from video is provided. The method includes collecting images of a subject, and forming a three-dimensional mesh for the subject based on a facial expression factor and a head pose of the subject extracted from the images of the subject. The method also includes forming a texture transformation based on an illumination parameter associated with an illumination configuration for the images from the subject, forming a three-dimensional model for the subject based on the three-dimensional mesh and the texture transformation, determining a loss factor based on selected points in a test image from the subject and a rendition of the test image by the three-dimensional model, and updating the three-dimensional model according to the loss factor. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
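A compact sketch of the fitting loop this abstract describes, assuming a hypothetical `VideoAvatarFitter`: an expression factor and head pose drive a mesh, an illumination parameter drives a texture transformation, and a loss on selected points compares the rendition against a test image. All module names and dimensions are illustrative, not the patent's implementation.

```python
# Toy expression/pose-driven mesh plus illumination-driven texture, with a keypoint loss.
import torch
import torch.nn as nn

class VideoAvatarFitter(nn.Module):
    """Expression + head pose -> mesh; illumination parameter -> texture transformation."""
    def __init__(self, expr_dim=64, num_vertices=5000, tex_res=64):
        super().__init__()
        self.mesh_decoder = nn.Linear(expr_dim + 6, num_vertices * 3)   # expression + 6-DoF head pose
        self.tex_decoder = nn.Linear(9, 3 * tex_res * tex_res)          # e.g. low-order lighting params
        self.num_vertices, self.tex_res = num_vertices, tex_res

    def forward(self, expression, head_pose, illumination):
        mesh = self.mesh_decoder(torch.cat([expression, head_pose], dim=-1))
        mesh = mesh.view(-1, self.num_vertices, 3)
        texture = self.tex_decoder(illumination).view(-1, 3, self.tex_res, self.tex_res)
        return mesh, texture

def keypoint_loss(mesh, selected_vertex_ids, target_points):
    """Loss on a handful of selected points, standing in for the test-image comparison."""
    return torch.mean((mesh[:, selected_vertex_ids] - target_points) ** 2)

model = VideoAvatarFitter()
expression = torch.randn(1, 64)
head_pose = torch.randn(1, 6)
illumination = torch.randn(1, 9)
mesh, texture = model(expression, head_pose, illumination)
loss = keypoint_loss(mesh, torch.tensor([0, 10, 20]), torch.randn(1, 3, 3))
loss.backward()    # gradients would then update the three-dimensional model
```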
-
Publication Number: US20230245365A1
Publication Date: 2023-08-03
Application Number: US18074346
Application Date: 2022-12-02
Applicant: Meta Platforms Technologies, LLC
Inventor: Chen Cao , Stuart Anderson , Tomas Simon Kreuz , Jin Kyu Kim , Gabriel Bailowitz Schwartz , Michael Zollhoefer , Shunsuke Saito , Stephen Anthony Lombardi , Shih-En Wei , Danielle Belko , Shoou-I Yu , Yaser Sheikh , Jason Saragih
IPC: G06T13/40
CPC classification number: G06T13/40
Abstract: A method for generating a subject avatar using a mobile phone scan is provided. The method includes receiving, from a mobile device, multiple images of a first subject, extracting multiple image features from the images of the first subject based on a set of learnable weights, inferring a three-dimensional model of the first subject from the image features and an existing three-dimensional model of a second subject, animating the three-dimensional model of the first subject based on an immersive reality application running on a headset used by a viewer, and providing, to a display on the headset, an image of the three-dimensional model of the first subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method, are also provided.
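The phone-scan personalization flow can be sketched as below, assuming a hypothetical `PhoneScanPersonalizer`: image features extracted with learnable weights adapt an existing "universal" decoder (the second subject's model) to the first subject, whose mesh is then animated by an expression code from the immersive application. Every name and dimension here is an illustrative stand-in, not the patent's architecture.

```python
# Toy personalization of an existing avatar decoder from a mobile phone scan.
import torch
import torch.nn as nn

class PhoneScanPersonalizer(nn.Module):
    """Learnable-weight image features adapt a prior decoder to a newly scanned subject."""
    def __init__(self, feat_dim=256, expr_dim=64, num_vertices=5000):
        super().__init__()
        self.feature_extractor = nn.Sequential(                          # learnable-weight features
            nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        self.universal_decoder = nn.Linear(expr_dim, num_vertices * 3)   # prior from a second subject
        self.personal_offset = nn.Linear(feat_dim, num_vertices * 3)     # subject-specific correction
        self.num_vertices = num_vertices

    def forward(self, phone_images, expression):
        feats = self.feature_extractor(phone_images).mean(dim=0, keepdim=True)  # pool over scan frames
        base = self.universal_decoder(expression)               # generic animation from the prior
        offset = self.personal_offset(feats)                    # identity inferred from the scan
        return (base + offset).view(-1, self.num_vertices, 3)

model = PhoneScanPersonalizer()
phone_images = torch.randn(20, 3, 64, 64)         # frames from a mobile phone scan
expression = torch.randn(1, 64)                   # driven by the immersive application
animated_mesh = model(phone_images, expression)   # (1, 5000, 3), ready to display on the headset
```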
-
Publication Number: US20220358719A1
Publication Date: 2022-11-10
Application Number: US17396534
Application Date: 2021-08-06
Applicant: Meta Platforms Technologies, LLC
Inventor: Chen Cao , Vasu Agrawal , Fernando De la Torre , Lele Chen , Jason Saragih , Tomas Simon Kreuz , Yaser Sheikh
Abstract: A method for providing real-time three-dimensional facial animation from video is provided. The method includes collecting images of a subject, and forming a three-dimensional mesh for the subject based on a facial expression factor and a head pose of the subject extracted from the images of the subject. The method also includes forming a texture transformation based on an illumination parameter associated with an illumination configuration for the images from the subject, forming a three-dimensional model for the subject based on the three-dimensional mesh and the texture transformation, determining a loss factor based on selected points in a test image from the subject and a rendition of the test image by the three-dimensional model, and updating the three-dimensional model according to the loss factor. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
-
Publication Number: US20230419579A1
Publication Date: 2023-12-28
Application Number: US18462310
Application Date: 2023-09-06
Applicant: Meta Platforms Technologies, LLC
Inventor: Alexander Richard , Michael Zollhoefer , Fernando De la Torre , Yaser Sheikh
CPC classification number: G06T13/205 , G06T13/40 , G06T17/20 , G06T19/006 , G10L21/14 , G10L2021/105
Abstract: A method for training a three-dimensional face animation model from speech is provided. The method includes determining a first correlation value for a facial feature based on an audio waveform from a first subject, generating a first mesh for a lower portion of a human face based on the facial feature and the first correlation value, updating the first correlation value when a difference between the first mesh and a ground truth image of the first subject is greater than a pre-selected threshold, and providing a three-dimensional model of the human face animated by speech to an immersive reality application accessed by a client device, based on the difference between the first mesh and the ground truth image of the first subject. A non-transitory, computer-readable medium storing instructions to cause a system to perform the above method, and the system, are also provided.
-
Publication Number: US20230326112A1
Publication Date: 2023-10-12
Application Number: US18333647
Application Date: 2023-06-13
Applicant: Meta Platforms Technologies, LLC
Inventor: Jason Saragih , Stephen Anthony Lombardi , Shunsuke Saito , Tomas Simon Kreuz , Shih-En Wei , Kevyn Alex Anthony McPhail , Yaser Sheikh , Sai Bi
CPC classification number: G06T13/40 , G06T15/04 , G06T15/60 , G06T15/506 , G06T11/001 , G06T2215/12
Abstract: A method for providing a relightable avatar of a subject to a virtual reality application is provided. The method includes retrieving multiple images including multiple views of a subject and generating an expression-dependent texture map and a view-dependent texture map for the subject, based on the images. The method also includes generating, based on the expression-dependent texture map and the view-dependent texture map, a view of the subject illuminated by a light source selected from an environment in an immersive reality application, and providing the view of the subject to an immersive reality application running on a client device. A non-transitory, computer-readable medium storing instructions and a system that executes the instructions to perform the above method are also provided.