-
Publication Number: US20240054709A1
Publication Date: 2024-02-15
Application Number: US18482634
Filing Date: 2023-10-06
Applicant: Snap Inc.
Inventor: Gurunandan Krishnan Gorumkonda , Hsin-Ying Lee , Jie Xu
CPC classification number: G06T13/205 , G06N3/08 , G06T13/40 , G06T13/80 , G06N3/044 , G06N3/045 , G10H2210/031
Abstract: Example methods for generating an animated character in dance poses to music may include generating, by at least one processor, a music input signal based on an acoustic signal associated with the music, and receiving, by the at least one processor, a model output signal from an encoding neural network. Current generated pose data is generated using a decoding neural network, the current generated pose data being based on previous generated pose data of a previous generated pose, the music input signal, and the model output signal. An animated character is generated based on the current generated pose data, and the animated character is caused to be displayed by a display device.
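The autoregressive decoding loop described in this abstract can be sketched roughly as follows. This is a minimal illustration only: the module types (a GRU cell), the feature dimensions, and all names are assumptions, not the patented architecture.

```python
# Minimal sketch of the described encoder/decoder pose-generation loop.
# Dimensions, the GRU cell, and all names are illustrative assumptions.
import torch
import torch.nn as nn

POSE_DIM, MUSIC_DIM, LATENT_DIM = 69, 438, 256  # assumed feature sizes

class PoseDecoder(nn.Module):
    """Generates the current pose from the previous generated pose, the music
    input signal, and the model output signal from the encoding network."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRUCell(POSE_DIM + MUSIC_DIM + LATENT_DIM, 512)
        self.out = nn.Linear(512, POSE_DIM)

    def forward(self, prev_pose, music_feat, model_output, hidden):
        x = torch.cat([prev_pose, music_feat, model_output], dim=-1)
        hidden = self.rnn(x, hidden)
        return self.out(hidden), hidden

# Roll the decoder forward one frame at a time over the music input signal.
decoder = PoseDecoder()
hidden = torch.zeros(1, 512)
prev_pose = torch.zeros(1, POSE_DIM)               # initial pose
model_output = torch.randn(1, LATENT_DIM)          # stand-in encoder output
for music_feat in torch.randn(30, 1, MUSIC_DIM):   # 30 frames of music features
    prev_pose, hidden = decoder(prev_pose, music_feat, model_output, hidden)
```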
-
Publication Number: US20240029346A1
Publication Date: 2024-01-25
Application Number: US17814063
Filing Date: 2022-07-21
Applicant: Snap Inc.
Inventor: Zeng Huang , Menglei Chai , Sergey Tulyakov , Kyle Olszewski , Hsin-Ying Lee
CPC classification number: G06T17/00 , G06T15/04 , G06T2207/10028
Abstract: A system enables 3D hair reconstruction and rendering from a single reference image by performing a multi-stage process that utilizes both a 3D implicit representation and a 2D parametric embedding space.
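One way to read the combination of a 3D implicit representation with a 2D parametric embedding space is sketched below. The MLP shape, the occupancy output, and the embedding-grid lookup are assumptions made for illustration; the abstract does not specify these details.

```python
# Minimal sketch: an implicit field conditioned on codes drawn from a 2D
# parametric embedding grid. All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class ImplicitHairField(nn.Module):
    """Maps a 3D query point plus a code from a 2D embedding grid to an
    occupancy value (1 = inside the hair volume)."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.embed = nn.Parameter(torch.randn(32, 32, embed_dim))  # 2D embedding grid
        self.mlp = nn.Sequential(
            nn.Linear(3 + embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, points, uv):
        # points: (N, 3) 3D query points; uv: (N, 2) indices into the 2D grid
        codes = self.embed[uv[:, 0], uv[:, 1]]
        return self.mlp(torch.cat([points, codes], dim=-1))

field = ImplicitHairField()
pts = torch.rand(1024, 3)               # sampled 3D query points
uv = torch.randint(0, 32, (1024, 2))    # assumed 2D embedding lookups
occupancy = field(pts, uv)              # (1024, 1) occupancy predictions
```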
-
Publication Number: US11816773B2
Publication Date: 2023-11-14
Application Number: US17487558
Filing Date: 2021-09-28
Applicant: Snap Inc.
Inventor: Gurunandan Krishnan Gorumkonda , Hsin-Ying Lee , Jie Xu
CPC classification number: G06T13/205 , G06N3/044 , G06N3/045 , G06N3/08 , G06T13/40 , G06T13/80 , G10H2210/031
Abstract: Example methods for generating an animated character in dance poses to music may include generating, by at least one processor, a music input signal based on an acoustic signal associated with the music, and receiving, by the at least one processor, a model output signal from an encoding neural network. Current generated pose data is generated using a decoding neural network, the current generated pose data being based on previous generated pose data of a previous generated pose, the music input signal, and the model output signal. An animated character is generated based on the current generated pose data, and the animated character is caused to be displayed by a display device.
-
Publication Number: US20230386158A1
Publication Date: 2023-11-30
Application Number: US17814391
Filing Date: 2022-07-22
Applicant: Snap Inc.
Inventor: Menglei Chai , Sergey Tulyakov , Jian Ren , Hsin-Ying Lee , Kyle Olszewski , Zeng Huang , Zezhou Cheng
CPC classification number: G06T19/20 , G06T17/00 , G06T2219/2012 , G06T2219/2021
Abstract: Systems, computer-readable media, and methods herein describe an editing system in which a three-dimensional (3D) object can be edited by editing a 2D sketch or 2D RGB views of the 3D object. The editing system uses multi-modal variational auto-decoders (MM-VADs) trained with a shared latent space, which enables 3D objects to be edited by editing 2D sketches of those objects. The system determines a latent code that corresponds to the edited or newly sketched 2D sketch, and the MM-VADs then generate a 3D object with that latent code as input. The latent space is divided into a latent space for shapes and a latent space for colors. The MM-VADs are trained with variational auto-encoders (VAEs) and ground truth data.
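The shared-latent editing step can be sketched as a small optimization loop: find the latent code whose decoded 2D sketch matches the edited sketch, then decode a 3D object from that same code. The decoder architectures, sizes, and the use of a voxel output below are assumptions for illustration, not the patented method.

```python
# Minimal sketch of shared-latent editing: optimize a latent code against an
# edited 2D sketch, then decode a 3D object from the same code. Architectures
# and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_SHAPE = 128   # shape part of the latent space (color part omitted here)

sketch_decoder = nn.Sequential(          # latent -> flattened 64x64 sketch
    nn.Linear(LATENT_SHAPE, 512), nn.ReLU(), nn.Linear(512, 64 * 64), nn.Sigmoid())
shape_decoder = nn.Sequential(           # latent -> flattened 32^3 occupancy grid
    nn.Linear(LATENT_SHAPE, 512), nn.ReLU(), nn.Linear(512, 32 ** 3), nn.Sigmoid())

edited_sketch = torch.rand(1, 64 * 64)   # stand-in for the user's edited 2D sketch
z_shape = torch.zeros(1, LATENT_SHAPE, requires_grad=True)
optimizer = torch.optim.Adam([z_shape], lr=0.05)

# Find the latent code whose decoded sketch best matches the edited sketch.
for _ in range(200):
    optimizer.zero_grad()
    loss = nn.functional.binary_cross_entropy(sketch_decoder(z_shape), edited_sketch)
    loss.backward()
    optimizer.step()

# The same code, fed to the 3D decoder, yields the edited 3D object.
with torch.no_grad():
    voxels = shape_decoder(z_shape).reshape(32, 32, 32)
```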
-
Publication Number: US12094073B2
Publication Date: 2024-09-17
Application Number: US17814391
Filing Date: 2022-07-22
Applicant: Snap Inc.
Inventor: Menglei Chai , Sergey Tulyakov , Jian Ren , Hsin-Ying Lee , Kyle Olszewski , Zeng Huang , Zezhou Cheng
CPC classification number: G06T19/20 , G06T17/00 , G06T2219/2012 , G06T2219/2021
Abstract: Systems, computer-readable media, and methods herein describe an editing system in which a three-dimensional (3D) object can be edited by editing a 2D sketch or 2D RGB views of the 3D object. The editing system uses multi-modal variational auto-decoders (MM-VADs) trained with a shared latent space, which enables 3D objects to be edited by editing 2D sketches of those objects. The system determines a latent code that corresponds to the edited or newly sketched 2D sketch, and the MM-VADs then generate a 3D object with that latent code as input. The latent space is divided into a latent space for shapes and a latent space for colors. The MM-VADs are trained with variational auto-encoders (VAEs) and ground truth data.