-
Publication No.: WO2022208440A1
Publication Date: 2022-10-06
Application No.: PCT/IB2022/053034
Application Date: 2022-03-31
Applicant: SONY GROUP CORPORATION , SONY CORPORATION OF AMERICA
Inventor: ZHANG, Qing , XIAO, Hanyuan
IPC: G06V40/10 , G06V10/82 , G06T15/08 , G06T19/20 , G06T13/40 , G06T17/00 , G06T2207/20084 , G06T2207/30196 , G06T2210/16 , G06T7/55 , G06T7/70 , G06V40/103
Abstract: A neural human performance capture framework (MVS-PERF) captures the skeleton, body shape, clothes displacement, and appearance of a person from a set of calibrated multiview images. It addresses the ambiguity of predicting absolute position in monocular human mesh recovery, and bridges the volumetric representation of NeRF to animation-friendly performance capture. MVS-PERF includes three modules: one to extract feature maps from multiview images and fuse them into a feature volume; one to regress the feature volume to a naked-human parameter vector, generating an SMPL-X skin-tight body mesh with a skeletal pose, body shape, and expression; and one to leverage a neural radiance field and a deformation field to infer the clothes as a displacement on the naked body using differentiable rendering. The clothed body mesh is obtained by adding the interpolated displacement vectors to the SMPL-X skin-tight body mesh vertices. The obtained radiance field is used for free-view volumetric rendering of the input subject.
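The abstract's final mesh step, adding interpolated clothes-displacement vectors to the SMPL-X skin-tight vertices, can be sketched as follows. This is a minimal illustration with NumPy, assuming per-vertex `(N, 3)` arrays; the function name and toy data are hypothetical and not from the patent.

```python
import numpy as np

def clothe_body_mesh(naked_vertices: np.ndarray,
                     displacements: np.ndarray) -> np.ndarray:
    """Add per-vertex clothes-displacement vectors (e.g. interpolated
    from a deformation field) to skin-tight body mesh vertices.

    naked_vertices: (N, 3) SMPL-X skin-tight vertex positions.
    displacements:  (N, 3) clothes displacement per vertex.
    Returns the (N, 3) clothed body mesh vertices.
    """
    return naked_vertices + displacements

# Toy example: 4 vertices, each displaced 1 cm along x.
naked = np.zeros((4, 3))
disp = np.tile([0.01, 0.0, 0.0], (4, 1))
clothed = clothe_body_mesh(naked, disp)
```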
-
Publication No.: WO2021202803A1
Publication Date: 2021-10-07
Application No.: PCT/US2021/025263
Application Date: 2021-03-31
Applicant: SONY GROUP CORPORATION , TASHIRO, Kenji , ZHANG, Qing
Inventor: TASHIRO, Kenji , ZHANG, Qing
IPC: G06T13/20 , G06T7/20 , G06N20/00 , G06K9/00342 , G06K9/00369 , G06T13/40 , G06T17/20
Abstract: Mesh-tracking based dynamic 4D modeling for machine learning deformation training includes: using a volumetric capture system for high-quality 4D scanning, using mesh-tracking to establish temporal correspondences across a 4D scanned human face and full-body mesh sequence, using mesh registration to establish spatial correspondences between a 4D scanned human face and full-body mesh and a 3D CG physical simulator, and training surface deformation as a delta from the physical simulator using machine learning. The deformation for natural animation can then be predicted and synthesized using the standard MoCAP animation workflow. Machine learning based deformation synthesis and animation using the standard MoCAP animation workflow includes using single-view or multi-view 2D videos of MoCAP actors as input, solving 3D model parameters ("3D solving") for animation (deformation not included), and, given the solved 3D model parameters, predicting 4D surface deformation from the ML training.
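The two-stage synthesis described above, solve 3D model parameters first, then add an ML-predicted surface delta on top of the simulator's output, can be sketched as below. All function names and the toy stand-in models are hypothetical illustrations, not the patent's implementation.

```python
import numpy as np

def animate_with_learned_deformation(pose_params, base_mesh_fn, delta_fn):
    """Hypothetical two-stage pipeline: a physical simulator / base rig
    produces vertices from solved 3D parameters, then a trained model
    adds a learned per-vertex delta on top.

    pose_params:  solved 3D model parameters (from "3D solving").
    base_mesh_fn: pose -> (N, 3) simulator/rig vertices.
    delta_fn:     pose -> (N, 3) ML-predicted surface deformation delta.
    """
    base_vertices = base_mesh_fn(pose_params)  # simulator output
    delta = delta_fn(pose_params)              # learned residual
    return base_vertices + delta

# Toy stand-ins: a translated flat mesh and a pose-scaled residual.
def base_mesh(pose):
    return np.zeros((5, 3)) + pose[0]

def learned_delta(pose):
    return np.full((5, 3), 0.1 * pose[1])

verts = animate_with_learned_deformation(np.array([1.0, 2.0]),
                                         base_mesh, learned_delta)
```

Training the delta (rather than absolute vertex positions) keeps the learning target small and centered near zero, which is the point of the "delta from the physical simulator" formulation.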
-