-
Publication No.: US20240020915A1
Publication Date: 2024-01-18
Application No.: US18353213
Filing Date: 2023-07-17
Applicant: Google LLC
Inventor: Yinda Zhang , Feitong Tan , Sean Ryan Francesco Fanello , Abhimitra Meka , Sergio Orts Escolano , Danhang Tang , Rohit Kumar Pandey , Jonathan James Taylor
Abstract: Techniques include introducing a neural generator configured to produce novel faces that can be rendered at free camera viewpoints (e.g., at any angle with respect to the camera) and relit under an arbitrary high dynamic range (HDR) light map. A neural implicit intrinsic field takes a randomly sampled latent vector as input and produces as output per-point albedo, volume density, and reflectance properties for any queried 3D location. These outputs are aggregated via volumetric rendering to produce low-resolution albedo, diffuse shading, specular shading, and neural feature maps. The low-resolution maps are then upsampled to produce high-resolution maps, which are input to a neural renderer to produce relit images.
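The aggregation step in this abstract follows the standard volume-rendering quadrature: per-sample densities along a ray are converted to opacities, transmittance-weighted, and used to blend per-point quantities such as albedo. The sketch below (NumPy; function and variable names are illustrative, not taken from the patent) shows that aggregation for a single ray:

```python
import numpy as np

def volumetric_aggregate(densities, values, deltas):
    """Blend per-sample values along one ray with standard volume
    rendering weights (illustrative sketch, not the patented method)."""
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance up to sample i: T_i = prod_{j < i} (1 - alpha_j)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    # Weighted sum gives the aggregated per-ray quantity (e.g. albedo)
    return (weights[:, None] * values).sum(axis=0)

# Example: 4 samples along one ray, each with an RGB albedo
dens = np.array([0.0, 0.5, 2.0, 0.1])       # volume densities
alb = np.array([[0.9, 0.1, 0.1]] * 4)       # per-sample albedo
delta = np.full(4, 0.25)                    # segment lengths
rgb = volumetric_aggregate(dens, alb, delta)
```

Running the same aggregation over density, albedo, and shading samples is what yields the separate low-resolution albedo, diffuse, and specular maps the abstract describes.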
-
Publication No.: US11868583B2
Publication Date: 2024-01-09
Application No.: US17656818
Filing Date: 2022-03-28
Applicant: Google LLC
Inventor: Ruofei Du , Alex Olwal , Mathieu Simon Le Goc , David Kim , Danhang Tang
IPC: G06F3/04815 , G02B27/01 , G10L15/22
CPC classification number: G06F3/04815 , G02B27/0172 , G02B27/0176 , G10L15/22 , G02B2027/0178 , G10L2015/223
Abstract: Systems and methods are provided in which physical objects in the ambient environment can function as user interface implements in an augmented reality environment. A physical object detected within a field of view of a camera of a computing device may be designated as a user interface implement in response to a user command. User interfaces may be attached to the designated physical object, to provide a tangible user interface implement for user interaction with the augmented reality environment.
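The flow described above — detect a physical object, designate it as a UI implement only on an explicit user command, then attach virtual UI elements so they track the object — can be sketched as a small data structure. All class, method, and command names here are hypothetical illustrations, not APIs from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    """A physical object found in the camera's field of view."""
    object_id: str
    pose: tuple  # (x, y, z) position in world space

@dataclass
class ARSession:
    """Minimal sketch of designating physical objects as UI implements."""
    implements: dict = field(default_factory=dict)

    def designate(self, obj: DetectedObject, command: str) -> bool:
        # Designation happens only in response to an explicit user command.
        if command.strip().lower() != "use this":
            return False
        self.implements[obj.object_id] = {"pose": obj.pose, "ui": []}
        return True

    def attach_ui(self, object_id: str, panel: str) -> None:
        # Attach a virtual UI panel so it renders anchored to the object.
        self.implements[object_id]["ui"].append(panel)

# Example: a detected mug becomes a tangible media-controls implement
obj = DetectedObject("mug-01", (0.2, 0.0, 0.5))
session = ARSession()
ok = session.designate(obj, "use this")
session.attach_ui("mug-01", "media-controls")
```

In a real system the designation command would come from the speech pipeline (note the G10L15/22 classification) and the pose would be updated per frame by the tracker.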
-
Publication No.: US20220065620A1
Publication Date: 2022-03-03
Application No.: US17413847
Filing Date: 2020-11-11
Applicant: Google LLC
Inventor: Sean Ryan Francesco Fanello , Kaiwen Guo , Peter Christopher Lincoln , Philip Lindsley Davidson , Jessica L. Busch , Xueming Yu , Geoffrey Harvey , Sergio Orts Escolano , Rohit Kumar Pandey , Jason Dourgarian , Danhang Tang , Adarsh Prakash Murthy Kowdle , Emily B. Cooper , Mingsong Dou , Graham Fyffe , Christoph Rhemann , Jonathan James Taylor , Shahram Izadi , Paul Ernest Debevec
IPC: G01B11/25 , G06T15/50 , G01B11/245 , G06T17/20
Abstract: A lighting stage includes a plurality of lights that project alternating spherical color gradient illumination patterns onto an object or human performer at a predetermined frequency. The lighting stage also includes a plurality of cameras that capture images of the object or human performer corresponding to the alternating spherical color gradient illumination patterns. The lighting stage also includes a plurality of depth sensors that capture depth maps of the object or human performer at the predetermined frequency. The lighting stage also includes (or is associated with) one or more processors that implement a machine learning algorithm to produce a three-dimensional (3D) model of the object or human performer. The 3D model includes relighting parameters used to relight the 3D model under different lighting conditions.
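A useful property of alternating spherical gradient illumination, which motivates capturing the pattern and its complement, is that the sum of the two images gives full (constant) illumination while their difference encodes albedo-weighted surface orientation. The sketch below shows this standard gradient-illumination relation in simplified form (a stand-in for, not the full pipeline of, the patented stage):

```python
import numpy as np

def normals_from_gradient_pairs(img_grad, img_inv):
    """Estimate per-pixel surface normals from a spherical gradient
    illumination image and its complement (simplified sketch)."""
    # Inputs: H x W x 3 arrays; channels encode gradients along x, y, z.
    s = img_grad + img_inv          # sum ~ albedo under full illumination
    d = img_grad - img_inv          # difference ~ albedo-weighted normal
    n = d / np.maximum(s, 1e-8)     # divide out the albedo
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, 1e-8)

# Synthetic check: a flat patch with albedo 0.5 facing the camera (+z)
a = 0.5
true_n = np.array([0.0, 0.0, 1.0])
img_g = np.ones((2, 2, 3)) * (a * (1 + true_n) / 2)  # gradient pattern
img_i = np.ones((2, 2, 3)) * (a * (1 - true_n) / 2)  # complement pattern
n_est = normals_from_gradient_pairs(img_g, img_i)
```

Interleaving the patterns at a predetermined frequency, as the abstract describes, lets the cameras and depth sensors capture both states of the alternation for every instant of a performance.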
-
Publication No.: US11030773B2
Publication Date: 2021-06-08
Application No.: US16798881
Filing Date: 2020-02-24
Applicant: Google LLC
Inventor: Jonathan James Taylor , Vladimir Tankovich , Danhang Tang , Cem Keskin , Adarsh Prakash Murthy Kowdle , Philip L. Davidson , Shahram Izadi , David Kim
Abstract: An electronic device estimates a pose of a hand by volumetrically deforming a signed distance field using a skinned tetrahedral mesh to locate a local minimum of an energy function, wherein the local minimum corresponds to the hand pose. The electronic device identifies a pose of the hand by fitting an implicit surface model of a hand to the pixels of a depth image that correspond to the hand. The electronic device uses a skinned tetrahedral mesh to warp space from a base pose to a deformed pose to define an articulated signed distance field from which candidate poses of the hand are derived. The electronic device then minimizes an energy function based on the distance of each corresponding pixel to identify the candidate pose that most closely approximates the pose of the hand.
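The candidate-selection step can be illustrated with a toy model: the energy of a candidate pose is the sum of squared signed distances from the depth pixels to the pose's implicit surface, and the best candidate minimizes that energy. The sketch below substitutes a sphere SDF for the patented articulated tetrahedral-mesh SDF, so it shows only the energy-minimization pattern, not the actual hand model:

```python
import numpy as np

def sphere_sdf(points, center, radius=1.0):
    """Signed distance to a sphere; a toy stand-in for the articulated SDF."""
    return np.linalg.norm(points - center, axis=-1) - radius

def best_candidate_pose(points, candidates):
    """Pick the candidate pose whose implicit surface best explains the
    depth pixels, i.e. minimizes the sum of squared signed distances."""
    energies = [np.sum(sphere_sdf(points, c) ** 2) for c in candidates]
    return candidates[int(np.argmin(energies))]

# Depth pixels sampled exactly on a unit sphere centered at (0.5, 0, 0)
rng = np.random.default_rng(0)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([0.5, 0.0, 0.0]) + dirs

# Three candidate "poses" (here just sphere centers); the middle one fits
cands = [np.zeros(3), np.array([0.5, 0.0, 0.0]), np.array([1.0, 1.0, 0.0])]
best = best_candidate_pose(pts, cands)
```

In the patented method the pose parameters deform space through the skinned tetrahedral mesh rather than translating a rigid shape, but the fit criterion has the same form: pixels on the true surface drive the signed distances, and hence the energy, toward zero.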