-
1.
Publication No.: US12243152B2
Publication Date: 2025-03-04
Application No.: US18441486
Filing Date: 2024-02-14
Applicant: NVIDIA Corporation
Inventor: Wenzheng Chen, Joey Litalien, Jun Gao, Zian Wang, Clement Tse Tsian Christophe Louis Fuji Tsang, Sameh Khamis, Or Litany, Sanja Fidler
Abstract: In various examples, information may be received for a 3D model, such as 3D geometry information, lighting information, and material information. A machine learning model may be trained to disentangle the 3D geometry information, the lighting information, and/or the material information from input data to provide the information, which may be used to project geometry of the 3D model onto an image plane to generate a mapping between pixels and portions of the 3D model. Rasterization may then use the mapping to determine which pixels are covered by the geometry, and in what manner. The mapping may also be used to compute radiance for points corresponding to the one or more 3D models using light transport simulation. Disclosed approaches may be used in various applications, such as image editing, 3D model editing, synthetic data generation, and/or data set augmentation.
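The pixel-to-geometry mapping the abstract describes can be illustrated with a toy rasterizer: a projected triangle is scanned, and each covered pixel is mapped to barycentric coordinates on that triangle. This is a minimal generic sketch, not the claimed implementation; every function and value here is an assumption.

```python
def edge(a, b, p):
    # Twice the signed area of triangle (a, b, p); the sign tells which
    # side of the directed edge a->b the point p lies on.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(tri, width, height):
    """Map each covered pixel (x, y) to barycentric coords on `tri`."""
    mapping = {}
    area = edge(tri[0], tri[1], tri[2])
    if area == 0:
        return mapping  # degenerate triangle covers nothing
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            w0 = edge(tri[1], tri[2], p)
            w1 = edge(tri[2], tri[0], p)
            w2 = edge(tri[0], tri[1], p)
            inside = (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
                     (w0 <= 0 and w1 <= 0 and w2 <= 0)
            if inside:
                mapping[(x, y)] = (w0 / area, w1 / area, w2 / area)
    return mapping

# One projected triangle on an 8x8 image plane
tri = [(1.0, 1.0), (6.0, 1.0), (1.0, 6.0)]
coverage = rasterize(tri, 8, 8)
```

The barycentric weights stored per pixel are what make the mapping useful downstream: material and geometry attributes can be interpolated at each covered pixel before a light-transport pass computes radiance.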
-
2.
Publication No.: US20240185506A1
Publication Date: 2024-06-06
Application No.: US18441486
Filing Date: 2024-02-14
Applicant: NVIDIA Corporation
Inventor: Wenzheng Chen, Joey Litalien, Jun Gao, Zian Wang, Clement Tse Tsian Christophe Louis Fuji Tsang, Sameh Khamis, Or Litany, Sanja Fidler
CPC classification number: G06T15/06, G06T15/506, G06T19/20, G06T2219/2012
Abstract: In various examples, information may be received for a 3D model, such as 3D geometry information, lighting information, and material information. A machine learning model may be trained to disentangle the 3D geometry information, the lighting information, and/or the material information from input data to provide the information, which may be used to project geometry of the 3D model onto an image plane to generate a mapping between pixels and portions of the 3D model. Rasterization may then use the mapping to determine which pixels are covered by the geometry, and in what manner. The mapping may also be used to compute radiance for points corresponding to the one or more 3D models using light transport simulation. Disclosed approaches may be used in various applications, such as image editing, 3D model editing, synthetic data generation, and/or data set augmentation.
-
3.
Publication No.: US20230140460A1
Publication Date: 2023-05-04
Application No.: US17827918
Filing Date: 2022-05-30
Applicant: NVIDIA Corporation
Inventor: Carl Jacob Munkberg, Jon Niklas Theodor Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex John Bauld Evans, Thomas Müller-Höhne, Sanja Fidler
Abstract: A technique is described for extracting or constructing a three-dimensional (3D) model from multiple two-dimensional (2D) images. In an embodiment, a foreground segmentation mask or depth field may be provided as an additional supervision input with each 2D image. In an embodiment, the foreground segmentation mask or depth field is automatically generated for each 2D image. The constructed 3D model comprises a triangular mesh topology, materials, and environment lighting. The constructed 3D model is represented in a format that can be directly edited and/or rendered by conventional application programs, such as digital content creation (DCC) tools. For example, the constructed 3D model may be represented as a triangular surface mesh (with arbitrary topology), a set of 2D textures representing spatially-varying material parameters, and an environment map. Furthermore, the constructed 3D model may be included in 3D scenes and interact realistically with other objects.
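The reconstruction idea in this abstract follows the analysis-by-synthesis pattern: render the current estimate of mesh, materials, and lighting, compare against the reference images, and adjust the parameters by gradient descent. A minimal one-parameter sketch of that loop, with a toy one-pixel Lambertian "renderer" (all names and values invented for illustration):

```python
def render(albedo, light_intensity, cos_theta):
    # Minimal Lambertian shading: reflected radiance for one pixel.
    return albedo * light_intensity * cos_theta

def fit_albedo(target, light_intensity, cos_theta, lr=0.5, steps=200):
    """Recover the albedo whose rendering matches `target` by gradient descent."""
    albedo = 0.1  # initial guess
    for _ in range(steps):
        pred = render(albedo, light_intensity, cos_theta)
        # d(loss)/d(albedo) for loss = (pred - target)^2, via the chain rule
        grad = 2.0 * (pred - target) * light_intensity * cos_theta
        albedo -= lr * grad
    return albedo

# Reference pixel rendered with a ground-truth albedo of 0.7
target = render(0.7, 1.0, 0.8)
recovered = fit_albedo(target, 1.0, 0.8)
```

In the actual setting the single scalar becomes the full set of vertex positions, texture parameters, and environment-map values, and the hand-written derivative becomes automatic differentiation through a differentiable renderer; the structure of the loop is the same.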
-
4.
Publication No.: US20210279952A1
Publication Date: 2021-09-09
Application No.: US17193405
Filing Date: 2021-03-05
Applicant: Nvidia Corporation
Inventor: Wenzheng Chen, Yuxuan Zhang, Sanja Fidler, Huan Ling, Jun Gao, Antonio Torralba Barriuso
Abstract: Approaches are presented for training an inverse graphics network. An image synthesis network can generate training data for an inverse graphics network. In turn, the inverse graphics network can teach the synthesis network about the physical three-dimensional (3D) controls. Such an approach can provide for accurate 3D reconstruction of objects from 2D images using the trained inverse graphics network, while requiring little annotation of the provided training data. Such an approach can extract and disentangle 3D knowledge learned by generative models by utilizing differentiable renderers, enabling a disentangled generative model to function as a controllable 3D “neural renderer,” complementing traditional graphics renderers.
-
5.
Publication No.: US20240096017A1
Publication Date: 2024-03-21
Application No.: US17895793
Filing Date: 2022-08-25
Applicant: Nvidia Corporation
Inventor: Jun Gao, Tianchang Shen, Zan Gojcic, Wenzheng Chen, Zian Wang, Daiqing Li, Or Litany, Sanja Fidler
IPC: G06T17/20
CPC classification number: G06T17/20, G06T2207/10024, G06T2207/20084
Abstract: Apparatuses, systems, and techniques are presented to generate digital content. In at least one embodiment, one or more neural networks are used to generate one or more textured three-dimensional meshes corresponding to one or more objects based, at least in part, on one or more two-dimensional images of the one or more objects.
-
6.
Publication No.: US20220383582A1
Publication Date: 2022-12-01
Application No.: US17826611
Filing Date: 2022-05-27
Applicant: NVIDIA Corporation
Inventor: Wenzheng Chen, Joey Litalien, Jun Gao, Zian Wang, Clement Tse Tsian Christophe Louis Fuji Tsang, Sameh Khamis, Or Litany, Sanja Fidler
Abstract: In various examples, information may be received for a 3D model, such as 3D geometry information, lighting information, and material information. A machine learning model may be trained to disentangle the 3D geometry information, the lighting information, and/or the material information from input data to provide the information, which may be used to project geometry of the 3D model onto an image plane to generate a mapping between pixels and portions of the 3D model. Rasterization may then use the mapping to determine which pixels are covered by the geometry, and in what manner. The mapping may also be used to compute radiance for points corresponding to the one or more 3D models using light transport simulation. Disclosed approaches may be used in various applications, such as image editing, 3D model editing, synthetic data generation, and/or data set augmentation.
-
7.
Publication No.: US20220083807A1
Publication Date: 2022-03-17
Application No.: US17020649
Filing Date: 2020-09-14
Applicant: NVIDIA Corporation
Inventor: Yuxuan Zhang, Huan Ling, Jun Gao, Wenzheng Chen, Antonio Torralba Barriuso, Sanja Fidler
Abstract: Apparatuses, systems, and techniques to determine pixel-level labels of a synthetic image. In at least one embodiment, the synthetic image is generated by one or more generative networks and the pixel-level labels are generated using a combination of data output by a plurality of layers of the generative networks.
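The abstract's key mechanism, combining data output by several generator layers into per-pixel labels, can be sketched generically: coarse feature maps are upsampled to image resolution, concatenated per pixel, and fed to a small classifier. The maps, weights, and threshold below are invented purely for illustration.

```python
def upsample_nearest(fmap, size):
    """Nearest-neighbour upsample of a square 2D feature map to size x size."""
    src = len(fmap)
    return [[fmap[y * src // size][x * src // size] for x in range(size)]
            for y in range(size)]

def pixel_labels(layer_maps, weights, bias, size):
    """Combine per-layer features into a binary label for every pixel."""
    ups = [upsample_nearest(m, size) for m in layer_maps]
    labels = []
    for y in range(size):
        row = []
        for x in range(size):
            # Linear score over the stacked per-layer features at this pixel
            score = bias + sum(w * u[y][x] for w, u in zip(weights, ups))
            row.append(1 if score > 0 else 0)
        labels.append(row)
    return labels

# A coarse 2x2 "deep" map and a 4x4 "shallow" map for a 4x4 image:
# the deep map fires only in the top-left region.
deep = [[1.0, 0.0], [0.0, 0.0]]
shallow = [[0.5] * 4 for _ in range(4)]
labels = pixel_labels([deep, shallow], weights=[1.0, 1.0], bias=-1.0, size=4)
```

The point of mixing layers is that deep maps carry semantics at low resolution while shallow maps carry spatial detail; in a real system the linear head would be trained on a handful of annotated examples rather than hand-set.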
-
8.
Publication No.: US20240362897A1
Publication Date: 2024-10-31
Application No.: US18634134
Filing Date: 2024-04-12
Applicant: NVIDIA Corporation
Inventor: Tzofi Klinghoffer, Jonah Philion, Zan Gojcic, Sanja Fidler, Or Litany, Wenzheng Chen, Jose Manuel Alvarez Lopez
IPC: G06V10/774, G06T7/55, G06T15/20
CPC classification number: G06V10/774, G06T7/55, G06T15/205, G06T2207/10016, G06T2207/20081, G06T2207/20084, G06T2207/30181, G06T2207/30252
Abstract: In various examples, systems and methods are disclosed relating to synthetic data generation using viewpoint augmentation for autonomous and semi-autonomous systems and applications. One or more circuits can identify a set of sequential images corresponding to a first viewpoint and generate a first transformed image corresponding to a second viewpoint using a first image of the set of sequential images as input to a machine-learning model. The one or more circuits can update the machine-learning model based at least on a loss determined according to the first transformed image and a second image of the set of sequential images.
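The training signal described here, transform a frame from viewpoint A toward viewpoint B and penalize the discrepancy with the frame actually recorded at B, can be shown with a toy model. A 1-D circular pixel shift stands in for the learned viewpoint transform; every name and value is an assumption for illustration.

```python
def transform_view(row, shift):
    # Stand-in "viewpoint model": a circular horizontal shift, roughly a
    # sideways camera move over a distant, fronto-parallel scene.
    w = len(row)
    return [row[(x - shift) % w] for x in range(w)]

def view_consistency_loss(pred, target):
    # Mean squared error between the predicted and the observed view.
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

frame_a = [0.0, 0.2, 0.9, 0.4, 0.1]      # captured at viewpoint A
frame_b = transform_view(frame_a, 2)     # actually captured at viewpoint B

# Training would adjust model parameters to minimise this loss; here we
# just check that the true transform is the minimiser among candidates.
losses = [view_consistency_loss(transform_view(frame_a, s), frame_b)
          for s in range(5)]
best_shift = min(range(5), key=lambda s: losses[s])
```

In the sequential-image setting of the abstract, the pixel shift is replaced by a machine-learning model conditioned on the viewpoint change, and the loss gradient is what updates that model.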
-
9.
Publication No.: US11967024B2
Publication Date: 2024-04-23
Application No.: US17827918
Filing Date: 2022-05-30
Applicant: NVIDIA Corporation
Inventor: Carl Jacob Munkberg, Jon Niklas Theodor Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex John Bauld Evans, Thomas Müller-Höhne, Sanja Fidler
CPC classification number: G06T17/205, G06N3/084, G06T9/002, G06T15/04, G06T15/506, G06T19/00, G06T2210/36
Abstract: A technique is described for extracting or constructing a three-dimensional (3D) model from multiple two-dimensional (2D) images. In an embodiment, a foreground segmentation mask or depth field may be provided as an additional supervision input with each 2D image. In an embodiment, the foreground segmentation mask or depth field is automatically generated for each 2D image. The constructed 3D model comprises a triangular mesh topology, materials, and environment lighting. The constructed 3D model is represented in a format that can be directly edited and/or rendered by conventional application programs, such as digital content creation (DCC) tools. For example, the constructed 3D model may be represented as a triangular surface mesh (with arbitrary topology), a set of 2D textures representing spatially-varying material parameters, and an environment map. Furthermore, the constructed 3D model may be included in 3D scenes and interact realistically with other objects.
-
10.
Publication No.: US20240054720A1
Publication Date: 2024-02-15
Application No.: US17886081
Filing Date: 2022-08-11
Applicant: Nvidia Corporation
Inventor: Sanja Fidler, Zian Wang, Jan Kautz, Wenzheng Chen
CPC classification number: G06T15/506, G06T5/009, G06T7/586, G06T2207/20081, G06T2207/20208
Abstract: Systems and methods generate a hybrid lighting model for rendering objects within an image. The hybrid lighting model includes lighting effects attributed to a first source, such as the sun, and to a second source, such as spatially-varying effects of objects within the image. The hybrid lighting model may be generated for an input image and then one or more virtual objects may be rendered to appear as if part of the input image, where the hybrid lighting model is used to apply one or more lighting effects to the one or more virtual objects.
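The two-component structure of the hybrid model, a global sun term plus a spatially-varying local term, can be sketched as a shading function that sums clamped-Lambert sunlight with a lookup into a small local irradiance grid. The grid layout, names, and values are illustrative assumptions, not the patented model.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, position, sun_dir, sun_color, local_grid):
    # Global component: clamped-Lambert directional sunlight.
    sun = max(0.0, dot(normal, sun_dir))
    # Local component: nearest-cell lookup in a 2D irradiance grid that
    # stands in for spatially-varying effects of nearby scene objects.
    gx, gy = int(position[0]), int(position[1])
    local = local_grid[gy][gx]
    return tuple(sun * c + l for c, l in zip(sun_color, local))

# A 2x2 grid of per-cell RGB irradiance from the surrounding scene
grid = [[(0.10, 0.10, 0.12), (0.30, 0.05, 0.05)],
        [(0.05, 0.05, 0.05), (0.00, 0.20, 0.00)]]
up = (0.0, 0.0, 1.0)
rgb = shade(normal=up, position=(0.2, 0.9), sun_dir=up,
            sun_color=(1.0, 0.9, 0.8), local_grid=grid)
```

When compositing a virtual object into an input image, both terms would be estimated from that image first, so the object picks up the sun direction as well as local color bleeding and shadowing.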
-