NEURAL RENDERING FOR INVERSE GRAPHICS GENERATION

    Publication Number: US20210279952A1

    Publication Date: 2021-09-09

    Application Number: US17193405

    Application Date: 2021-03-05

    Abstract: Approaches are presented for training an inverse graphics network. An image synthesis network can generate training data for an inverse graphics network. In turn, the inverse graphics network can teach the synthesis network about the physical three-dimensional (3D) controls. Such an approach can provide for accurate 3D reconstruction of objects from 2D images using the trained inverse graphics network, while requiring little annotation of the provided training data. Such an approach can extract and disentangle 3D knowledge learned by generative models by utilizing differentiable renderers, enabling a disentangled generative model to function as a controllable 3D “neural renderer,” complementing traditional graphics renderers.
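
    The abstract above outlines a dual training setup. As a rough illustration only, the minimal PyTorch sketch below shows one plausible form of the first direction: a frozen synthesis network generates images for sampled camera poses, an inverse graphics network predicts 3D attributes from those images, and a differentiable renderer re-renders the prediction so that an image-space loss can supervise the 3D estimates with little annotation. All module names, shapes, and losses (SynthesisNet, InverseGraphicsNet, diff_render) are illustrative stand-ins, not taken from the patent.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    IMG, LATENT, N_VERTS = 32, 64, 128  # toy sizes for the sketch

    class SynthesisNet(nn.Module):
        """Stand-in for a pretrained 2D generator conditioned on a camera pose."""
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(LATENT + 3, 3 * IMG * IMG)

        def forward(self, z, cam):
            return self.fc(torch.cat([z, cam], dim=1)).view(-1, 3, IMG, IMG)

    class InverseGraphicsNet(nn.Module):
        """Stand-in mapping an image to vertex positions and a camera estimate."""
        def __init__(self):
            super().__init__()
            self.enc = nn.Linear(3 * IMG * IMG, 256)
            self.verts = nn.Linear(256, N_VERTS * 3)
            self.cam = nn.Linear(256, 3)

        def forward(self, img):
            h = torch.relu(self.enc(img.flatten(1)))
            return self.verts(h).view(-1, N_VERTS, 3), self.cam(h)

    def diff_render(verts, cam):
        """Placeholder for a differentiable renderer (e.g., a differentiable rasterizer)."""
        feat = verts.flatten(1).mean(dim=1, keepdim=True) + cam.mean(dim=1, keepdim=True)
        return feat.view(-1, 1, 1, 1).expand(-1, 3, IMG, IMG)

    synthesis_net, inverse_net = SynthesisNet(), InverseGraphicsNet()
    optimizer = torch.optim.Adam(inverse_net.parameters(), lr=1e-4)

    for step in range(100):
        z = torch.randn(8, LATENT)                 # latent codes
        cam = torch.rand(8, 3)                     # sampled camera poses
        with torch.no_grad():                      # generator supplies the training data
            images = synthesis_net(z, cam)
        verts, pred_cam = inverse_net(images)      # 2D images -> 3D attributes
        rendered = diff_render(verts, pred_cam)    # re-render the prediction
        loss = F.l1_loss(rendered, images)         # image-space supervision
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()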

    NEURAL RENDERING FOR INVERSE GRAPHICS GENERATION

    Publication Number: US20230134690A1

    Publication Date: 2023-05-04

    Application Number: US17981770

    Application Date: 2022-11-07

    Abstract: Approaches are presented for training an inverse graphics network. An image synthesis network can generate training data for an inverse graphics network. In turn, the inverse graphics network can teach the synthesis network about the physical three-dimensional (3D) controls. Such an approach can provide for accurate 3D reconstruction of objects from 2D images using the trained inverse graphics network, while requiring little annotation of the provided training data. Such an approach can extract and disentangle 3D knowledge learned by generative models by utilizing differentiable renderers, enabling a disentangled generative model to function as a controllable 3D “neural renderer,” complementing traditional graphics renderers.
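
    This continuation shares the abstract above. To illustrate the complementary direction (the inverse graphics network teaching the synthesis network about 3D controls), the hypothetical sketch below reuses the stand-in modules from the previous sketch and fine-tunes the generator so that changing only its camera input agrees with a differentiable re-render of the recovered 3D shape at the new viewpoint. The cycle-style loss and variable names are assumptions for illustration, not the patented procedure.

    import torch
    import torch.nn.functional as F

    gen_optimizer = torch.optim.Adam(synthesis_net.parameters(), lr=1e-5)

    for step in range(100):
        z = torch.randn(8, LATENT)
        cam_a = torch.rand(8, 3)                   # source viewpoint
        cam_b = torch.rand(8, 3)                   # target viewpoint (3D control)

        # Recover 3D attributes from the generator's own output at viewpoint A.
        with torch.no_grad():
            verts, _ = inverse_net(synthesis_net(z, cam_a))

        # The generator's image at viewpoint B should match the differentiable
        # re-render of that 3D estimate at viewpoint B, disentangling camera
        # pose from the rest of the latent code.
        target = diff_render(verts, cam_b)
        loss = F.l1_loss(synthesis_net(z, cam_b), target)

        gen_optimizer.zero_grad()
        loss.backward()
        gen_optimizer.step()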

    CHARACTERISTIC-BASED ACCELERATION FOR EFFICIENT SCENE RENDERING

    Publication Number: US20250095275A1

    Publication Date: 2025-03-20

    Application Number: US18630480

    Application Date: 2024-04-09

    Abstract: In various examples, images (e.g., novel views) of an object may be rendered using an optimized number of samples of a 3D representation of the object. The optimized number of the samples may be determined based at least on casting rays into a scene that includes the 3D representation of the object and/or an acceleration data structure corresponding to the object. The acceleration data structure may include features corresponding to characteristics of the object, and the features may be indicative of the number of samples to be obtained from various portions of the 3D representation of the object to render the images. In some examples, the 3D representation may be a neural radiance field that includes, as a neural output, a spatially varying kernel size predicting the characteristics of the object, and the features of the acceleration data structure may be related to the spatially varying kernel size.
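
    As a rough illustration of the adaptive sampling described above, the hypothetical NumPy sketch below stores a spatially varying kernel size in a coarse acceleration grid and lets the ray-marching step length (and therefore the number of samples) follow that per-cell feature. The grid layout, the step rule, and the mapping from kernel size to sample count are assumptions made for this example and are not taken from the patent.

    import numpy as np

    GRID = 16                                   # coarse acceleration grid resolution
    kernel_size = np.random.uniform(0.02, 0.2, size=(GRID, GRID, GRID))

    def samples_along_ray(origin, direction, near=0.0, far=1.0, base_step=0.01):
        """March a ray through [0,1]^3; the step length adapts to the local kernel size."""
        t, sample_ts = near, []
        while t < far:
            p = np.clip(origin + t * direction, 0.0, 1.0 - 1e-6)
            cell = tuple((p * GRID).astype(int))       # which acceleration cell the point is in
            step = max(base_step, kernel_size[cell])   # larger kernel -> coarser sampling
            sample_ts.append(t)
            t += step
        return np.array(sample_ts)

    origin = np.array([0.5, 0.5, 0.0])
    direction = np.array([0.0, 0.0, 1.0])
    ts = samples_along_ray(origin, direction)
    print(f"{len(ts)} samples for this ray")        # fewer samples where appearance varies slowly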

    SYNTHESIZING HIGH RESOLUTION 3D SHAPES FROM LOWER RESOLUTION REPRESENTATIONS FOR SYNTHETIC DATA GENERATION SYSTEMS AND APPLICATIONS

    Publication Number: US20220392162A1

    Publication Date: 2022-12-08

    Application Number: US17718172

    Application Date: 2022-04-11

    Abstract: In various examples, a deep three-dimensional (3D) conditional generative model is implemented that can synthesize high resolution 3D shapes using simple guides—such as coarse voxels, point clouds, etc.—by marrying implicit and explicit 3D representations into a hybrid 3D representation. The present approach may directly optimize for the reconstructed surface, allowing for the synthesis of finer geometric details with fewer artifacts. The systems and methods described herein may use a deformable tetrahedral grid that encodes a discretized signed distance function (SDF) and a differentiable marching tetrahedral layer that converts the implicit SDF representation to an explicit surface mesh representation. This combination allows joint optimization of the surface geometry and topology as well as generation of the hierarchy of subdivisions using reconstruction and adversarial losses defined explicitly on the surface mesh.
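
    The hybrid representation described above pairs an implicit SDF on a deformable tetrahedral grid with an explicit mesh produced by a differentiable marching-tetrahedra step. The hypothetical PyTorch sketch below shows only the core of that conversion: placing surface points by linear interpolation along tetrahedron edges where the learned SDF changes sign, so that a surface-space loss can backpropagate to both the SDF values and the vertex offsets. The connectivity table that turns crossings into triangle faces is omitted, and all tensor names and shapes are assumptions for illustration.

    import torch

    n_verts = 64
    verts = torch.rand(n_verts, 3)                         # tetrahedral grid vertices
    offsets = torch.zeros(n_verts, 3, requires_grad=True)  # learnable deformation ("deformable" grid)
    sdf = (torch.rand(n_verts) - 0.5).requires_grad_()     # learnable discretized signed distances
    tets = torch.randint(0, n_verts, (32, 4))              # toy tetrahedra (vertex indices)

    # The six edges of a tetrahedron, as index pairs into its four vertices.
    TET_EDGES = torch.tensor([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]])

    def surface_points(verts, offsets, sdf, tets):
        """Interpolate zero crossings of the SDF along tetrahedron edges."""
        pos = verts + offsets                                        # deformed grid positions
        a, b = tets[:, TET_EDGES[:, 0]], tets[:, TET_EDGES[:, 1]]    # edge endpoints, shape (T, 6)
        sa, sb = sdf[a], sdf[b]
        crossing = (sa * sb) < 0                                     # SDF changes sign on the edge
        w = (sa / (sa - sb + 1e-8)).unsqueeze(-1)                    # where the SDF hits zero
        pts = pos[a] * (1 - w) + pos[b] * w                          # (T, 6, 3) interpolated points
        return pts[crossing]                                         # differentiable surface samples

    pts = surface_points(verts, offsets, sdf, tets)
    # A surface loss defined on `pts` backpropagates to both `sdf` and `offsets`,
    # jointly optimizing the surface geometry and, through sign changes, its topology.
    pts.sum().backward()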
