CHARACTERISTIC-BASED ACCELERATION FOR EFFICIENT SCENE RENDERING

    Publication Number: US20250095275A1

    Publication Date: 2025-03-20

    Application Number: US18630480

    Application Date: 2024-04-09

    Abstract: In various examples, images (e.g., novel views) of an object may be rendered using an optimized number of samples of a 3D representation of the object. The optimized number of samples may be determined based at least on casting rays into a scene that includes the 3D representation of the object and/or an acceleration data structure corresponding to the object. The acceleration data structure may include features corresponding to characteristics of the object, and the features may be indicative of the number of samples to be obtained from various portions of the 3D representation of the object to render the images. In some examples, the 3D representation may be a neural radiance field that includes, as a neural output, a spatially varying kernel size predicting the characteristics of the object, and the features of the acceleration data structure may be related to the spatially varying kernel size.
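    The sketch below illustrates, in broad strokes, how features stored in a coarse acceleration grid could drive per-ray sample allocation. It is not the patented implementation; names such as kernel_size_grid and max_samples_per_step are illustrative assumptions only.

```python
# Minimal sketch (assumptions noted above): a coarse grid of per-voxel
# "kernel size" features decides how many fine samples each ray segment gets.
import numpy as np

def samples_for_ray(origin, direction, kernel_size_grid, grid_min, voxel_size,
                    near=0.0, far=1.0, coarse_steps=32, max_samples_per_step=8):
    """Walk the ray in coarse segments; allocate more fine samples where the
    acceleration grid predicts fine detail (a small kernel size)."""
    ts = np.linspace(near, far, coarse_steps + 1)
    sample_ts = []
    for t0, t1 in zip(ts[:-1], ts[1:]):
        p = origin + 0.5 * (t0 + t1) * direction             # segment midpoint
        idx = np.floor((p - grid_min) / voxel_size).astype(int)
        idx = np.clip(idx, 0, np.array(kernel_size_grid.shape) - 1)
        k = float(kernel_size_grid[tuple(idx)])               # feature: predicted kernel size
        # A small kernel size suggests fine detail, so spend more samples here.
        n = int(np.clip(round(max_samples_per_step / (1.0 + k)), 1, max_samples_per_step))
        sample_ts.extend(np.linspace(t0, t1, n, endpoint=False))
    return np.asarray(sample_ts)
```

    The returned sample positions would then be fed to the 3D representation (e.g., the radiance field) for shading and compositing; regions whose features indicate coarse, smooth content receive fewer queries.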

    SYNTHESIZING HIGH RESOLUTION 3D SHAPES FROM LOWER RESOLUTION REPRESENTATIONS FOR SYNTHETIC DATA GENERATION SYSTEMS AND APPLICATIONS

    Publication Number: US20220392162A1

    Publication Date: 2022-12-08

    Application Number: US17718172

    Application Date: 2022-04-11

    Abstract: In various examples, a deep three-dimensional (3D) conditional generative model is implemented that can synthesize high resolution 3D shapes using simple guides—such as coarse voxels, point clouds, etc.—by marrying implicit and explicit 3D representations into a hybrid 3D representation. The present approach may directly optimize for the reconstructed surface, allowing for the synthesis of finer geometric details with fewer artifacts. The systems and methods described herein may use a deformable tetrahedral grid that encodes a discretized signed distance function (SDF) and a differentiable marching tetrahedral layer that converts the implicit SDF representation to an explicit surface mesh representation. This combination allows joint optimization of the surface geometry and topology as well as generation of the hierarchy of subdivisions using reconstruction and adversarial losses defined explicitly on the surface mesh.
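    As a rough illustration of the marching-tetrahedra conversion the abstract refers to, the toy sketch below interpolates zero crossings of per-vertex SDF values along the edges of a single tetrahedron. The actual method also assembles triangles per sign configuration and keeps the step differentiable so the SDF and vertex offsets receive gradients; this example omits those parts.

```python
# Minimal sketch: zero-crossing interpolation for one tetrahedron,
# the core step of converting an implicit SDF to explicit surface geometry.
import numpy as np

# The six edges of a tetrahedron, as pairs of local vertex indices.
TET_EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def surface_points_in_tet(verts, sdf):
    """verts: (4, 3) vertex positions; sdf: (4,) signed distances.
    Returns the interpolated zero-crossing points on sign-changing edges."""
    points = []
    for a, b in TET_EDGES:
        sa, sb = sdf[a], sdf[b]
        if (sa > 0) != (sb > 0):                  # edge crosses the surface
            w = sa / (sa - sb)                    # linear zero-crossing weight
            points.append((1.0 - w) * verts[a] + w * verts[b])
    return np.asarray(points)
```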

    EXTRACTING TRIANGULAR 3-D MODELS, MATERIALS, AND LIGHTING FROM IMAGES

    Publication Number: US20230140460A1

    Publication Date: 2023-05-04

    Application Number: US17827918

    Application Date: 2022-05-30

    Abstract: A technique is described for extracting or constructing a three-dimensional (3D) model from multiple two-dimensional (2D) images. In an embodiment, a foreground segmentation mask or depth field may be provided as an additional supervision input with each 2D image. In an embodiment, the foreground segmentation mask or depth field is automatically generated for each 2D image. The constructed 3D model comprises a triangular mesh topology, materials, and environment lighting. The constructed 3D model is represented in a format that can be directly edited and/or rendered by conventional application programs, such as digital content creation (DCC) tools. For example, the constructed 3D model may be represented as a triangular surface mesh (with arbitrary topology), a set of 2D textures representing spatially-varying material parameters, and an environment map. Furthermore, the constructed 3D model may be included in 3D scenes and interact realistically with other objects.
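    To make the reconstruction setup concrete, the sketch below shows the general shape of an inverse-rendering optimization loop over multi-view images with foreground masks. The names render, geometry, materials, and env_map are hypothetical stand-ins, not the actual interfaces of the described system.

```python
# Minimal sketch (hypothetical interfaces): jointly fit geometry, material
# textures, and an environment map by comparing differentiable renders
# against the input photos and their foreground masks.
import torch

def fit_scene(images, cameras, masks, render, geometry, materials, env_map,
              iters=1000, lr=1e-2):
    # `geometry` is assumed to be a torch.nn.Module; `materials` and `env_map`
    # are assumed to be leaf tensors created with requires_grad=True.
    params = list(geometry.parameters()) + [materials, env_map]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = 0.0
        for img, cam, mask in zip(images, cameras, masks):
            pred_rgb, pred_mask = render(geometry, materials, env_map, cam)
            # Photometric loss on foreground pixels plus a silhouette term
            # using the provided (or automatically generated) mask.
            loss = loss + ((pred_rgb - img) * mask).abs().mean()
            loss = loss + (pred_mask - mask).abs().mean()
        loss.backward()
        opt.step()
    return geometry, materials, env_map
```

    After optimization, the mesh, material textures, and environment map can be exported in standard formats and loaded directly into DCC tools, which is the editability property the abstract emphasizes.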
