Joint shape and appearance optimization through topology sampling

    Publication Number: US11610370B2

    Publication Date: 2023-03-21

    Application Number: US17459223

    Filing Date: 2021-08-27

    Abstract: Systems and methods enable optimization of a 3D model representation comprising the shape and appearance of a particular 3D scene or object. The opaque 3D mesh (e.g., vertex positions and corresponding topology) and spatially varying material attributes are jointly optimized based on image space losses to match multiple image observations (e.g., reference images of the reference 3D scene or object). A geometric topology defines which faces and/or cells in the opaque 3D mesh are visible; it may be randomly initialized and optimized through training based on the image space losses. Applying the geometric topology to an opaque 3D mesh for learning the shape improves the accuracy of silhouette edges and improves performance compared with using transparent mesh representations. In contrast with approaches that require an initial guess for the topology and/or exhaustive testing of possible geometric topologies, the 3D model representation is learned based on image space differences without requiring an initial guess.
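
    The sketch below is a minimal, hypothetical illustration of the kind of loop the abstract describes: vertex positions (shape), per-face visibility logits (the sampled topology), and material attributes are jointly updated from an image-space loss. The renderer, tensor shapes, and hyperparameters are stand-ins chosen for the example (a real implementation would use a differentiable rasterizer), not the patented method.

```python
import torch
import torch.nn.functional as F

def differentiable_render(verts, face_probs, materials):
    # Stand-in for a differentiable rasterizer: collapses the parameters into a
    # 64x64x3 image so gradients reach shape, topology, and materials alike.
    feat = verts.mean(dim=0) + face_probs.mean() * materials.mean(dim=0)
    return feat.tanh().expand(64, 64, 3)

torch.manual_seed(0)
verts       = torch.randn(1000, 3, requires_grad=True)   # shape (vertex positions)
face_logits = torch.zeros(2000, requires_grad=True)      # topology: per-face visibility
materials   = torch.rand(2000, 3, requires_grad=True)    # spatially varying albedo

references = [torch.rand(64, 64, 3) for _ in range(8)]   # stand-in image observations
optimizer = torch.optim.Adam([verts, face_logits, materials], lr=1e-2)

for step in range(200):
    optimizer.zero_grad()
    face_probs = torch.sigmoid(face_logits)               # relaxed per-face visibility
    loss = sum(F.l1_loss(differentiable_render(verts, face_probs, materials), ref)
               for ref in references)                     # image-space loss over all views
    loss.backward()
    optimizer.step()

# Faces whose learned visibility stays low are discarded, so the final opaque
# mesh topology emerges from training rather than from an initial guess.
kept_faces = torch.sigmoid(face_logits) > 0.5
```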

    JOINT SHAPE AND APPEARANCE OPTIMIZATION THROUGH TOPOLOGY SAMPLING

    Publication Number: US20220392160A1

    Publication Date: 2022-12-08

    Application Number: US17459223

    Filing Date: 2021-08-27

    Abstract: Systems and methods enable optimization of a 3D model representation comprising the shape and appearance of a particular 3D scene or object. The opaque 3D mesh (e.g., vertex positions and corresponding topology) and spatially varying material attributes are jointly optimized based on image space losses to match multiple image observations (e.g., reference images of the reference 3D scene or object). A geometric topology defines which faces and/or cells in the opaque 3D mesh are visible; it may be randomly initialized and optimized through training based on the image space losses. Applying the geometric topology to an opaque 3D mesh for learning the shape improves the accuracy of silhouette edges and improves performance compared with using transparent mesh representations. In contrast with approaches that require an initial guess for the topology and/or exhaustive testing of possible geometric topologies, the 3D model representation is learned based on image space differences without requiring an initial guess.

    APPEARANCE-DRIVEN AUTOMATIC THREE-DIMENSIONAL MODELING

    Publication Number: US20220165040A1

    Publication Date: 2022-05-26

    Application Number: US17194477

    Filing Date: 2021-03-08

    Abstract: Appearance-driven automatic three-dimensional (3D) modeling enables optimization of a 3D model comprising the shape and appearance of a particular 3D scene or object. Triangle meshes and shading models may be jointly optimized to match the appearance of a reference 3D model based on reference images of the reference 3D model. Compared with the reference 3D model, the optimized 3D model is a lower resolution 3D model that can be rendered in less time. More specifically, the optimized 3D model may include fewer geometric primitives than the reference 3D model. In contrast with conventional inverse rendering or analysis-by-synthesis modeling tools, the shape and appearance representations of the 3D model are generated automatically so that, when rendered, they match the reference images. Appearance-driven automatic 3D modeling has a number of uses, including appearance-preserving simplification of extremely complex assets, conversion between rendering systems, and even conversion between geometric scene representations.
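
    As a rough, hypothetical sketch of the appearance-driven loop (not the actual tool), the simplified mesh and its appearance parameters can be optimized so that their renderings match renderings of the complex reference asset from randomly sampled views. The render function below is a dummy stand-in for a differentiable renderer, and all sizes are illustrative.

```python
import torch
import torch.nn.functional as F

def render(verts, texture, view):
    # Dummy view-dependent differentiable rendering, for illustration only.
    return (verts.mean(dim=0) * view.sum() + texture.mean(dim=(0, 1))).tanh().expand(128, 128, 3)

ref_verts   = torch.randn(50_000, 3)                       # complex reference asset (fixed)
ref_texture = torch.rand(1024, 1024, 3)

low_verts   = torch.randn(2_000, 3, requires_grad=True)    # simplified triangle mesh
low_texture = torch.rand(256, 256, 3, requires_grad=True)  # learned appearance (shading inputs)

optimizer = torch.optim.Adam([low_verts, low_texture], lr=1e-2)
for step in range(500):
    view = torch.randn(3)                                   # random camera per iteration
    target = render(ref_verts, ref_texture, view).detach()  # reference image for this view
    image  = render(low_verts, low_texture, view)
    loss = F.l1_loss(image, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

    Because the loss compares only rendered images, swapping the renderer or the target representation in a loop like this is how the abstract's conversion between rendering systems or geometric scene representations could be approached.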

    Noise-free differentiable ray casting

    Publication Number: US12112422B2

    Publication Date: 2024-10-08

    Application Number: US17840791

    Filing Date: 2022-06-15

    CPC classification number: G06T15/06 G06T7/13 G06T15/005

    Abstract: A differentiable ray casting technique may be applied to a model of a three-dimensional (3D) scene (including its lighting configuration) or object to optimize one or more parameters of the model. The one or more parameters define the model's geometry (topology and shape), materials, and lighting configuration (e.g., an environment map: a high-resolution texture that represents the light arriving from all directions on a sphere). Visibility is computed in 3D space by casting at least two rays from each ray origin, where the two rays define a ray cone. The model is rendered to produce a model image that may be compared with a reference image (or photograph) of a reference 3D scene to compute image space differences. Visibility gradients in 3D space are computed and backpropagated through the computations to reduce differences between the model image and the reference image.
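
    As a small illustration of the ray-cone construction mentioned above (the function name and the half-angle convention are assumptions, not taken from the patent), two rays sharing an origin can be summarized as a cone whose axis is their normalized mean direction and whose half-angle is half the angle between them:

```python
import torch

def ray_cone(origin, dir_a, dir_b):
    # Two rays from the same origin define a cone: axis = mean direction,
    # half-angle = half the angle between the (normalized) ray directions.
    dir_a = dir_a / dir_a.norm()
    dir_b = dir_b / dir_b.norm()
    axis = dir_a + dir_b
    axis = axis / axis.norm()
    half_angle = 0.5 * torch.acos(torch.clamp(torch.dot(dir_a, dir_b), -1.0, 1.0))
    return origin, axis, half_angle

origin = torch.zeros(3)
_, axis, half_angle = ray_cone(origin,
                               torch.tensor([0.0, 0.0, 1.0]),
                               torch.tensor([1e-2, 0.0, 1.0]))
```

    In the described approach, visibility gradients computed in 3D space over such cones are then backpropagated together with the image-space differences to update the model parameters.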

    JOINT NEURAL DENOISING OF SURFACES AND VOLUMES

    Publication Number: US20240112308A1

    Publication Date: 2024-04-04

    Application Number: US18178817

    Filing Date: 2023-03-06

    CPC classification number: G06T5/002 G06T5/20 G06T15/06 G06T2207/20084

    Abstract: Denoising images rendered using Monte Carlo sampled ray tracing is an important technique for improving image quality when low sample counts are used. Ray-traced scenes that include volumes in addition to surface geometry are more complex, and they are noisy when rendered in real time with low sample counts. Joint neural denoising of surfaces and volumes enables combined volume and surface denoising in real time from low-sample-count renderings. At least one rendered image is decomposed into volume and surface layers, leveraging spatio-temporal neural denoisers for both the surface and volume components. The individual denoised surface and volume components are composited using learned weights and denoised transmittance. A surface and volume denoiser architecture outperforms current denoisers in scenes containing both surfaces and volumes, and produces temporally stable results at interactive rates.
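
    A minimal sketch of the compositing step described above, assuming the surface layer is attenuated by the transmittance and the volume layer is modulated by a learned per-pixel weight; the identity denoisers, the 1x1 convolution, and the composite formula are stand-ins for the spatio-temporal networks and learned weights, not the patented architecture.

```python
import torch

H, W = 256, 256
noisy_surface = torch.rand(3, H, W)        # surface radiance layer (low sample count)
noisy_volume  = torch.rand(3, H, W)        # volume radiance layer
transmittance = torch.rand(1, H, W)        # per-pixel transmittance through the volume

surface_denoiser = torch.nn.Identity()     # placeholder for a spatio-temporal denoiser
volume_denoiser  = torch.nn.Identity()
weight_net       = torch.nn.Conv2d(7, 1, kernel_size=1)  # toy learned blend weight

surf = surface_denoiser(noisy_surface)
vol  = volume_denoiser(noisy_volume)
T    = transmittance.clamp(0.0, 1.0)       # stands in for the denoised transmittance

features = torch.cat([surf, vol, T], dim=0).unsqueeze(0)  # (1, 7, H, W)
w = torch.sigmoid(weight_net(features))[0]                 # learned per-pixel weight

# Front-to-back composite: volume layer in front of the surface layer, which is
# attenuated by the transmittance; the weight modulates the volume contribution.
final = w * vol + T * surf
```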

    Motion blur and depth of field reconstruction through temporally stable neural networks

    Publication Number: US10970816B2

    Publication Date: 2021-04-06

    Application Number: US16422601

    Filing Date: 2019-05-24

    Abstract: A neural network structure, namely a warped external recurrent neural network, is disclosed for reconstructing images with synthesized effects. The effects can include motion blur, depth of field reconstruction (e.g., simulating lens effects), and/or anti-aliasing (e.g., removing artifacts caused by insufficient sampling frequency). The warped external recurrent neural network is not recurrent at each layer inside the neural network. Instead, the external state output by the final layer of the neural network is warped and provided as a portion of the input to the neural network for the next image in a sequence of images. In contrast, in a conventional recurrent neural network, hidden state generated at each layer is provided as a feedback input to the generating layer. The neural network can be implemented, at least in part, on a processor. In an embodiment, the neural network is implemented on at least one parallel processing unit.
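
    A toy sketch of the external-recurrence pattern described above (the tiny convolutional network, channel counts, and warp are illustrative stand-ins, not the patented architecture): the final layer emits extra state channels, and that external state is warped by per-pixel motion vectors before being concatenated to the next frame's input.

```python
import torch
import torch.nn.functional as F

STATE_CH = 8
net = torch.nn.Sequential(
    torch.nn.Conv2d(3 + STATE_CH, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(32, 3 + STATE_CH, 3, padding=1),
)

def warp(state, motion):
    # motion: per-pixel screen-space offsets in normalized [-1, 1] coordinates.
    n, _, h, w = state.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, h, w, 2)
    return F.grid_sample(state, base + motion, align_corners=True)

frames  = torch.rand(10, 1, 3, 64, 64)          # input image sequence (noisy/aliased)
motions = torch.zeros(10, 1, 64, 64, 2)         # per-frame motion vectors (zero here)
state   = torch.zeros(1, STATE_CH, 64, 64)      # external recurrent state

for frame, motion in zip(frames, motions):
    out = net(torch.cat([frame, warp(state, motion)], dim=1))
    image, state = out[:, :3], out[:, 3:]       # reconstructed frame + new external state
```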

    EXTRACTING TRIANGULAR 3-D MODELS, MATERIALS, AND LIGHTING FROM IMAGES

    Publication Number: US20230140460A1

    Publication Date: 2023-05-04

    Application Number: US17827918

    Filing Date: 2022-05-30

    Abstract: A technique is described for extracting or constructing a three-dimensional (3D) model from multiple two-dimensional (2D) images. In an embodiment, a foreground segmentation mask or depth field may be provided as an additional supervision input with each 2D image. In an embodiment, the foreground segmentation mask or depth field is automatically generated for each 2D image. The constructed 3D model comprises a triangular mesh topology, materials, and environment lighting. The constructed 3D model is represented in a format that can be directly edited and/or rendered by conventional application programs, such as digital content creation (DCC) tools. For example, the constructed 3D model may be represented as a triangular surface mesh (with arbitrary topology), a set of 2D textures representing spatially varying material parameters, and an environment map. Furthermore, the constructed 3D model may be included in 3D scenes and interact realistically with other objects.
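
    For illustration, the constructed asset described above might be held in a structure like the following; the field names, dtypes, and resolutions are assumptions chosen to show the three components (a triangular mesh with arbitrary topology, 2D textures of spatially varying material parameters, and an environment map), not the patent's actual format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ReconstructedAsset:
    vertices: np.ndarray             # (V, 3) float32 vertex positions
    faces: np.ndarray                # (F, 3) int32 vertex indices (arbitrary topology)
    uvs: np.ndarray                  # (V, 2) texture coordinates
    albedo: np.ndarray               # (H, W, 3) base-color texture
    roughness_metallic: np.ndarray   # (H, W, 2) spatially varying material parameters
    normal_map: np.ndarray           # (H, W, 3) tangent-space normals
    environment: np.ndarray          # (He, We, 3) HDR environment map (lighting)

asset = ReconstructedAsset(
    vertices=np.zeros((4, 3), np.float32),
    faces=np.array([[0, 1, 2], [0, 2, 3]], np.int32),
    uvs=np.zeros((4, 2), np.float32),
    albedo=np.ones((512, 512, 3), np.float32),
    roughness_metallic=np.full((512, 512, 2), 0.5, np.float32),
    normal_map=np.zeros((512, 512, 3), np.float32),
    environment=np.ones((256, 512, 3), np.float32),
)
```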

    Appearance-driven automatic three-dimensional modeling

    Publication Number: US11615602B2

    Publication Date: 2023-03-28

    Application Number: US17888207

    Filing Date: 2022-08-15

    Abstract: Appearance-driven automatic three-dimensional (3D) modeling enables optimization of a 3D model comprising the shape and appearance of a particular 3D scene or object. Triangle meshes and shading models may be jointly optimized to match the appearance of a reference 3D model based on reference images of the reference 3D model. Compared with the reference 3D model, the optimized 3D model is a lower resolution 3D model that can be rendered in less time. More specifically, the optimized 3D model may include fewer geometric primitives than the reference 3D model. In contrast with conventional inverse rendering or analysis-by-synthesis modeling tools, the shape and appearance representations of the 3D model are generated automatically so that, when rendered, they match the reference images. Appearance-driven automatic 3D modeling has a number of uses, including appearance-preserving simplification of extremely complex assets, conversion between rendering systems, and even conversion between geometric scene representations.
