Abstract:
In examples, a filter used to denoise shadows for a pixel(s) may be adapted based at least on variance in temporally accumulated ray-traced samples. A range of filter values for a spatiotemporal filter may be defined based on the variance and used to exclude temporal ray-traced samples that are outside of the range. Data used to compute a first moment of a distribution used to compute variance may be used to compute a second moment of the distribution. For binary signals, such as visibility, the first moment (e.g., the accumulated mean) may be equivalent to the second moment (e.g., the accumulated mean of squared values). In further respects, spatial filtering of a pixel(s) may be skipped based on comparing the mean or variance of the pixel(s) to one or more thresholds and on the number of values accumulated for the pixel(s).
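A minimal sketch of the accumulation and clamping described above, assuming a simple exponential moving average and a three-sigma range; the blend weight, thresholds, and structure names are illustrative, not taken from the patent.

    #include <algorithm>
    #include <cmath>

    // Hypothetical per-pixel accumulation state.
    struct PixelHistory {
        float mean  = 0.0f; // first moment of accumulated visibility
        float mean2 = 0.0f; // second moment; equals mean for a binary signal
        int   count = 0;    // number of accumulated samples
    };

    void accumulate(PixelHistory& h, float visibility, float alpha = 0.1f) {
        h.mean  += alpha * (visibility - h.mean);
        // visibility is 0 or 1, so visibility * visibility == visibility: the
        // same data used for the first moment also yields the second moment.
        h.mean2 += alpha * (visibility * visibility - h.mean2);
        h.count++;
    }

    // Clamp a temporal sample to a variance-derived range, excluding samples
    // that fall outside it (the three-sigma width is an illustrative choice).
    float clampTemporal(const PixelHistory& h, float temporalValue) {
        float sigma = std::sqrt(std::max(0.0f, h.mean2 - h.mean * h.mean));
        return std::clamp(temporalValue, h.mean - 3.0f * sigma, h.mean + 3.0f * sigma);
    }

    // Spatial filtering may be skipped once enough samples have accumulated
    // and the variance is small; both thresholds are illustrative.
    bool shouldSkipSpatialFilter(const PixelHistory& h) {
        return h.count > 16 && (h.mean2 - h.mean * h.mean) < 1e-3f;
    }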
Abstract:
Robust temporal gradients, representing differences in shading results, can be computed between current and previous frames in a temporal denoiser for ray-traced renderers. Backward projection can be used to locate matching surfaces, with the relevant parameters of those surfaces being carried forward and used for patching. Backward projection can be performed for each stratum in a current frame, a stratum representing a set of adjacent pixels. A pixel from each stratum is selected that has a matching surface in the previous frame, using motion vectors generated during the rendering process. A comparison of the depth, the normals, or the visibility buffer data can be used to determine whether a given surface is the same in the current frame and the previous frame, and if so the parameters of the surface from the previous frame's G-buffer are used to patch the G-buffer for the current frame.
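A sketch of the per-stratum selection under stated assumptions: surfaces are reduced to a depth plus a normal, the match tolerances are invented, and the two input arrays stand in for real G-buffer and motion-vector lookups.

    #include <cmath>
    #include <optional>
    #include <vector>

    // Hypothetical per-pixel G-buffer record.
    struct Surface {
        float depth;
        float nx, ny, nz; // world-space normal
    };

    // Illustrative test for whether a current and a previous surface are the
    // same geometry; the depth and normal tolerances are assumptions.
    bool sameSurface(const Surface& cur, const Surface& prev) {
        float dot = cur.nx * prev.nx + cur.ny * prev.ny + cur.nz * prev.nz;
        return std::fabs(cur.depth - prev.depth) < 0.01f * cur.depth && dot > 0.9f;
    }

    // Backward-project each pixel of one stratum into the previous frame via
    // its motion vector and return the first pixel whose previous-frame
    // surface matches; that surface's parameters would then be used to patch
    // the current G-buffer.
    std::optional<int> selectStratumPixel(
            const std::vector<Surface>& stratumCur,      // current-frame surfaces
            const std::vector<Surface>& stratumPrevHit)  // surfaces the motion
                                                         // vectors land on
    {
        for (int i = 0; i < (int)stratumCur.size(); ++i)
            if (sameSurface(stratumCur[i], stratumPrevHit[i]))
                return i;
        return std::nullopt; // no match: no temporal gradient for this stratum
    }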
Abstract:
Systems and methods of the present disclosure relate to fine-grained interleaved rendering applications in path tracing for cloud computing environments. For example, a renderer and a rendering process may be employed for ray or path tracing and image-space filtering that interleaves the pixels of a frame into partial image fields and corresponding reduced-resolution images that are individually processed in parallel. Parallelization techniques described herein may allow for high quality rendered frames in less time, thereby reducing latency (or lag, in gaming applications) in high performance applications.
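A sketch of the pixel-to-field mapping this implies, assuming a 2x2 interleave factor; the factor and the names are illustrative.

    // Map a full-frame pixel (x, y) to one of four partial image fields under
    // a 2x2 interleave.
    struct FieldCoord {
        int field;  // which reduced-resolution image (0..3)
        int fx, fy; // pixel position inside that field
    };

    FieldCoord interleave2x2(int x, int y) {
        return { (x % 2) + 2 * (y % 2), x / 2, y / 2 };
    }

    // Inverse mapping, used when merging the fields back into the full frame.
    void deinterleave2x2(const FieldCoord& c, int& x, int& y) {
        x = 2 * c.fx + (c.field % 2);
        y = 2 * c.fy + (c.field / 2);
    }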
Abstract:
The disclosure provides a renderer and a rendering process employing ray tracing and image-space filtering that interleaves the pixels of a frame into partial image fields and corresponding reduced-resolution images that are individually processed in parallel. In one example, the renderer includes: (1) an interface configured to receive scene information for rendering a full frame, and (2) a graphics processing system, coupled to the interface, configured to separate pixels of the full frame into different partial image fields that each include a unique set of interleaved pixels, render reduced-resolution images of the full frame by ray tracing the different partial image fields in parallel, independently apply image-space filtering to the reduced-resolution images in parallel, and merge the reduced-resolution images to provide a full rendered frame.
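A CPU-side sketch of that pipeline, again assuming a 2x2 interleave; std::thread, the flat float images, and the stub stages stand in for the GPU ray tracing and filtering the abstract describes.

    #include <thread>
    #include <vector>

    using Image = std::vector<float>;

    // Stub stages standing in for GPU ray tracing and image-space filtering.
    Image renderField(int field, int w, int h) { return Image(size_t(w) * h, float(field)); }
    void  filterInPlace(Image&) { /* a denoising pass would run here */ }

    Image renderFrame(int fullW, int fullH) {
        const int fw = fullW / 2, fh = fullH / 2;
        std::vector<Image> fields(4);
        std::vector<std::thread> workers;
        for (int f = 0; f < 4; ++f)
            workers.emplace_back([&fields, f, fw, fh] {
                fields[f] = renderField(f, fw, fh); // reduced-resolution ray tracing
                filterInPlace(fields[f]);           // independent image-space filter
            });
        for (auto& t : workers) t.join();

        // Merge: scatter each field's pixels back to their interleaved positions.
        Image full(size_t(fullW) * fullH);
        for (int f = 0; f < 4; ++f)
            for (int fy = 0; fy < fh; ++fy)
                for (int fx = 0; fx < fw; ++fx) {
                    int x = 2 * fx + (f % 2), y = 2 * fy + (f / 2);
                    full[size_t(y) * fullW + x] = fields[f][size_t(fy) * fw + fx];
                }
        return full;
    }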
Abstract:
A computing system and method for representing volumetric data for a scene. One embodiment of the computing system includes: (1) a memory configured to store a three-dimensional (3D) clipmap data structure having at least one clip level and at least one mip level, and (2) a processor configured to generate voxelized data for a scene and cause the voxelized data to be stored in the 3D clipmap data structure.
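A sketch of what such a data structure might look like, assuming clip levels of constant resolution and doubling extent around a focal point plus mip levels of halving resolution over the full scene; the float voxel payload and the parameterization are assumptions.

    #include <algorithm>
    #include <vector>

    struct Level {
        float center[3];           // world-space center of the region covered
        float extent;              // world-space edge length of that region
        int   res;                 // voxels per axis
        std::vector<float> voxels; // res^3 payload (float is an assumed format)
    };

    // Build numClip clip levels (constant resolution, doubling extent around
    // the focal point) followed by numMip mip levels (full scene extent,
    // halving resolution).
    std::vector<Level> makeClipmap(const float focus[3], float sceneExtent,
                                   int res, int numClip, int numMip) {
        std::vector<Level> levels;
        float extent = sceneExtent / float(1 << numClip); // finest clip region
        for (int i = 0; i < numClip; ++i, extent *= 2.0f)
            levels.push_back({{focus[0], focus[1], focus[2]}, extent, res,
                              std::vector<float>(size_t(res) * res * res)});
        int r = res;
        for (int i = 0; i < numMip; ++i, r = std::max(1, r / 2))
            levels.push_back({{0.0f, 0.0f, 0.0f}, sceneExtent, r,
                              std::vector<float>(size_t(r) * r * r)});
        return levels;
    }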
Abstract:
Approaches presented herein can reduce temporal lag that may be introduced in a generated image sequence that utilizes temporal accumulation for denoising in dynamic scenes. A fast historical frame can be generated along with a full historical frame generated for a denoising process, with the fast historical frame being accumulated using an exponential moving average with a significantly higher blend weight. This fast historical frame can be used to determine a clamping window that can be used to clamp a corresponding full historical value before, or after, reprojection. The fast historical blend weight can be adjusted to control the amount of noise versus temporal lag in an image sequence. In some embodiments, differences between fast and full historical values can also be used to determine an amount of spatial filtering to be applied.
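A minimal sketch of the fast/full history scheme, assuming scalar per-pixel values; the blend weights and the clamping-window half-width are illustrative.

    #include <algorithm>

    // Per-pixel temporal state: a slow "full" history and a "fast" history
    // accumulated with a much higher blend weight.
    struct History {
        float full = 0.0f; // heavy smoothing, more temporal lag
        float fast = 0.0f; // light smoothing, less lag, more noise
    };

    float resolve(History& h, float current,
                  float fullAlpha = 0.05f, float fastAlpha = 0.4f, float k = 0.1f) {
        // Exponential moving averages; fastAlpha >> fullAlpha.
        h.fast += fastAlpha * (current - h.fast);
        h.full += fullAlpha * (current - h.full);

        // Clamp the full history to a window around the fast history; raising
        // fastAlpha or shrinking k trades smoothing for reduced lag.
        h.full = std::clamp(h.full, h.fast - k, h.fast + k);
        return h.full;
    }
    // The gap |full - fast| can likewise drive how much spatial filtering to apply.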
Abstract:
A system for, and method of, computing reduced-resolution indirect illumination using interpolated directional incoming radiance and a graphics processing subsystem incorporating the system or the method. In one embodiment, the system includes: (1) a cone tracing shader executable in a graphics processing unit to compute directional incoming radiance cones for sparse pixels and project the directional incoming radiance cones on a basis and (2) an interpolation shader executable in the graphics processing unit to compute outgoing radiance values for untraced pixels based on directional incoming radiance values for neighboring ones of the sparse pixels.
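A sketch of the sparse-trace-then-interpolate structure, assuming a 4-pixel stride, bilinear interpolation, and a three-coefficient stand-in for the projection basis; the actual basis and shader stages are not specified here.

    #include <algorithm>
    #include <vector>

    // Directional incoming radiance projected onto a basis; three
    // coefficients are an assumed stand-in for that basis.
    struct Radiance { float c[3] = {0, 0, 0}; };

    // Stand-in for the cone tracing shader: expensive, so run sparsely.
    Radiance traceCones(int x, int y) { return {{float(x), float(y), 1.0f}}; }

    std::vector<Radiance> indirectIllumination(int w, int h, int stride = 4) {
        // Pass 1: radiance at sparse pixels only, with one extra row/column so
        // every pixel has four sparse neighbors (assumes w, h are multiples of
        // stride).
        int sw = w / stride + 1, sh = h / stride + 1;
        std::vector<Radiance> sparse(size_t(sw) * sh);
        for (int sy = 0; sy < sh; ++sy)
            for (int sx = 0; sx < sw; ++sx)
                sparse[size_t(sy) * sw + sx] = traceCones(
                    std::min(sx * stride, w - 1), std::min(sy * stride, h - 1));

        // Pass 2 (stand-in for the interpolation shader): bilinear
        // interpolation of the sparse radiance for the untraced pixels.
        std::vector<Radiance> out(size_t(w) * h);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                int sx = x / stride, sy = y / stride;
                float tx = float(x % stride) / stride, ty = float(y % stride) / stride;
                const Radiance& q00 = sparse[size_t(sy) * sw + sx];
                const Radiance& q10 = sparse[size_t(sy) * sw + sx + 1];
                const Radiance& q01 = sparse[size_t(sy + 1) * sw + sx];
                const Radiance& q11 = sparse[size_t(sy + 1) * sw + sx + 1];
                for (int k = 0; k < 3; ++k)
                    out[size_t(y) * w + x].c[k] =
                        (1 - ty) * ((1 - tx) * q00.c[k] + tx * q10.c[k]) +
                        ty       * ((1 - tx) * q01.c[k] + tx * q11.c[k]);
            }
        return out;
    }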
Abstract:
The disclosure provides a virtual view broadcaster, a cloud-based renderer, and a method of providing stereoscopic images. In one embodiment, the method includes (1) generating a monoscopic set of rendered images and (2) converting the set of rendered images into a stereoscopic pair of images employing depth information from the monoscopic set of rendered images and raymarching.
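A simplified sketch of depth-based view synthesis from the monoscopic image; note it uses plain forward warping in place of the raymarching the abstract describes (which would also resolve occlusions and holes), and the disparity model, image layout, and constants are assumptions.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct View { std::vector<float> color, depth; int w, h; };

    View synthesizeEye(const View& mono, float eyeOffset, float focal) {
        View eye{std::vector<float>(mono.color.size(), 0.0f), mono.depth,
                 mono.w, mono.h};
        for (int y = 0; y < mono.h; ++y)
            for (int x = 0; x < mono.w; ++x) {
                float z = std::max(mono.depth[size_t(y) * mono.w + x], 1e-3f);
                // Disparity shrinks with distance; eyeOffset is +/- half the
                // interocular distance for the right/left eye.
                int xs = x + int(std::lround(eyeOffset * focal / z));
                if (xs >= 0 && xs < mono.w)
                    eye.color[size_t(y) * mono.w + xs] =
                        mono.color[size_t(y) * mono.w + x];
            }
        return eye;
    }
    // Stereo pair: synthesizeEye(mono, +d/2, f) and synthesizeEye(mono, -d/2, f).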
Abstract:
A graphics processing subsystem and method for computing a 3D clipmap. One embodiment of the subsystem includes: (1) a renderer operable to render a primitive surface representable by a 3D clipmap, (2) a geometry shader (GS) configured to select respective major-plane viewports for a plurality of clipmap levels, the major-plane viewports being sized to represent full spatial extents of the 3D clipmap relative to a render target (RT) for the plurality of clipmap levels, (3) a rasterizer configured to employ the respective major-plane viewports and the RT to rasterize a projection of the primitive surface onto a major plane corresponding to the respective major-plane viewports into pixels representing fragments of the primitive surface for each of the plurality of clipmap levels, and (4) a plurality of pixel shader (PS) instances configured to transform the fragments into respective voxels in the plurality of clipmap levels, thereby voxelizing the primitive surface.
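A small sketch of the major-plane selection the geometry shader performs: project each primitive along the axis its normal is most aligned with, so rasterization yields the densest fragment coverage. The types are illustrative; the rasterizer and pixel shader stages are only noted in comments.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }

    // Returns 0, 1, or 2 for the X, Y, or Z major plane: the viewport the GS
    // would select for this triangle at a given clipmap level.
    int majorAxis(const Vec3& v0, const Vec3& v1, const Vec3& v2) {
        Vec3 n = cross({v1.x - v0.x, v1.y - v0.y, v1.z - v0.z},
                       {v2.x - v0.x, v2.y - v0.y, v2.z - v0.z});
        float ax = std::fabs(n.x), ay = std::fabs(n.y), az = std::fabs(n.z);
        return (ax >= ay && ax >= az) ? 0 : (ay >= az ? 1 : 2);
    }
    // Each fragment rasterized on that plane would then be written by a pixel
    // shader instance into the corresponding voxel of the clipmap level.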