Abstract:
Systems, apparatuses, and methods provide technology for smart work spaces in ubiquitous computing environments. The technology may determine a task to be performed in a smart work space and perform task modeling, wherein the task modeling includes determining one or more user interfaces involved with the task. One or more placements may be determined for the one or more user interfaces based on one or more ergonomic conditions, an incidence of interaction, and a length of time of interaction. The technology may position the one or more user interfaces in the smart work space in accordance with the determined one or more placements.
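The following Python sketch is only a rough illustration of the placement ranking the abstract describes; the Placement fields, the weights, and the rank_placements function are hypothetical stand-ins, not the disclosed implementation.

```python
# Illustrative sketch (hypothetical names): score candidate placements for a
# task's user interfaces using the three factors named in the abstract.
from dataclasses import dataclass

@dataclass
class Placement:
    location: str          # e.g. "desk surface", "wall display"
    ergonomic_score: float # 0..1, higher is more comfortable for this user
    incidence: int         # how often the UI is interacted with per hour
    duration_s: float      # average length of each interaction, in seconds

def rank_placements(candidates, w_ergo=0.5, w_incidence=0.3, w_duration=0.2):
    """Return candidates ordered best-first by a weighted score."""
    max_incidence = max(p.incidence for p in candidates) or 1
    max_duration = max(p.duration_s for p in candidates) or 1.0
    def score(p):
        return (w_ergo * p.ergonomic_score
                + w_incidence * p.incidence / max_incidence
                + w_duration * p.duration_s / max_duration)
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    candidates = [
        Placement("desk surface", 0.9, incidence=12, duration_s=40.0),
        Placement("wall display", 0.6, incidence=3, duration_s=200.0),
    ]
    best = rank_placements(candidates)[0]
    print("place UI at:", best.location)
```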
Abstract:
Methods, apparatus, and articles of manufacture to provide extended graphics processing capabilities are disclosed. A disclosed example method involves sending a display panel parameter to a shared library module. The display panel parameter is sent by a programmable driver interface in communication between the shared library module and a graphics hardware device driver. The shared library module includes a first graphics processing capability. The graphics hardware device driver includes a second graphics processing capability different from the first graphics processing capability. The example method also involves performing a render operation via the programmable driver interface on a frame buffer based on the first graphics processing capability. The first graphics processing capability is received at the programmable driver interface from the shared library module based on the display panel parameter. The frame buffer is output to a display.
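As a rough illustration of the capability lookup described above, the sketch below routes a render operation through a programmable driver interface that queries a shared library module by display panel parameter and falls back to the driver's own capability. All class names and the capability functions are hypothetical stand-ins.

```python
# Illustrative sketch (hypothetical names, not the patented implementation):
# the programmable driver interface sends a display panel parameter to the
# shared library module, applies the returned render capability to a frame
# buffer, and otherwise falls back to the device driver's built-in capability.

def sharpen(frame_buffer):            # stand-in for the shared library capability
    return [min(255, int(p * 1.1)) for p in frame_buffer]

def passthrough(frame_buffer):        # stand-in for the driver's own capability
    return list(frame_buffer)

class SharedLibraryModule:
    _capabilities = {"panel_a": sharpen}
    def get_capability(self, panel_param):
        return self._capabilities.get(panel_param)

class ProgrammableDriverInterface:
    def __init__(self, shared_lib, driver_capability):
        self.shared_lib = shared_lib
        self.driver_capability = driver_capability
    def render(self, panel_param, frame_buffer):
        # Send the panel parameter to the shared library; use its capability
        # if one is registered, otherwise the driver's capability.
        capability = self.shared_lib.get_capability(panel_param) or self.driver_capability
        return capability(frame_buffer)

pdi = ProgrammableDriverInterface(SharedLibraryModule(), passthrough)
print(pdi.render("panel_a", [10, 200, 250]))   # shared-library path
print(pdi.render("panel_b", [10, 200, 250]))   # driver fallback path
```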
Abstract:
An apparatus to facilitate video motion smoothing is disclosed. The apparatus comprises one or more processors, including a graphics processor, with circuitry configured to: receive a video stream; decode the video stream to generate a motion vector map and a plurality of video image frames; analyze the motion vector map to detect a plurality of candidate frames, wherein the plurality of candidate frames comprise a period of discontinuous motion in the plurality of video image frames and are determined based on a classification generated via a convolutional neural network (CNN); generate, via a generative adversarial network (GAN), one or more synthetic frames based on the plurality of candidate frames; insert the one or more synthetic frames between the plurality of candidate frames to generate up-sampled video frames; and transmit the up-sampled video frames for display.
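The sketch below mirrors the overall pipeline only; the CNN classifier and GAN generator are replaced with trivial stand-ins (a motion-jump threshold and frame averaging) so the candidate detection and frame insertion flow is visible, and the threshold value is an arbitrary assumption.

```python
# Illustrative pipeline sketch only; the CNN and GAN stages are stubbed out.

def classify_discontinuous(motion_vectors, threshold=8.0):
    """Stand-in for the CNN: flag frame indices whose motion jump is too large."""
    candidates = []
    for i in range(1, len(motion_vectors)):
        if abs(motion_vectors[i] - motion_vectors[i - 1]) > threshold:
            candidates.append(i)
    return candidates

def synthesize_frame(frame_a, frame_b):
    """Stand-in for the GAN: produce an intermediate frame (here, an average)."""
    return [(a + b) / 2 for a, b in zip(frame_a, frame_b)]

def smooth(frames, motion_vectors):
    candidates = classify_discontinuous(motion_vectors)
    upsampled = []
    for i, frame in enumerate(frames):
        upsampled.append(frame)
        if i + 1 in candidates:
            upsampled.append(synthesize_frame(frame, frames[i + 1]))
    return upsampled

frames = [[0, 0], [1, 1], [20, 20]]          # toy "decoded" frames
motion_vectors = [0.0, 1.0, 19.0]            # toy per-frame motion magnitudes
print(len(smooth(frames, motion_vectors)))   # 4: one synthetic frame inserted
```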
Abstract:
An apparatus to facilitate inferred object shading is disclosed. The apparatus comprises one or more processors to receive rasterized pixel data and hierarchical data associated with one or more objects and to perform an inferred shading operation on the rasterized pixel data, including using one or more trained neural networks to perform texturing and lighting on the rasterized pixel data to generate a pixel output, wherein the one or more trained neural networks use the hierarchical data to learn a three-dimensional (3D) geometry, latent space, and representation of the one or more objects.
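A minimal sketch of the inference step follows, assuming toy shapes: a single linear layer stands in for the trained networks, and a per-object scale value stands in for the hierarchical data. None of these names come from the disclosure.

```python
# Illustrative sketch only: a tiny linear "network" maps per-pixel features,
# augmented with one value taken from toy hierarchical object data, to RGB.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3)) * 0.1   # stand-in "trained" weights: 4 inputs -> RGB

def shading_network(features):
    """Map per-pixel features to an RGB value (stand-in inference)."""
    return np.tanh(features @ W)

def infer_shading(rasterized_pixels, hierarchy):
    """rasterized_pixels: (N, 3) proxy for positions/normals; hierarchy: nested object data."""
    scale = hierarchy.get("scale", 1.0)   # toy use of the hierarchical data
    features = np.concatenate(
        [rasterized_pixels, np.full((len(rasterized_pixels), 1), scale)], axis=1)
    return shading_network(features)

pixels = rng.random((8, 3))                       # 8 rasterized pixels
hierarchy = {"object": "teapot", "scale": 0.5}    # toy hierarchical data
print(infer_shading(pixels, hierarchy).shape)     # (8, 3) RGB output
```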
Abstract:
An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes one or more processors to facilitate receiving a manifest corresponding to graph nodes representing regions of memory of a remote client machine, the graph nodes corresponding to a command buffer and to associated data structures and kernels of the command buffer used to initialize a hardware accelerator and execute the kernels, and the manifest indicating a destination memory location of each of the graph nodes and dependencies of each of the graph nodes; identifying, based on the manifest, the command buffer and the associated data structures to copy to the host memory; identifying, based on the manifest, the kernels to copy to local memory of the hardware accelerator; and patching addresses in the command buffer copied to the host memory with updated addresses of corresponding locations in the host memory.
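The sketch below walks through a hypothetical manifest to show the copy-and-patch flow: command buffers and data structures are assigned host-memory addresses, kernels are assigned accelerator-memory addresses, and references in the command buffer are patched to the new locations. The manifest layout, node names, and address values are illustrative assumptions.

```python
# Illustrative sketch of a manifest-driven copy-and-patch flow; all fields,
# node names, and addresses are hypothetical.

manifest = {
    "cmd_buffer": {"kind": "command_buffer", "src": 0x1000, "refs": ["tensor_a", "kernel_x"]},
    "tensor_a":   {"kind": "data",           "src": 0x2000, "refs": []},
    "kernel_x":   {"kind": "kernel",         "src": 0x3000, "refs": []},
}

def place_nodes(manifest):
    """Assign command buffers/data to host memory and kernels to accelerator memory."""
    host_mem, accel_mem = {}, {}
    next_host, next_accel = 0x8000, 0xA000
    for name, node in manifest.items():
        if node["kind"] == "kernel":
            accel_mem[name], next_accel = next_accel, next_accel + 0x100
        else:
            host_mem[name], next_host = next_host, next_host + 0x100
    return host_mem, accel_mem

def patch_command_buffer(manifest, host_mem, accel_mem):
    """Rewrite references in the command buffer to the new destination addresses."""
    refs = manifest["cmd_buffer"]["refs"]
    return {ref: host_mem.get(ref, accel_mem.get(ref)) for ref in refs}

host_mem, accel_mem = place_nodes(manifest)
patched = patch_command_buffer(manifest, host_mem, accel_mem)
print({ref: hex(addr) for ref, addr in patched.items()})
# {'tensor_a': '0x8100', 'kernel_x': '0xa000'}
```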
Abstract:
Multi-tile memory management for detecting cross-tile access, providing multi-tile inference scaling with multicasting of data via a copy operation, and providing page migration are disclosed herein. In one embodiment, a graphics processor for a multi-tile architecture includes a first graphics processing unit (GPU) having a memory and a memory controller, a second graphics processing unit (GPU) having a memory, and a cross-GPU fabric to communicatively couple the first and second GPUs. The memory controller is configured to determine whether frequent cross-tile memory accesses occur from the first GPU to the memory of the second GPU in the multi-GPU configuration and to send a message to initiate a data transfer mechanism when such frequent cross-tile memory accesses occur.
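The following sketch illustrates one plausible way a memory controller could flag frequent cross-tile accesses and emit a message to start a data transfer; the counter threshold and message format are assumptions, not the patented mechanism.

```python
# Illustrative sketch only: a per-page counter in the first GPU's memory
# controller flags frequently accessed remote pages and records a message to
# initiate a data transfer (e.g. page migration to local memory).
from collections import Counter

class MemoryController:
    def __init__(self, threshold=4):
        self.remote_access_counts = Counter()
        self.threshold = threshold
        self.migration_requests = []

    def on_memory_access(self, page, is_remote_tile):
        if not is_remote_tile:
            return
        self.remote_access_counts[page] += 1
        if self.remote_access_counts[page] == self.threshold:
            # "Send a message" to initiate the data transfer mechanism.
            self.migration_requests.append({"page": page, "action": "migrate_to_local"})

mc = MemoryController()
for _ in range(5):
    mc.on_memory_access(page=0x42, is_remote_tile=True)   # cross-tile accesses
print(mc.migration_requests)   # one migration request after the 4th remote access
```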
Abstract:
Described herein are techniques to render frame data in UV space and process the UV space data via a machine learning model. One embodiment provides an apparatus including a parallel processor having first circuitry configured to execute operations associated with a three-dimensional (3D) application programming interface (API) to render scene data for a frame in a UV coordinate space, second circuitry configured to execute instructions to perform a matrix multiply accumulate operation associated with a machine learning model that is trained to process the scene data in the UV coordinate space to generate processed scene data in the UV coordinate space, and third circuitry to rasterize the processed scene data in the UV coordinate space into a screen space representation of the scene data.
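A toy end-to-end sketch of the three stages follows; the UV resolution, the matrix multiply-accumulate weights, and the UV-to-screen mapping are placeholders rather than anything specified in the abstract.

```python
# Illustrative sketch: render to UV space, process the UV texels with a matrix
# multiply-accumulate stand-in for the machine learning model, then rasterize
# into screen space via a per-pixel UV lookup. All shapes are toy values.
import numpy as np

rng = np.random.default_rng(1)

def render_in_uv_space(uv_size=(4, 4), channels=3):
    """Stand-in for the 3D API pass that shades the scene in UV coordinates."""
    return rng.random((*uv_size, channels))

def process_uv(uv_texels, weights, bias):
    """Matrix multiply-accumulate over the channel dimension (stand-in ML model)."""
    return uv_texels @ weights + bias

def rasterize_to_screen(uv_texels, uv_to_screen, screen_size=(2, 2)):
    """Gather processed UV texels into screen space via a per-pixel UV lookup."""
    screen = np.zeros((*screen_size, uv_texels.shape[-1]))
    for (sx, sy), (u, v) in uv_to_screen.items():
        screen[sy, sx] = uv_texels[v, u]
    return screen

uv = render_in_uv_space()
weights = rng.standard_normal((3, 3)) * 0.1
processed = process_uv(uv, weights, bias=0.01)
uv_to_screen = {(0, 0): (1, 2), (1, 0): (3, 0), (0, 1): (0, 0), (1, 1): (2, 3)}
print(rasterize_to_screen(processed, uv_to_screen).shape)   # (2, 2, 3)
```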
Abstract:
Described herein, in one embodiment, is a computer-implemented method of estimating dense optical flow using primitive ID data, comprising generating a primitive ID buffer (e.g., a triangle ID buffer) for a frame during rendering, rendering the frame with a unique color for each primitive ID, providing the rendered primitive ID buffers for the current and previous frames, along with a confidence buffer and a sparse optical flow between frames, to a machine learning model, and generating an estimated dense optical flow via the machine learning model based on these inputs.
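In the sketch below the machine learning model is deliberately replaced by a simple heuristic (centroid displacement of matched primitive IDs) so the role of the primitive ID buffers is concrete; the confidence buffer and sparse optical flow inputs are omitted, and all buffer shapes are toy values.

```python
# Illustrative sketch: match primitive IDs between the previous and current
# ID buffers and use each primitive's centroid displacement as the dense flow
# for every pixel it covers. This heuristic stands in for the learned model.
import numpy as np

def centroids(id_buffer):
    """Mean (y, x) position of each primitive ID in the buffer."""
    out = {}
    for pid in np.unique(id_buffer):
        ys, xs = np.nonzero(id_buffer == pid)
        out[int(pid)] = np.array([ys.mean(), xs.mean()])
    return out

def estimate_dense_flow(prev_ids, curr_ids):
    prev_c, curr_c = centroids(prev_ids), centroids(curr_ids)
    flow = np.zeros((*curr_ids.shape, 2))
    for pid, c in curr_c.items():
        if pid in prev_c:
            flow[curr_ids == pid] = c - prev_c[pid]   # per-primitive displacement
    return flow

prev_ids = np.array([[1, 1, 0, 0],
                     [1, 1, 0, 0]])
curr_ids = np.array([[0, 1, 1, 0],
                     [0, 1, 1, 0]])
print(estimate_dense_flow(prev_ids, curr_ids)[0, 1])   # ~[0., 1.]: primitive 1 moved right
```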
Abstract:
Embodiments described herein include software, firmware, and hardware that provide techniques to enable deterministic scheduling across multiple general-purpose graphics processing units. One embodiment provides a multi-GPU architecture with uniform latency. One embodiment provides techniques to distribute memory output based on memory chip thermals. One embodiment provides techniques to enable thermally aware workload scheduling. One embodiment provides techniques to enable end-to-end contracts for workload scheduling on multiple GPUs.
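As a loose illustration of thermally aware workload scheduling only (the other embodiments are not sketched), the code below greedily dispatches each workload to the GPU with the most thermal headroom; the GPU names, thermal limit, and per-workload heating estimate are invented for the example.

```python
# Illustrative sketch of thermally aware scheduling: pick the GPU with the
# largest margin below a thermal limit for each workload in turn.

def pick_gpu(gpu_temps_c, thermal_limit_c=90.0):
    """Return the GPU id with the largest margin below the thermal limit."""
    headroom = {gpu: thermal_limit_c - temp for gpu, temp in gpu_temps_c.items()}
    return max(headroom, key=headroom.get)

def schedule(workloads, gpu_temps_c, heat_per_workload_c=5.0):
    """Greedy assignment; each dispatch is assumed to warm its GPU slightly."""
    temps = dict(gpu_temps_c)
    assignment = {}
    for work in workloads:
        gpu = pick_gpu(temps)
        assignment[work] = gpu
        temps[gpu] += heat_per_workload_c
    return assignment

print(schedule(["kernel_a", "kernel_b", "kernel_c"],
               {"gpu0": 70.0, "gpu1": 62.0}))
# {'kernel_a': 'gpu1', 'kernel_b': 'gpu1', 'kernel_c': 'gpu0'}
```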