Abstract:
An apparatus and method are described for efficiently rendering and transmitting video to a remote display. For example, one embodiment of a remote display apparatus comprises: a display engine to render a sequence of video images; an encoder to compress the sequence of video images to generate a sequence of compressed video images; a network interface controller to transmit the compressed video images over a network link to a remote display; a plurality of buffer pointer registers to store read pointers and write pointers identifying read locations and write locations, respectively, in a frame buffer and a compressed stream buffer; a central processing unit (CPU) to initialize the read pointers and write pointers for processing one or more of the video images; and the display engine to access a first write pointer to write to a specified location in the frame buffer, the encoder to begin reading from the frame buffer based on a first read pointer value, the encoder to write to the compressed stream buffer based on a second write pointer value, and the network interface controller to read from the compressed stream buffer based on a second read pointer value, the first and second write and read pointer values to be updated without intervention from the CPU as the display engine writes to the frame buffer, the encoder reads from the frame buffer and writes to the compressed stream buffer, and the network interface controller reads from the compressed stream buffer.
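The pointer hand-off described above can be modeled in a few lines of software. The Python sketch below is a minimal illustration, assuming simple circular buffers and string payloads; the class and variable names are hypothetical, and only the division of pointer updates among the three stages mirrors the embodiment.

# Minimal software model of the pointer-based hand-off: each stage advances its
# own pointer as it produces or consumes data, with no CPU involvement after
# initialization. Buffer sizes, stage names, and payloads are illustrative
# assumptions, not details of the embodiment.
class RingBuffer:
    def __init__(self, size):
        self.data = [None] * size
        self.size = size
        self.write_ptr = 0   # initialized once by the CPU
        self.read_ptr = 0    # initialized once by the CPU

    def write(self, item):
        self.data[self.write_ptr % self.size] = item
        self.write_ptr += 1          # updated by the producing stage itself

    def read(self):
        item = self.data[self.read_ptr % self.size]
        self.read_ptr += 1           # updated by the consuming stage itself
        return item

    def readable(self):
        return self.read_ptr < self.write_ptr

# The frame buffer sits between the display engine and the encoder; the
# compressed stream buffer sits between the encoder and the NIC.
frame_buf, stream_buf = RingBuffer(4), RingBuffer(4)

for frame in range(3):
    frame_buf.write(f"frame-{frame}")            # display engine renders
    while frame_buf.readable():
        raw = frame_buf.read()                   # encoder reads rendered frame
        stream_buf.write(f"compressed({raw})")   # encoder writes compressed frame
    while stream_buf.readable():
        print("NIC sends:", stream_buf.read())   # NIC transmits to remote display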
Abstract:
In an embodiment, a system includes a graphics processing unit (GPU) that includes one or more GPU engines, and a microcontroller. The microcontroller is to assign a respective schedule slot for each of a plurality of virtual machines (VMs). When a particular VM is scheduled to access a first GPU engine, the particular VM has exclusive access to the first GPU engine. Other embodiments are described and claimed.
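The slot-based arbitration lends itself to a short sketch. The Python below assumes a round-robin microcontroller policy and a single engine; the VM names, slot count, and scheduler structure are illustrative assumptions.

# A minimal sketch of slot-based GPU engine scheduling, assuming a simple
# round-robin policy in the microcontroller. Names and slot counts are
# illustrative assumptions.
from itertools import cycle

class GpuEngine:
    def __init__(self, name):
        self.name = name
        self.owner = None          # VM currently holding exclusive access

    def run(self, vm, work):
        assert self.owner == vm, "only the scheduled VM may access this engine"
        return f"{self.name} executed {work} for {vm}"

def schedule(engine, vms, slots):
    """Give each VM exclusive access to the engine during its assigned slot."""
    for slot, vm in zip(range(slots), cycle(vms)):
        engine.owner = vm                      # microcontroller grants the slot
        print(f"slot {slot}:", engine.run(vm, "render-batch"))
        engine.owner = None                    # slot expires, access revoked

schedule(GpuEngine("engine-0"), ["vm-a", "vm-b", "vm-c"], slots=6)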
Abstract:
A mechanism is described for facilitating recognition, reidentification, and security in machine learning at autonomous machines. A method of embodiments, as described herein, includes facilitating a camera to detect one or more objects within a physical vicinity, the one or more objects including a person, and the physical vicinity including a house, where detecting includes capturing one or more images of one or more portions of a body of the person. The method may further include extracting body features based on the one or more portions of the body, comparing the extracted body features with feature vectors stored at a database, and building a classification model based on the extracted body features over a period of time to facilitate recognition or reidentification of the person independent of facial recognition of the person.
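The comparison against stored feature vectors can be illustrated with a minimal sketch, assuming body features are fixed-length vectors matched by cosine similarity; the extractor, the similarity threshold, and the database contents are hypothetical.

# A minimal sketch of the comparison step, assuming body features are
# fixed-length vectors and cosine similarity is the matching metric; the
# threshold and database layout are illustrative assumptions.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def reidentify(extracted, database, threshold=0.9):
    """Return the identity whose stored feature vector best matches, if any."""
    best_id, best_score = None, threshold
    for identity, stored in database.items():
        score = cosine(extracted, stored)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

db = {"resident-1": [0.9, 0.1, 0.4], "resident-2": [0.2, 0.8, 0.5]}
print(reidentify([0.88, 0.15, 0.42], db))   # matches resident-1 without any face data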
Abstract:
Methods and apparatus relating to scalar core integration in a graphics processor. In an example, an apparatus comprises a processor to receive a set of operations for a graphics workload from a host complex, determine a first subset of operations in the set of operations that is suitable for execution by a scalar processor complex of the graphics processing device and a second subset of operations in the set of operations that is suitable for execution by a vector processor complex of the graphics processing device, assign the first subset of operations to the scalar processor complex for execution to generate a first set of outputs, and assign the second subset of operations to the vector processor complex for execution to generate a second set of outputs. Other embodiments are also disclosed and claimed.
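The partitioning step can be sketched as follows, assuming each operation carries a data-width tag that decides its suitability; the tagging rule and the two executor functions are illustrative assumptions rather than the claimed hardware behavior.

# A minimal sketch of splitting a workload between scalar and vector complexes,
# assuming each operation is tagged with the data width it touches; the tagging
# rule and executors are illustrative assumptions.
def is_scalar_suitable(op):
    return op["width"] == 1            # single-element ops go to the scalar complex

def run_scalar(ops):
    return [f"scalar:{op['name']}" for op in ops]

def run_vector(ops):
    return [f"vector:{op['name']}" for op in ops]

workload = [
    {"name": "update_counter", "width": 1},
    {"name": "shade_pixels", "width": 1024},
    {"name": "branch_decision", "width": 1},
]

scalar_ops = [op for op in workload if is_scalar_suitable(op)]
vector_ops = [op for op in workload if not is_scalar_suitable(op)]
outputs = run_scalar(scalar_ops) + run_vector(vector_ops)   # first and second sets of outputs
print(outputs)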
Abstract:
Methods and apparatus relating to techniques for avoiding cache lookup for a cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive a completion acknowledgment from a plurality of graphics processing units and, in response to a determination that a workload is finished, to terminate one or more communication connections on an interconnect bridge. Other embodiments are also disclosed and claimed.
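A minimal sketch of the tear-down logic follows, assuming one logical connection per GPU that is closed once every completion acknowledgment has arrived; the connection object and GPU identifiers are hypothetical.

# A minimal sketch: track one connection per GPU and close them all once the
# last completion acknowledgment arrives. Names are illustrative assumptions.
class BridgeConnection:
    def __init__(self, gpu_id):
        self.gpu_id, self.open = gpu_id, True

    def close(self):
        self.open = False
        print(f"connection to {self.gpu_id} terminated")

connections = {gpu: BridgeConnection(gpu) for gpu in ("gpu-0", "gpu-1", "gpu-2")}
pending_acks = set(connections)

def on_completion_ack(gpu_id):
    pending_acks.discard(gpu_id)
    if not pending_acks:                      # workload finished on every GPU
        for conn in connections.values():
            conn.close()

for gpu in ("gpu-1", "gpu-0", "gpu-2"):
    on_completion_ack(gpu)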
Abstract:
An apparatus and method are described for allocating local memories to virtual machines. For example, one embodiment of an apparatus comprises: a command streamer to queue commands from a plurality of virtual machines (VMs) or applications, the commands to be distributed from the command streamer and executed by graphics processing resources of a graphics processing unit (GPU); a tile cache to store graphics data associated with the plurality of VMs or applications as the commands are executed by the graphics processing resources; and tile cache allocation hardware logic to allocate a first portion of the tile cache to a first VM or application and a second portion of the tile cache to a second VM or application; the tile cache allocation hardware logic to further allocate a first region in system memory to store spill-over data when the first portion of the tile cache and/or the second portion of the tile cache becomes full.
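The allocation and spill-over behavior can be sketched as below, assuming a fixed tile quota per VM and a spill region modeled as a plain list; the quota sizes and names are illustrative assumptions.

# A minimal sketch of per-VM tile cache partitioning with spill-over to system
# memory, assuming a fixed tile quota per VM. Quotas and names are illustrative.
class TileCache:
    def __init__(self, quotas):
        self.quotas = quotas                       # tiles allowed per VM
        self.cache = {vm: [] for vm in quotas}     # on-chip portions
        self.spill = {vm: [] for vm in quotas}     # region allocated in system memory

    def store(self, vm, tile):
        if len(self.cache[vm]) < self.quotas[vm]:
            self.cache[vm].append(tile)            # fits in the VM's cache portion
        else:
            self.spill[vm].append(tile)            # portion full: spill over

tc = TileCache({"vm-a": 2, "vm-b": 4})
for i in range(4):
    tc.store("vm-a", f"tile-{i}")
print(tc.cache["vm-a"], tc.spill["vm-a"])   # two tiles cached, two spilled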
Abstract:
Embodiments described herein provide for an instruction and associated logic to enable a processing resource including a tensor accelerator to perform optimized computation of sparse submatrix operations. One embodiment provides hardware logic to apply a numerical transform to matrix data to increase the sparsity of the data. Increasing the sparsity may result in a higher compression ratio when the matrix data is compressed.
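One way to picture the effect is the sketch below, which uses thresholding of near-zero values as an illustrative stand-in for the numerical transform and run-length size as a crude proxy for the compressor; neither choice is taken from the embodiment.

# A minimal sketch of a numerical transform that raises sparsity before
# compression. Thresholding is an illustrative choice of transform, and
# run-length coding stands in for the compressor.
def threshold(matrix, eps=1e-3):
    """Zero out entries whose magnitude is below eps, increasing sparsity."""
    return [[0.0 if abs(v) < eps else v for v in row] for row in matrix]

def run_length_size(matrix):
    """Crude proxy for compressed size: count value changes along each row."""
    size = 0
    for row in matrix:
        prev = object()
        for v in row:
            if v != prev:
                size += 1
                prev = v
    return size

m = [[0.0004, 2.0, 2.0], [0.0002, 0.0001, 5.0]]
print(run_length_size(m), run_length_size(threshold(m)))   # sparser input compresses better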
Abstract:
Methods and apparatus relating to predictive page fault handling. In an example, an apparatus comprises a processor to receive a virtual address that triggered a page fault for a compute process, check a virtual memory space for a virtual memory allocation for the compute process that triggered the page fault and manage the page fault according to one of a first protocol in response to a determination that the virtual address that triggered the page fault is a last page in the virtual memory allocation for the compute process, or a second protocol in response to a determination that the virtual address that triggered the page fault is not a last page in the virtual memory allocation for the compute process. Other embodiments are also disclosed and claimed.
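The choice between the two protocols can be sketched as follows, assuming 4 KiB pages, that the first protocol maps only the faulting last page, and that the second protocol predictively maps ahead through the rest of the allocation; both protocol bodies are illustrative assumptions.

# A minimal sketch of the dispatch between the two protocols; the page size and
# both protocol bodies are illustrative assumptions.
PAGE = 4096

def handle_fault(fault_va, alloc_base, alloc_size, map_page):
    last_page = alloc_base + alloc_size - PAGE
    page = fault_va - (fault_va % PAGE)
    if page == last_page:
        map_page(page)                       # first protocol: map only the last page
    else:
        addr = page
        while addr <= last_page:             # second protocol: predictively map ahead
            map_page(addr)
            addr += PAGE

handle_fault(0x10_2000, alloc_base=0x10_0000, alloc_size=8 * PAGE,
             map_page=lambda a: print(f"mapped page at {a:#x}"))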
Abstract:
Apparatus and method for scalable content protection. For example, one embodiment of an apparatus comprises: cryptographic management circuitry to securely store one or more keys associated with one or more media apps/applications; a plurality of processing engines, each processing engine comprising circuitry to process media content of the one or more media apps/applications; and a scheduler to schedule processing of the media content by the processing engines; wherein the cryptographic management circuitry is to restore a first cryptographic state including a first key associated with a first media app/application and/or first media content responsive to a request to process the first media content on a first processing engine.
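The restore step can be sketched as below, assuming per-app cryptographic states are saved and restored whole when content is scheduled onto an engine; the state contents, app names, and engine model are hypothetical.

# A minimal sketch of restoring a per-app cryptographic state on scheduling;
# keys, counters, and engine objects are illustrative assumptions.
saved_states = {
    "player-app": {"key": b"\x01" * 16, "counter": 42},
    "capture-app": {"key": b"\x02" * 16, "counter": 7},
}

class ProcessingEngine:
    def __init__(self, name):
        self.name, self.crypto_state = name, None

    def process(self, app, content):
        print(f"{self.name}: {content} for {app} using key {self.crypto_state['key'][:1]!r}...")

def schedule(engine, app, content):
    engine.crypto_state = saved_states[app]   # restore the app's cryptographic state
    engine.process(app, content)

schedule(ProcessingEngine("engine-0"), "player-app", "decode frame 0")
schedule(ProcessingEngine("engine-1"), "capture-app", "encode frame 0")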
Abstract:
One embodiment provides for a computing device within an autonomous vehicle, the computing device comprising a wireless network device to enable a wireless data connection with an autonomous vehicle network, a set of multiple processors including a general-purpose processor and a general-purpose graphics processor, the set of multiple processors to execute a compute manager to manage execution of compute workloads associated with the autonomous vehicle, the compute workloads associated with autonomous operations of the autonomous vehicle, and offload logic configured to execute on the set of multiple processors, the offload logic to determine to offload one or more of the compute workloads to one or more autonomous vehicles within range of the wireless network device.
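The offload decision can be sketched as follows, assuming it is driven by local processor utilization and the set of peer vehicles currently in wireless range; the threshold and peer identifiers are illustrative assumptions.

# A minimal sketch of the offload decision; the utilization threshold, peer
# selection, and identifiers are illustrative assumptions.
def choose_offloads(workloads, local_utilization, peers_in_range, threshold=0.8):
    """Return (run_locally, offload_plan) for the pending compute workloads."""
    local, offloaded = [], []
    for work in workloads:
        if local_utilization > threshold and peers_in_range:
            peer = peers_in_range[len(offloaded) % len(peers_in_range)]
            offloaded.append((work, peer))     # hand the workload to a nearby vehicle
        else:
            local.append(work)
    return local, offloaded

print(choose_offloads(["lane-detect", "route-replan"], 0.93, ["vehicle-17", "vehicle-42"]))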