Abstract:
Graphics processing systems and methods are described. A graphics processing apparatus may comprise one or more graphics processing cores; a shared buffer accessible to a user mode driver (UMD) associated with an application in an unprivileged domain, the UMD to write one or more commands to the shared buffer; and a controller to parse a workload in the shared buffer to identify one or more commands in the workload, the workload added by the application executing in the unprivileged domain, associate a trigger with a command in the workload, transfer the workload to one or more components of the graphics processing apparatus for execution, and, upon execution of the command associated with the trigger, sample the shared buffer to identify a new workload added to the shared buffer. The one or more components of the graphics processing apparatus automatically execute the new workload added to the shared buffer.
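As a rough illustration of the flow above, the C sketch below (all type and function names are hypothetical) shows a controller tagging the last command of a workload with a trigger and re-sampling the shared buffer when that command executes:

    /* Sketch of trigger-based sampling of a shared command buffer.
     * All types and names are illustrative, not from the patent text. */
    #include <stdio.h>
    #include <stdbool.h>

    #define BUF_SLOTS 8

    typedef struct {
        int  opcode;        /* command written by the user mode driver (UMD) */
        bool has_trigger;   /* controller attaches a trigger to one command  */
    } command_t;

    typedef struct {
        command_t slots[BUF_SLOTS];
        int       count;    /* number of commands currently in the buffer   */
    } shared_buffer_t;

    /* Controller: parse the workload and attach a trigger to its last command. */
    static void parse_and_tag(shared_buffer_t *buf)
    {
        if (buf->count > 0)
            buf->slots[buf->count - 1].has_trigger = true;
    }

    /* Execution engine: run each command; when the triggered command retires,
     * sample the shared buffer again to pick up any newly appended workload. */
    static void execute(shared_buffer_t *buf)
    {
        for (int i = 0; i < buf->count; i++) {
            printf("executing opcode %d\n", buf->slots[i].opcode);
            if (buf->slots[i].has_trigger)
                printf("trigger hit: re-sampling shared buffer for new work\n");
        }
    }

    int main(void)
    {
        shared_buffer_t buf = { .count = 2,
            .slots = { { .opcode = 1 }, { .opcode = 2 } } };
        parse_and_tag(&buf);   /* controller step */
        execute(&buf);         /* hardware step   */
        return 0;
    }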
Abstract:
An apparatus and method for scalable interrupt reporting. For example, one embodiment of an apparatus comprises: a host processor to execute one or more processes having a corresponding one or more process contexts associated therewith; and a graphics processing engine to, upon initiating execution of a first process, determine a current process context associated with the first process including a first pointer to a first system memory region to store an interrupt status, a second pointer to a second system memory region to store interrupt enable and/or interrupt mask data for one or more interrupt events, and address/data values associated with a message signaled interrupt (MSI); the graphics processing engine, in response to an interrupt event, to evaluate the interrupt enable data from the second system memory region to determine whether the interrupt event is enabled, to report the interrupt event, if enabled, by writing a specified value to the first system memory region identified by the first pointer, and to generate a first MSI corresponding to the interrupt event by writing the MSI address/data values to an output accessible by the host processor.
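The C sketch below illustrates one way such a per-process interrupt context could be laid out and consulted; the structure fields, the enable-bit check, and the simulated MSI write are assumptions made for the example, not the patent's actual register or memory layout:

    /* Illustrative layout of a per-process interrupt context; field names are
     * assumptions, not taken from any driver interface. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint32_t *status_region;   /* first pointer: interrupt status in system memory  */
        uint32_t *enable_region;   /* second pointer: per-event enable/mask bits        */
        uint64_t  msi_address;     /* address to write for a message signaled interrupt */
        uint32_t  msi_data;        /* data value written to generate the MSI            */
    } process_irq_context_t;

    /* Report an interrupt event for the current process context, if enabled. */
    static void report_event(process_irq_context_t *ctx, unsigned event_bit,
                             volatile uint32_t *msi_doorbell)
    {
        if (!(*ctx->enable_region & (1u << event_bit)))
            return;                               /* event masked: nothing to report */
        *ctx->status_region |= (1u << event_bit); /* record status via first pointer */
        *msi_doorbell = ctx->msi_data;            /* simulate the MSI write          */
        printf("event %u reported, MSI data 0x%x to 0x%llx\n",
               event_bit, (unsigned)ctx->msi_data,
               (unsigned long long)ctx->msi_address);
    }

    int main(void)
    {
        uint32_t status = 0, enable = 0x2;        /* only event 1 enabled */
        uint32_t doorbell = 0;
        process_irq_context_t ctx = { &status, &enable, 0xFEE00000ull, 0x41 };
        report_event(&ctx, 0, &doorbell);         /* masked: ignored      */
        report_event(&ctx, 1, &doorbell);         /* enabled: reported    */
        return 0;
    }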
Abstract:
Mid-command buffer preemption is described for graphics workloads in a graphics processing environment. In one example, instructions of a first context are executed at a graphics processor; the first context has a sequence of instructions in an addressable buffer, and at least one of the instructions is a preemption instruction. Upon executing the preemption instruction, execution of the first context is stopped before the sequence of instructions is completed. An address is stored for the instruction with which the first context will be resumed. A second context is executed, and upon completion of the execution of the second context, the execution of the first context is resumed at the stored address.
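A minimal C model of this preemption flow, with invented opcodes and structures, might look like the following; the stored resume index stands in for the stored address:

    /* Sketch of mid-command-buffer preemption: stop the first context at a
     * preemption instruction, remember where to resume, run a second context,
     * then resume the first. Opcodes and structures are illustrative only. */
    #include <stdio.h>
    #include <stddef.h>

    enum { OP_DRAW, OP_PREEMPT };

    typedef struct {
        const int *buffer;   /* addressable command buffer          */
        size_t     length;
        size_t     resume;   /* stored address (index) to resume at */
    } context_t;

    /* Execute until the end of the buffer or a preemption instruction. */
    static int run(context_t *ctx, const char *name)
    {
        for (size_t i = ctx->resume; i < ctx->length; i++) {
            if (ctx->buffer[i] == OP_PREEMPT) {
                ctx->resume = i + 1;     /* store address of next instruction */
                printf("%s preempted, will resume at %zu\n", name, ctx->resume);
                return 1;                /* preempted */
            }
            printf("%s: draw %zu\n", name, i);
        }
        return 0;                        /* completed */
    }

    int main(void)
    {
        const int first_cmds[]  = { OP_DRAW, OP_PREEMPT, OP_DRAW };
        const int second_cmds[] = { OP_DRAW };
        context_t first  = { first_cmds,  3, 0 };
        context_t second = { second_cmds, 1, 0 };

        if (run(&first, "first"))   /* stops at the preemption instruction */
            run(&second, "second"); /* second context runs to completion   */
        run(&first, "first");       /* first resumes at the stored address */
        return 0;
    }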
Abstract:
A method, device, and system to distribute code and data stores between volatile and non-volatile memory are described. In one embodiment, the method includes storing one or more static code segments of a software application in a phase change memory and switch (PCMS) device, storing one or more static data segments of the software application in the PCMS device, and storing one or more volatile data segments of the software application in a volatile memory device. The method then allocates an address mapping table with at least a first address pointer to point to each of the one or more static code segments, at least a second address pointer to point to each of the one or more static data segments, and at least a third address pointer to point to each of the one or more volatile data segments.
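A toy C version of such an address mapping table, with made-up segment names and base addresses, could look like this:

    /* Illustrative address mapping table that points static code and static data
     * at a PCMS (non-volatile) region and volatile data at DRAM; the table layout
     * is an assumption for the sketch, not the patent's actual format. */
    #include <stdio.h>

    typedef enum { BACKING_PCMS, BACKING_DRAM } backing_t;

    typedef struct {
        const char   *segment;  /* e.g. ".text", ".rodata", ".bss"     */
        backing_t     backing;  /* which physical device holds it      */
        unsigned long base;     /* hypothetical base address           */
    } map_entry_t;

    int main(void)
    {
        map_entry_t table[] = {
            { ".text",   BACKING_PCMS, 0x100000 },  /* static code segment   */
            { ".rodata", BACKING_PCMS, 0x180000 },  /* static data segment   */
            { ".bss",    BACKING_DRAM, 0x200000 },  /* volatile data segment */
        };
        for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
            printf("%-8s -> %s @ 0x%lx\n", table[i].segment,
                   table[i].backing == BACKING_PCMS ? "PCMS" : "DRAM",
                   table[i].base);
        return 0;
    }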
Abstract:
Examples are disclosed for adjusting a performance state of a graphics subsystem and/or a processor based on a comparison of an average frame rate to a target frame rate and also based on whether the graphics subsystem is in a burst mode or sustained mode of operation.
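One way to express that policy is sketched below in C; the thresholds, state increments, and mode handling are illustrative assumptions rather than values from the disclosure:

    /* Minimal sketch of the described policy: compare the average frame rate to a
     * target and pick a performance state, with burst mode allowed to raise the
     * state more aggressively than sustained mode. Thresholds are made up. */
    #include <stdio.h>

    typedef enum { MODE_BURST, MODE_SUSTAINED } gfx_mode_t;

    static int next_pstate(int pstate, double avg_fps, double target_fps,
                           gfx_mode_t mode)
    {
        if (avg_fps < target_fps)
            return pstate + (mode == MODE_BURST ? 2 : 1);  /* speed up   */
        if (avg_fps > target_fps * 1.1)
            return pstate > 0 ? pstate - 1 : 0;            /* save power */
        return pstate;                                     /* on target  */
    }

    int main(void)
    {
        printf("burst:     %d\n", next_pstate(3, 45.0, 60.0, MODE_BURST));
        printf("sustained: %d\n", next_pstate(3, 45.0, 60.0, MODE_SUSTAINED));
        printf("over:      %d\n", next_pstate(3, 70.0, 60.0, MODE_SUSTAINED));
        return 0;
    }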
Abstract:
Techniques are disclosed for processing rendering engine workload of a graphics system in a secure fashion, wherein at least some security processing of the workload is offloaded from software-based security parsing to hardware-based security parsing. In some embodiments, commands from a given application are received by a user mode driver (UMD), which is configured to generate a command buffer delineated into privileged and/or non-privileged command sections. The delineated command buffer can then be passed by the UMD to a kernel-mode driver (KMD), which is configured to parse and validate only privileged buffer sections, but to issue all other batch buffers with a privilege indicator set to non-privileged. A graphics processing unit (GPU) can receive the privilege-designated batch buffers from the KMD, and is configured to disallow execution of any privileged command from a non-privileged batch buffer, while any privileged commands from privileged batch buffers are unrestricted by the GPU.
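The split responsibility can be sketched as follows in C, with the KMD-side validation and the GPU-side check as separate functions; command encodings, the validation rule, and all names are invented for the example:

    /* Sketch of the split validation path: the KMD validates only sections the
     * UMD marked privileged, and the GPU rejects privileged commands that arrive
     * in a non-privileged batch buffer. Command encodings are invented. */
    #include <stdio.h>
    #include <stdbool.h>

    typedef struct {
        int  opcode;
        bool privileged_op;  /* command requires privilege to execute */
    } cmd_t;

    typedef struct {
        cmd_t cmds[4];
        int   count;
        bool  privileged;    /* privilege indicator set by the KMD    */
    } batch_buffer_t;

    /* KMD path: parse/validate privileged sections only. */
    static bool kmd_validate(const batch_buffer_t *bb)
    {
        if (!bb->privileged)
            return true;                   /* passed through unparsed */
        for (int i = 0; i < bb->count; i++)
            if (bb->cmds[i].opcode < 0)
                return false;              /* example validation rule */
        return true;
    }

    /* GPU path: disallow privileged commands from non-privileged buffers. */
    static void gpu_execute(const batch_buffer_t *bb)
    {
        for (int i = 0; i < bb->count; i++) {
            if (bb->cmds[i].privileged_op && !bb->privileged) {
                printf("blocked privileged opcode %d\n", bb->cmds[i].opcode);
                continue;
            }
            printf("executed opcode %d\n", bb->cmds[i].opcode);
        }
    }

    int main(void)
    {
        batch_buffer_t bb = { .count = 2, .privileged = false,
            .cmds = { { 7, false }, { 9, true } } };
        if (kmd_validate(&bb))
            gpu_execute(&bb);   /* opcode 9 is blocked by the GPU */
        return 0;
    }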
Abstract:
In accordance with some embodiments, the frame generation rate of a graphics process may be monitored in combination with a utilization or workload metric for the graphics process in order to allocate performance resources to the graphics process and, in some cases, between the graphics process and a central processing unit.
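A rough C sketch of such an allocation decision is shown below; the budget split, thresholds, and utilization cutoff are illustrative assumptions:

    /* Rough sketch: shift a shared power budget between the graphics process and
     * the CPU using the observed frame rate and a GPU utilization metric. The
     * policy and numbers are illustrative only. */
    #include <stdio.h>

    typedef struct { int gpu_share; int cpu_share; } budget_t;  /* percent */

    static budget_t allocate(double fps, double target_fps, double gpu_util)
    {
        budget_t b = { 50, 50 };
        if (fps < target_fps && gpu_util > 0.9)
            b = (budget_t){ 70, 30 };   /* GPU-bound: give graphics more headroom */
        else if (fps < target_fps)
            b = (budget_t){ 40, 60 };   /* likely CPU-bound: favor the CPU        */
        return b;
    }

    int main(void)
    {
        budget_t b = allocate(48.0, 60.0, 0.95);
        printf("gpu %d%% / cpu %d%%\n", b.gpu_share, b.cpu_share);
        return 0;
    }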
Abstract:
A method and apparatus for supporting programmable software context state execution during a hardware context restore flow are described. In one example, a context ID is assigned to each graphics application, and each context ID is given a unique context memory buffer, a unique indirect context pointer with a corresponding size, an indirect context offset, and an indirect context buffer address range. When execution of a first context workload is interrupted, the state of the first context workload is saved to the assigned context memory buffer. The indirect context pointer, the indirect context offset, and the size of the indirect context buffer address range are saved to registers that are independent of the saved context state. The context is restored by accessing the saved indirect context pointer, the indirect context offset, and the buffer size.
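The save/restore sequence can be sketched in C as follows, with the pointer, offset, and size kept in a small register-like structure separate from the saved state; all field names are assumptions:

    /* Sketch of saving and restoring a context via an indirect context pointer,
     * offset, and buffer size kept apart from the saved state itself. All field
     * names are assumptions for illustration. */
    #include <stdio.h>
    #include <string.h>

    #define CTX_BUF_SIZE 64

    typedef struct {
        unsigned char state[CTX_BUF_SIZE]; /* per-context memory buffer           */
    } context_buffer_t;

    typedef struct {
        context_buffer_t *indirect_ptr;    /* indirect context pointer            */
        unsigned          offset;          /* indirect context offset             */
        unsigned          size;            /* indirect context buffer size        */
    } context_registers_t;                 /* kept independent of the saved state */

    static void save_context(context_registers_t *regs, context_buffer_t *buf,
                             const unsigned char *live_state, unsigned size)
    {
        memcpy(buf->state, live_state, size); /* save state to the context buffer */
        regs->indirect_ptr = buf;             /* record pointer, offset and size  */
        regs->offset = 0;
        regs->size = size;
    }

    static void restore_context(const context_registers_t *regs,
                                unsigned char *live_state)
    {
        memcpy(live_state, regs->indirect_ptr->state + regs->offset, regs->size);
    }

    int main(void)
    {
        unsigned char live[4] = { 1, 2, 3, 4 }, restored[4] = { 0 };
        context_buffer_t buf;
        context_registers_t regs;

        save_context(&regs, &buf, live, sizeof live);
        restore_context(&regs, restored);
        printf("restored first byte: %d\n", restored[0]);
        return 0;
    }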
Abstract:
Methods and devices to augment volatile memory in a graphics subsystem with certain types of non-volatile memory are described. In one embodiment, a method includes storing one or more static or near-static graphics resources in a non-volatile random access memory (NVRAM). The NVRAM is directly accessible by a graphics processor using at least memory store and load commands. The method also includes the graphics processor executing a graphics application. The graphics processor sends a request, using a memory load command, for an address corresponding to at least one static or near-static graphics resource stored in the NVRAM. The method also includes directly loading the requested graphics resource from the NVRAM into a cache for the graphics processor in response to the memory load command.
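A simplified C sketch of serving a static resource from an NVRAM-backed address via an ordinary memory load, with a tiny on-demand cache, is shown below; the cache and resource layout are invented for illustration:

    /* Sketch of serving a static graphics resource straight from an NVRAM-backed
     * address with an ordinary memory load, filling a small cache on demand. */
    #include <stdio.h>
    #include <stdint.h>

    #define NVRAM_RESOURCES 4

    /* Static/near-static resources (e.g. textures) resident in NVRAM. */
    static const uint32_t nvram[NVRAM_RESOURCES] = { 0xAAAA, 0xBBBB, 0xCCCC, 0xDDDD };

    typedef struct {
        int      valid[NVRAM_RESOURCES];
        uint32_t data[NVRAM_RESOURCES];
    } gpu_cache_t;

    /* A memory load against the NVRAM address fills the GPU cache directly. */
    static uint32_t load_resource(gpu_cache_t *cache, int idx)
    {
        if (!cache->valid[idx]) {
            cache->data[idx] = nvram[idx];   /* direct load, no DRAM staging copy */
            cache->valid[idx] = 1;
        }
        return cache->data[idx];
    }

    int main(void)
    {
        gpu_cache_t cache = { {0}, {0} };
        printf("resource 2 = 0x%X\n", (unsigned)load_resource(&cache, 2));
        printf("resource 2 = 0x%X (cached)\n", (unsigned)load_resource(&cache, 2));
        return 0;
    }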
Abstract:
A computing device for performing scheduling operations for graphics hardware is described herein. The computing device includes a central processing unit (CPU) that is configured to execute an application. The computing device also includes a graphics scheduler configured to operate independently of the CPU. The graphics scheduler is configured to receive work queues relating to workloads from the application executing on the CPU and to perform scheduling operations for any of a number of graphics engines based on the work queues.
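A toy C model of that division of labor is sketched below: the application only enqueues work descriptors, and an independently running scheduler dispatches them across engines; the queue format and round-robin policy are assumptions for the example:

    /* Toy model of a graphics scheduler that runs independently of the CPU-side
     * application: the application only enqueues work descriptors, and the
     * scheduler drains them onto one of several engines. Names are illustrative. */
    #include <stdio.h>

    #define MAX_WORK 8
    #define NUM_ENGINES 2

    typedef struct { int job_id; } work_item_t;

    typedef struct {
        work_item_t items[MAX_WORK];
        int head, tail;
    } work_queue_t;

    /* Application side (runs on the CPU): submit work, then continue. */
    static void submit(work_queue_t *q, int job_id)
    {
        q->items[q->tail++ % MAX_WORK].job_id = job_id;
    }

    /* Scheduler side (independent of the CPU): dispatch to graphics engines. */
    static void schedule(work_queue_t *q)
    {
        int engine = 0;
        while (q->head < q->tail) {
            work_item_t w = q->items[q->head++ % MAX_WORK];
            printf("engine %d runs job %d\n", engine, w.job_id);
            engine = (engine + 1) % NUM_ENGINES;   /* simple round-robin */
        }
    }

    int main(void)
    {
        work_queue_t q = { .head = 0, .tail = 0 };
        submit(&q, 100);
        submit(&q, 101);
        submit(&q, 102);
        schedule(&q);
        return 0;
    }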