Abstract:
Embodiments described herein include a graphics processing unit. The graphics processing unit includes a plurality of execution units and a plurality of sampler units. Each sampler unit corresponds to a sampler dispatch logic unit and to at least one execution unit, and the sampler dispatch logic units network the plurality of sampler units together.
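As a rough illustration of the topology this abstract describes, the C sketch below models execution units, sampler units, and per-sampler dispatch logic that links the samplers into a network. The structure names, unit counts, and the fully connected wiring are assumptions made for the example, not details taken from the patent.

```c
/* Minimal sketch of the described topology; all names and counts are
 * illustrative assumptions, not a real driver or hardware interface. */
#include <stdio.h>

#define NUM_EUS      8   /* assumed number of execution units */
#define NUM_SAMPLERS 4   /* assumed number of sampler units   */

struct sampler_dispatch_logic {
    int sampler_id;             /* the sampler unit this logic fronts */
    int peer_ids[NUM_SAMPLERS]; /* links forming the sampler network  */
    int num_peers;
};

struct sampler_unit {
    int id;
    int eu_ids[NUM_EUS];        /* at least one execution unit per sampler */
    int num_eus;
    struct sampler_dispatch_logic dispatch;
};

int main(void) {
    struct sampler_unit samplers[NUM_SAMPLERS];

    /* Pair each sampler with two execution units and wire its dispatch
     * logic to every other sampler, forming a fully connected network. */
    for (int s = 0; s < NUM_SAMPLERS; s++) {
        samplers[s].id = s;
        samplers[s].num_eus = 0;
        samplers[s].eu_ids[samplers[s].num_eus++] = 2 * s;
        samplers[s].eu_ids[samplers[s].num_eus++] = 2 * s + 1;

        samplers[s].dispatch.sampler_id = s;
        samplers[s].dispatch.num_peers = 0;
        for (int p = 0; p < NUM_SAMPLERS; p++)
            if (p != s)
                samplers[s].dispatch.peer_ids[samplers[s].dispatch.num_peers++] = p;
    }

    for (int s = 0; s < NUM_SAMPLERS; s++)
        printf("sampler %d: %d EUs, %d dispatch peers\n",
               samplers[s].id, samplers[s].num_eus,
               samplers[s].dispatch.num_peers);
    return 0;
}
```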
Abstract:
A method and apparatus for supporting programmable software context state execution during a hardware context restore flow is described. In one example, a context ID is assigned to each graphics application, and each context ID is given a unique context memory buffer, a unique indirect context pointer with a corresponding size, an indirect context offset, and an indirect context buffer address range. When execution of a first context workload is interrupted, the state of the first context workload is saved to the assigned context memory buffer. The indirect context pointer, the indirect context offset, and the size of the indirect context buffer address range are saved to registers that are independent of the saved context state. The context is restored by accessing the saved indirect context pointer, the indirect context offset, and the buffer size.
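The save/restore flow above can be pictured with the C sketch below, in which a context descriptor owns its state buffer while the indirect context pointer, offset, and size are copied into a separate, hypothetical register block on save and consulted on restore. Every type, field name, and size here is an assumption for illustration; the abstract does not specify an implementation.

```c
/* Hedged sketch of the described save/restore flow; structures, register
 * names, and sizes are assumptions for illustration only. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define CTX_STATE_SIZE 256   /* assumed size of a saved context image */

struct context_descriptor {
    uint32_t context_id;
    uint8_t  state_buffer[CTX_STATE_SIZE]; /* unique context memory buffer    */
    uint64_t indirect_ctx_ptr;             /* unique indirect context pointer */
    uint32_t indirect_ctx_offset;
    uint32_t indirect_ctx_size;            /* size of the indirect buffer range */
};

/* Hypothetical registers holding the indirect-context information
 * independently of the saved context state. */
struct indirect_ctx_registers {
    uint64_t ptr;
    uint32_t offset;
    uint32_t size;
};

static void save_context(struct context_descriptor *ctx,
                         const uint8_t *live_state,
                         struct indirect_ctx_registers *regs)
{
    /* Save the workload state to the context's assigned memory buffer. */
    memcpy(ctx->state_buffer, live_state, CTX_STATE_SIZE);

    /* Save pointer, offset, and size to registers kept apart from the state. */
    regs->ptr    = ctx->indirect_ctx_ptr;
    regs->offset = ctx->indirect_ctx_offset;
    regs->size   = ctx->indirect_ctx_size;
}

static void restore_context(const struct context_descriptor *ctx,
                            uint8_t *live_state,
                            const struct indirect_ctx_registers *regs)
{
    /* Restore by consulting the saved registers, then reloading the state. */
    printf("restoring ctx %u via ptr=0x%llx offset=%u size=%u\n",
           ctx->context_id,
           (unsigned long long)regs->ptr, regs->offset, regs->size);
    memcpy(live_state, ctx->state_buffer, CTX_STATE_SIZE);
}

int main(void) {
    struct context_descriptor ctx = {
        .context_id = 1,
        .indirect_ctx_ptr = 0x1000,
        .indirect_ctx_offset = 64,
        .indirect_ctx_size = 512,
    };
    struct indirect_ctx_registers regs;
    uint8_t live[CTX_STATE_SIZE] = { 0x42 };

    save_context(&ctx, live, &regs);      /* workload interrupted */
    restore_context(&ctx, live, &regs);   /* workload resumed     */
    return 0;
}
```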
Abstract:
Mid-command buffer preemption is described for graphics workloads in a graphics processing environment. In one example, instructions of a first context are executed at a graphics processor, the first context having a sequence of instructions in an addressable buffer, at least one of which is a preemption instruction. Upon executing the preemption instruction, execution of the first context is stopped before the sequence of instructions is completed. An address is stored for the instruction with which the first context will be resumed. A second context is executed, and upon completion of the execution of the second context, the execution of the first context is resumed at the stored address.
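A minimal C sketch of this preemption flow follows, assuming a command buffer of simple opcodes in which one entry is a preemption instruction; the opcode names and the run_context helper are invented for illustration and are not taken from the patent.

```c
/* Minimal sketch of mid-command buffer preemption; the instruction
 * encoding and helper names are assumptions for illustration. */
#include <stdio.h>
#include <stddef.h>

enum opcode { OP_DRAW, OP_PREEMPT, OP_END };

struct command_buffer {
    enum opcode cmds[16];
    size_t      resume_addr;   /* index of the instruction to resume at */
};

/* Runs a context until OP_END or OP_PREEMPT; returns 1 if preempted. */
static int run_context(const char *name, struct command_buffer *cb)
{
    size_t pc = cb->resume_addr;
    for (;;) {
        enum opcode op = cb->cmds[pc];
        if (op == OP_END) {
            printf("%s: complete\n", name);
            return 0;
        }
        if (op == OP_PREEMPT) {
            /* Stop before the sequence completes and record the address
             * of the instruction the context will resume with. */
            cb->resume_addr = pc + 1;
            printf("%s: preempted, will resume at %zu\n", name, cb->resume_addr);
            return 1;
        }
        printf("%s: draw at %zu\n", name, pc);
        pc++;
    }
}

int main(void) {
    struct command_buffer first  = { { OP_DRAW, OP_PREEMPT, OP_DRAW, OP_END }, 0 };
    struct command_buffer second = { { OP_DRAW, OP_END }, 0 };

    if (run_context("first", &first)) {   /* stops at the preemption point      */
        run_context("second", &second);   /* second context runs to completion  */
        run_context("first", &first);     /* first resumes at the stored address */
    }
    return 0;
}
```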
Abstract:
Memory-based semaphores are described that are useful for synchronizing operations between different processing engines. In one example, operations include executing a context at a producer engine, the execution including updating a memory register, and sending a signal from the producer engine to a consumer engine that the memory register has been updated, the signal including a context ID to identify a context to be executed by the consumer engine to update the register.
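The producer/consumer handshake can be sketched in C as below, assuming a plain variable stands in for the memory register and a direct function call stands in for the signal carrying the context ID; all names are illustrative assumptions rather than details from the patent.

```c
/* Hedged sketch of the producer/consumer semaphore handshake; a plain
 * variable models the memory register and a function call models the
 * signal. All names are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

static uint32_t semaphore_register;   /* memory-backed semaphore value */

/* Consumer engine: told which context to run once the register is updated. */
static void consumer_signal(uint32_t context_id)
{
    printf("consumer: signal for context %u, register=%u\n",
           context_id, semaphore_register);
    /* ... schedule the identified context on the consumer engine ... */
}

/* Producer engine: executes its context, updates the memory register,
 * then signals the consumer, including the context ID in the signal. */
static void producer_execute(uint32_t context_id, uint32_t new_value)
{
    semaphore_register = new_value;   /* update as part of context execution */
    consumer_signal(context_id);      /* signal carries the context ID       */
}

int main(void) {
    producer_execute(/*context_id=*/7, /*new_value=*/1);
    return 0;
}
```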