Method and apparatus for supporting programmable software context state execution during hardware context restore flow
    1.
    Granted patent (in force)

    Publication No.: US09563466B2

    Publication Date: 2017-02-07

    Application No.: US14072622

    Filing Date: 2013-11-05

    IPC Classification: G06F9/46 G06T1/20

    CPC Classification: G06F9/461 G06T1/20

    Abstract: A method and apparatus for supporting programmable software context state execution during hardware context restore flow is described. In one example, a context ID is assigned to graphics applications, with a unique context memory buffer, a unique indirect context pointer and corresponding size, an indirect context offset, and an indirect context buffer address range assigned to each context ID. When execution of the first context workload is interrupted, the state of the first context workload is saved to the assigned context memory buffer. The indirect context pointer, the indirect context offset and the size of the indirect context buffer address range are saved to registers that are independent of the saved context state. The context is restored by accessing the saved indirect context pointer, indirect context offset and buffer size.

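    The save/restore flow in this abstract can be pictured with a short C sketch. The structure and function names below (ctx_desc, hw_regs, context_save, context_restore) and the sizes are hypothetical illustrations, not taken from the patent.

        /* Hypothetical sketch of the save/restore flow described above; all
         * names and sizes here are illustrative, not from the patent. */
        #include <stdint.h>
        #include <string.h>

        #define CTX_STATE_WORDS 256

        /* Per-context bookkeeping assigned along with the context ID. */
        struct ctx_desc {
            uint32_t  context_id;
            uint32_t *context_buffer;       /* unique context memory buffer       */
            uint32_t *indirect_ctx_ptr;     /* unique indirect context pointer    */
            uint32_t  indirect_ctx_offset;  /* offset into the indirect buffer    */
            uint32_t  indirect_ctx_size;    /* size of the indirect address range */
        };

        /* Registers that live outside the saved context state image. */
        struct hw_regs {
            uint32_t *indirect_ctx_ptr;
            uint32_t  indirect_ctx_offset;
            uint32_t  indirect_ctx_size;
        };

        /* On interruption: save hardware state into the context buffer and the
         * indirect-context parameters into the independent registers. */
        static void context_save(const struct ctx_desc *ctx, struct hw_regs *regs,
                                 const uint32_t *hw_state)
        {
            memcpy(ctx->context_buffer, hw_state, CTX_STATE_WORDS * sizeof(uint32_t));
            regs->indirect_ctx_ptr    = ctx->indirect_ctx_ptr;
            regs->indirect_ctx_offset = ctx->indirect_ctx_offset;
            regs->indirect_ctx_size   = ctx->indirect_ctx_size;
        }

        /* On restore: reload the saved state, then walk the software-programmed
         * commands found through the saved indirect pointer, offset and size. */
        static void context_restore(const struct ctx_desc *ctx,
                                    const struct hw_regs *regs, uint32_t *hw_state)
        {
            memcpy(hw_state, ctx->context_buffer, CTX_STATE_WORDS * sizeof(uint32_t));
            const uint32_t *cmds = regs->indirect_ctx_ptr + regs->indirect_ctx_offset;
            for (uint32_t i = 0; i < regs->indirect_ctx_size; i++)
                (void)cmds[i];              /* a real engine would execute cmds[i] */
        }

        int main(void)
        {
            static uint32_t buffer[CTX_STATE_WORDS], indirect[8], state[CTX_STATE_WORDS];
            struct ctx_desc ctx  = { 1, buffer, indirect, 0, 8 };
            struct hw_regs  regs = { 0 };

            context_save(&ctx, &regs, state);      /* on interruption */
            context_restore(&ctx, &regs, state);   /* on restore      */
            return 0;
        }

    The point of the sketch is that the indirect context pointer, offset and size are kept in registers outside the saved state image, so the restore flow can locate and replay the software-programmed context commands independently of the hardware context image.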

    MID COMMAND BUFFER PREEMPTION FOR GRAPHICS WORKLOADS
    2.
    Patent application (in force)

    Publication No.: US20150002522A1

    Publication Date: 2015-01-01

    Application No.: US13931915

    Filing Date: 2013-06-29

    IPC Classification: G06T1/20

    Abstract: Mid-command buffer preemption is described for graphics workloads in a graphics processing environment. In one example, instructions of a first context are executed at a graphics processor, the first context having a sequence of instructions in an addressable buffer, at least one of which is a preemption instruction. Upon executing the preemption instruction, execution of the first context is stopped before the sequence of instructions is completed, and the address of the instruction at which the first context will be resumed is stored. A second context is then executed, and upon its completion the first context is resumed at the stored address.

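    As a rough illustration of this preemption flow, here is a minimal C sketch; the command encodings and the run_context() helper are assumptions for illustration, not the patent's actual command format.

        /* Illustrative sketch of mid-command-buffer preemption; the command
         * encodings and helpers below are hypothetical, not the patent's. */
        #include <stddef.h>

        enum cmd_op { CMD_DRAW, CMD_PREEMPT, CMD_END };

        struct context {
            const enum cmd_op *cmds;  /* sequence of instructions in an addressable buffer */
            size_t resume_index;      /* stored address at which execution will resume     */
        };

        /* Execute a context's command sequence starting at its resume point.
         * Returns 1 if the context was preempted, 0 if it ran to completion. */
        static int run_context(struct context *ctx)
        {
            for (size_t i = ctx->resume_index; ; i++) {
                switch (ctx->cmds[i]) {
                case CMD_PREEMPT:
                    ctx->resume_index = i + 1;  /* remember where to resume      */
                    return 1;                   /* stop before the sequence ends */
                case CMD_END:
                    return 0;
                default:
                    /* submit_to_pipeline(ctx->cmds[i]) would happen here */
                    break;
                }
            }
        }

        int main(void)
        {
            static const enum cmd_op first_cmds[]  = { CMD_DRAW, CMD_PREEMPT, CMD_DRAW, CMD_END };
            static const enum cmd_op second_cmds[] = { CMD_DRAW, CMD_END };
            struct context first  = { first_cmds, 0 };
            struct context second = { second_cmds, 0 };

            if (run_context(&first)) {  /* preemption instruction reached */
                run_context(&second);   /* run the second context         */
                run_context(&first);    /* resume at the stored address   */
            }
            return 0;
        }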

    MEMORY BASED SEMAPHORES
    3.
    Patent application (in force)

    Publication No.: US20140160138A1

    Publication Date: 2014-06-12

    Application No.: US13707930

    Filing Date: 2012-12-07

    IPC Classification: G09G5/00

    Abstract: Memory-based semaphores are described that are useful for synchronizing operations between different processing engines. In one example, operations include executing a context at a producer engine, the execution including updating a memory register, and sending a signal from the producer engine to a consumer engine that the memory register has been updated, the signal including a context ID that identifies a context to be executed by the consumer engine to update the register.

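    A minimal C sketch of the producer/consumer handshake follows; the shared register, signal structure and engine functions are hypothetical stand-ins, not the hardware interface described in the patent.

        /* Hypothetical sketch of a memory-based semaphore between a producer and
         * a consumer engine; the signal layout and names are illustrative only. */
        #include <stdatomic.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Shared memory register that the semaphore is built on. */
        static _Atomic uint32_t mem_register;

        /* Signal carrying the context ID the consumer engine should run. */
        struct signal {
            uint32_t context_id;
        };

        /* Producer engine: execute its context, which updates the memory
         * register, then signal the consumer that the register changed. */
        static struct signal producer_run(uint32_t ctx_id, uint32_t value)
        {
            atomic_store(&mem_register, value);          /* context updates register   */
            struct signal s = { .context_id = ctx_id };  /* names context for consumer */
            return s;
        }

        /* Consumer engine: on receiving the signal, execute the identified
         * context, which reads and updates the shared register. */
        static void consumer_run(struct signal s)
        {
            uint32_t v = atomic_load(&mem_register);
            atomic_store(&mem_register, v + 1);          /* consumer context update    */
            printf("context %u observed register value %u\n",
                   (unsigned)s.context_id, (unsigned)v);
        }

        int main(void)
        {
            struct signal s = producer_run(7, 42);  /* producer updates, then signals */
            consumer_run(s);                        /* consumer runs context 7        */
            return 0;
        }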

    METHOD AND APPARATUS FOR SUPPORTING PROGRAMMABLE SOFTWARE CONTEXT STATE EXECUTION DURING HARDWARE CONTEXT RESTORE FLOW
    4.
    Patent application (in force)

    Publication No.: US20150123980A1

    Publication Date: 2015-05-07

    Application No.: US14072622

    Filing Date: 2013-11-05

    IPC Classification: G06T15/00 G06F9/46 G06T1/20

    CPC Classification: G06F9/461 G06T1/20

    Abstract: A method and apparatus for supporting programmable software context state execution during hardware context restore flow is described. In one example, a context ID is assigned to graphics applications, with a unique context memory buffer, a unique indirect context pointer and corresponding size, an indirect context offset, and an indirect context buffer address range assigned to each context ID. When execution of the first context workload is interrupted, the state of the first context workload is saved to the assigned context memory buffer. The indirect context pointer, the indirect context offset and the size of the indirect context buffer address range are saved to registers that are independent of the saved context state. The context is restored by accessing the saved indirect context pointer, indirect context offset and buffer size.


    METHOD AND APPARATUS FOR EFFICIENT LOOP PROCESSING IN A GRAPHICS HARDWARE FRONT END

    Publication No.: US20190066255A1

    Publication Date: 2019-02-28

    Application No.: US15690201

    Filing Date: 2017-08-29

    IPC Classification: G06T1/20 G06T15/00

    Abstract: Various embodiments enable loop processing in a command processing block of the graphics hardware. Such hardware may include a processor with a command buffer and a graphics command parser. The graphics command parser loads graphics commands from the command buffer, parses a first graphics command, stores a loop count value associated with the first graphics command, parses a second graphics command and stores a loop wrap address based on the second graphics command. The graphics command parser may then execute a command sequence identified by the second graphics command, parse a third graphics command identifying the end of the command sequence, set a new loop count value, and iteratively execute the command sequence using the loop wrap address, based on the new loop count value.
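
    The loop handling can be sketched as a small command parser in C; the command encodings (CMD_LOOP_COUNT, CMD_LOOP_START, CMD_LOOP_END and friends) are invented for illustration and are not the patent's actual opcodes.

        /* Illustrative sketch of front-end loop processing in a command parser;
         * the command encodings are invented for this example, not the patent's. */
        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        enum { CMD_LOOP_COUNT, CMD_LOOP_START, CMD_WORK, CMD_LOOP_END, CMD_HALT };

        struct cmd { uint32_t op; uint32_t arg; };

        static void parse_commands(const struct cmd *buf)
        {
            uint32_t loop_count = 0;  /* loop count value from the first command   */
            size_t   wrap_addr  = 0;  /* loop wrap address from the second command */

            for (size_t pc = 0; buf[pc].op != CMD_HALT; pc++) {
                switch (buf[pc].op) {
                case CMD_LOOP_COUNT:
                    loop_count = buf[pc].arg;  /* store the loop count value      */
                    break;
                case CMD_LOOP_START:
                    wrap_addr = pc + 1;        /* store the loop wrap address     */
                    break;
                case CMD_WORK:
                    printf("execute work command %u\n", (unsigned)buf[pc].arg);
                    break;
                case CMD_LOOP_END:             /* end of the command sequence     */
                    if (loop_count > 1) {
                        loop_count--;          /* set a new loop count value      */
                        pc = wrap_addr - 1;    /* iterate using the wrap address  */
                    }
                    break;
                }
            }
        }

        int main(void)
        {
            const struct cmd buf[] = {
                { CMD_LOOP_COUNT, 3 }, { CMD_LOOP_START, 0 },
                { CMD_WORK, 1 }, { CMD_WORK, 2 },
                { CMD_LOOP_END, 0 }, { CMD_HALT, 0 },
            };
            parse_commands(buf);  /* runs the two work commands three times */
            return 0;
        }

    In the sketch the parser stores the loop count from the first command, records the wrap address at the second, and on reaching the end of the sequence decrements the count and jumps back to the wrap address until the count is exhausted.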

    DYNAMIC CONFIGURATION OF CACHES IN A MULTI-CONTEXT SUPPORTED GRAPHICS PROCESSOR

    Publication No.: US20190034326A1

    Publication Date: 2019-01-31

    Application No.: US15858704

    Filing Date: 2017-12-29

    IPC Classification: G06F12/02 G06F12/12 G06F13/26

    Abstract: Graphics processing systems and methods are described. For example, one embodiment of a graphics processing apparatus comprises a graphics processing unit (GPU) including an on-die cache and cache configuration circuitry to dynamically configure the on-die cache for a plurality of contexts executed by the GPU. The cache configuration block (CCB) receives a cache configuration request containing context-specific cache requirements for a new context and determines a priority associated with those requirements. The CCB can then compare the context-specific cache requirements with pre-existing cache requirements based on the priority, and reallocate the cache based on the context-specific cache requirements and the priority.
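
    One way to picture the priority-based reallocation is the C sketch below; the way-based allocation, request fields and policy are assumptions for illustration rather than the patent's actual mechanism.

        /* Hypothetical sketch of priority-based cache reallocation; the way-based
         * policy and request fields are assumptions, not the patent's mechanism. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define CACHE_WAYS 16

        /* Context-specific cache requirements carried by a configuration request. */
        struct cache_request {
            uint32_t context_id;
            uint32_t ways_wanted;  /* portion of the on-die cache requested */
            uint32_t priority;     /* higher value wins                     */
        };

        /* Current allocation state of the on-die cache (owner 0 = free way). */
        struct cache_state {
            uint32_t owner[CACHE_WAYS];     /* context ID holding each way */
            uint32_t priority[CACHE_WAYS];  /* priority of that allocation */
        };

        /* Grant ways to the new context when a way is free or held at a lower
         * priority than the request. Returns the number of ways granted. */
        static uint32_t ccb_configure(struct cache_state *cs,
                                      const struct cache_request *req)
        {
            uint32_t granted = 0;
            for (uint32_t w = 0; w < CACHE_WAYS && granted < req->ways_wanted; w++) {
                bool free_way   = (cs->owner[w] == 0);
                bool lower_prio = (cs->priority[w] < req->priority);
                if (free_way || lower_prio) {
                    cs->owner[w]    = req->context_id;  /* reallocate this way */
                    cs->priority[w] = req->priority;
                    granted++;
                }
            }
            return granted;
        }

        int main(void)
        {
            struct cache_state   cs  = { { 0 }, { 0 } };
            struct cache_request req = { .context_id = 3, .ways_wanted = 4, .priority = 2 };

            printf("granted %u ways to context %u\n",
                   (unsigned)ccb_configure(&cs, &req), (unsigned)req.context_id);
            return 0;
        }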

    Memory based semaphores
    9.
    Granted patent (in force)

    Publication No.: US09064437B2

    Publication Date: 2015-06-23

    Application No.: US13707930

    Filing Date: 2012-12-07

    Abstract: Memory-based semaphores are described that are useful for synchronizing operations between different processing engines. In one example, operations include executing a context at a producer engine, the execution including updating a memory register, and sending a signal from the producer engine to a consumer engine that the memory register has been updated, the signal including a context ID that identifies a context to be executed by the consumer engine to update the register.
