COMPUTE THREAD ARRAY GRANULARITY EXECUTION PREEMPTION
    2.
    Invention Application
    COMPUTE THREAD ARRAY GRANULARITY EXECUTION PREEMPTION  Pending – Published

    Publication Number: US20130132711A1

    Publication Date: 2013-05-23

    Application Number: US13302962

    Filing Date: 2011-11-22

    IPC Classification: G06F9/38

    CPC Classification: G06F9/461

    Abstract: One embodiment of the present invention sets forth a technique for instruction-level and compute thread array granularity execution preemption. Preempting at the instruction level does not require any draining of the processing pipeline. No new instructions are issued and the context state is unloaded from the processing pipeline. When preemption is performed at a compute thread array boundary, the amount of context state to be stored is reduced because execution units within the processing pipeline complete execution of in-flight instructions and become idle. If the amount of time needed to complete execution of the in-flight instructions exceeds a threshold, then the preemption may dynamically change to be performed at the instruction level instead of at compute thread array granularity.

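    The threshold-based policy in the abstract can be sketched as a small decision function. This is an illustrative model only, not the patented hardware; the function name, cycle-based cost estimate, and threshold parameter are all assumptions.

    ```python
    # Hypothetical sketch of the dynamic preemption policy: prefer preempting
    # at compute thread array (CTA) boundaries, but fall back to
    # instruction-level preemption when draining in-flight instructions
    # would take too long. All names are illustrative.

    def choose_preemption_level(in_flight_cycles: int, threshold_cycles: int) -> str:
        """Return the preemption granularity to use.

        in_flight_cycles: estimated cycles for in-flight instructions to finish.
        threshold_cycles: maximum drain time tolerated before switching to
                          instruction-level preemption.
        """
        if in_flight_cycles <= threshold_cycles:
            # Execution units drain and go idle, so less context state
            # needs to be saved.
            return "cta_boundary"
        # Draining would exceed the threshold: stop issuing new instructions
        # immediately and unload the full context state from the pipeline.
        return "instruction_level"

    print(choose_preemption_level(100, 500))  # short drain -> "cta_boundary"
    print(choose_preemption_level(900, 500))  # long drain  -> "instruction_level"
    ```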

    N-way memory barrier operation coalescing
    4.
    Invention Grant
    N-way memory barrier operation coalescing  In Force

    Publication Number: US08997103B2

    Publication Date: 2015-03-31

    Application Number: US13441785

    Filing Date: 2012-04-06

    Abstract: One embodiment sets forth a technique for N-way memory barrier operation coalescing. When a first memory barrier is received for a first thread group, execution of subsequent memory operations for the first thread group is suspended until the first memory barrier is executed. Subsequent memory barriers for different thread groups may be coalesced with the first memory barrier to produce a coalesced memory barrier that represents memory barrier operations for multiple thread groups. When the coalesced memory barrier is being processed, execution of subsequent memory operations for the different thread groups is also suspended. However, memory operations for other thread groups that are not affected by the coalesced memory barrier may be executed.

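    The coalescing behavior described above can be modeled with a small class: barriers from different thread groups merge into one pending coalesced barrier, affected groups are suspended, and unaffected groups keep issuing memory operations. This is a minimal sketch under assumed names, not the patented implementation.

    ```python
    # Illustrative model of N-way memory barrier coalescing. The class and
    # method names are assumptions for the sake of the sketch.

    class BarrierCoalescer:
        def __init__(self):
            # Thread groups covered by the single coalesced barrier.
            self.pending_groups = set()

        def receive_barrier(self, group_id):
            # A barrier from a new thread group coalesces with any barrier
            # already in flight instead of being tracked separately.
            self.pending_groups.add(group_id)

        def may_execute_memory_op(self, group_id):
            # Groups covered by the coalesced barrier are suspended; groups
            # not affected by it may continue issuing memory operations.
            return group_id not in self.pending_groups

        def barrier_done(self):
            # The one coalesced barrier completes on behalf of all groups.
            self.pending_groups.clear()

    c = BarrierCoalescer()
    c.receive_barrier(0)
    c.receive_barrier(1)               # coalesced with group 0's barrier
    print(c.may_execute_memory_op(0))  # False: suspended by the barrier
    print(c.may_execute_memory_op(2))  # True: unaffected group proceeds
    c.barrier_done()
    print(c.may_execute_memory_op(0))  # True: barrier has executed
    ```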

    PRE-SCHEDULED REPLAYS OF DIVERGENT OPERATIONS
    6.
    Invention Application
    PRE-SCHEDULED REPLAYS OF DIVERGENT OPERATIONS  Pending – Published

    Publication Number: US20130212364A1

    Publication Date: 2013-08-15

    Application Number: US13370173

    Filing Date: 2012-02-09

    IPC Classification: G06F9/38 G06F9/312

    Abstract: One embodiment of the present disclosure sets forth an optimized way to execute pre-scheduled replay operations for divergent operations in a parallel processing subsystem. Specifically, a streaming multiprocessor (SM) includes a multi-stage pipeline configured to insert pre-scheduled replay operations into the multi-stage pipeline. A pre-scheduled replay unit detects whether the operation associated with the current instruction is accessing a common resource. If the threads are accessing data which are distributed across multiple cache lines, then the pre-scheduled replay unit inserts pre-scheduled replay operations behind the current instruction. The multi-stage pipeline executes the instruction and the associated pre-scheduled replay operations sequentially. If additional threads remain unserviced after execution of the instruction and the pre-scheduled replay operations, then additional replay operations are inserted via the replay loop, until all threads are serviced. One advantage of the disclosed technique is that divergent operations requiring one or more replay operations execute with reduced latency.

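    The scheduling decision above can be sketched numerically: a divergent access touching several cache lines is split into the original instruction plus a limited number of pre-scheduled replays, and any remaining lines fall back to the replay loop. The cache-line size, replay budget, and function name are illustrative assumptions.

    ```python
    # Sketch of pre-scheduled replay insertion for a divergent memory access.
    # If the threads' addresses span several cache lines, replay operations
    # are scheduled immediately behind the instruction so each line is
    # serviced in back-to-back pipeline slots.

    CACHE_LINE = 128  # bytes (assumed line size)

    def schedule_with_replays(addresses, max_prescheduled=2):
        """Split the cache lines touched by one divergent access.

        Returns (prescheduled, replay_loop): the lines serviced by the
        instruction plus its pre-scheduled replays, and the lines left for
        the slower replay loop.
        """
        lines = sorted({addr // CACHE_LINE for addr in addresses})
        prescheduled = lines[: 1 + max_prescheduled]  # instruction + replays
        replay_loop = lines[1 + max_prescheduled :]   # overflow replays
        return prescheduled, replay_loop

    # Five thread addresses touching four distinct cache lines.
    pre, loop = schedule_with_replays([0, 4, 130, 260, 513])
    print(pre)   # [0, 1, 2]: instruction + two pre-scheduled replays
    print(loop)  # [4]: serviced later via the replay loop
    ```

    With a non-divergent access (all addresses on one line) no replays are needed at all, which is the fast path the pre-scheduled replay unit is checking for.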

    Memory controller providing dynamic arbitration of memory commands
    9.
    Invention Grant
    Memory controller providing dynamic arbitration of memory commands  Expired

    Publication Number: US06922770B2

    Publication Date: 2005-07-26

    Application Number: US10446333

    Filing Date: 2003-05-27

    IPC Classification: G06F12/00 G06F12/10 G06F13/16

    CPC Classification: G06F13/1621 G06F2213/0038

    Abstract: Embodiments of the present invention provide a memory controller comprising a front-end module, a back-end module communicatively coupled to the front-end module, and a physical interface module communicatively coupled to the back-end module. The front-end module generates a plurality of page packets from a plurality of received memory commands, wherein the order of receipt of said memory commands is preserved. The back-end module dynamically issues a next one of the plurality of page packets while issuing a current one of the plurality of page packets. The physical interface module causes a plurality of transfers according to the dynamically issued current one and next one of the plurality of page packets.

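    The front-end/back-end split can be sketched as a two-stage model: the front end packetizes commands in arrival order, and the back end selects the next packet while the current one is being issued. This is a rough illustration only; the page-number derivation, field names, and functions are assumptions, not the patented design.

    ```python
    # Rough model of the memory controller's pipelined packet issue. The
    # front end preserves command receipt order; the back end overlaps
    # selection of the next page packet with issue of the current one.

    from collections import deque

    def front_end(commands):
        # Packetize commands while preserving receipt order. The 4 KiB
        # page derivation (addr >> 12) is an illustrative assumption.
        return deque({"page": cmd["addr"] >> 12, "cmd": cmd} for cmd in commands)

    def back_end(packets):
        issued = []
        while packets:
            current = packets.popleft()
            # Dynamic arbitration: the next packet is already selected
            # while the current packet is still being issued.
            nxt = packets[0] if packets else None
            issued.append((current["page"], nxt["page"] if nxt else None))
        return issued

    cmds = [{"addr": 0x1000}, {"addr": 0x1008}, {"addr": 0x2000}]
    print(back_end(front_end(cmds)))  # [(1, 1), (1, 2), (2, None)]
    ```

    Each issued tuple pairs the current page with the upcoming one, which is the information the physical interface would need to overlap transfers.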