Mechanism in a multi-threaded microprocessor to maintain best case demand instruction redispatch
    1.
    Granted Invention Patent (In force)

    Publication No.: US07571283B2

    Publication Date: 2009-08-04

    Application No.: US12113561

    Filing Date: 2008-05-01

    IPC Class: G06F12/08

    Abstract: A method and system for maintaining a best-case demand redispatch of an instruction, allowing a rejected thread to execute in lookahead execution mode for as long as possible while preserving the smallest L1 cache miss penalty supported by the memory subsystem. In response to a demand miss, a load/store unit sends a fetch request to the next-level cache. The cache line of the demand miss is examined to identify the critical sector. Once the critical sector is identified, a best-case data return time is determined from the fastest time the next-level cache can return the critical sector of the cache line. The load/store unit then sends a speculative warning to the dispatch unit timed to coincide with the best-case data return; the speculative warning prepares the dispatch unit to resend the instruction for execution as soon as the data is available to the processor core.

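    A minimal Python sketch of the scheduling idea in this abstract, assuming a sectored cache line, a fixed best-case next-level-cache lead latency, and a fixed warning lead time; the names and numbers (LINE_BYTES, SECTOR_BYTES, L2_LEAD_LATENCY, WARNING_LEAD) are illustrative assumptions, not values from the patent.

```python
# Illustrative sketch only: models when the critical sector of a missed line
# can be back in the core, and when to warn the dispatch unit so the
# redispatched instruction arrives just as the data does.

LINE_BYTES = 128          # assumed L1 cache line size
SECTOR_BYTES = 32         # assumed sector (beat) size returned per cycle
L2_LEAD_LATENCY = 12      # assumed fastest cycles until the first sector returns
WARNING_LEAD = 3          # assumed cycles dispatch needs to reissue the instruction

def critical_sector(miss_addr: int) -> int:
    """Sector index within the line that holds the demanded bytes."""
    return (miss_addr % LINE_BYTES) // SECTOR_BYTES

def best_case_return_cycle(now: int, miss_addr: int, critical_first: bool) -> int:
    """Fastest cycle at which the critical sector can reach the core.

    With critical-sector-first return the demanded sector arrives with the
    lead latency; otherwise it trails the sectors ahead of it in the line.
    """
    position = 0 if critical_first else critical_sector(miss_addr)
    return now + L2_LEAD_LATENCY + position

def schedule_speculative_warning(now: int, miss_addr: int, critical_first=True):
    """Pick the cycle to warn the dispatch unit so redispatch meets the data."""
    data_cycle = best_case_return_cycle(now, miss_addr, critical_first)
    return data_cycle - WARNING_LEAD, data_cycle

if __name__ == "__main__":
    warn, data = schedule_speculative_warning(now=100, miss_addr=0x1A60)
    print(f"warn dispatch at cycle {warn}, critical sector available at {data}")
```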

Atomic quad word storage in a simultaneous multithreaded system
    2.
    Granted Invention Patent (Expired)

    Publication No.: US06981128B2

    Publication Date: 2005-12-27

    Application No.: US10422664

    Filing Date: 2003-04-24

    Abstract: In a system with multiple execution units, instructions are queued to allow efficient dispatching. One load/store unit (LSU) may have a store instruction pending to a real address while a second LSU has a load instruction pending to the same real address. An SMT system has an atomic store quad word (SQW) instruction whose data path is only double-word wide, so the SQW requires two cycles to complete. The SMT system therefore requires a method to prevent collisions in the store reorder queue (SRQ). The real address of a load word (LW) on one thread is compared to the real addresses in the SRQ of the second thread. If an SQW whose real address matches that of the LW has not committed both of its double words, the LW is rejected.

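    A minimal Python sketch of the cross-thread check this abstract describes, assuming an SRQ entry that tracks how many of its two double-word halves have committed; SrqEntry, lw_must_be_rejected, and the field names are hypothetical and not taken from the patent.

```python
# Illustrative sketch only: a load word (LW) on one thread is rejected if the
# other thread's store reorder queue (SRQ) holds a store-quad-word (SQW) to the
# same real address that has committed only one of its two double-word halves.

from dataclasses import dataclass

QUAD_WORD_BYTES = 16   # an SQW covers one 16-byte (quad-word) region

@dataclass
class SrqEntry:
    real_addr: int            # real address targeted by the SQW
    halves_committed: int     # 0, 1, or 2 double-word halves written so far
    is_sqw: bool = True

def quad_aligned(addr: int) -> int:
    """Align an address down to its quad-word boundary."""
    return addr & ~(QUAD_WORD_BYTES - 1)

def lw_must_be_rejected(lw_real_addr: int, other_thread_srq: list[SrqEntry]) -> bool:
    """Reject the LW if it would observe a half-written atomic quad word."""
    target = quad_aligned(lw_real_addr)
    for entry in other_thread_srq:
        if entry.is_sqw and quad_aligned(entry.real_addr) == target:
            if entry.halves_committed < 2:
                return True       # SQW not fully committed: reject and retry later
    return False

if __name__ == "__main__":
    srq_thread1 = [SrqEntry(real_addr=0x4000, halves_committed=1)]
    print(lw_must_be_rejected(0x4008, srq_thread1))   # True: only one half stored
```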

Loading data to vector renamed register from across multiple cache lines
    3.
    Granted Invention Patent (In force)

    Publication No.: US08086801B2

    Publication Date: 2011-12-27

    Application No.: US12420118

    Filing Date: 2009-04-08

    IPC Class: G06F13/00

    Abstract: A load instruction that accesses the data cache may be off natural alignment, which requires a cache line crossing to complete the access. The illustrative embodiments provide a mechanism for loading data across multiple cache lines without an accumulation register or collection point to hold the partial data from the first cache line while the second cache line is being accessed. Because the accesses to the separate cache lines are concatenated within the vector rename register without an accumulator, an off-alignment load instruction is fully pipelineable and flushable with no cleanup consequences.

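    As a behavioral illustration only, the following Python sketch models which bytes of an off-alignment 16-byte load come from the first cache line and which from the next, and how the two accesses are concatenated into a single result; LINE_BYTES, VEC_BYTES, and load_vector are hypothetical names, and the flat-memory model is an assumption rather than the patent's hardware mechanism.

```python
# Illustrative sketch only: split an unaligned vector load at the cache line
# boundary and concatenate the two accesses into one result, standing in for
# the byte lanes of the vector rename register.

LINE_BYTES = 128
VEC_BYTES = 16

def load_vector(memory: bytes, addr: int) -> bytes:
    """Return VEC_BYTES starting at addr, crossing a line boundary if needed."""
    line_off = addr % LINE_BYTES
    first_len = min(VEC_BYTES, LINE_BYTES - line_off)

    # First access: the bytes available in the first cache line.
    result = memory[addr:addr + first_len]

    # Second access, only when the load crosses into the next line; its bytes
    # fill the remaining lanes of the result.
    if first_len < VEC_BYTES:
        next_line = addr + first_len
        result += memory[next_line:next_line + (VEC_BYTES - first_len)]

    return result

if __name__ == "__main__":
    mem = bytes(range(256))
    # A load that starts 8 bytes before a line boundary and spills into the next line.
    print(load_vector(mem, 120).hex())
```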

Loading Data to Vector Renamed Register From Across Multiple Cache Lines
    4.
    Published Patent Application (In force)

    Publication No.: US20100262781A1

    Publication Date: 2010-10-14

    Application No.: US12420118

    Filing Date: 2009-04-08

    IPC Class: G06F12/08

    Abstract: A load instruction that accesses the data cache may be off natural alignment, which requires a cache line crossing to complete the access. The illustrative embodiments provide a mechanism for loading data across multiple cache lines without an accumulation register or collection point to hold the partial data from the first cache line while the second cache line is being accessed. Because the accesses to the separate cache lines are concatenated within the vector rename register without an accumulator, an off-alignment load instruction is fully pipelineable and flushable with no cleanup consequences.
