Multilevel scheme for dynamically and statically predicting instruction resource utilization to generate execution cluster partitions
    11. Patent grant (in force)

    Publication No.: US07562206B2

    Publication date: 2009-07-14

    Application No.: US11323043

    Filing date: 2005-12-30

    Abstract: Microarchitecture policies and structures to predict execution clusters and facilitate inter-cluster communication are disclosed. In disclosed embodiments, sequentially ordered instructions are decoded into micro-operations. Execution of one set of micro-operations is predicted to involve execution resources to perform memory access operations and inter-cluster communication, but not to perform branching operations. Execution of a second set of micro-operations is predicted to involve execution resources to perform branching operations but not to perform memory access operations. The micro-operations are partitioned for execution in accordance with these predictions: the first set of micro-operations to a first cluster of execution resources and the second set of micro-operations to a second cluster of execution resources. The first and second sets of micro-operations are executed out of sequential order and are retired in a manner that reflects their sequential instruction ordering.
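    A minimal behavioral sketch in Python of the partitioning idea described in this abstract is shown below; the MicroOp fields, cluster names, and steering rule are illustrative assumptions rather than the patented microarchitecture.

    from dataclasses import dataclass

    @dataclass
    class MicroOp:
        seq: int                    # sequential program-order index
        needs_mem: bool = False     # predicted to need memory-access / inter-cluster resources
        needs_branch: bool = False  # predicted to need branch-execution resources

    def partition(uops):
        """Steer each micro-op to one of two execution clusters based on its prediction."""
        mem_cluster, branch_cluster = [], []
        for u in uops:
            if u.needs_branch and not u.needs_mem:
                branch_cluster.append(u)   # branch resources, no memory access
            else:
                mem_cluster.append(u)      # memory access and inter-cluster communication
        return mem_cluster, branch_cluster

    # Clusters may execute out of order internally; retirement follows program order.
    uops = [MicroOp(0, needs_mem=True), MicroOp(1, needs_branch=True), MicroOp(2, needs_mem=True)]
    mem_c, br_c = partition(uops)
    retired = sorted(mem_c + br_c, key=lambda u: u.seq)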

Method and apparatus for efficient utilization for prescient instruction prefetch
    12. Patent grant (in force)

    Publication No.: US07404067B2

    Publication date: 2008-07-22

    Application No.: US10658072

    Filing date: 2003-09-08

    Abstract: Embodiments of an apparatus, system and method enhance the efficiency of processor resource utilization during instruction prefetching via one or more speculative threads. Renamer logic and a map table are utilized to perform filtering of instructions in a speculative thread instruction stream. The map table includes a yes-a-thing bit to indicate whether the associated physical register's content reflects the value that would be computed by the main thread. A thread progress beacon table is utilized to track relative progress of a main thread and a speculative helper thread. Based upon information in the thread progress beacon table, the main thread may effect termination of a helper thread that is not likely to provide a performance benefit for the main thread.
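    The sketch below models, in simplified Python, the two structures named in the abstract (a map table with a yes-a-thing bit and a thread progress beacon table); all field names, thresholds, and methods are assumptions, not the patent's interface.

    from dataclasses import dataclass

    @dataclass
    class MapEntry:
        phys_reg: int
        yat: bool = False  # "yes-a-thing": register holds the value the main thread would compute

    def keep_helper_instruction(src_regs, map_table):
        """Filter: run a helper-thread instruction only if every source register is YAT-valid."""
        return all(map_table[r].yat for r in src_regs)

    @dataclass
    class ProgressBeacon:
        main_pc: int = 0
        helper_pc: int = 0

        def helper_worth_keeping(self, min_lead=64):
            # The main thread may terminate a helper that no longer runs usefully ahead of it.
            return self.helper_pc - self.main_pc >= min_lead

    # Example: a helper running only 10 instructions ahead is a candidate for termination.
    beacon = ProgressBeacon(main_pc=1000, helper_pc=1010)
    terminate_helper = not beacon.helper_worth_keeping()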

Decoupling the number of logical threads from the number of simultaneous physical threads in a processor
    18. Patent application (in force)

    Publication No.: US20050193278A1

    Publication date: 2005-09-01

    Application No.: US10745527

    Filing date: 2003-12-29

    CPC classification number: G06F9/485 G06F9/3851

    Abstract: Systems and methods of managing threads provide for supporting a plurality of logical threads with a plurality of simultaneous physical threads in which the number of logical threads may be greater than or less than the number of physical threads. In one approach, each of the plurality of logical threads is maintained in one of a wait state, an active state, a drain state, and a stall state. A state machine and hardware sequencer can be used to transition the logical threads between states based on triggering events and whether or not an interruptible point has been encountered in the logical threads. The logical threads are scheduled on the physical threads to meet, for example, priority, performance or fairness goals. It is also possible to specify the resources that are available to each logical thread in order to meet these and other goals. In one example, a single logical thread can speculatively use more than one physical thread, pending a selection of which physical thread should be committed.
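    A rough Python sketch of the per-logical-thread state machine (wait, active, drain, stall) follows; the triggering events and transition rules are simplified assumptions chosen only to illustrate the flow.

    from enum import Enum, auto

    class ThreadState(Enum):
        WAIT = auto()
        ACTIVE = auto()
        DRAIN = auto()
        STALL = auto()

    def next_state(state, event, at_interruptible_point=False):
        """Advance one logical thread's state on a triggering event."""
        if state is ThreadState.WAIT and event == "physical_thread_granted":
            return ThreadState.ACTIVE
        if state is ThreadState.ACTIVE and event == "long_latency_event":
            return ThreadState.DRAIN       # stop fetching; let in-flight work complete
        if state is ThreadState.DRAIN and at_interruptible_point:
            return ThreadState.STALL       # release the physical thread to another logical thread
        if state is ThreadState.STALL and event == "event_resolved":
            return ThreadState.WAIT        # eligible to be scheduled on a physical thread again
        return state

    # Walk one logical thread through its states.
    s = ThreadState.WAIT
    s = next_state(s, "physical_thread_granted")           # -> ACTIVE
    s = next_state(s, "long_latency_event")                # -> DRAIN
    s = next_state(s, None, at_interruptible_point=True)   # -> STALL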

Method and system for memory renaming
    19. Patent application (in force)

    Publication No.: US20050149702A1

    Publication date: 2005-07-07

    Application No.: US10745700

    Filing date: 2003-12-29

    Abstract: Embodiments of the present invention provide a method, apparatus and system for memory renaming. In one embodiment, a decode unit may decode a load instruction. If the load instruction is predicted to be memory renamed, the load instruction may have a predicted store identifier associated with the load instruction. The decode unit may transform the load instruction that is predicted to be memory renamed into a data move instruction and a load check instruction. The data move instruction may read data from the cache based on the predicted store identifier, and the load check instruction may compare an identifier associated with an identified source store with the predicted store identifier. A retirement unit may retire the load instruction if the predicted store identifier matches the identifier associated with the identified source store. In another embodiment of the present invention, the processor may re-execute the load instruction without memory renaming if the predicted store identifier does not match the identifier associated with the identified source store.
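    The following simplified Python sketch illustrates the decode-time split of a predicted load into a data-move and a load-check micro-operation; the micro-op names, predictor input, and replay hook are hypothetical.

    def decode_load(load_addr, predicted_store_id):
        """Split a load predicted to be memory-renamed into a data-move plus a load-check."""
        if predicted_store_id is not None:               # predictor tied this load to a store
            return [("move_from_store", predicted_store_id),
                    ("load_check", predicted_store_id)]
        return [("load", load_addr)]                     # not predicted: keep the ordinary load

    def load_check(predicted_store_id, actual_store_id, replay_load):
        """Retire the load if the prediction matched; otherwise re-execute it without renaming."""
        if predicted_store_id == actual_store_id:
            return "retire"
        return replay_load()                             # mismatch: replay as a normal load

    # A correct prediction retires; a mismatch would trigger the replay path.
    uops = decode_load(0x1000, predicted_store_id=7)
    outcome = load_check(7, actual_store_id=7, replay_load=lambda: "replayed")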

Decoupling request for ownership tag reads from data read operations
    20. Patent application (in force)

    Publication No.: US20050144398A1

    Publication date: 2005-06-30

    Application No.: US10747145

    Filing date: 2003-12-30

    Abstract: Embodiments of the present invention relate to cache coherency. In an embodiment of the invention, a cache includes one or more cache lines. A store pipeline may retrieve a tag associated with one of the cache lines. If, based on the tag, the cache line is determined to be in a modified or exclusive state, the cache line may be updated without retrieving the data associated with the cache line.
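    A behavioral Python sketch of this store path appears below: only the tag/state is consulted first, and the data read is skipped when the line is already in a modified or exclusive state. The dict-based cache model and the request-for-ownership hook are illustrative assumptions; state names follow MESI.

    MODIFIED, EXCLUSIVE, SHARED, INVALID = "M", "E", "S", "I"

    def store(cache, addr, new_data, request_for_ownership):
        """Write new_data to addr, reading only the tag/state when the line is already owned."""
        state = cache.get(addr, (INVALID, None))[0]   # models a tag/state lookup only
        if state in (MODIFIED, EXCLUSIVE):
            cache[addr] = (MODIFIED, new_data)        # update in place; old data never read
        else:
            request_for_ownership(addr)               # obtain ownership (and the line) first
            cache[addr] = (MODIFIED, new_data)

    # A line already held Exclusive is updated without touching its old data.
    cache = {0x40: (EXCLUSIVE, b"old")}
    store(cache, 0x40, b"new", request_for_ownership=lambda a: None)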
