Managing Multiple Threads In A Single Pipeline
    11.
    Invention application
    Managing Multiple Threads In A Single Pipeline (In force)

    Publication number: US20130013898A1

    Publication date: 2013-01-10

    Application number: US13613820

    Filing date: 2012-09-13

    IPC classification: G06F9/38

    CPC classification: G06F9/3851

    Abstract: In one embodiment, the present invention includes a method for determining if an instruction of a first thread dispatched from a first queue associated with the first thread is stalled in a pipestage of a pipeline, and if so, dispatching an instruction of a second thread from a second queue associated with the second thread to the pipeline if the second thread is not stalled. Other embodiments are described and claimed.

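The dispatch policy the abstract describes, switching to a second thread's queue when the first thread's instruction stalls, can be sketched in a few lines of Python. This is an illustrative model only: the queue contents, instruction strings, and the `dispatch` function are hypothetical, not taken from the patent.

```python
from collections import deque

def dispatch(queues, stalled, active=0):
    """Pick the next instruction to send into the shared pipeline.

    queues  : per-thread instruction queues (illustrative model)
    stalled : per-thread flag, True if that thread's instruction is
              stalled in a pipestage
    active  : thread whose queue is tried first
    Returns (thread_id, instruction), or None if nothing can dispatch.
    """
    n = len(queues)
    for i in range(n):
        t = (active + i) % n
        if not stalled[t] and queues[t]:
            return t, queues[t].popleft()
    return None

# Thread 0 is stalled, so an instruction from thread 1 is dispatched instead.
q = [deque(["ld r1, [r2]"]), deque(["add r3, r4"])]
picked = dispatch(q, stalled=[True, False])
```

The round-robin scan starting at `active` keeps the pipeline fed whenever at least one thread has a ready, unstalled instruction.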

    Flow optimization and prediction for VSSE memory operations
    16.
    Invention application
    Flow optimization and prediction for VSSE memory operations (In force)

    Publication number: US20070143575A1

    Publication date: 2007-06-21

    Application number: US11315964

    Filing date: 2005-12-21

    IPC classification: G06F15/00

    Abstract: In one embodiment, a method for flow optimization and prediction for vector streaming single instruction, multiple data (SIMD) extension (VSSE) memory operations is disclosed. The method comprises generating an optimized micro-operation (μop) flow for an instruction to operate on a vector if the instruction is predicted to be unmasked and unit-stride, the instruction to access elements in memory, and accessing via the optimized μop flow two or more of the elements at the same time without determining masks of the two or more elements. Other embodiments are also described.

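The fast-path idea, accessing a contiguous run of elements in one wide operation when the access is predicted unmasked and unit-stride, instead of checking each element's mask, can be modeled as below. The function name, the memory model (a plain list), and the prediction flag are all illustrative assumptions, not the patent's actual μop flow.

```python
def load_vector(memory, base, n, *, predict_unmasked_unit_stride, mask=None):
    """Load n elements starting at index `base`.

    Fast path: when predicted unmasked and unit-stride, grab the whole
    contiguous run in one slice (modeling one wide access) without
    consulting per-element masks.
    Slow path: consult each element's mask bit individually.
    """
    if predict_unmasked_unit_stride:
        return memory[base:base + n]          # one wide, mask-free access
    return [memory[base + i] if mask[i] else 0 for i in range(n)]

mem = list(range(100))
fast = load_vector(mem, 10, 4, predict_unmasked_unit_stride=True)
slow = load_vector(mem, 10, 4, predict_unmasked_unit_stride=False,
                   mask=[1, 0, 1, 1])
```

The payoff mirrors the abstract: the predicted case skips all per-element mask determination and touches memory once.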

    Staggered execution stack for vector processing
    17.
    Invention application
    Staggered execution stack for vector processing (In force)

    Publication number: US20070079179A1

    Publication date: 2007-04-05

    Application number: US11240982

    Filing date: 2005-09-30

    IPC classification: G06F11/00

    Abstract: In one embodiment, the present invention includes a method for executing an operation on low order portions of first and second source operands using a first execution stack of a processor and executing the operation on high order portions of the first and second source operands using a second execution stack of the processor, where the operation in the second execution stack is staggered by one or more cycles from the operation in the first execution stack. Other embodiments are described and claimed.

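A minimal sketch of the staggering scheme: the low-order halves of the operands go to one execution stack at cycle 0, the high-order halves to a second stack one or more cycles later. The vector addition, the cycle bookkeeping, and the function name are illustrative assumptions, not the hardware implementation.

```python
def staggered_add(a, b, stagger=1):
    """Model two execution stacks operating on one vector add.

    Stack 0 handles the low-order half at cycle 0; stack 1 handles the
    high-order half, delayed by `stagger` cycles.
    Returns (schedule, result): schedule is a list of
    (cycle, stack, partial_result) tuples.
    """
    half = len(a) // 2
    low = [x + y for x, y in zip(a[:half], b[:half])]
    high = [x + y for x, y in zip(a[half:], b[half:])]
    schedule = [(0, 0, low), (stagger, 1, high)]
    return schedule, low + high

sched, res = staggered_add([1, 2, 3, 4], [5, 6, 7, 8])
```

Staggering lets the second stack reuse forwarding and control paths one cycle behind the first instead of doubling the datapath width.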

    Wakeup mechanisms for schedulers
    18.
    Invention application
    Wakeup mechanisms for schedulers (Pending, published)

    Publication number: US20070043932A1

    Publication date: 2007-02-22

    Application number: US11208916

    Filing date: 2005-08-22

    IPC classification: G06F9/30

    Abstract: Methods and apparatus to provide wakeup mechanisms for schedulers are described. In one embodiment, a scheduler broadcasts a uop scheduler identifier of a scheduled uop (or micro-operation) to one or more uops awaiting scheduling. The scheduler may further update one or more corresponding entries in a uop dependency matrix or a uop source identifier and data buffer.

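The broadcast-and-clear behavior of a dependency-matrix wakeup can be sketched as a small class. This is a toy model under assumed naming: a real matrix is a bit array indexed by scheduler entry, whereas here each waiting uop simply tracks the set of source uops it still depends on.

```python
class Scheduler:
    """Toy dependency-matrix wakeup.

    For each waiting uop w, matrix[w] is the set of source uops it still
    depends on (modeling the set bits of row w).  When a uop is
    scheduled, its identifier is broadcast and the corresponding column
    is cleared; a uop whose row is all-clear is ready to dispatch.
    """
    def __init__(self, deps):
        self.matrix = {w: set(srcs) for w, srcs in deps.items()}

    def broadcast(self, scheduled_uop):
        # Clear the scheduled uop's column in every waiting uop's row.
        for waiting in self.matrix.values():
            waiting.discard(scheduled_uop)

    def ready(self):
        return sorted(w for w, srcs in self.matrix.items() if not srcs)

s = Scheduler({"u2": {"u0", "u1"}, "u3": {"u1"}})
s.broadcast("u1")   # u1 scheduled: u3 becomes ready, u2 still waits on u0
```

Broadcasting one identifier per cycle wakes every dependent in parallel, which is why matrix schedulers scale with window size rather than dependency count.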

    Multilevel scheme for dynamically and statically predicting instruction resource utilization to generate execution cluster partitions
    20.
    Granted invention
    Multilevel scheme for dynamically and statically predicting instruction resource utilization to generate execution cluster partitions (In force)

    Publication number: US07562206B2

    Publication date: 2009-07-14

    Application number: US11323043

    Filing date: 2005-12-30

    IPC classification: G06F9/30

    Abstract: Microarchitecture policies and structures to predict execution clusters and facilitate inter-cluster communication are disclosed. In disclosed embodiments, sequentially ordered instructions are decoded into micro-operations. Execution of one set of micro-operations is predicted to involve execution resources to perform memory access operations and inter-cluster communication, but not to perform branching operations. Execution of a second set of micro-operations is predicted to involve execution resources to perform branching operations but not to perform memory access operations. The micro-operations are partitioned for execution in accordance with these predictions, the first set of micro-operations to a first cluster of execution resources and the second set of micro-operations to a second cluster of execution resources. The first and second sets of micro-operations are executed out of sequential order and are retired to represent their sequential instruction ordering.

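The partitioning step, steering decoded uops to a memory cluster or a branch cluster based on a prediction of the resources they will use, can be sketched with a trivial static predictor. The opcode names and the default-to-cluster-0 choice are illustrative assumptions; the patent's multilevel scheme combines static and dynamic prediction rather than a fixed opcode table.

```python
def partition(uops):
    """Steer each decoded uop to an execution cluster.

    Memory-access uops go to cluster 0, branching uops to cluster 1,
    matching the two predicted resource sets in the abstract.
    """
    clusters = {0: [], 1: []}
    mem_ops = {"load", "store"}
    branch_ops = {"jmp", "br", "call"}
    for op in uops:
        if op in mem_ops:
            clusters[0].append(op)
        elif op in branch_ops:
            clusters[1].append(op)
        else:
            # Other uops could go either way; this sketch sends them to
            # cluster 0 alongside the memory operations.
            clusters[0].append(op)
    return clusters

parts = partition(["load", "add", "br", "store", "jmp"])
```

Each cluster then executes its uops out of sequential order; in-order retirement (not modeled here) restores the original instruction ordering.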