META PREDICTOR RESTORATION UPON DETECTING MISPREDICTION
    61.
    Invention application
    META PREDICTOR RESTORATION UPON DETECTING MISPREDICTION (In force)

    Publication No.: US20130036297A1

    Publication Date: 2013-02-07

    Application No.: US13647153

    Filing Date: 2012-10-08

    IPC Class: G06F9/38

    CPC Class: G06F9/3861 G06F9/3848

    Abstract: Methods and apparatus for restoring a meta predictor system upon detecting a branch or binary misprediction are disclosed. An example apparatus may include a base misprediction history register to store a set of misprediction history values, each indicating whether a previous branch prediction made for a previous branch instruction was correct or incorrect. The apparatus may comprise a meta predictor to detect a branch misprediction of a current branch prediction based at least in part on an output of the base misprediction history register. The meta predictor may restore the base misprediction history register based on the detecting of the branch misprediction. Additional apparatus, systems, and methods are disclosed.

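    A minimal software sketch of the restore step described in the abstract, assuming a simple N-bit shift-register history and one checkpoint per in-flight prediction; the class name, the width parameter, and the tag-keyed checkpoint table are illustrative assumptions, not details taken from the patent.

        class MispredictionHistoryRegister:
            """N-bit shift register: each bit records whether an earlier branch
            prediction turned out correct (0) or incorrect (1)."""

            def __init__(self, width=16):
                self.width = width
                self.bits = 0
                self.checkpoints = {}   # branch tag -> history value saved at predict time

            def predict(self, tag):
                """Checkpoint the history when a prediction is made, then speculatively
                shift in a 0 (assume the prediction will turn out correct)."""
                self.checkpoints[tag] = self.bits
                self.bits = (self.bits << 1) & ((1 << self.width) - 1)

            def resolve(self, tag, mispredicted):
                """At resolution, restore and repair the history if the prediction was wrong."""
                saved = self.checkpoints.pop(tag)
                if mispredicted:
                    # restore the pre-prediction history, then record the misprediction
                    self.bits = ((saved << 1) | 1) & ((1 << self.width) - 1)
                # on a correct prediction the speculatively shifted-in 0 is already right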

    EFFICIENT METHOD AND APPARATUS FOR EMPLOYING A MICRO-OP CACHE IN A PROCESSOR
    63.
    Invention application
    EFFICIENT METHOD AND APPARATUS FOR EMPLOYING A MICRO-OP CACHE IN A PROCESSOR (In force)

    Publication No.: US20090249036A1

    Publication Date: 2009-10-01

    Application No.: US12060239

    Filing Date: 2008-03-31

    IPC Class: G06F9/30

    Abstract: Methods and apparatus for using micro-op caches in processors are disclosed. A tag match for an instruction pointer retrieves a set of micro-op cache line access tuples having matching tags. The set is stored in a match queue. Line access tuples from the match queue are used to access cache lines in a micro-op cache data array to supply a micro-op queue. On a micro-op cache miss, a macroinstruction translation engine (MITE) decodes macroinstructions to supply the micro-op queue. Instruction pointers are stored in a miss queue for fetching macroinstructions from the MITE. The MITE may be disabled to conserve power when the miss queue is empty; likewise, the micro-op cache data array may be disabled when the match queue is empty. On a subsequent micro-op cache miss, synchronization flags in the last micro-op from the micro-op cache indicate where micro-ops from the MITE merge with micro-ops from the micro-op cache.

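    The hit/miss flow and the two power-gating conditions can be pictured with a small software model. This is only a sketch; the dictionaries, queues, and method names (lookup, drain, mite_decode) are assumptions for illustration rather than the structures claimed in the patent.

        from collections import deque

        class MicroOpFrontEnd:
            """Toy model: tag match -> match queue -> data array reads;
            misses -> miss queue -> MITE decode."""

            def __init__(self, uop_cache_tags, uop_cache_data, mite_decode):
                self.tags = uop_cache_tags      # instruction pointer -> list of line-access tuples
                self.data = uop_cache_data      # line-access tuple -> list of micro-ops
                self.mite_decode = mite_decode  # callable standing in for the macroinstruction decoder
                self.match_queue = deque()
                self.miss_queue = deque()
                self.uop_queue = deque()

            def lookup(self, ip):
                tuples = self.tags.get(ip)
                if tuples is not None:
                    self.match_queue.append(tuples)   # hit: queue the matching line-access tuples
                else:
                    self.miss_queue.append(ip)        # miss: the MITE will decode starting at ip

            def drain(self):
                # The data array only does work while the match queue is non-empty,
                # mirroring the power-gating condition in the abstract.
                while self.match_queue:
                    for access_tuple in self.match_queue.popleft():
                        self.uop_queue.extend(self.data[access_tuple])
                # Likewise, the MITE only does work while the miss queue is non-empty.
                while self.miss_queue:
                    self.uop_queue.extend(self.mite_decode(self.miss_queue.popleft()))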

    Method and apparatus for partitioned pipelined execution of multiple execution threads
    65.
    Invention application
    Method and apparatus for partitioned pipelined execution of multiple execution threads (In force)

    Publication No.: US20080005544A1

    Publication Date: 2008-01-03

    Application No.: US11479245

    Filing Date: 2006-06-29

    IPC Class: G06F9/00

    Abstract: Methods and apparatus are disclosed for partitioning a microprocessor pipeline to support pipelined branch prediction and instruction fetching of multiple execution threads. A thread selection stage selects a thread from a plurality of execution threads. In one embodiment, storage in a branch prediction output queue is pre-allocated to a portion of the thread in one branch prediction stage in order to prevent stalling of subsequent stages in the branch prediction pipeline. In another embodiment, an instruction fetch stage fetches instructions at a fetch address corresponding to a portion of the selected thread. Another instruction fetch stage stores the instruction data in an instruction fetch output queue if enough storage is available. Otherwise, instruction fetch stages corresponding to the selected thread are invalidated and refetched to avoid stalling preceding stages in the instruction fetch pipeline, which may be fetching instructions of another thread.

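    A toy model of the pre-allocation idea in the first embodiment: a block of the selected thread enters the prediction pipeline only if a slot in the branch prediction output queue can be reserved for it up front. The round-robin thread selection, the queue capacity, and the method names are illustrative assumptions.

        from collections import deque
        from itertools import cycle

        class BranchPredictPipeline:
            def __init__(self, thread_ids, queue_capacity=8):
                self.thread_select = cycle(thread_ids)  # stand-in round-robin thread selection
                self.output_queue = deque()
                self.capacity = queue_capacity
                self.reserved = 0                       # slots held for blocks already in the pipeline

            def select_thread(self):
                return next(self.thread_select)

            def try_enter(self):
                """Admit a block only if an output slot can be reserved up front,
                so later prediction stages never stall for lack of storage."""
                if len(self.output_queue) + self.reserved < self.capacity:
                    self.reserved += 1
                    return True
                return False

            def complete(self, prediction):
                """A later prediction stage writes its result into the slot reserved earlier."""
                self.reserved -= 1
                self.output_queue.append(prediction)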

    Method and apparatus for partitioned pipelined fetching of multiple execution threads
    66.
    Invention application
    Method and apparatus for partitioned pipelined fetching of multiple execution threads (Lapsed)

    Publication No.: US20080005534A1

    Publication Date: 2008-01-03

    Application No.: US11479345

    Filing Date: 2006-06-29

    IPC Class: G06F9/30

    CPC Class: G06F9/3802 G06F9/3851

    Abstract: Methods and apparatus are disclosed for partitioning a microprocessor pipeline to support pipelined branch prediction and instruction fetching of multiple execution threads. A thread selection stage selects a thread from a plurality of execution threads. In one embodiment, storage in a branch prediction output queue is pre-allocated to a portion of the thread in one branch prediction stage in order to prevent stalling of subsequent stages in the branch prediction pipeline. In another embodiment, an instruction fetch stage fetches instructions at a fetch address corresponding to a portion of the selected thread. Another instruction fetch stage stores the instruction data in an instruction fetch output queue if enough storage is available. Otherwise, instruction fetch stages corresponding to the selected thread are invalidated and refetched to avoid stalling preceding stages in the instruction fetch pipeline, which may be fetching instructions of another thread.

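    Because this abstract matches the previous entry, the sketch below illustrates the other embodiment instead: the final fetch stage stores data in the instruction fetch output queue only when space is available, and otherwise squashes and re-issues the selected thread's in-flight fetches so earlier stages keep moving. All names and sizes are assumptions for illustration.

        from collections import deque

        class FetchPipeline:
            def __init__(self, capacity=8):
                self.fetch_output_queue = deque()
                self.capacity = capacity
                self.inflight = []   # (thread_id, fetch_address) for each occupied fetch stage

            def issue(self, thread_id, fetch_address):
                self.inflight.append((thread_id, fetch_address))

            def writeback(self, thread_id, data, refetch):
                """Final fetch stage: enqueue the data if space is available; otherwise
                invalidate this thread's in-flight fetch stages and hand their addresses
                back for refetch, rather than stalling the stages behind them."""
                if len(self.fetch_output_queue) < self.capacity:
                    self.fetch_output_queue.append((thread_id, data))
                    return
                squashed = [(tid, addr) for tid, addr in self.inflight if tid == thread_id]
                self.inflight = [(tid, addr) for tid, addr in self.inflight if tid != thread_id]
                for tid, addr in squashed:
                    refetch(tid, addr)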

    Forward-pass dead instruction identification
    67.
    Invention application
    Forward-pass dead instruction identification (Lapsed)

    Publication No.: US20070157007A1

    Publication Date: 2007-07-05

    Application No.: US11323037

    Filing Date: 2005-12-29

    IPC Class: G06F15/00

    CPC Class: G06F9/3832 G06F9/3838

    Abstract: Apparatuses and methods for dead instruction identification are disclosed. In one embodiment, an apparatus includes an instruction buffer and a dead instruction identifier. The instruction buffer is to store an instruction stream having a single entry point and a single exit point. The dead instruction identifier is to identify dead instructions based on a forward pass through the instruction stream.

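    One way to picture a forward pass over a single-entry, single-exit instruction stream is the conservative overwritten-before-read test below; the (dest, sources) tuple encoding is an assumed representation, not the patent's.

        def find_dead_instructions(block):
            """block: list of (dest_reg, src_regs) tuples in program order.
            Returns indices of instructions whose results are overwritten
            before they are ever read."""
            last_unread_write = {}    # register -> index of its most recent unread write
            dead = set()
            for i, (dest, srcs) in enumerate(block):
                for reg in srcs:
                    last_unread_write.pop(reg, None)   # that pending write is now consumed
                if dest in last_unread_write:
                    dead.add(last_unread_write[dest])  # overwritten before any read: dead
                last_unread_write[dest] = i
            return dead

        # r1 written at index 0 is overwritten at index 1 before being read, so index 0 is dead.
        print(find_dead_instructions([("r1", []), ("r1", ["r2"]), ("r3", ["r1"])]))   # -> {0}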

    Passing decoded instructions to both trace cache building engine and allocation module operating in trace cache or decoder reading state
    69.
    Invention grant
    Passing decoded instructions to both trace cache building engine and allocation module operating in trace cache or decoder reading state (Lapsed)

    Publication No.: US06950924B2

    Publication Date: 2005-09-27

    Application No.: US10032565

    Filing Date: 2002-01-02

    Abstract: A system and method of managing processor instructions provides enhanced performance. The system and method provide for decoding a first instruction into a plurality of operations with a decoder. A first copy of the operations is passed from the decoder to a build engine associated with a trace cache. The system and method further provide for passing a second copy of the operations from the decoder directly to a back end allocation module, such that the operations bypass the build engine while the allocation module is in a decoder reading state.

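    A sketch of the dual-copy flow, assuming a simple allocator state flag named decoder_read; the stub classes and the placeholder two-micro-op decode are illustrative, not the patented implementation.

        class BuildEngine:
            """Stand-in for the trace cache build engine: collects micro-op groups into lines."""
            def __init__(self):
                self.pending_lines = []
            def append(self, uops):
                self.pending_lines.append(uops)

        class Allocator:
            """Stand-in for the back-end allocation module with a 'decoder reading' state."""
            def __init__(self):
                self.state = "decoder_read"     # could also be "trace_cache_read"
                self.issued = []
            def allocate(self, uops):
                self.issued.extend(uops)

        class Decoder:
            def __init__(self, build_engine, allocator):
                self.build_engine = build_engine
                self.allocator = allocator

            def decode(self, macro_instruction):
                uops = [f"{macro_instruction}.uop{i}" for i in range(2)]  # placeholder decode
                self.build_engine.append(list(uops))     # copy 1: feeds the trace cache build engine
                if self.allocator.state == "decoder_read":
                    self.allocator.allocate(list(uops))  # copy 2: bypasses the build engine
                return uops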

    Method and apparatus to control steering of instruction streams
    70.
    Invention application
    Method and apparatus to control steering of instruction streams (Under examination, published)

    Publication No.: US20050149696A1

    Publication Date: 2005-07-07

    Application No.: US10745526

    Filing Date: 2003-12-29

    IPC Class: G06F9/30 G06F9/38

    Abstract: Rather than steering one macroinstruction at a time to decode logic in a processor, multiple macroinstructions may be steered at any given time. In one embodiment, a pointer calculation unit generates a pointer that assists in determining a stream of one or more macroinstructions that may be steered to decode logic in the processor.

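    A sketch of steering a stream of several macroinstructions at once rather than one at a time, assuming the instruction lengths are already known; the function names and the fixed width of four are illustrative assumptions.

        def calculate_stream_pointer(start, lengths, width=4):
            """Return the byte offset just past a stream of up to width
            macroinstructions beginning at start, given their lengths in order."""
            end = start
            for length in lengths[:width]:
                end += length
            return end

        def steer(buffer, start, lengths, decode, width=4):
            """Hand decode logic a whole stream of macroinstructions in one step."""
            end = calculate_stream_pointer(start, lengths, width)
            decode(buffer[start:end])
            return end    # the next stream starts here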