PROVIDING REFERENCES TO PREVIOUSLY DECODED INSTRUCTIONS OF RECENTLY-PROVIDED INSTRUCTIONS TO BE EXECUTED BY A PROCESSOR

    Publication Number: US20170277536A1

    Publication Date: 2017-09-28

    Application Number: US15079875

    Application Date: 2016-03-24

    Abstract: Providing references to previously decoded instructions of recently-provided instructions to be executed by a processor is disclosed herein. In one aspect, a low resource micro-operation controller is provided. Responsive to an instruction pipeline receiving an instruction address, the low resource micro-operation controller is configured to determine whether the received instruction address corresponds to an instruction address in a short history table. The short history table includes instruction addresses of recently-provided instructions having micro-ops in a post-decode queue. If the received instruction address corresponds to an instruction address in the short history table, the low resource micro-operation controller is configured to provide to the fetch stage a reference (e.g., a pointer) that corresponds to an entry in the post-decode queue in which the micro-ops corresponding to the instruction address are stored. Responsive to the decode stage receiving the reference, the low resource micro-operation controller is configured to provide the micro-ops from the post-decode queue for execution.
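
    The abstract describes a lookup structure (a short history table) keyed by instruction address that yields a reference into a post-decode queue holding already decoded micro-ops, so a repeated instruction can skip re-decoding. A minimal behavioral sketch in C++ follows; the class and field names, the queue capacity, and the simple FIFO eviction are illustrative assumptions, not details taken from the patent.

    // Behavioral sketch only (illustrative names and sizes): a short history table
    // mapping recently seen instruction addresses to entries of a post-decode queue,
    // so a hit returns a reference to already decoded micro-ops instead of re-decoding.
    #include <cstddef>
    #include <cstdint>
    #include <deque>
    #include <iostream>
    #include <optional>
    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    struct MicroOps { std::vector<std::string> uops; };  // decoded micro-ops for one instruction

    class LowResourceUopController {
     public:
      explicit LowResourceUopController(std::size_t capacity) : capacity_(capacity) {}

      // Fetch stage presents an instruction address; on a hit, return an index
      // ("reference") into the post-decode queue instead of decoding again.
      std::optional<std::size_t> lookup(std::uint64_t addr) const {
        auto it = short_history_.find(addr);
        if (it == short_history_.end()) return std::nullopt;
        return it->second;
      }

      // Decode stage produced micro-ops for a missed address; record them.
      std::size_t record(std::uint64_t addr, MicroOps ops) {
        if (auto it = short_history_.find(addr); it != short_history_.end()) return it->second;
        if (queue_.size() == capacity_) {              // evict the oldest entry
          short_history_.erase(queue_.front().first);
          queue_.pop_front();
          for (auto& kv : short_history_) --kv.second; // indices shift after the pop
        }
        queue_.emplace_back(addr, std::move(ops));
        std::size_t idx = queue_.size() - 1;
        short_history_[addr] = idx;
        return idx;
      }

      // Provide the stored micro-ops for execution, given a reference.
      const MicroOps& provide(std::size_t idx) const { return queue_.at(idx).second; }

     private:
      std::size_t capacity_;
      std::deque<std::pair<std::uint64_t, MicroOps>> queue_;         // post-decode queue
      std::unordered_map<std::uint64_t, std::size_t> short_history_; // addr -> queue index
    };

    int main() {
      LowResourceUopController ctrl(4);
      ctrl.record(0x1000, {{"uop_a", "uop_b"}});
      if (auto ref = ctrl.lookup(0x1000))              // hit: reuse decoded micro-ops
        std::cout << "hit, uops = " << ctrl.provide(*ref).uops.size() << "\n";
      if (!ctrl.lookup(0x2000)) std::cout << "miss, decode normally\n";
    }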

    Speculative history forwarding in overriding branch predictors, and related circuits, methods, and computer-readable media (granted invention patent, in force)

    Publication Number: US09582285B2

    Publication Date: 2017-02-28

    Application Number: US14223091

    Application Date: 2014-03-24

    CPC classification number: G06F9/3848 G06F9/3844

    Abstract: Speculative history forwarding in overriding branch predictors, and related circuits, methods, and computer-readable media are disclosed. In one embodiment, a branch prediction circuit including a first branch predictor and a second branch predictor is provided. The first branch predictor generates a first branch prediction for a conditional branch instruction, and the first branch prediction is stored in a first branch prediction history. The first branch prediction is also speculatively forwarded to a second branch prediction history. The second branch predictor subsequently generates a second branch prediction based on the second branch prediction history, including the speculatively forwarded first branch prediction. By enabling the second branch predictor to base its branch prediction on the speculatively forwarded first branch prediction, the accuracy of the second branch predictor may be improved.
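
    The mechanism is that the fast first-level prediction is shifted into the history consulted by the slower overriding predictor before the branch actually resolves. The C++ sketch below models that forwarding step with two toy predictors (a PC-indexed table and a history-hashed table); the predictor structures, sizes, and hash are assumptions made for illustration, not the circuits described in the patent.

    // Behavioral sketch only: a fast first-level predictor whose prediction is
    // shifted into the history used by a slower overriding second-level predictor
    // before the branch resolves (speculative history forwarding).
    #include <algorithm>
    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <iostream>

    constexpr std::size_t kTableBits = 10;
    constexpr std::size_t kTableSize = 1u << kTableBits;

    struct Counter2 {                       // 2-bit saturating counter
      int v = 1;
      bool taken() const { return v >= 2; }
      void update(bool t) { v = t ? std::min(3, v + 1) : std::max(0, v - 1); }
    };

    class FirstPredictor {                  // fast predictor, indexed by PC only
     public:
      bool predict(std::uint64_t pc) const { return table_[pc % kTableSize].taken(); }
      void update(std::uint64_t pc, bool t) { table_[pc % kTableSize].update(t); }
     private:
      std::array<Counter2, kTableSize> table_;
    };

    class SecondPredictor {                 // overriding predictor, uses branch history
     public:
      bool predict(std::uint64_t pc, std::uint64_t hist) const {
        return table_[(pc ^ hist) % kTableSize].taken();
      }
      void update(std::uint64_t pc, std::uint64_t hist, bool t) {
        table_[(pc ^ hist) % kTableSize].update(t);
      }
     private:
      std::array<Counter2, kTableSize> table_;
    };

    int main() {
      FirstPredictor bp1;
      SecondPredictor bp2;
      std::uint64_t hist1 = 0, hist2 = 0;   // first and second branch prediction histories

      std::uint64_t pc = 0x400123;
      bool p1 = bp1.predict(pc);
      hist1 = (hist1 << 1) | p1;            // stored in the first branch prediction history
      hist2 = (hist2 << 1) | p1;            // speculatively forwarded into the second
                                            // branch prediction history before resolution
      bool p2 = bp2.predict(pc, hist2);     // second prediction sees the forwarded bit
      std::cout << "first=" << p1 << " overriding=" << p2
                << " hist1=" << hist1 << " hist2=" << hist2 << "\n";

      bool actual = true;                   // resolved outcome (example value)
      bp1.update(pc, actual);
      bp2.update(pc, hist2, actual);
    }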


    Multiple instruction issuance with parallel inter-group and intra-group picking

    Publication Number: US10089114B2

    Publication Date: 2018-10-02

    Application Number: US15086052

    Application Date: 2016-03-30

    Abstract: A scheduler with a picker block capable of dispatching multiple instructions per cycle is disclosed. The picker block may comprise an inter-group picker and an intra-group picker. The inter-group picker may be configured to pick multiple ready groups when there are two or more ready groups among a plurality of groups of instructions, and pick a single ready group when the single ready group is the only ready group among the plurality of groups. The intra-group picker may be configured to pick one ready instruction from each of the multiple ready groups when the inter-group picker picks the multiple ready groups, and to pick multiple ready instructions from the single ready group when the inter-group picker picks the single ready group.
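
    The two-level pick can be summarized as: find the ready groups, then pick within them, issuing up to two instructions per cycle. The short C++ sketch below follows that split; the data layout (a vector of per-instruction ready flags per group) and the fixed issue width of two are illustrative assumptions, not details from the patent.

    // Behavioral sketch only: inter-group pick selects ready groups, intra-group
    // pick selects ready instructions from them, issuing up to two per cycle.
    #include <cstddef>
    #include <iostream>
    #include <utility>
    #include <vector>

    using Group = std::vector<bool>;                    // per-instruction "ready" flags
    using Pick = std::pair<std::size_t, std::size_t>;   // (group index, slot index)

    std::vector<Pick> pick_two(const std::vector<Group>& groups) {
      // Inter-group pick: collect groups containing at least one ready instruction.
      std::vector<std::size_t> ready_groups;
      for (std::size_t g = 0; g < groups.size(); ++g)
        for (bool r : groups[g])
          if (r) { ready_groups.push_back(g); break; }

      std::vector<Pick> picks;
      if (ready_groups.empty()) return picks;

      if (ready_groups.size() >= 2) {
        // Two or more ready groups: intra-group pick takes one ready instruction
        // from each of the first two ready groups.
        for (std::size_t k = 0; k < 2; ++k) {
          std::size_t g = ready_groups[k];
          for (std::size_t i = 0; i < groups[g].size(); ++i)
            if (groups[g][i]) { picks.emplace_back(g, i); break; }
        }
      } else {
        // Single ready group: intra-group pick takes up to two ready instructions
        // from that one group.
        std::size_t g = ready_groups[0];
        for (std::size_t i = 0; i < groups[g].size() && picks.size() < 2; ++i)
          if (groups[g][i]) picks.emplace_back(g, i);
      }
      return picks;
    }

    int main() {
      std::vector<Group> groups = {{false, true, true},
                                   {false, false, false},
                                   {true, false, false}};
      for (auto [g, i] : pick_two(groups))
        std::cout << "issue group " << g << " slot " << i << "\n";
    }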

    STORING NARROW PRODUCED VALUES FOR INSTRUCTION OPERANDS DIRECTLY IN A REGISTER MAP IN AN OUT-OF-ORDER PROCESSOR (invention patent application, under examination)

    Publication Number: US20170046154A1

    Publication Date: 2017-02-16

    Application Number: US14860032

    Application Date: 2015-09-21

    CPC classification number: G06F9/30112 G06F9/3838 G06F9/384 G06F9/3857

    Abstract: Storing narrow produced values for instruction operands directly in a register map in an out-of-order processor (OoP) is provided. An OoP is provided that includes an instruction processing system. The instruction processing system includes a number of instruction processing stages configured to pipeline the processing and execution of instructions according to a dataflow execution. The instruction processing system also includes a register map table (RMT) configured to store address pointers mapping logical registers to physical registers in a physical register file (PRF) for storing produced data for use by consumer instructions without overwriting logical registers for later-executed, out-of-order instructions. In certain aspects, the instruction processing system is configured to write back (i.e., store) narrow values produced by executed instructions directly into the RMT, as opposed to writing the narrow produced values into the PRF in a write-back stage.
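
    The key idea is that a register map table entry can carry either a pointer into the physical register file or, for a narrow result, the value itself. The C++ sketch below models a write-back that chooses between the two paths; the 16-bit narrowness threshold, the table sizes, and the free-slot handling are assumptions made for illustration, not details from the patent.

    // Behavioral sketch only: an RMT entry holds either a PRF pointer or an
    // inlined narrow value, so narrow results skip the physical register file.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    constexpr std::uint64_t kNarrowMax = 0xFFFF;   // assume values <= 16 bits count as narrow

    struct RmtEntry {
      bool has_inline_value = false;   // true: value stored directly in the RMT
      std::uint64_t inline_value = 0;
      int prf_index = -1;              // otherwise: pointer into the physical register file
    };

    struct Core {
      std::vector<RmtEntry> rmt{32};          // one entry per logical register
      std::vector<std::uint64_t> prf{64};     // physical register file

      // Write-back of a produced value for a logical register.
      void write_back(int logical_reg, std::uint64_t value, int free_prf_slot) {
        RmtEntry& e = rmt[logical_reg];
        if (value <= kNarrowMax) {            // narrow value: keep it in the RMT itself
          e.has_inline_value = true;
          e.inline_value = value;
          e.prf_index = -1;
        } else {                              // wide value: store in the PRF as usual
          prf[free_prf_slot] = value;
          e.has_inline_value = false;
          e.prf_index = free_prf_slot;
        }
      }

      // A consumer instruction reading its source operand.
      std::uint64_t read(int logical_reg) const {
        const RmtEntry& e = rmt[logical_reg];
        return e.has_inline_value ? e.inline_value : prf[e.prf_index];
      }
    };

    int main() {
      Core core;
      core.write_back(3, 42, 0);                   // narrow: lands in the RMT entry
      core.write_back(4, 0x1234567890ULL, 1);      // wide: goes to the PRF
      std::cout << core.read(3) << " " << core.read(4) << "\n";
    }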


    Providing load address predictions using address prediction tables based on load path history in processor-based systems

    Publication Number: US11709679B2

    Publication Date: 2023-07-25

    Application Number: US15087069

    Application Date: 2016-03-31

    CPC classification number: G06F9/3832

    Abstract: Aspects disclosed in the detailed description include providing load address predictions using address prediction tables based on load path history in processor-based systems. In one aspect, a load address prediction engine provides a load address prediction table containing multiple load address prediction table entries. Each load address prediction table entry includes a predictor tag field and a memory address field for a load instruction. The load address prediction engine generates a table index and a predictor tag based on an identifier and a load path history for a detected load instruction. The table index is used to look up a corresponding load address prediction table entry. If the predictor tag matches the predictor tag field of the load address prediction table entry corresponding to the table index, the memory address field of the load address prediction table entry is provided as a predicted memory address for the load instruction.
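
    In other words, the table index and the predictor tag are both derived from the load's identifier and the load path history, and a prediction is supplied only when the stored tag matches. The C++ sketch below follows that lookup/train flow; the hash, the table size, and the tag width are illustrative assumptions rather than values from the patent.

    // Behavioral sketch only: a load address prediction table indexed by a hash of
    // the load's PC and a load path history register, with a tag check before the
    // predicted memory address is provided.
    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <optional>
    #include <utility>

    constexpr std::size_t kIndexBits = 8;
    constexpr std::size_t kEntries = 1u << kIndexBits;

    struct Entry {
      bool valid = false;
      std::uint16_t tag = 0;         // predictor tag field
      std::uint64_t address = 0;     // predicted memory address field
    };

    class LoadAddressPredictor {
     public:
      std::optional<std::uint64_t> predict(std::uint64_t pc, std::uint64_t path_hist) const {
        auto [idx, tag] = index_and_tag(pc, path_hist);
        const Entry& e = table_[idx];
        if (e.valid && e.tag == tag) return e.address;   // tag match: provide prediction
        return std::nullopt;
      }

      void train(std::uint64_t pc, std::uint64_t path_hist, std::uint64_t actual_addr) {
        auto [idx, tag] = index_and_tag(pc, path_hist);
        table_[idx] = {true, tag, actual_addr};
      }

     private:
      static std::pair<std::size_t, std::uint16_t> index_and_tag(std::uint64_t pc,
                                                                 std::uint64_t hist) {
        std::uint64_t h = (pc >> 2) ^ hist;              // combine PC and load path history
        return {static_cast<std::size_t>(h & (kEntries - 1)),
                static_cast<std::uint16_t>((h >> kIndexBits) & 0xFFFF)};
      }
      std::array<Entry, kEntries> table_{};
    };

    int main() {
      LoadAddressPredictor lap;
      std::uint64_t pc = 0x400a10, hist = 0xBEEF;
      lap.train(pc, hist, 0x7fffd000);
      if (auto a = lap.predict(pc, hist)) std::cout << std::hex << *a << "\n";
    }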
