    11.
    Invention Grant
    Processor with multiple execution pipelines using pipe stage state information to control independent movement of instructions between pipe stages of an execution pipeline (Expired)

    Publication number: US6138230A

    Publication date: 2000-10-24

    Application number: US902908

    Application date: 1997-07-29

    IPC classification: G06F9/38 G06F9/00 G06F11/30

    Abstract: A microprocessor comprises a plurality of instruction pipelines having a plurality of stages for processing a stream of instructions, circuitry for simultaneously issuing instructions into two or more of the pipelines without regard to whether one of the simultaneously issued instructions has a data dependency on another of the simultaneously issued instructions, detecting circuitry for detecting dependencies between instructions in the pipelines, and circuitry for controlling the flow of instructions through the pipelines such that an instruction is not delayed due to a data dependency on another instruction unless the data dependency must be resolved for proper processing of the instruction in its current stage.
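
    A minimal C sketch of the stall decision described above, under an assumed simple register-dependency model: the younger instruction is held back only when the stage it currently occupies actually needs the older instruction's result. The stage names and the needs_operand_now helper are illustrative assumptions, not details from the patent.

        #include <stdbool.h>

        /* Illustrative pipe stages of one execution pipeline (assumed names). */
        typedef enum { STAGE_DECODE, STAGE_ADDR_CALC, STAGE_EXECUTE, STAGE_WRITEBACK } stage_t;

        typedef struct {
            int     dest_reg;       /* register written by the instruction    */
            int     src_regs[2];    /* registers read by the instruction      */
            bool    result_ready;   /* producer has forwarded/written result  */
            stage_t stage;          /* pipe stage the instruction now holds   */
        } instr_t;

        /* Assumed policy: a source operand is only required once the consumer
         * reaches ADDR_CALC (for address generation) or EXECUTE.              */
        static bool needs_operand_now(const instr_t *consumer)
        {
            return consumer->stage == STAGE_ADDR_CALC || consumer->stage == STAGE_EXECUTE;
        }

        /* Stall the consumer only if it depends on the producer AND that
         * dependency must be resolved for its current stage to proceed.       */
        bool must_stall(const instr_t *consumer, const instr_t *producer)
        {
            bool depends = consumer->src_regs[0] == producer->dest_reg ||
                           consumer->src_regs[1] == producer->dest_reg;
            return depends && !producer->result_ready && needs_operand_now(consumer);
        }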


    12.
    Invention Grant
    Branch processing unit with target cache read prioritization protocol for handling multiple hits (Expired)

    Publication number: US5835951A

    Publication date: 1998-11-10

    Application number: US606770

    Application date: 1996-02-27

    Applicant: Steven C. McMahan

    Inventor: Steven C. McMahan

    IPC classification: G06F9/38 G06F12/08

    Abstract: An up/dn read prioritization protocol is used to select between multiple hits in a set associative cache. Each set has associated with it an up/dn priority bit that controls read prioritization for multiple hits in the set: the up/dn bit designates either (i) up prioritization, in which the up direction is used to select the entry with the lowest way number, or (ii) dn prioritization, in which the down direction is used to select the entry with the highest way number. For each new entry allocated into the cache, the state of the up/dn priority bit is updated such that, for the next cache access resulting in multiple hits, the read prioritization protocol selects the new entry for output by the cache.
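
    A small C sketch of the up/dn selection among multiple hits, assuming a 4-way set-associative target cache; the way count and the function interface are assumptions made for illustration.

        #include <stdbool.h>

        #define NUM_WAYS 4

        /* Pick one way among possibly multiple hits in a set.
         * up_dn == true  -> "up" prioritization: lowest hitting way number wins.
         * up_dn == false -> "dn" prioritization: highest hitting way number wins.
         * Returns the selected way, or -1 if there is no hit at all.            */
        int select_hit_way(const bool hit[NUM_WAYS], bool up_dn)
        {
            if (up_dn) {
                for (int way = 0; way < NUM_WAYS; way++)
                    if (hit[way]) return way;
            } else {
                for (int way = NUM_WAYS - 1; way >= 0; way--)
                    if (hit[way]) return way;
            }
            return -1;
        }

        /* One possible allocation-time update (an assumption; the abstract only
         * states the effect): after a new entry lands in way w, set the set's
         * up/dn bit so the chosen direction reaches way w before any other
         * valid way, guaranteeing the new entry wins the next multi-hit read.   */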


    13.
    Invention Grant
    Data processor having an output terminal with selectable output impedances (Expired)

    Publication number: US5162672A

    Publication date: 1992-11-10

    Application number: US632901

    Application date: 1990-12-24

    IPC classification: H03K19/00 H03K19/0175

    CPC classification: H03K19/017581 H03K19/0005

    Abstract: A data processor has at least one output terminal whose output impedance a user of the data processor can vary depending upon the application environment of the data processor. A first output buffer of an output buffer stage has a predetermined output impedance and is coupled between an input of the stage and the output terminal. The first output buffer provides a first output terminal impedance. A second output buffer, having a lower output impedance than the first output buffer, may be selectively coupled in parallel with the first output buffer to reduce the output impedance of the output terminal. The coupling of the output buffers is controlled by a user of the data processor, who provides a control input for selecting one of a plurality of predetermined output terminal impedance values.
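
    As an arithmetic aside, if the two buffers are modeled as ideal resistive drivers, switching the second buffer in parallel lowers the terminal impedance by the usual parallel-resistance formula. This is only an illustrative model; the buffer values are examples, not figures from the patent.

        /* Effective output impedance with the second buffer optionally
         * coupled in parallel (ideal resistive model).                  */
        double output_impedance(double z_first, double z_second, int second_enabled)
        {
            if (!second_enabled)
                return z_first;                                   /* first buffer alone   */
            return (z_first * z_second) / (z_first + z_second);   /* parallel combination */
        }

        /* Example: 50 ohm and 100 ohm buffers in parallel give roughly 33.3 ohm. */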


    14.
    Invention Grant
    Data processor integrated circuit with selectable multiplexed/non-multiplexed address and data modes of operation (Expired)

    Publication number: US5086407A

    Publication date: 1992-02-04

    Application number: US361539

    Application date: 1989-06-05

    IPC classification: G06F13/36 G06F13/42 G06F15/78

    CPC classification: G06F13/4208 G06F15/7832

    Abstract: A single chip data processor integrated circuit having an input which can be programmed to place the circuit's address and data bus terminals into one of two modes. In a first or multiplexed mode, the circuit's address and data terminals are directly connected and address bits are time division multiplexed with data bits when both are written to external circuitry. In a second or normal mode, the circuit's address and data terminals are not connected and address bits are communicated with the circuit independently of data bits. No circuitry external to the integrated circuit is required to implement the multiplexed mode. A control portion insures that bit collisions are avoided when the circuit is in the multiplexed mode.
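
    A rough C model of the two modes: in multiplexed mode the shared pins carry the address in one phase and the data in the next, while in normal mode dedicated address pins leave only data on these pins. The phase structure and names are assumptions for illustration, not the patent's bus protocol.

        #include <stddef.h>
        #include <stdint.h>

        typedef enum { MODE_MULTIPLEXED, MODE_NORMAL } bus_mode_t;

        /* Values driven on the shared address/data pins during one external
         * write cycle.  In multiplexed mode the address phase precedes the
         * data phase on the same pins; in normal mode the address goes out
         * on its own pins, so only the data appears here.                   */
        size_t shared_pin_phases(bus_mode_t mode, uint32_t addr, uint32_t data,
                                 uint32_t phases[2])
        {
            if (mode == MODE_MULTIPLEXED) {
                phases[0] = addr;   /* phase 1: address, time-division multiplexed */
                phases[1] = data;   /* phase 2: data on the same pins              */
                return 2;
            }
            phases[0] = data;       /* normal mode: these pins carry data only     */
            return 1;
        }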


    15.
    Invention Grant
    Adjusting prefetch size based on source of prefetch address (Expired)

    Publication number: US5835967A

    Publication date: 1998-11-10

    Application number: US607673

    Application date: 1996-02-27

    Applicant: Steven C. McMahan

    Inventor: Steven C. McMahan

    IPC classification: G06F9/38

    Abstract: A prefetch unit is used, in an exemplary embodiment, in a superscalar, superpipelined microprocessor compatible with the x86 instruction set architecture. Normally, the prefetch unit performs split prefetching by generating low and high prefetch addresses in a single clock, with the high prefetch address being generated from the low prefetch address by incrementation. In cases where the low prefetch address is supplied to the prefetch unit too late in a clock period to generate the high prefetch address, such as where a branch instruction is not detected by a branch processing unit so that the target instruction address (i.e., the low prefetch address) is supplied by an address calculation stage, the prefetch unit generates a prefetch request consisting of only the low prefetch address. In an exemplary embodiment each prefetch request is for an 8 byte block of instruction bytes, such that the high prefetch address is generated by adding an 8-bit value to the low prefetch address, and, for low prefetch addresses supplied late, the prefetch unit detects whether the low prefetch address has a [0] in bit position 3, and if so, generates the high prefetch address by toggling that bit position to a [1] (because no carry ripple will affect the higher order bits).
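
    A minimal C sketch of the late-address case described above: when bit 3 of the low prefetch address is 0, adding the 8-byte block size cannot ripple a carry into the higher-order bits, so the high prefetch address can be formed by simply setting bit 3. The function names are illustrative.

        #include <stdbool.h>
        #include <stdint.h>

        /* Normal split prefetch: the high address is the low address plus the
         * 8-byte prefetch block size.                                          */
        uint32_t high_prefetch_addr(uint32_t low_addr)
        {
            return low_addr + 8;
        }

        /* Late-arriving low address: if bit 3 is 0, adding 8 merely sets bit 3
         * (no carry ripples into the higher-order bits), so the high address
         * can be produced by a single bit toggle instead of a full add.        */
        bool late_high_prefetch_addr(uint32_t low_addr, uint32_t *high_addr)
        {
            if ((low_addr & (1u << 3)) == 0) {
                *high_addr = low_addr | (1u << 3);
                return true;            /* high address generated in time       */
            }
            return false;               /* fall back: request low address only  */
        }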


    16.
    Invention Grant
    Address calculation logic including limit checking using carry out to flag limit violation (Expired)

    Publication number: US5784713A

    Publication date: 1998-07-21

    Application number: US27054

    Application date: 1993-03-05

    Applicant: Steven C. McMahan

    Inventor: Steven C. McMahan

    CPC classification: G06F9/34 G06F9/32 G06F12/1441

    Abstract: Address calculation logic in which an adder carry out flags a segment limit violation is used, in an exemplary embodiment, in a 486 type microprocessor. An effective address adder (24) and a three input adder (26) comprise limit checking logic. The three input adder receives an offset (EA[31:0]) and two limit-checking components: the memory reference fetch size and, for the exemplary embodiment, a converted segment limit. Specifically, the segment limit from a segment descriptor is converted such that, for either expand up or expand down segments, when the offset is added to this converted segment limit and (in the case of expand up segments) the fetch size in the three input adder, a limit violation is flagged by the carry out bit of the three input adder. In the exemplary embodiment, the segment limit is converted at segment load and stored on-chip in a segment descriptor register.
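
    A hedged C sketch of one consistent reading of the expand-up case: store the one's complement of the segment limit at segment load, and a violation then appears as the carry out of offset + (fetch size - 1) + converted limit. The exact conversion and operand encoding are assumptions; the patent only states that the carry out flags the violation.

        #include <stdbool.h>
        #include <stdint.h>

        /* Converted limit stored at segment load time: one's complement of the
         * 32-bit expand-up segment limit (an assumed conversion that makes the
         * carry-out test below come out right).                                 */
        uint32_t convert_limit_expand_up(uint32_t seg_limit)
        {
            return ~seg_limit;
        }

        /* Expand-up limit check: the access touches bytes offset .. offset +
         * fetch_size - 1, and a violation exists exactly when that range runs
         * past seg_limit.  With the converted limit, the violation is simply
         * the carry out of a single three-input add.                            */
        bool limit_violation(uint32_t offset, uint32_t fetch_size, uint32_t converted_limit)
        {
            uint64_t sum = (uint64_t)offset + (fetch_size - 1) + converted_limit;
            return (sum >> 32) & 1;     /* carry out of bit 31 flags the violation */
        }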


    17.
    Invention Grant
    Branch processing unit with target cache storing history for predicted taken branches and history cache storing history for predicted not-taken branches (Expired)

    Publication number: US5732253A

    Publication date: 1998-03-24

    Application number: US606666

    Application date: 1996-02-26

    Applicant: Steven C. McMahan

    Inventor: Steven C. McMahan

    IPC classification: G06F9/38 G06F12/08

    Abstract: A branch processing unit (BPU) is used, in an exemplary embodiment, in a superscalar, superpipelined microprocessor compatible with the x86 instruction set architecture. The BPU implements a branch prediction scheme using a target cache and a separate history cache. The target cache stores target addressing information and history information for predicted taken branches. The history cache stores history information only for predicted not-taken branches. The exemplary embodiment uses a two-bit prediction algorithm such that the target cache and the history cache need only store a single history bit (to differentiate between strong and weak states of respectively predicted taken and not-taken branches).
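
    A compact C sketch of the two-bit scheme the abstract implies: which cache a branch lives in (target cache vs. history cache) encodes the taken/not-taken prediction, and the single stored bit distinguishes strong from weak. The update rules below are the standard saturating two-bit transitions, used here as an assumption.

        #include <stdbool.h>

        /* Two-bit predictor state, split across the two caches:
         *   in the target cache  -> predicted taken (entry also holds the target),
         *   in the history cache -> predicted not-taken.                           */
        typedef struct {
            bool in_target_cache;   /* which cache currently holds the branch */
            bool strong;            /* the single stored history bit          */
        } branch_entry_t;

        /* Standard saturating two-bit update mapped onto the split caches: a
         * misprediction in the weak state migrates the entry to the other cache. */
        void update_prediction(branch_entry_t *e, bool actually_taken)
        {
            if (e->in_target_cache == actually_taken) {
                e->strong = true;                      /* confirmed: strengthen        */
            } else if (e->strong) {
                e->strong = false;                     /* first miss: weaken, stay put */
            } else {
                e->in_target_cache = actually_taken;   /* second miss: switch caches   */
                e->strong = false;                     /* arrive in the weak state     */
            }
        }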


    18.
    Invention Grant
    Branch processing unit with target cache using low/high banking to support split prefetching (Expired)

    Publication number: US5732243A

    Publication date: 1998-03-24

    Application number: US607675

    Application date: 1996-02-28

    Applicant: Steven C. McMahan

    Inventor: Steven C. McMahan

    IPC classification: G06F9/38 G06F12/08 G06F12/00

    Abstract: A branch processing unit (BPU) is used, in an exemplary embodiment, in a superscalar, superpipelined microprocessor compatible with the x86 instruction set architecture. The BPU includes a target cache organized in banks to support split prefetching. Prefetch requests (addressing a prefetch block of 16 bytes) are separated into low and high block addresses (addressing split blocks of 8 bytes). The low and high block addresses differ in bit position [3], designated a bank select bit, where the low block address of an associated prefetch request may be designated by a [1 or 0] such that a split block associated with a low block address may be allocated into either bank of the target cache (i.e., the low block of a prefetch request can start on an 8 byte alignment rather than the 16 byte alignment). For each prefetch request that includes both low and high block addresses, respective banks of the target cache are successively accessed based on the state of the bank select bit, such that the low block address is used to access one bank and the high block address is used to access the other bank.
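
    A short C sketch of the bank selection: bit [3] of an 8-byte split-block address picks the bank, so a 16-byte prefetch whose two halves differ only in that bit touches both banks in successive accesses. The two-bank organization follows the abstract; the access interface is an assumption.

        #include <stdint.h>

        #define BANK_SELECT_BIT 3   /* 8-byte split blocks: bit [3] of the block address */

        /* Bank (0 or 1) holding the target cache entry for a split-block address. */
        static inline unsigned bank_of(uint32_t block_addr)
        {
            return (block_addr >> BANK_SELECT_BIT) & 1u;
        }

        /* A prefetch request covering both 8-byte halves of a 16-byte block is
         * served by two successive accesses, one per bank; because the low block
         * may start on any 8-byte boundary, it can land in either bank.           */
        void split_prefetch(uint32_t low_addr, uint32_t high_addr,
                            void (*access_bank)(unsigned bank, uint32_t addr))
        {
            access_bank(bank_of(low_addr),  low_addr);    /* first split block  */
            access_bank(bank_of(high_addr), high_addr);   /* second split block */
        }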
