Power saving methods and apparatus to selectively enable cache bits based on known processor state
    1.
    Invention application
    Power saving methods and apparatus to selectively enable cache bits based on known processor state (Granted)

    Publication No.: US20060200686A1

    Publication date: 2006-09-07

    Application No.: US11073284

    Filing date: 2005-03-04

    IPC class: G06F1/26

    Abstract: A processor capable of fetching and executing variable-length instructions of at least two lengths is described. The processor operates in multiple modes. One of the modes restricts the instructions that can be fetched and executed to the longer-length instructions. An instruction cache is used for storing variable-length instructions and their associated predecode bit fields in an instruction cache line, and for storing the instruction address and the processor operating mode state information at the time of the fetch in a tag line. The processor operating mode state information indicates the program-specified mode of operation of the processor. The processor fetches instructions from the instruction cache for execution. As a result of an instruction fetch operation, the instruction cache may selectively enable the writing of predecode bit fields in the instruction cache and may selectively enable the reading of predecode bit fields stored in the instruction cache, based on the processor state at the time of the fetch.

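    The following is a minimal C++ sketch, not taken from the patent, of the gating idea: the tag stores the operating-mode state captured at fetch time, and that stored state decides whether the predecode bit fields are written or read at all. The names (Mode, ICacheLine, predecodeEnabled) and the two-mode split are assumptions for illustration.

        #include <array>
        #include <cstdint>
        #include <iostream>

        // Hypothetical software model of the mechanism, not the patented circuit:
        // each tag keeps the mode state recorded when the line was fetched, and
        // that state gates writes to and reads of the predecode bit fields.
        enum class Mode : uint8_t { LongOnly, Mixed };

        struct ICacheLine {
            uint32_t tag = 0;                    // instruction address tag
            Mode     modeAtFetch = Mode::Mixed;  // mode state stored with the tag
            std::array<uint8_t, 8> predecode{};  // predecode bit field per slot
        };

        // Predecode information only matters when variable-length (mixed) code can
        // be fetched, so the enable is derived from the known processor state.
        bool predecodeEnabled(Mode current) { return current == Mode::Mixed; }

        // On a hit, read a predecode field only if the mode recorded at fetch time
        // matches the current mode; otherwise skip the read (saving power in a
        // real bit-cell array).
        bool readPredecode(const ICacheLine& line, Mode current, int slot, uint8_t& bits) {
            if (!predecodeEnabled(current) || line.modeAtFetch != current) return false;
            bits = line.predecode[static_cast<size_t>(slot)];
            return true;
        }

        int main() {
            ICacheLine line;
            line.modeAtFetch = Mode::LongOnly;
            uint8_t bits = 0;
            std::cout << readPredecode(line, Mode::LongOnly, 0, bits) << '\n';  // 0: read gated off
        }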

    Method and apparatus for predicting branch instructions
    2.
    Invention application
    Method and apparatus for predicting branch instructions (Granted)

    Publication No.: US20060277397A1

    Publication date: 2006-12-07

    Application No.: US11144206

    Filing date: 2005-06-02

    IPC class: G06F9/44

    CPC class: G06F9/3844

    Abstract: A microprocessor includes two branch history tables, and is configured to use the first of the branch history tables for predicting branch instructions that hit in a branch target cache, and to use the second of the branch history tables for predicting branch instructions that miss in the branch target cache. As such, the first branch history table is configured to have an access speed matched to that of the branch target cache, so that its prediction information is available in time relative to branch target cache hit detection, which may happen early in the microprocessor's instruction pipeline. The second branch history table thus need only be fast enough to provide timely prediction information once branch target cache misses are recognized as branch instructions, such as at the instruction decode stage(s) of the instruction pipeline.

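    A short C++ sketch, under assumed table sizes and a standard 2-bit-counter scheme, of how a fast table can serve BTAC hits while a second, decode-timed table serves BTAC misses. The class and field names are invented.

        #include <array>
        #include <cstdint>
        #include <iostream>

        // Hypothetical model: one small history table matched to BTAC access speed,
        // consulted on BTAC hits early in the pipeline; a second, larger table
        // consulted later (e.g. at decode) for branches that missed in the BTAC.
        struct BranchPredictor {
            std::array<uint8_t, 256>  fastBHT{};   // timing matched to the BTAC
            std::array<uint8_t, 1024> slowBHT{};   // only needs decode-stage timing

            bool predictOnBtacHit(uint32_t pc) const {
                return fastBHT[(pc >> 2) & 0xFF] >= 2;      // 2-bit counter: taken?
            }
            bool predictOnBtacMiss(uint32_t pc) const {
                return slowBHT[(pc >> 2) & 0x3FF] >= 2;
            }
            bool predict(uint32_t pc, bool btacHit) const {
                return btacHit ? predictOnBtacHit(pc) : predictOnBtacMiss(pc);
            }
        };

        int main() {
            BranchPredictor bp;
            bp.fastBHT[(0x2000 >> 2) & 0xFF] = 3;            // strongly taken
            std::cout << bp.predict(0x2000, true) << '\n';   // 1: fast table used on a BTAC hit
            std::cout << bp.predict(0x2000, false) << '\n';  // 0: slow table used on a BTAC miss
        }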

    Branch target address cache storing two or more branch target addresses per index
    4.
    Invention application
    Branch target address cache storing two or more branch target addresses per index (Pending, published)

    Publication No.: US20060218385A1

    Publication date: 2006-09-28

    Application No.: US11089072

    Filing date: 2005-03-23

    IPC class: G06F9/00

    CPC class: G06F9/3806 G06F9/3848

    Abstract: A Branch Target Address Cache (BTAC) stores at least two branch target addresses in each cache line. The BTAC is indexed by a truncated branch instruction address. An offset obtained from a branch prediction offset table determines which of the branch target addresses is taken as the predicted branch target address. The offset table may be indexed in several ways, including by a branch history, by a hash of a branch history and part of the branch instruction address, by a gshare value, randomly, in a round-robin order, or by other methods.

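    A C++ sketch of the lookup path under assumed sizes: each entry, indexed by a truncated branch address, holds two targets, and a separate offset table (here indexed gshare-style, one of the options the abstract lists) selects which one is predicted. All identifiers are hypothetical.

        #include <array>
        #include <cstdint>
        #include <iostream>

        // Hypothetical model: a BTAC entry holds two target addresses; an offset
        // table chooses between them at prediction time.
        struct BtacEntry { std::array<uint32_t, 2> targets{}; };

        struct Btac {
            std::array<BtacEntry, 64> entries{};     // indexed by truncated branch PC
            std::array<uint8_t, 256>  offsetTable{}; // each value selects target 0 or 1

            uint32_t predict(uint32_t pc, uint32_t history) const {
                const BtacEntry& e = entries[(pc >> 2) & 0x3F];
                uint8_t offset = offsetTable[((pc >> 2) ^ history) & 0xFF] & 1;  // gshare-style index
                return e.targets[offset];
            }
        };

        int main() {
            Btac btac;
            btac.entries[(0x4000 >> 2) & 0x3F].targets = {0x5000, 0x6000};
            btac.offsetTable[((0x4000 >> 2) ^ 0x1) & 0xFF] = 1;          // pick the second target
            std::cout << std::hex << btac.predict(0x4000, 0x1) << '\n';  // 6000
        }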

    Method and apparatus for managing cache partitioning
    5.
    Invention application
    Method and apparatus for managing cache partitioning (Granted)

    Publication No.: US20070067574A1

    Publication date: 2007-03-22

    Application No.: US11233575

    Filing date: 2005-09-21

    IPC class: G06F12/00

    CPC class: G06F12/126

    Abstract: A method of managing cache partitions provides a first pointer for higher-priority writes and a second pointer for lower-priority writes, and uses the first pointer to delimit the lower-priority writes. For example, locked writes have greater priority than unlocked writes, so a first pointer may be used for locked writes and a second pointer for unlocked writes. The first pointer is advanced responsive to making locked writes, and its advancement thus defines a locked region and an unlocked region. The second pointer is advanced responsive to making unlocked writes. The second pointer also is advanced (or retreated) as needed to prevent it from pointing to locations already traversed by the first pointer. Thus, the first pointer delimits the unlocked region and allows the locked region to grow at the expense of the unlocked region.

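    A small C++ sketch of the two-pointer bookkeeping over one set of ways, with invented names and an assumed 8-way set: the first pointer advances on locked writes and the second pointer is pushed forward whenever the growing locked region would overtake it.

        #include <cstdint>
        #include <iostream>

        // Hypothetical model: ways [0, lockedPtr) form the locked region; unlocked
        // writes cycle through the remaining ways and never reuse locked ones.
        struct PartitionedSet {
            static constexpr uint32_t kWays = 8;
            uint32_t lockedPtr = 0;     // next way to use for a locked write
            uint32_t unlockedPtr = 0;   // next way to use for an unlocked write

            uint32_t lockedWrite() {                       // assumes the set is not fully locked
                uint32_t way = lockedPtr++;
                if (unlockedPtr < lockedPtr) unlockedPtr = lockedPtr;  // keep it out of the locked region
                return way;
            }
            uint32_t unlockedWrite() {
                uint32_t way = unlockedPtr;
                unlockedPtr = (unlockedPtr + 1 < kWays) ? unlockedPtr + 1 : lockedPtr;  // wrap within the unlocked region
                return way;
            }
        };

        int main() {
            PartitionedSet set;
            uint32_t a = set.lockedWrite();    // way 0; locked region is now [0, 1)
            uint32_t b = set.unlockedWrite();  // way 1
            uint32_t c = set.lockedWrite();    // way 1: the locked region grows over it
            uint32_t d = set.unlockedWrite();  // way 2
            std::cout << a << ' ' << b << ' ' << c << ' ' << d << '\n';  // 0 1 1 2
        }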

    Instruction cache having fixed number of variable length instructions
    6.
    Invention application
    Instruction cache having fixed number of variable length instructions (Granted)

    Publication No.: US20070028050A1

    Publication date: 2007-02-01

    Application No.: US11193547

    Filing date: 2005-07-29

    IPC class: G06F12/00

    Abstract: A fixed number of variable-length instructions are stored in each line of an instruction cache. The variable-length instructions are aligned along predetermined boundaries. Since the length of each instruction in the line, and hence the span of memory the instructions occupy, is not known, the address of the next instruction to be fetched is calculated and stored with the cache line. Ascertaining the instruction boundaries, aligning the instructions, and calculating the next fetch address are performed in a predecoder prior to placing the instructions in the cache.

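    A C++ sketch of the predecode step under assumptions: four instructions per line (the fixed count is not specified in the abstract), 16-bit or 32-bit instructions, and a made-up length rule. Because the line's byte span varies, the address of the next fetch is computed and stored with the line. Names and the encoding rule are hypothetical.

        #include <array>
        #include <cstdint>
        #include <iostream>
        #include <vector>

        // Hypothetical model of the predecoder filling one line: exactly four
        // instructions per line, each 2 or 4 bytes, placed in fixed slots, with
        // the next fetch address stored alongside because the byte span varies.
        struct ICacheLine {
            std::array<uint32_t, 4> slots{};   // one aligned slot per instruction
            uint32_t nextFetchAddr = 0;
        };

        // The "low two bits == 0b11 means 32-bit" rule is invented for the example.
        ICacheLine predecode(const std::vector<uint8_t>& mem, uint32_t addr) {
            ICacheLine line;
            uint32_t pc = addr;
            for (auto& slot : line.slots) {
                uint32_t halfword = mem[pc] | (uint32_t(mem[pc + 1]) << 8);
                bool is32 = (halfword & 0x3) == 0x3;
                slot = is32 ? (halfword | (uint32_t(mem[pc + 2] | (mem[pc + 3] << 8)) << 16))
                            : halfword;
                pc += is32 ? 4 : 2;
            }
            line.nextFetchAddr = pc;           // computed once, stored with the line
            return line;
        }

        int main() {
            std::vector<uint8_t> mem(64, 0);
            mem[0] = 0x03;                     // first instruction decodes as 32-bit
            ICacheLine line = predecode(mem, 0);
            std::cout << line.nextFetchAddr << '\n';   // 10 (= 4 + 2 + 2 + 2)
        }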

    Forward looking branch target address caching
    7.
    Invention application
    Forward looking branch target address caching (Pending, published)

    Publication No.: US20060200655A1

    Publication date: 2006-09-07

    Application No.: US11073283

    Filing date: 2005-03-04

    IPC class: G06F9/00

    Abstract: A pipelined processor comprises an instruction cache (iCache), a branch target address cache (BTAC), and processing stages, including a stage to fetch from the iCache and the BTAC. To compensate for the number of cycles needed to fetch a branch target address from the BTAC, the fetch from the BTAC leads the fetch of a branch instruction from the iCache by an amount related to the cycles needed to fetch from the BTAC. Disclosed examples either decrement a write address of the BTAC or increment a fetch address of the BTAC, by an amount essentially corresponding to one less than the cycles needed for a BTAC fetch.

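    A C++ sketch of the write-address variant, with an assumed 3-cycle BTAC read and 8-byte fetch groups (neither number is in the abstract): the entry is stored under the fetch address that occurs two fetch groups ahead of the branch, so a lookup started with the ordinary fetch address returns the target early enough. The helper names are invented.

        #include <cstdint>
        #include <iostream>
        #include <unordered_map>

        // Hypothetical model: writes are made under a decremented address so that
        // a BTAC probe issued with the current fetch address leads the branch by
        // (kBtacLatency - 1) fetch groups.
        constexpr uint32_t kBtacLatency = 3;                   // assumed cycles per BTAC read
        constexpr uint32_t kFetchBytes  = 8;                   // assumed bytes fetched per cycle
        constexpr uint32_t kLeadBytes   = (kBtacLatency - 1) * kFetchBytes;

        using Btac = std::unordered_map<uint32_t, uint32_t>;   // lookup address -> target

        void btacWrite(Btac& btac, uint32_t branchAddr, uint32_t target) {
            btac[branchAddr - kLeadBytes] = target;            // decremented write address
        }

        bool btacLookup(const Btac& btac, uint32_t fetchAddr, uint32_t& target) {
            auto it = btac.find(fetchAddr);                    // fetch address used unchanged
            if (it == btac.end()) return false;
            target = it->second;
            return true;
        }

        int main() {
            Btac btac;
            btacWrite(btac, 0x140, 0x2000);                    // branch at 0x140, target 0x2000
            uint32_t target = 0;
            bool hit = btacLookup(btac, 0x130, target);        // hit two fetch groups early
            std::cout << hit << ' ' << std::hex << target << '\n';   // 1 2000
        }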

    Method and apparatus for managing a return stack
    9.
    Invention application
    Method and apparatus for managing a return stack (Granted)

    Publication No.: US20060190711A1

    Publication date: 2006-08-24

    Application No.: US11061975

    Filing date: 2005-02-18

    IPC class: G06F9/00

    Abstract: A processor includes a return stack circuit used for predicting procedure return addresses for instruction pre-fetching, wherein a return stack controller determines the number of return levels associated with a given return instruction and pops that number of return addresses from the return stack. Popping multiple return addresses from the return stack permits the processor to pre-fetch the return address of the original calling procedure in a chain of successive procedure calls. In one embodiment, the return stack controller reads the number of return levels from a value embedded in the return instruction. A complementary compiler calculates the return-level values for given return instructions and embeds those values in them at compile time. In another embodiment, the return stack circuit dynamically tracks the number of return levels by counting the procedure calls (branches) in a chain of successive procedure calls.

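    A C++ sketch of the first embodiment, where the level count is taken from the return instruction itself: popping several return addresses at once lets the prediction jump straight back to the original caller in a chain of calls. The class and its methods are invented names.

        #include <cstdint>
        #include <iostream>
        #include <vector>

        // Hypothetical model of a return stack that pops a specified number of
        // levels and predicts the last address popped.
        struct ReturnStack {
            std::vector<uint32_t> stack;

            void pushCall(uint32_t returnAddr) { stack.push_back(returnAddr); }

            uint32_t popLevels(uint32_t levels) {
                uint32_t predicted = 0;
                for (uint32_t i = 0; i < levels && !stack.empty(); ++i) {
                    predicted = stack.back();       // last one popped is the prediction
                    stack.pop_back();
                }
                return predicted;
            }
        };

        int main() {
            ReturnStack rs;
            rs.pushCall(0x1004);   // outermost caller
            rs.pushCall(0x2008);   // intermediate procedure
            rs.pushCall(0x300C);   // innermost procedure
            // A return instruction encoded with 3 levels skips the intermediate returns.
            std::cout << std::hex << rs.popLevels(3) << '\n';   // 1004
        }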

    Caching memory attribute indicators with cached memory data field
    10.
    Invention application
    Caching memory attribute indicators with cached memory data field (Granted)

    Publication No.: US20070094475A1

    Publication date: 2007-04-26

    Application No.: US11254873

    Filing date: 2005-10-20

    IPC class: G06F12/00

    Abstract: A processing system may include a memory configured to store data in a plurality of pages, a TLB, and a memory cache including a plurality of cache lines. Each page in the memory may include a plurality of lines of memory. The memory cache may permit, when a virtual address is presented to the cache, a matching cache line to be identified from the plurality of cache lines, the matching cache line having an address that matches the virtual address. The memory cache may be configured to permit one or more page attributes of the page located at the matching address to be retrieved from the memory cache, and not from the TLB, by further storing in each one of the cache lines a page attribute of the line of data stored in that cache line.

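    A C++ sketch of the hit path under an assumed direct-mapped, virtually tagged cache with 64-byte lines: the page attributes are copied into the line when it is filled, so a later hit returns them without a TLB access. The attribute fields and the index/tag split are invented.

        #include <array>
        #include <cstdint>
        #include <iostream>

        // Hypothetical model: each line carries a copy of its page's attributes.
        struct PageAttrs { bool cacheable; bool writable; };

        struct CacheLine {
            bool      valid = false;
            uint32_t  vtag  = 0;            // virtual tag for the stored line
            PageAttrs attrs{};              // attributes captured at fill time
            std::array<uint8_t, 64> data{};
        };

        struct Cache {
            std::array<CacheLine, 128> lines{};   // direct-mapped, 64-byte lines

            // On a hit, the attributes come from the line itself, not the TLB.
            bool lookup(uint32_t vaddr, PageAttrs& attrs) const {
                const CacheLine& line = lines[(vaddr >> 6) & 0x7F];
                if (!line.valid || line.vtag != (vaddr >> 13)) return false;
                attrs = line.attrs;
                return true;
            }

            // On a fill, the TLB is consulted once and the attributes stored here.
            void fill(uint32_t vaddr, PageAttrs attrsFromTlb) {
                CacheLine& line = lines[(vaddr >> 6) & 0x7F];
                line.valid = true;
                line.vtag  = vaddr >> 13;
                line.attrs = attrsFromTlb;
            }
        };

        int main() {
            Cache cache;
            cache.fill(0xA040, PageAttrs{true, false});
            PageAttrs attrs{};
            bool hit = cache.lookup(0xA040, attrs);
            std::cout << hit << ' ' << attrs.cacheable << '\n';   // 1 1
        }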