Reducing power consumption in a sequential cache
    1.
    Granted patent
    Reducing power consumption in a sequential cache (Active)

    Publication number: US07457917B2

    Publication date: 2008-11-25

    Application number: US11027413

    Filing date: 2004-12-29

    Abstract: In one embodiment, the present invention includes a cache memory, which may be a sequential cache, having multiple banks. Each of the banks includes a data array, a decoder coupled to the data array to select a set of the data array, and a sense amplifier. Only a bank to be accessed may be powered, and in some embodiments early way information may be used to maintain remaining banks in a power reduced state. In some embodiments, clock gating may be used to maintain various components of the cache memory in a power reduced state. Other embodiments are described and claimed.

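    The bank-gating idea in the abstract above can be illustrated with a small software model. This is a sketch, not the patented implementation: the bank count, the address-to-bank mapping, and the class names are all invented for the example; only the behavior (power exactly one bank per access) follows the abstract.

```python
# Illustrative model of a banked cache in which only the bank holding the
# requested set is powered for a lookup. All structural details here
# (4 banks, 64 sets per bank, modulo address mapping) are assumptions.

class Bank:
    def __init__(self, num_sets):
        # Each set maps tags to cached lines; contents are irrelevant
        # to the power-gating behavior being modeled.
        self.sets = [dict() for _ in range(num_sets)]
        self.powered = False

class BankedCache:
    def __init__(self, num_banks=4, sets_per_bank=64):
        self.banks = [Bank(sets_per_bank) for _ in range(num_banks)]
        self.sets_per_bank = sets_per_bank

    def _locate(self, address):
        set_index = address % (len(self.banks) * self.sets_per_bank)
        return set_index // self.sets_per_bank, set_index % self.sets_per_bank

    def read(self, address, tag):
        bank_id, local_set = self._locate(address)
        # Power gate: only the bank being accessed is on this cycle.
        for i, bank in enumerate(self.banks):
            bank.powered = (i == bank_id)
        return self.banks[bank_id].sets[local_set].get(tag)
```

    After any `read`, exactly one bank reports `powered == True`; the early-way and clock-gating refinements mentioned in the abstract would extend this same gating decision to individual ways and clocked components.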

    Look ahead LRU array update scheme to minimize clobber in sequentially accessed memory

    Publication number: US20060218351A1

    Publication date: 2006-09-28

    Application number: US11414541

    Filing date: 2006-05-01

    IPC classification: G06F12/00

    Abstract: A high-speed memory management technique that minimizes clobber in sequentially accessed memory, such as a trace cache. The method includes selecting a victim set from a sequentially accessed memory; selecting a victim way for the selected victim set; reading a next-way pointer from a trace line of a trace currently stored in the selected victim way, if the selected victim way has the next-way pointer; and writing a next line of the new trace into the selected victim way over the trace line of the currently stored trace. The method also includes forcing the replacement algorithm of the next set to select a victim way of the next set using the next-way pointer, if the trace line of the currently stored trace is not an active trace tail line.
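    The look-ahead replacement flow described above can be sketched as follows. The trace-line fields (`next_way`, `is_tail`) and the per-set "forced victim" slot are assumptions made for illustration; the abstract specifies the behavior but not the data layout.

```python
# Hypothetical sketch of look-ahead LRU replacement: when a trace line is
# clobbered and it is not an active tail, its next-way pointer forces the
# next set to evict the remainder of the same old trace.

class TraceLine:
    def __init__(self, data, next_way=None, is_tail=False):
        self.data = data
        self.next_way = next_way   # way of this trace's next line, in the next set
        self.is_tail = is_tail     # True if this is the trace's active tail line

class TraceCache:
    def __init__(self, num_sets, num_ways):
        self.sets = [[None] * num_ways for _ in range(num_sets)]
        self.forced_victim = [None] * num_sets  # overrides LRU when set
        self.lru = [0] * num_sets               # trivial stand-in for real LRU

    def pick_victim_way(self, set_idx):
        # Use the forced way if a prior eviction dictated one, else fall back to LRU.
        if self.forced_victim[set_idx] is not None:
            way = self.forced_victim[set_idx]
            self.forced_victim[set_idx] = None
            return way
        return self.lru[set_idx]

    def write_line(self, set_idx, new_line):
        way = self.pick_victim_way(set_idx)
        old = self.sets[set_idx][way]
        # Look ahead: steer the next set's replacement toward the rest of
        # the trace being clobbered, so one trace absorbs the damage.
        if old is not None and old.next_way is not None and not old.is_tail:
            next_set = (set_idx + 1) % len(self.sets)
            self.forced_victim[next_set] = old.next_way
        self.sets[set_idx][way] = new_line
        return way
```

    The point of the forcing step is that a new trace overwrites lines of a single old trace rather than partially clobbering several traces, which is what the abstract means by minimizing clobber.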

    Reducing power consumption in a sequential cache
    4.
    Patent application
    Reducing power consumption in a sequential cache (Active)

    Publication number: US20060143382A1

    Publication date: 2006-06-29

    Application number: US11027413

    Filing date: 2004-12-29

    IPC classification: G06F12/00

    Abstract: In one embodiment, the present invention includes a cache memory, which may be a sequential cache, having multiple banks. Each of the banks includes a data array, a decoder coupled to the data array to select a set of the data array, and a sense amplifier. Only a bank to be accessed may be powered, and in some embodiments early way information may be used to maintain remaining banks in a power reduced state. In some embodiments, clock gating may be used to maintain various components of the cache memory in a power reduced state. Other embodiments are described and claimed.


    Low power cache architecture
    6.
    Patent application
    Low power cache architecture (Active)

    Publication number: US20050097277A1

    Publication date: 2005-05-05

    Application number: US11000054

    Filing date: 2004-12-01

    IPC classification: G06F12/08 G06F12/00

    Abstract: In a processor cache, cache circuits are mapped into one or more logical modules. Each module may be powered down independently of other modules in response to microinstructions processed by the cache. Power control may be applied on a microinstruction-by-microinstruction basis. Because the microinstructions determine which modules are used, power savings may be achieved by powering down those modules that are not used. A cache layout organization may be modified to distribute a limited number of ways across addressable cache banks. By associating fewer than the total number of ways with a bank (for example, one or two ways), the size of memory clusters within the bank may be reduced. The reduction in the size of the memory cluster reduces the power needed for an address decoder to address sets within the bank.

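    The per-microinstruction module gating described above reduces to a simple rule: power exactly the modules the current microinstruction names. The module names and the microinstruction encoding below are invented for the example; the patent does not enumerate them in this abstract.

```python
# Illustrative sketch: each microinstruction declares the cache modules it
# needs; everything else is powered down for that microinstruction. Module
# names here are assumptions, not taken from the patent.

ALL_MODULES = {"tag_array", "data_array", "fill_buffer", "snoop_logic"}

def modules_powered(uop_modules_needed):
    """Return the per-module power state applied while one microinstruction
    executes: True only for modules the microinstruction actually uses."""
    return {m: (m in uop_modules_needed) for m in ALL_MODULES}

# A plain cache read touches only the tag and data arrays, so the fill
# buffer and snoop logic can stay powered down for this microinstruction.
state = modules_powered({"tag_array", "data_array"})
```

    Because the decision is re-evaluated for every microinstruction, modules are powered only for the fraction of instructions that use them, which is where the abstract's power savings come from.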

    Method and apparatus for a stew-based loop predictor
    7.
    Patent application
    Method and apparatus for a stew-based loop predictor (Active)

    Publication number: US20050138341A1

    Publication date: 2005-06-23

    Application number: US10739689

    Filing date: 2003-12-17

    IPC classification: G06F9/00 G06F9/32 G06F9/38

    Abstract: A method and apparatus for a loop predictor for predicting the end of a loop is disclosed. In one embodiment, the loop predictor may have a predict counter to hold a predict count representing the expected number of times that a predictor stew value will repeat during the execution of a given loop. The loop predictor may also have one or more running counters to hold a count of the times that the stew value has repeated during the execution of the present loop. When the counter values match, the predictor may issue a prediction that the loop will end.


    Trace reuse
    8.
    Patent application
    Trace reuse (Pending, published)

    Publication number: US20060036834A1

    Publication date: 2006-02-16

    Application number: US10917582

    Filing date: 2004-08-13

    IPC classification: G06F9/30

    CPC classification: G06F9/3808 G06F9/325

    Abstract: A trace management architecture to enable the reuse of uops within one or more repeated traces. More particularly, embodiments of the invention relate to a technique that prevents multiple accesses to various functional units within a trace management architecture by reusing traces, or sequences of traces, that are repeated during a period of operation of the microprocessor. This avoids performance gaps due to multiple trace cache accesses and increases the rate at which uops can be executed within a processor.

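    The reuse idea above can be sketched as a small buffer that replays a trace's uops when the same trace is requested again, skipping the trace cache access. The buffer shape, the trace identifiers, and the single-trace capacity are all assumptions for illustration; the patent describes reusing sequences of traces as well.

```python
# Sketch (names assumed): if the requested trace is the one already held in
# a reuse buffer, replay its buffered uops instead of re-accessing the
# trace cache, avoiding the access latency the abstract describes.

class TraceReuseBuffer:
    def __init__(self):
        self.trace_id = None
        self.uops = []
        self.cache_accesses = 0   # counts trace cache reads actually paid

    def fetch(self, trace_id, trace_cache):
        if trace_id == self.trace_id:
            return self.uops      # repeated trace: reuse, no cache access
        self.cache_accesses += 1  # new trace: pay one trace cache access
        self.trace_id = trace_id
        self.uops = trace_cache[trace_id]
        return self.uops
```

    Fetching the same trace repeatedly, as a tight loop would, costs a single trace cache access no matter how many times the uops are consumed.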

    Method and apparatus for a trace cache trace-end predictor
    9.
    Patent application
    Method and apparatus for a trace cache trace-end predictor (Expired)

    Publication number: US20050044318A1

    Publication date: 2005-02-24

    Application number: US10646033

    Filing date: 2003-08-22

    IPC classification: G06F9/38 G06F12/08

    CPC classification: G06F9/3802 G06F9/3808

    Abstract: A method and apparatus for a trace end predictor for a trace cache is disclosed. In one embodiment, the trace end predictor may have one or more buffers to contain a head address for a subsequent trace. The head address may include the way number and set number of the next head, along with partial stew data to support additional execution predictors. The buffers may also include tag data of the current trace's tail address, and may additionally include control bits for determining whether to replace the buffer's contents with information from another trace's tail. Reading the next head address from the trace end predictor, as opposed to reading it from the trace cache array, may reduce certain execution time delays.

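    A minimal model of the predictor buffer described above: entries keyed by the current trace's tail tag, each holding the (way, set, partial stew) of the next trace's head. Field widths, the replacement control bits, and the dictionary-based storage are simplifications invented for this sketch.

```python
# Illustrative sketch of a trace-end predictor: on a tail-tag hit it supplies
# the next head address (way, set, partial stew) without a trace cache array
# read; on a miss the caller falls back to the array lookup.

class TraceEndPredictor:
    def __init__(self):
        # tail tag -> (next_way, next_set, partial_stew); a real design would
        # use a small fixed number of buffers with replacement control bits.
        self.entries = {}

    def update(self, tail_tag, next_way, next_set, partial_stew):
        """Record the next trace's head address against the current tail."""
        self.entries[tail_tag] = (next_way, next_set, partial_stew)

    def predict_next_head(self, tail_tag):
        """Return the predicted head address, or None to force an array read."""
        return self.entries.get(tail_tag)
```

    The latency benefit claimed in the abstract comes from the hit path: the next head's way and set are available from the small predictor structure before the trace cache array could have produced them.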