A PSEUDO LRU TREE-BASED PRIORITY CACHE
    1.
    Patent Application
    A PSEUDO LRU TREE-BASED PRIORITY CACHE (Expired)

    Publication No.: US20080010415A1

    Publication Date: 2008-01-10

    Application No.: US11428581

    Filing Date: 2006-07-05

    IPC Classification: G06F12/00

    Abstract: Exemplary embodiments include a method for updating a Cache LRU tree, including: receiving a new cache line; traversing the Cache LRU tree, the Cache LRU tree including a plurality of nodes; biasing the selection of the victim line toward those lines with relatively low priorities among the plurality of lines; and replacing a cache line having a relatively low priority with the new cache line.
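    The abstract describes a tree-based pseudo-LRU walk whose victim choice is biased toward low-priority lines. The following is a minimal illustrative sketch of that idea, not the patented design: class and method names, the priority encoding, and the tie-breaking rule are all assumptions. Internal tree nodes hold one direction bit each; victim selection follows the subtree holding the cheaper (lower-priority) line when priorities differ, and falls back to the plain PLRU bits otherwise.

```python
# Hypothetical sketch of priority-biased tree-PLRU victim selection.
# Names, priority encoding, and tie-breaking are illustrative assumptions.

class PLRUSet:
    def __init__(self, ways=4):
        self.ways = ways
        self.bits = [0] * (ways - 1)   # one direction bit per internal node
        self.priority = [0] * ways     # per-line priority (higher = keep longer)

    def _touch(self, way):
        """On a hit or fill, point every bit on the path away from `way`."""
        node, lo, hi = 0, 0, self.ways
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if way < mid:              # accessed the left subtree -> bit points right
                self.bits[node] = 1
                node, hi = 2 * node + 1, mid
            else:
                self.bits[node] = 0
                node, lo = 2 * node + 2, mid

    def victim(self):
        """Walk the tree; at each node, bias toward the subtree whose
        cheapest line has lower priority, else follow the PLRU bit."""
        node, lo, hi = 0, 0, self.ways
        while hi - lo > 1:
            mid = (lo + hi) // 2
            left_min = min(self.priority[lo:mid])
            right_min = min(self.priority[mid:hi])
            if left_min != right_min:
                go_right = right_min < left_min
            else:
                go_right = self.bits[node] == 1
            if go_right:
                node, lo = 2 * node + 2, mid
            else:
                node, hi = 2 * node + 1, mid
        return lo

    def fill(self, priority):
        """Replace the biased PLRU victim with a new line."""
        way = self.victim()
        self.priority[way] = priority
        self._touch(way)
        return way
```

    With equal priorities this degenerates to ordinary tree-PLRU; a single low-priority line anywhere in the set draws the walk toward it regardless of the recency bits.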


    Updating a node-based cache LRU tree
    2.
    Granted Patent
    Updating a node-based cache LRU tree (Expired)

    Publication No.: US07512739B2

    Publication Date: 2009-03-31

    Application No.: US11428581

    Filing Date: 2006-07-05

    IPC Classification: G06F12/00

    Abstract: Exemplary embodiments include a method for updating a Cache LRU tree, including: receiving a new cache line; traversing the Cache LRU tree, the Cache LRU tree including a plurality of nodes; biasing the selection of the victim line toward those lines with relatively low priorities among the plurality of lines; and replacing a cache line having a relatively low priority with the new cache line.


    Weighted-region cycle accounting for multi-threaded processor cores
    3.
    Granted Patent
    Weighted-region cycle accounting for multi-threaded processor cores (Expired)

    Publication No.: US08161493B2

    Publication Date: 2012-04-17

    Application No.: US12173771

    Filing Date: 2008-07-15

    IPC Classification: G06F9/45 G06F9/46

    Abstract: An aspect of the present invention improves the accuracy of measuring processor utilization of multi-threaded cores by providing a calibration facility that derives utilization in the context of the overall dynamic operating state of the core, assigning weights to idle threads and to running threads depending on the status of the core. Previous chip designs have established that, in a Simultaneous Multi-Thread (SMT) core, not all idle cycles in a hardware thread can be equally converted into useful work: competition for core resources reduces the conversion efficiency of one thread's idle cycles whenever another thread is running on the same core.
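    The key idea above is that an idle cycle counts for less spare capacity when sibling threads are busy. A minimal sketch of such weighted accounting follows; the weight table and function names are illustrative assumptions, not calibrated values from the patent.

```python
# Hypothetical sketch of weighted-region cycle accounting. The idle-cycle
# weight shrinks as more sibling threads are busy on the same core,
# reflecting reduced convertibility of idle cycles into useful work.
IDLE_WEIGHT = {0: 1.0, 1: 0.6, 2: 0.4, 3: 0.3}  # keyed by # of busy siblings

def core_utilization(samples):
    """samples: list of per-cycle tuples, one state ('run' or 'idle')
    per hardware thread. Returns used / weighted-capacity."""
    used = 0.0
    capacity = 0.0
    for states in samples:
        busy = sum(1 for s in states if s == "run")
        for s in states:
            if s == "run":
                used += 1.0
                capacity += 1.0
            else:
                # an idle cycle is only partially reclaimable when
                # siblings are running and competing for core resources
                capacity += IDLE_WEIGHT[min(busy, 3)]
    return used / capacity if capacity else 0.0
```

    Under this sketch, a two-thread core with one thread running reports higher utilization than a naive 50%, because the idle thread's cycles are discounted.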


    DEVICE FOR AND METHOD OF WEIGHTED-REGION CYCLE ACCOUNTING FOR MULTI-THREADED PROCESSOR CORES
    4.
    Patent Application
    DEVICE FOR AND METHOD OF WEIGHTED-REGION CYCLE ACCOUNTING FOR MULTI-THREADED PROCESSOR CORES (Expired)

    Publication No.: US20100287561A1

    Publication Date: 2010-11-11

    Application No.: US12173771

    Filing Date: 2008-07-15

    IPC Classification: G06F9/46

    Abstract: An aspect of the present invention improves the accuracy of measuring processor utilization of multi-threaded cores by providing a calibration facility that derives utilization in the context of the overall dynamic operating state of the core, assigning weights to idle threads and to running threads depending on the status of the core. Previous chip designs have established that, in a Simultaneous Multi-Thread (SMT) core, not all idle cycles in a hardware thread can be equally converted into useful work: competition for core resources reduces the conversion efficiency of one thread's idle cycles whenever another thread is running on the same core.


    Cache memory, processing unit, data processing system and method for assuming a selected invalid coherency state based upon a request source
    5.
    Granted Patent
    Cache memory, processing unit, data processing system and method for assuming a selected invalid coherency state based upon a request source (In Force)

    Publication No.: US07237070B2

    Publication Date: 2007-06-26

    Application No.: US11109085

    Filing Date: 2005-04-19

    IPC Classification: G06F13/00

    Abstract: At a first cache memory affiliated with a first processor core, an exclusive memory access operation is received via an interconnect fabric coupling the first cache memory to second and third cache memories respectively affiliated with second and third processor cores. The exclusive memory access operation specifies a target address. In response to receipt of the exclusive memory access operation, the first cache memory detects presence or absence of a source indication indicating that the exclusive memory access operation originated from the second cache memory, to which the first cache memory is coupled by a private communication network to which the third cache memory is not coupled. In response to detecting presence of the source indication, a coherency state field of the first cache memory that is associated with the target address is updated to a first data-invalid state. In response to detecting absence of the source indication, the coherency state field of the first cache memory is updated to a different second data-invalid state.
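    The state transition described above reduces to a two-way choice keyed on the source indication. A minimal sketch, assuming a dict-based cache model and illustrative state names ("Ig"/"In" are placeholders here, not necessarily the patent's states):

```python
# Hypothetical sketch: pick between two data-invalid coherency states
# based on whether the exclusive access carried a source indication
# from the private-network peer. State names are illustrative.

def on_exclusive_access(cache, target_addr, source_is_private_peer):
    """cache: dict mapping address -> coherency state string."""
    if target_addr in cache:
        if source_is_private_peer:
            cache[target_addr] = "Ig"   # first data-invalid state
        else:
            cache[target_addr] = "In"   # second data-invalid state
```

    Distinguishing the two invalid states lets later requests treat lines invalidated by the nearby peer differently from lines invalidated over the general fabric.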


    Dynamic inclusive policy in a hybrid cache hierarchy using hit rate
    6.
    Granted Patent
    Dynamic inclusive policy in a hybrid cache hierarchy using hit rate (Expired)

    Publication No.: US08788757B2

    Publication Date: 2014-07-22

    Application No.: US13315381

    Filing Date: 2011-12-09

    IPC Classification: G06F13/28

    Abstract: A mechanism is provided for dynamic cache allocation using a cache hit rate. A first cache hit rate is monitored in a first subset utilizing a first allocation policy of N sets of a lower level cache. A second cache hit rate is also monitored in a second subset utilizing a second allocation policy different from the first allocation policy of the N sets of the lower level cache. A periodic comparison of the first cache hit rate to the second cache hit rate is made to identify a third allocation policy for a third subset of the N sets of the lower level cache. The third allocation policy for the third subset is then periodically adjusted to at least one of the first allocation policy or the second allocation policy based on the comparison of the first cache hit rate to the second cache hit rate.
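    The periodic comparison step resembles set dueling: two small sample subsets each run a fixed policy, and the remaining ("follower") sets adopt whichever policy is winning. A minimal sketch of that decision, with illustrative names and a simple tie-break as assumptions:

```python
# Hypothetical sketch of the periodic hit-rate comparison that steers
# the follower sets' allocation policy. Names and tie-break are assumptions.

def choose_follower_policy(hits_a, accesses_a, hits_b, accesses_b):
    """Compare hit rates observed in the two dedicated sample subsets
    and return the policy the third (follower) subset should use."""
    rate_a = hits_a / accesses_a if accesses_a else 0.0
    rate_b = hits_b / accesses_b if accesses_b else 0.0
    return "policy_A" if rate_a >= rate_b else "policy_B"
```

    Re-evaluating this each interval (and resetting the counters) lets the follower subset track phase changes in the workload.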


    Dynamic Inclusive Policy in a Hybrid Cache Hierarchy Using Hit Rate
    8.
    Patent Application
    Dynamic Inclusive Policy in a Hybrid Cache Hierarchy Using Hit Rate (Expired)

    Publication No.: US20130151777A1

    Publication Date: 2013-06-13

    Application No.: US13315381

    Filing Date: 2011-12-09

    IPC Classification: G06F12/08

    Abstract: A mechanism is provided for dynamic cache allocation using a cache hit rate. A first cache hit rate is monitored in a first subset utilizing a first allocation policy of N sets of a lower level cache. A second cache hit rate is also monitored in a second subset utilizing a second allocation policy different from the first allocation policy of the N sets of the lower level cache. A periodic comparison of the first cache hit rate to the second cache hit rate is made to identify a third allocation policy for a third subset of the N sets of the lower level cache. The third allocation policy for the third subset is then periodically adjusted to at least one of the first allocation policy or the second allocation policy based on the comparison of the first cache hit rate to the second cache hit rate.


    Weighted history allocation predictor algorithm in a hybrid cache
    9.
    Granted Patent
    Weighted history allocation predictor algorithm in a hybrid cache (In Force)

    Publication No.: US08688915B2

    Publication Date: 2014-04-01

    Application No.: US13315411

    Filing Date: 2011-12-09

    IPC Classification: G06F12/16

    Abstract: A mechanism is provided for weighted history allocation prediction. For each member in a plurality of members in a lower level cache, an associated reference counter is initialized to an initial value based on the operation type that caused data to be allocated to the member's location. For each access to the member in the lower level cache, the associated reference counter is incremented. Responsive to a new allocation of data to the lower level cache, and responsive to that allocation requiring the victimization of another member in the lower level cache, the member of the lower level cache having the lowest reference count value in its associated reference counter is identified. The member with the lowest reference count value in its associated reference counter is then evicted.
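    The mechanism above seeds a per-line counter from the allocating operation type, bumps it on every hit, and evicts the lowest count. A minimal sketch, where the operation types and initial weights are illustrative assumptions:

```python
# Hypothetical sketch of a weighted-history allocation predictor.
# Initial weights per operation type are illustrative, not the patent's.
INITIAL_WEIGHT = {"demand_load": 3, "store": 2, "prefetch": 1}

class HybridCacheSet:
    def __init__(self):
        self.lines = {}   # tag -> reference count

    def allocate(self, tag, op_type, capacity=4):
        """Insert a line; if the set is full, evict the member with the
        lowest reference count before allocating."""
        if len(self.lines) >= capacity:
            victim = min(self.lines, key=self.lines.get)
            del self.lines[victim]
        self.lines[tag] = INITIAL_WEIGHT[op_type]

    def access(self, tag):
        """On a hit, increment the line's reference counter."""
        if tag in self.lines:
            self.lines[tag] += 1
            return True
        return False
```

    Seeding prefetched lines with a low weight makes them the preferred victims unless subsequent hits prove them useful.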


    Victim prefetching in a cache hierarchy
    10.
    Granted Patent
    Victim prefetching in a cache hierarchy (Expired)

    Publication No.: US07716424B2

    Publication Date: 2010-05-11

    Application No.: US10989997

    Filing Date: 2004-11-16

    IPC Classification: G06F12/08

    Abstract: We present a “directory extension” (hereinafter “DX”) to aid in prefetching between proximate levels in a cache hierarchy. The DX may maintain (1) a list of pages containing recently ejected lines from a given level in the cache hierarchy, and (2) for each page in this list, the identity of a set of ejected lines, provided these lines are prefetchable from, for example, the next level of the cache hierarchy. Given a cache fault to a line within a page in this list, other lines from this page may then be prefetched without the substantial directory-lookup overhead that would otherwise be required.
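    The two structures the DX maintains map naturally onto a bounded page table whose entries record prefetchable ejected lines. A minimal sketch, where the class name, FIFO replacement of page entries, and capacity are assumptions for illustration:

```python
# Hypothetical sketch of a directory extension (DX): per page, remember
# which recently ejected lines are prefetchable from the next cache level,
# so a fault to that page can trigger prefetches without full directory
# lookups. Structure and capacity limits are illustrative assumptions.
from collections import OrderedDict

class DirectoryExtension:
    def __init__(self, max_pages=8):
        self.max_pages = max_pages
        self.pages = OrderedDict()   # page -> set of ejected line offsets

    def record_eviction(self, page, line, prefetchable=True):
        """Note an ejected line, but only if it can be prefetched back."""
        if not prefetchable:
            return
        if page not in self.pages and len(self.pages) >= self.max_pages:
            self.pages.popitem(last=False)   # drop the oldest page entry
        self.pages.setdefault(page, set()).add(line)

    def on_miss(self, page, line):
        """On a fault to `line`, return sibling lines of the same page
        as prefetch candidates."""
        ejected = self.pages.get(page, set())
        return sorted(ejected - {line})
```

    Because the DX is consulted only on a fault to a tracked page, it adds candidates on the miss path rather than probing the directory for every ejected line.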
