Weighted history allocation predictor algorithm in a hybrid cache
    1.
    Granted Patent (In Force)

    Publication Number: US08688915B2

    Publication Date: 2014-04-01

    Application Number: US13315411

    Filing Date: 2011-12-09

    IPC Class: G06F12/16

    Abstract: A mechanism is provided for weighted history allocation prediction. For each member in a plurality of members in a lower level cache, an associated reference counter is initialized to an initial value based on an operation type that caused data to be allocated to a member location of the member. For each access to the member in the lower level cache, the associated reference counter is incremented. Responsive to a new allocation of data to the lower level cache and responsive to the new allocation of data requiring the victimization of another member in the lower level cache, a member of the lower level cache is identified that has a lowest reference count value in its associated reference counter. The member with the lowest reference count value in its associated reference counter is then evicted.

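    The victim-selection step that the abstract describes can be illustrated with a short sketch. This is a minimal example under stated assumptions, not the patented implementation; the CacheMember structure and the 8-bit counter width are assumptions made only for illustration.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical model of one member (way) of a lower level cache set.
struct CacheMember {
    bool    valid = false;
    uint8_t refCount = 0;   // the associated reference counter
    // tag, data and coherence state would live here in a real design
};

// Select the victim for a new allocation: the member whose associated
// reference counter holds the lowest value, as the abstract describes.
int selectVictim(const std::vector<CacheMember>& cacheSet) {
    int victim = 0;
    for (int i = 0; i < static_cast<int>(cacheSet.size()); ++i) {
        if (!cacheSet[i].valid)
            return i;                                   // free slot: no eviction needed
        if (cacheSet[i].refCount < cacheSet[victim].refCount)
            victim = i;                                 // lowest count seen so far
    }
    return victim;                                      // member to evict
}
```

    A free (invalid) member is taken first, so an eviction is only forced when the new allocation actually requires the victimization of another member, matching the condition in the abstract.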

Weighted history allocation predictor algorithm in a hybrid cache
    2.
    Granted Patent (In Force)

    Publication Number: US08930625B2

    Publication Date: 2015-01-06

    Application Number: US13611614

    Filing Date: 2012-09-12

    IPC Class: G06F12/08

    Abstract: A mechanism is provided for weighted history allocation prediction. For each member in a plurality of members in a lower level cache, an associated reference counter is initialized to an initial value based on an operation type that caused data to be allocated to a member location of the member. For each access to the member in the lower level cache, the associated reference counter is incremented. Responsive to a new allocation of data to the lower level cache and responsive to the new allocation of data requiring the victimization of another member in the lower level cache, a member of the lower level cache is identified that has a lowest reference count value in its associated reference counter. The member with the lowest reference count value in its associated reference counter is then evicted.

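    This grant shares its abstract with entry 1; as a complementary sketch, the counter lifecycle it describes (initialize on allocation according to the operation type that caused the fill, increment on every access) might look roughly as follows. The operation types, the initial weights, and the saturating increment are illustrative assumptions, not values taken from the patent.

```cpp
#include <cstdint>

// Illustrative operation types that can cause data to be allocated.
enum class OpType { DemandLoad, DemandStore, Prefetch };

// Hypothetical initial weights: demand-driven fills start with a higher
// count than prefetches, so they survive longer under lowest-count eviction.
uint8_t initialWeight(OpType op) {
    switch (op) {
        case OpType::DemandLoad:  return 3;
        case OpType::DemandStore: return 2;
        case OpType::Prefetch:    return 0;
    }
    return 0;
}

struct CacheMember {
    bool    valid = false;
    uint8_t refCount = 0;
};

// On allocation: initialize the counter from the operation type.
void onAllocate(CacheMember& m, OpType op) {
    m.valid = true;
    m.refCount = initialWeight(op);
}

// On each access to the member: increment the counter. The saturation
// guard is an assumption of this sketch, not stated in the abstract.
void onAccess(CacheMember& m) {
    if (m.refCount < UINT8_MAX)
        ++m.refCount;
}
```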

Dynamic inclusive policy in a hybrid cache hierarchy using bandwidth
    3.
    Granted Patent (In Force)

    Publication Number: US08843707B2

    Publication Date: 2014-09-23

    Application Number: US13315395

    Filing Date: 2011-12-09

    IPC Class: G06F13/00

    CPC Class: G06F12/0897 G06F2212/502

    Abstract: A mechanism is provided for dynamic cache allocation using bandwidth. A bandwidth between a higher level cache and a lower level cache is monitored. Responsive to bandwidth usage between the higher level cache and the lower level cache being below a predetermined low bandwidth threshold, the higher level cache and the lower level cache are set to operate in accordance with a first allocation policy. Responsive to bandwidth usage between the higher level cache and the lower level cache being above a predetermined high bandwidth threshold, the higher level cache and the lower level cache are set to operate in accordance with a second allocation policy.

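    The switching rule in the abstract can be sketched as follows. The policy names, the way bandwidth is sampled, and the idea that the policy is left unchanged between the two thresholds are assumptions of this illustration; the abstract only states that usage below the low threshold selects the first policy and usage above the high threshold selects the second.

```cpp
// Two allocation policies the cache hierarchy can operate under; the names
// are placeholders for whatever the first and second policies actually are.
enum class AllocPolicy { First, Second };

struct BandwidthPolicyController {
    double lowThreshold;    // predetermined low bandwidth threshold
    double highThreshold;   // predetermined high bandwidth threshold
    AllocPolicy current = AllocPolicy::First;

    // Called with the monitored bandwidth between the higher level cache
    // and the lower level cache.
    void update(double measuredBandwidth) {
        if (measuredBandwidth < lowThreshold)
            current = AllocPolicy::First;       // operate under the first policy
        else if (measuredBandwidth > highThreshold)
            current = AllocPolicy::Second;      // operate under the second policy
        // Between the thresholds the current policy is kept (an assumption).
    }
};
```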

Dynamic inclusive policy in a hybrid cache hierarchy using hit rate
    4.
    Granted Patent (Lapsed)

    Publication Number: US08788757B2

    Publication Date: 2014-07-22

    Application Number: US13315381

    Filing Date: 2011-12-09

    IPC Class: G06F13/28

    Abstract: A mechanism is provided for dynamic cache allocation using a cache hit rate. A first cache hit rate is monitored in a first subset utilizing a first allocation policy of N sets of a lower level cache. A second cache hit rate is also monitored in a second subset utilizing a second allocation policy different from the first allocation policy of the N sets of the lower level cache. A periodic comparison of the first cache hit rate to the second cache hit rate is made to identify a third allocation policy for a third subset of the N-sets of the lower level cache. The third allocation policy for the third subset is then periodically adjusted to at least one of the first allocation policy or the second allocation policy based on the comparison of the first cache hit rate to the second cache hit rate.

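    The sampling-and-comparison scheme in the abstract can be sketched roughly as follows (it resembles set dueling). The counter widths and the way a hit rate is computed are assumptions made for illustration.

```cpp
#include <cstdint>

// Hit/access counters for one sampled subset of the lower level cache's N sets.
struct SampleCounters {
    uint64_t hits = 0;
    uint64_t accesses = 0;
    void   record(bool hit) { ++accesses; if (hit) ++hits; }
    double hitRate() const  { return accesses ? static_cast<double>(hits) / accesses : 0.0; }
};

enum class AllocPolicy { First, Second };

struct HitRatePredictor {
    SampleCounters firstSubset;    // always runs the first allocation policy
    SampleCounters secondSubset;   // always runs the second allocation policy
    AllocPolicy followerPolicy = AllocPolicy::First;   // policy for the third subset

    // Periodic comparison of the two monitored hit rates; the third subset
    // adopts whichever sampled policy is currently doing better.
    void periodicCompare() {
        followerPolicy = (firstSubset.hitRate() >= secondSubset.hitRate())
                             ? AllocPolicy::First
                             : AllocPolicy::Second;
    }
};
```

    Accesses to the first and second sample subsets feed their respective counters, and periodicCompare() picks the policy that the third (follower) subset uses until the next comparison.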

Dynamic Inclusive Policy in a Hybrid Cache Hierarchy Using Hit Rate
    6.
    Patent Application (Lapsed)

    Publication Number: US20130151777A1

    Publication Date: 2013-06-13

    Application Number: US13315381

    Filing Date: 2011-12-09

    IPC Class: G06F12/08

    Abstract: A mechanism is provided for dynamic cache allocation using a cache hit rate. A first cache hit rate is monitored in a first subset utilizing a first allocation policy of N sets of a lower level cache. A second cache hit rate is also monitored in a second subset utilizing a second allocation policy different from the first allocation policy of the N sets of the lower level cache. A periodic comparison of the first cache hit rate to the second cache hit rate is made to identify a third allocation policy for a third subset of the N-sets of the lower level cache. The third allocation policy for the third subset is then periodically adjusted to at least one of the first allocation policy or the second allocation policy based on the comparison of the first cache hit rate to the second cache hit rate.

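    This application shares its abstract with entry 4 above. As a complementary illustration, one possible way to carve the N sets of the lower level cache into the first sample subset, the second sample subset, and the third (follower) subset is sketched below; the sampling granularity is an assumption of this sketch, not a value from the patent.

```cpp
#include <cstdint>

enum class Subset { FirstSample, SecondSample, Follower };

// One possible partition of the N sets into the three subsets named in the
// abstract: a thin slice of sets samples each policy, the rest follow the
// periodically chosen policy. The granularity of 64 is purely illustrative.
Subset classifySet(uint32_t setIndex) {
    switch (setIndex % 64) {
        case 0:  return Subset::FirstSample;    // fixed to the first policy
        case 1:  return Subset::SecondSample;   // fixed to the second policy
        default: return Subset::Follower;       // uses the adjusted third policy
    }
}
```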

Weighted History Allocation Predictor Algorithm in a Hybrid Cache
    7.
    Patent Application (In Force)

    Publication Number: US20130151779A1

    Publication Date: 2013-06-13

    Application Number: US13315411

    Filing Date: 2011-12-09

    IPC Class: G06F12/08

    Abstract: A mechanism is provided for weighted history allocation prediction. For each member in a plurality of members in a lower level cache, an associated reference counter is initialized to an initial value based on an operation type that caused data to be allocated to a member location of the member. For each access to the member in the lower level cache, the associated reference counter is incremented. Responsive to a new allocation of data to the lower level cache and responsive to the new allocation of data requiring the victimization of another member in the lower level cache, a member of the lower level cache is identified that has a lowest reference count value in its associated reference counter. The member with the lowest reference count value in its associated reference counter is then evicted.


Dynamic Inclusive Policy in a Hybrid Cache Hierarchy Using Bandwidth
    8.
    Patent Application (In Force)

    Publication Number: US20130151778A1

    Publication Date: 2013-06-13

    Application Number: US13315395

    Filing Date: 2011-12-09

    IPC Class: G06F12/08

    CPC Class: G06F12/0897 G06F2212/502

    Abstract: A mechanism is provided for dynamic cache allocation using bandwidth. A bandwidth between a higher level cache and a lower level cache is monitored. Responsive to bandwidth usage between the higher level cache and the lower level cache being below a predetermined low bandwidth threshold, the higher level cache and the lower level cache are set to operate in accordance with a first allocation policy. Responsive to bandwidth usage between the higher level cache and the lower level cache being above a predetermined high bandwidth threshold, the higher level cache and the lower level cache are set to operate in accordance with a second allocation policy.

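    This application shares its abstract with entry 3 above. As a complementary illustration, the monitored bandwidth figure could be derived by counting bytes moved between the higher level and lower level cache over a sampling window; the window mechanism below is an assumption of this sketch, not something the abstract specifies.

```cpp
#include <cstdint>

// Derives the monitored bandwidth by counting bytes transferred between the
// higher level cache and the lower level cache during a sampling window.
struct BandwidthMonitor {
    uint64_t bytesThisWindow = 0;
    double   windowSeconds;

    explicit BandwidthMonitor(double seconds) : windowSeconds(seconds) {}

    // Called for each transfer between the two cache levels.
    void recordTransfer(uint64_t bytes) { bytesThisWindow += bytes; }

    // Called at the end of each window: returns bytes per second and resets
    // the counter for the next window.
    double sampleAndReset() {
        double bandwidth = static_cast<double>(bytesThisWindow) / windowSeconds;
        bytesThisWindow = 0;
        return bandwidth;
    }
};
```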

MEMORY QUEUE HANDLING TECHNIQUES FOR REDUCING IMPACT OF HIGH LATENCY MEMORY OPERATIONS
    9.
    Patent Application (In Force)

    Publication Number: US20130117513A1

    Publication Date: 2013-05-09

    Application Number: US13290702

    Filing Date: 2011-11-07

    IPC Class: G06F12/14

    CPC Class: G06F13/1626 G06F13/16

    Abstract: Techniques for handling queuing of memory accesses prevent passing excessive requests that implicate a region of memory subject to a high latency memory operation, such as a memory refresh operation, memory scrubbing or an internal bus calibration event, to a re-order queue of a memory controller. The memory controller includes a queue for storing pending memory access requests, a re-order queue for receiving the requests, and a control logic implementing a queue controller that determines if there is a collision between a received request and an ongoing high-latency memory operation. If there is a collision, then transfer of the request to the re-order queue may be rejected outright, or a count of existing queued operations that collide with the high latency operation may be used to determine if queuing the new request will exceed a threshold number of such operations.

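    The admission check described in the abstract could look roughly like the following. The abstract allows either rejecting a colliding request outright or limiting the count of queued requests that collide with the ongoing high-latency operation; this sketch shows the threshold variant. The address-range representation, queue type, and threshold value are assumptions for illustration.

```cpp
#include <cstdint>
#include <deque>

// Hypothetical descriptor of an ongoing high-latency operation (refresh,
// scrubbing, internal bus calibration) covering a region of memory.
struct HighLatencyOp {
    bool     active = false;
    uint64_t regionBase = 0;
    uint64_t regionSize = 0;
    bool collides(uint64_t addr) const {
        return active && addr >= regionBase && addr < regionBase + regionSize;
    }
};

struct MemRequest { uint64_t addr; };

struct QueueController {
    HighLatencyOp ongoing;                 // the currently active high-latency operation
    std::deque<MemRequest> reorderQueue;   // the memory controller's re-order queue
    unsigned collidingLimit = 4;           // illustrative threshold

    // Decide whether a pending request may be transferred to the re-order queue.
    bool admit(const MemRequest& req) const {
        if (!ongoing.collides(req.addr))
            return true;                   // no collision with the busy region: admit
        // Count already-queued requests that collide with the same operation and
        // reject the transfer if admitting one more would exceed the threshold.
        unsigned colliding = 0;
        for (const MemRequest& q : reorderQueue)
            if (ongoing.collides(q.addr))
                ++colliding;
        return colliding + 1 <= collidingLimit;
    }
};
```

    A request that does not touch the busy region is admitted immediately; a colliding request is admitted only while the number of already-queued colliding requests stays under the threshold.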