Bypass predictor for an exclusive last-level cache

    Publication No.: US11609858B2

    Publication Date: 2023-03-21

    Application No.: US17402492

    Filing Date: 2021-08-13

    Abstract: A system and a method to allocate data to a first cache increment a first counter if a reuse indicator for the data indicates that the data is likely to be reused, and decrement the counter if the reuse indicator indicates that the data is likely not to be reused. A second counter is incremented upon eviction of the data from a second cache, which is a higher-level cache than the first cache. The data is allocated to the first cache if the value of the first counter is equal to or greater than a first predetermined threshold or the value of the second counter equals zero, and the data bypasses the first cache if the value of the first counter is less than the first predetermined threshold and the value of the second counter is not equal to zero.
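The allocate/bypass rule in the abstract can be sketched as a small predictor. This is an illustrative reading only; the threshold value, method names, and counter encodings are assumptions, not taken from the patent.

```python
# Sketch of the counter-based allocate/bypass decision from the abstract.
# FIRST_THRESHOLD is an assumed value for the "first predetermined threshold".
FIRST_THRESHOLD = 4

class BypassPredictor:
    """Decides whether a line evicted from the second (higher-level) cache
    should be allocated into the first cache or bypass it."""

    def __init__(self):
        self.reuse_counter = 0     # first counter, driven by reuse hints
        self.eviction_counter = 0  # second counter, counts second-cache evictions

    def record_reuse_hint(self, likely_reused):
        """Update the first counter from the line's reuse indicator."""
        if likely_reused:
            self.reuse_counter += 1
        else:
            self.reuse_counter = max(0, self.reuse_counter - 1)

    def record_eviction(self):
        """The second counter is incremented on eviction from the second cache."""
        self.eviction_counter += 1

    def should_allocate(self):
        """Allocate if the first counter meets the threshold or the second
        counter is zero; otherwise bypass the first cache."""
        return (self.reuse_counter >= FIRST_THRESHOLD
                or self.eviction_counter == 0)
```

Before any second-cache eviction is seen, the second counter is zero and the predictor defaults to allocating; once evictions accumulate, allocation depends on the reuse-hint counter reaching the threshold.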

    Bypass predictor for an exclusive last-level cache

    Publication No.: US11113207B2

    Publication Date: 2021-09-07

    Application No.: US16289645

    Filing Date: 2019-02-28

    Abstract: A system and a method to allocate data to a first cache increment a first counter if a reuse indicator for the data indicates that the data is likely to be reused, and decrement the counter if the reuse indicator indicates that the data is likely not to be reused. A second counter is incremented upon eviction of the data from a second cache, which is a higher-level cache than the first cache. The data is allocated to the first cache if the value of the first counter is equal to or greater than a first predetermined threshold or the value of the second counter equals zero, and the data bypasses the first cache if the value of the first counter is less than the first predetermined threshold and the value of the second counter is not equal to zero.

    System and method for adaptive cache replacement with dynamic scaling of leader sets

    Publication No.: US10360160B2

    Publication Date: 2019-07-23

    Application No.: US15331803

    Filing Date: 2016-10-21

    Abstract: According to one general aspect, an apparatus may include a cache and a cache replacement unit. The cache may be arranged in a plurality of cache sets each configured to store data. A number of cache sets are designated as leader cache sets and each leader cache set is associated with a first replacement policy or a second replacement policy. The cache replacement unit may be configured to monitor an effectiveness of the first replacement policy and, at least, the second replacement policy to accurately predict cache line replacement. The cache replacement unit may be configured to select the first replacement policy or the second replacement policy to be a dominant replacement policy. The cache replacement unit may be configured to dynamically scale the number of cache sets that are designated as leader cache sets based at least in part upon the effectiveness of the dominant replacement policy.
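The monitoring-and-scaling loop described above can be sketched in miniature. The rescaling heuristic below (halve the leader-set count on a decisive win, double it on a close contest) is an assumption for illustration; the patent's actual scaling rule is not given in the abstract.

```python
class AdaptiveLeaderSets:
    """Monitors two candidate replacement policies through leader sets and
    dynamically scales how many leader sets are sampled."""

    def __init__(self, num_leader_sets=8):
        self.num_leader_sets = num_leader_sets
        self.misses_a = 0  # misses observed in policy-A leader sets
        self.misses_b = 0  # misses observed in policy-B leader sets

    def record_miss(self, policy):
        if policy == "A":
            self.misses_a += 1
        else:
            self.misses_b += 1

    def dominant_policy(self):
        """The policy with fewer leader-set misses becomes dominant."""
        return "A" if self.misses_a <= self.misses_b else "B"

    def rescale(self):
        # Assumed heuristic: a decisive winner needs fewer leader sets to
        # track; a close contest needs more sets for sampling accuracy.
        gap = abs(self.misses_a - self.misses_b)
        total = self.misses_a + self.misses_b
        if total and gap > total // 2:
            self.num_leader_sets = max(2, self.num_leader_sets // 2)
        elif total:
            self.num_leader_sets = min(64, self.num_leader_sets * 2)
        self.misses_a = self.misses_b = 0
```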

    Prefetching in a lower level exclusive cache hierarchy

    Publication No.: US10963388B2

    Publication Date: 2021-03-30

    Application No.: US16543503

    Filing Date: 2019-08-16

    Abstract: According to one general aspect, an apparatus may include a multi-tiered cache system that includes at least one upper cache tier relatively closer, hierarchically, to a processor and at least one lower cache tier relatively closer, hierarchically, to a system memory. The apparatus may include a memory interconnect circuit hierarchically between the multi-tiered cache system and the system memory. The apparatus may include a prefetcher circuit coupled with a lower cache tier of the multi-tiered cache system, and configured to issue a speculative prefetch request to the memory interconnect circuit for data to be placed into the lower cache tier. The memory interconnect circuit may be configured to cancel the speculative prefetch request if the data exists in an upper cache tier of the multi-tiered cache system.
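The cancellation check at the memory interconnect reduces to a presence test against the upper tier. A minimal sketch, with the tiers modeled as address sets (the function name and set-based model are illustrative, not from the patent):

```python
def handle_speculative_prefetch(addr, upper_tier_lines, lower_tier_lines):
    """Returns True if the speculative prefetch proceeds, False if the
    interconnect cancels it because the line is already in an upper tier."""
    if addr in upper_tier_lines:
        return False  # data already resides closer to the core: cancel
    lower_tier_lines.add(addr)  # otherwise fetch the line into the lower tier
    return True
```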

    Method to avoid cache access conflict between load and fill

    Publication No.: US10649900B2

    Publication Date: 2020-05-12

    Application No.: US15900789

    Filing Date: 2018-02-20

    Abstract: According to one general aspect, an apparatus may include a first cache configured to store data. The apparatus may include a second cache configured to, in response to a fill request, supply the first cache with data, and an incoming fill signal. The apparatus may also include an execution circuit configured to, via a load request, retrieve data from the first cache. The first cache may be configured to: derive, from the incoming fill signal, address and timing information associated with the fill request, and based, at least partially, upon the address and timing information, schedule the load request to attempt to avoid a load-fill conflict.
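The scheduling decision described above can be reduced to a collision test on the address and timing information derived from the fill signal. This sketch assumes a banked cache where a conflict means same bank, same cycle; the bank/cycle model and one-cycle delay are illustrative assumptions only.

```python
def schedule_load(load_cycle, load_bank, fill_cycle, fill_bank):
    """Schedule a load around an incoming fill: delay the load by one cycle
    only when it would access the same bank in the same cycle as the fill
    (a load-fill conflict); otherwise issue it as planned."""
    if load_cycle == fill_cycle and load_bank == fill_bank:
        return load_cycle + 1
    return load_cycle
```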

    Coordinated cache management policy for an exclusive cache hierarchy

    Publication No.: US10606752B2

    Publication Date: 2020-03-31

    Application No.: US15890240

    Filing Date: 2018-02-06

    Abstract: Embodiments include a method and system for coordinating cache management for an exclusive cache hierarchy. The method and system may include managing, by a coordinated cache logic section, a level three (L3) cache, a level two (L2) cache, and/or a level one (L1) cache. Managing the L3 cache and the L2 cache may include coordinating a cache block replacement policy among the L3 cache and the L2 cache by filtering data with lower reuse probability from data with higher reuse probability. The method and system may include tracking reuse patterns of demand requests separately from reuse patterns of prefetch requests. Accordingly, a coordinated cache management policy may be built across multiple levels of a cache hierarchy, rather than a cache replacement policy within one cache level. Higher-level cache behavior may be used to guide lower-level cache allocation, bringing greater visibility of cache behavior to exclusive last level caches (LLCs).
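Tracking demand and prefetch reuse separately, and using that history to filter low-reuse data out of the lower-level cache, might look like the following. The reuse-ratio threshold and class layout are assumptions for illustration, not the patent's mechanism.

```python
class ReuseTracker:
    """Tracks reuse of demand-fetched and prefetched lines separately, so a
    lower-level cache can filter data with low reuse probability."""

    def __init__(self):
        # per request type: [times reused before L2 eviction, total evictions]
        self.stats = {"demand": [0, 0], "prefetch": [0, 0]}

    def on_l2_eviction(self, kind, was_reused):
        reused, evicted = self.stats[kind]
        self.stats[kind] = [reused + int(was_reused), evicted + 1]

    def allocate_to_l3(self, kind, min_reuse=0.25):
        """Allocate into L3 only if this request type's observed reuse ratio
        meets an assumed threshold; allocate by default with no history."""
        reused, evicted = self.stats[kind]
        if evicted == 0:
            return True
        return reused / evicted >= min_reuse
```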

    Cache replacement policy methods and systems
    Invention grant (in force)

    Publication No.: US09418019B2

    Publication Date: 2016-08-16

    Application No.: US14269032

    Filing Date: 2014-05-02

    CPC classification number: G06F12/121

    Abstract: An embodiment includes a system, comprising: a cache configured to store a plurality of cache lines, each cache line associated with a priority state from among N priority states; and a controller coupled to the cache and configured to: search the cache lines for a cache line with a lowest priority state of the priority states to use as a victim cache line; if the cache line with the lowest priority state is not found, reduce the priority state of at least one of the cache lines; and select a random cache line of the cache lines as the victim cache line if, after performing each of the searching of the cache lines and the reducing of the priority state of at least one cache line K times, the cache line with the lowest priority state is not found. N is an integer greater than or equal to 3; and K is an integer greater than or equal to 1 and less than or equal to N−2.
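The victim-selection loop in the abstract (search for the lowest priority state, age on failure, fall back to a random victim after K passes) resembles RRIP-style replacement and can be sketched as follows. The parameter values and the choice of 0 as the lowest state are assumptions; the age-everything-by-one step is one reading of "reduce the priority state of at least one of the cache lines".

```python
import random

N = 4       # number of priority states (the abstract requires N >= 3)
K = 2       # search-and-age passes (the abstract requires 1 <= K <= N - 2)
LOWEST = 0  # assumed encoding: 0 is the lowest priority state

def select_victim(priorities):
    """Pick a victim index from one cache set. `priorities` holds each
    line's priority state and is aged in place."""
    for _ in range(K):
        for i, p in enumerate(priorities):
            if p == LOWEST:
                return i
        # No line at the lowest state: reduce every line's priority by one.
        for i in range(len(priorities)):
            priorities[i] = max(LOWEST, priorities[i] - 1)
    # Still not found after K passes: fall back to a random victim.
    return random.randrange(len(priorities))
```

With K bounded by N − 2, lines starting at the highest states may never reach the lowest state within the search passes, which is exactly when the random fallback fires.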

