DYNAMIC INCLUSIVE LAST LEVEL CACHE

    Publication Number: US20220197797A1

    Publication Date: 2022-06-23

    Application Number: US17130676

    Application Date: 2020-12-22

    Abstract: An embodiment of an integrated circuit may comprise a core, and a cache controller coupled to the core, the cache controller including circuitry to identify data from a working set for dynamic inclusion in a next level cache based on an amount of re-use of the next level cache, send a shared copy of the identified data to a requesting core of one or more processor cores, and maintain a copy of the identified data in the next level cache. Other embodiments are disclosed and claimed.
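
    As a rough illustration of the mechanism this abstract describes, the sketch below shows one way a cache controller might make a per-line inclusion decision from an observed re-use count. It is a minimal sketch, not the patented implementation: the structure fields, the REUSE_THRESHOLD value, and the send_shared_copy/send_exclusive_copy helpers are all assumed names.

/* Hedged sketch (not from the patent): one way a cache controller could
 * decide, per line, whether to keep an inclusive copy in the next level
 * cache based on observed re-use. All names and the threshold are
 * illustrative assumptions. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define REUSE_THRESHOLD 2          /* assumed tuning knob */

struct llc_line {
    uint64_t tag;
    uint32_t reuse_count;          /* hits observed since fill */
    bool     valid;
};

/* Stubs standing in for the on-chip interconnect messages. */
static void send_shared_copy(uint64_t tag, int core)
{
    printf("shared copy of %#llx -> core %d\n", (unsigned long long)tag, core);
}

static void send_exclusive_copy(uint64_t tag, int core)
{
    printf("exclusive copy of %#llx -> core %d\n", (unsigned long long)tag, core);
}

/* Serve a core's read hit in the next level cache and decide whether the
 * line stays resident there (dynamic inclusion). */
static void serve_read_hit(struct llc_line *line, int requesting_core)
{
    line->reuse_count++;

    if (line->reuse_count >= REUSE_THRESHOLD) {
        /* Enough re-use: keep the LLC copy and hand out a shared copy. */
        send_shared_copy(line->tag, requesting_core);
    } else {
        /* Little re-use: hand the line over and free the LLC slot,
         * behaving non-inclusively for this working set. */
        send_exclusive_copy(line->tag, requesting_core);
        line->valid = false;
    }
}

int main(void)
{
    struct llc_line line = { .tag = 0x1000, .reuse_count = 0, .valid = true };
    serve_read_hit(&line, 0);      /* first hit: exclusive hand-off */
    line.valid = true;             /* refill for illustration */
    serve_read_hit(&line, 1);      /* second hit: shared copy, line retained */
    return 0;
}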

    MEMORY-EFFICIENT LAST LEVEL CACHE ARCHITECTURE

    Publication Number: US20180203799A1

    Publication Date: 2018-07-19

    Application Number: US15408731

    Application Date: 2017-01-18

    Abstract: A memory-efficient last level cache (LLC) architecture is described. A processor implementing an LLC architecture may include a processor core, a last level cache (LLC) operatively coupled to the processor core, and a cache controller operatively coupled to the LLC. The cache controller is to monitor a bandwidth demand of a channel between the processor core and a dynamic random-access memory (DRAM) device associated with the LLC. The cache controller is further to perform a first defined number of consecutive reads from the DRAM device when the bandwidth demand exceeds a first threshold value and perform a first defined number of consecutive writes of modified lines from the LLC to the DRAM device when the bandwidth demand exceeds the first threshold value.
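
    The batching behavior described above can be pictured with a short sketch: when the measured bandwidth demand crosses a threshold, the controller issues a fixed-size burst of consecutive DRAM reads followed by a fixed-size burst of consecutive writebacks of modified LLC lines, rather than interleaving them. This is a hedged sketch under assumed names; BW_THRESHOLD_GBPS, BATCH_SIZE, and the two stubbed DRAM operations are not taken from the patent.

/* Hedged sketch (not from the patent): batching DRAM reads and dirty-line
 * writebacks when measured channel bandwidth demand crosses a threshold,
 * so read and write bursts are not interleaved on the channel. The queue
 * stubs, the threshold, and the batch size are illustrative assumptions. */
#include <stdbool.h>
#include <stdio.h>

#define BW_THRESHOLD_GBPS 20.0     /* assumed "first threshold value" */
#define BATCH_SIZE        8        /* assumed "defined number" of requests */

/* Stubs for the two DRAM operations named in the abstract. */
static bool issue_pending_read(void)      { puts("DRAM read");      return true; }
static bool writeback_modified_line(void) { puts("DRAM writeback"); return true; }

/* Called periodically by the cache controller with the measured demand. */
static void schedule_dram_traffic(double bandwidth_demand_gbps)
{
    if (bandwidth_demand_gbps <= BW_THRESHOLD_GBPS) {
        /* Low demand: ordinary interleaved servicing (not shown). */
        return;
    }

    /* High demand: drain a burst of consecutive reads, then a burst of
     * consecutive writebacks of modified LLC lines. */
    for (int i = 0; i < BATCH_SIZE; i++)
        if (!issue_pending_read())
            break;
    for (int i = 0; i < BATCH_SIZE; i++)
        if (!writeback_modified_line())
            break;
}

int main(void)
{
    schedule_dram_traffic(25.0);   /* above threshold: batched bursts */
    return 0;
}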

    DEVICE, SYSTEM AND METHOD FOR PROVIDING A HIGH AFFINITY SNOOP FILTER

    Publication Number: US20230305960A1

    Publication Date: 2023-09-28

    Application Number: US17705015

    Application Date: 2022-03-25

    CPC classification number: G06F12/0815

    Abstract: Techniques and mechanisms for efficiently providing access to cached data. In an embodiment, a cache coherency engine comprises circuitry to provide a snoop filter which stores entries each corresponding to a respective line of one or more caches. The one or more caches comprise a first cache which includes a first set, and the snoop filter includes a first plurality of sets which are each configured to be available to represent a line of the first set. In another embodiment, the one or more caches comprise multiple caches which each comprise a respective first set, wherein, for each set of the first plurality of sets, any line in the multiple caches which is to be represented by that set is to be a line in the respective first sets of the multiple caches.
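
    One way to picture the set arrangement described above is a snoop filter in which a small group of snoop-filter sets is reserved for each cache set, so that every member of the group may only track lines belonging to that cache set. The sketch below assumes illustrative sizes, index and hash functions, and an sf_entry layout; none of these come from the filing.

/* Hedged sketch (not from the patent): a snoop filter organization in which
 * a group of snoop-filter sets is reserved per cache set, so any member of
 * the group tracks only lines of that cache set. Sizes, the indexing, and
 * the entry layout are illustrative assumptions. */
#include <stdbool.h>
#include <stdint.h>

#define CACHE_SETS        64
#define SF_SETS_PER_GROUP 4        /* "first plurality of sets" per cache set */
#define SF_WAYS           8

struct sf_entry {
    uint64_t tag;
    uint32_t sharer_mask;          /* which cores hold the line */
    bool     valid;
};

/* snoop_filter[cache_set][group_member][way] */
static struct sf_entry snoop_filter[CACHE_SETS][SF_SETS_PER_GROUP][SF_WAYS];

/* Pick one member of the group for this address; any member of the group
 * is, by construction, allowed to represent a line of this cache set. */
static unsigned group_member(uint64_t addr)
{
    return (unsigned)((addr >> 12) % SF_SETS_PER_GROUP);
}

/* Look up a line: only the group belonging to its cache set is searched. */
static struct sf_entry *sf_lookup(uint64_t addr)
{
    unsigned set    = (unsigned)((addr >> 6) % CACHE_SETS);
    unsigned member = group_member(addr);

    for (unsigned way = 0; way < SF_WAYS; way++) {
        struct sf_entry *e = &snoop_filter[set][member][way];
        if (e->valid && e->tag == addr >> 6)
            return e;              /* coherence state for this line */
    }
    return 0;                      /* not tracked: no snoop needed */
}

int main(void)
{
    return sf_lookup(0x4000) ? 0 : 1;   /* empty filter: line not tracked */
}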

    APPARATUSES, METHODS, AND SYSTEMS FOR DYNAMIC BYPASSING OF LAST LEVEL CACHE

    Publication Number: US20210303467A1

    Publication Date: 2021-09-30

    Application Number: US16833304

    Application Date: 2020-03-27

    Abstract: Systems, methods, and apparatuses relating to circuitry to implement dynamic bypassing of last level cache are described. In one embodiment, a hardware processor includes a cache to store a plurality of cache lines of data, a processing element to generate a memory request and mark the memory request with a reuse hint value, and a cache controller circuit to mark a corresponding cache line in the cache as more recently used when the memory request is a read request that is a hit in the cache and the reuse hint value is a first value, and mark the corresponding cache line in the cache as less recently used when the memory request is the read request that is the hit in the cache and the reuse hint value is a second, different value.
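
    A compact way to see the recency update described above: on a read hit, the controller promotes the line when the request's reuse hint carries the first value and demotes it when the hint carries the second value, so low-reuse traffic ages out of the LLC quickly. The hint encoding, the age field, and MAX_AGE in the sketch below are assumptions, not the patent's definitions.

/* Hedged sketch (not from the patent): updating a cache line's recency on a
 * read hit according to a reuse hint carried by the request, so low-reuse
 * streams effectively bypass the last level cache by aging out quickly.
 * The hint encoding and age field are illustrative assumptions. */
#include <stdint.h>

enum reuse_hint { HINT_REUSE = 0, HINT_NO_REUSE = 1 };   /* assumed encoding */

#define MAX_AGE 7       /* assumed replacement-age range */

struct cache_line {
    uint64_t tag;
    uint8_t  age;       /* 0 = most recently used, higher = older */
};

/* Called by the cache controller on a read request that hits in the cache. */
static void update_recency_on_read_hit(struct cache_line *line,
                                       enum reuse_hint hint)
{
    if (hint == HINT_REUSE) {
        /* First hint value: promote toward most-recently-used, keeping
         * the line resident. */
        line->age = 0;
    } else {
        /* Second hint value: demote toward least-recently-used, so the
         * line is the next eviction candidate (dynamic LLC bypass). */
        line->age = MAX_AGE;
    }
}

int main(void)
{
    struct cache_line line = { .tag = 0x2000, .age = 3 };
    update_recency_on_read_hit(&line, HINT_NO_REUSE);   /* line is now oldest */
    return (int)line.age;
}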

    MEMORY-EFFICIENT LAST LEVEL CACHE ARCHITECTURE

    Publication Number: US20190243760A1

    Publication Date: 2019-08-08

    Application Number: US16222788

    Application Date: 2018-12-17

    Abstract: A memory-efficient last level cache (LLC) architecture is described. A processor implementing an LLC architecture may include a processor core, a last level cache (LLC) operatively coupled to the processor core, and a cache controller operatively coupled to the LLC. The cache controller is to monitor a bandwidth demand of a channel between the processor core and a dynamic random-access memory (DRAM) device associated with the LLC. The cache controller is further to perform a first defined number of consecutive reads from the DRAM device when the bandwidth demand exceeds a first threshold value and perform a first defined number of consecutive writes of modified lines from the LLC to the DRAM device when the bandwidth demand exceeds the first threshold value.
