CACHE TO CACHE DATA TRANSFER ACCELERATION TECHNIQUES

    Publication Number: US20190179758A1

    Publication Date: 2019-06-13

    Application Number: US15839662

    Application Date: 2017-12-12

    Abstract: Systems, apparatuses, and methods for accelerating cache to cache data transfers are disclosed. A system includes at least a plurality of processing nodes and prediction units, an interconnect fabric, and a memory. A first prediction unit is configured to receive memory requests generated by a first processing node as the requests traverse the interconnect fabric on the path to memory. When the first prediction unit receives a memory request, the first prediction unit generates a prediction of whether data targeted by the request is cached by another processing node. The first prediction unit is configured to cause a speculative probe to be sent to a second processing node responsive to predicting that the data targeted by the memory request is cached by the second processing node. The speculative probe accelerates the retrieval of the data from the second processing node if the prediction is correct.
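
    A minimal C++ sketch of the idea in the abstract above: a prediction unit that remembers which node last supplied data for an address region and, on a later request, nominates that node for a speculative probe. The class name, the 4 KiB region granularity, and the learning rule are illustrative assumptions, not details from the patent.

```cpp
// Speculative-probe prediction sketch (illustrative, not the patented design).
#include <cstdint>
#include <iostream>
#include <optional>
#include <unordered_map>

class PredictionUnit {
public:
    // Learn from observed cache-to-cache transfers which node held a region.
    void observe_transfer(uint64_t addr, int owner_node) {
        last_owner_[region_of(addr)] = owner_node;
    }

    // For a memory request traversing the fabric, predict a caching node.
    std::optional<int> predict_owner(uint64_t addr) const {
        auto it = last_owner_.find(region_of(addr));
        if (it == last_owner_.end()) return std::nullopt;
        return it->second;
    }

private:
    static uint64_t region_of(uint64_t addr) { return addr >> 12; }  // 4 KiB regions (assumed)
    std::unordered_map<uint64_t, int> last_owner_;
};

int main() {
    PredictionUnit pu;
    pu.observe_transfer(0x1000, /*owner_node=*/2);  // node 2 supplied this region before

    // A later request to the same region: if a caching node is predicted, a
    // speculative probe is sent to it in parallel with the normal path to
    // memory, hiding latency when the prediction is correct.
    if (auto node = pu.predict_owner(0x1040)) {
        std::cout << "speculative probe -> node " << *node << '\n';
    } else {
        std::cout << "no prediction; proceed to memory only\n";
    }
}
```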

    TIERED MEMORY CACHING
    Invention Publication

    Publication Number: US20240220415A1

    Publication Date: 2024-07-04

    Application Number: US18091140

    Application Date: 2022-12-29

    CPC classification number: G06F12/0897 G06F2212/1016

    Abstract: The disclosed computer-implemented method includes locating, from a processor storage, a partial tag corresponding to a memory request for a line stored in a memory having a tiered memory cache, and, in response to a partial tag hit for the memory request, locating, from a partition of the tiered memory cache indicated by the partial tag, a full tag for the line. The method also includes fetching, in response to a full tag hit, the requested line from the partition of the tiered memory cache. Various other methods, systems, and computer-readable media are also disclosed.
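
    A minimal C++ sketch of the two-step lookup described above: a processor-side partial-tag store names a partition of the tiered memory cache, and only that partition is searched for the full tag. The 8-bit partial tag, the flat partition layout, and all identifiers are illustrative assumptions, not details from the patent.

```cpp
// Partial-tag / full-tag lookup sketch (illustrative, not the patented design).
#include <cstdint>
#include <iostream>
#include <optional>
#include <vector>

struct Partition {
    std::vector<uint64_t> full_tags;  // full tags of lines held in this partition
    bool has_line(uint64_t full_tag) const {
        for (uint64_t t : full_tags)
            if (t == full_tag) return true;
        return false;
    }
};

struct PartialTagEntry {
    uint8_t partial_tag;  // truncated tag kept in processor storage (assumed 8 bits)
    int partition;        // tiered-cache partition the partial tag points to
};

// Step 1: search the processor-side store for a partial tag match.
std::optional<int> partial_lookup(const std::vector<PartialTagEntry>& store,
                                  uint64_t addr) {
    uint8_t partial = static_cast<uint8_t>(addr >> 6);  // low bits of the line tag (assumed)
    for (const auto& e : store)
        if (e.partial_tag == partial) return e.partition;  // partial tag hit
    return std::nullopt;  // partial tag miss
}

int main() {
    std::vector<PartialTagEntry> store = {{0x40, 1}};
    std::vector<Partition> partitions(2);
    partitions[1].full_tags.push_back(0x12340ULL);

    uint64_t addr = 0x12340ULL << 6;  // address of a line whose full tag is 0x12340
    if (auto part = partial_lookup(store, addr)) {
        // Step 2: only the partition named by the partial tag is probed for the full tag.
        bool hit = partitions[*part].has_line(addr >> 6);
        std::cout << "partition " << *part
                  << (hit ? ": full tag hit, fetch line\n" : ": full tag miss\n");
    } else {
        std::cout << "partial tag miss\n";
    }
}
```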

    REGION BASED SPLIT-DIRECTORY SCHEME TO ADAPT TO LARGE CACHE SIZES

    Publication Number: US20220237117A1

    Publication Date: 2022-07-28

    Application Number: US17721809

    Application Date: 2022-04-15

    Abstract: Systems, apparatuses, and methods for maintaining region-based cache directories split between node and memory are disclosed. The system with multiple processing nodes includes cache directories split between the nodes and memory to help manage cache coherency among the nodes' cache subsystems. In order to reduce the number of entries in the cache directories, the cache directories track coherency on a region basis rather than on a cache line basis, wherein a region includes multiple cache lines. Each processing node includes a node-based cache directory to track regions which have at least one cache line cached in any cache subsystem in the node. The node-based cache directory includes a reference count field in each entry to track the aggregate number of cache lines that are cached per region. The memory-based cache directory includes entries for regions which have an entry stored in any node-based cache directory of the system.
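
    A minimal C++ sketch of the node-side bookkeeping described above: each node-based directory entry carries a reference count of cache lines cached from its region, and the memory-based directory keeps an entry only while some node directory tracks the region. The 2 KiB region size and all identifiers are illustrative assumptions, not details from the patent.

```cpp
// Split region-directory sketch (illustrative, not the patented design).
#include <cstdint>
#include <iostream>
#include <set>
#include <unordered_map>

constexpr unsigned kRegionShift = 11;  // 2 KiB regions = 32 x 64 B lines (assumed)
uint64_t region_of(uint64_t addr) { return addr >> kRegionShift; }

struct NodeDirectory {
    // region -> number of cache lines from that region cached anywhere in this node
    std::unordered_map<uint64_t, unsigned> ref_count;

    void line_cached(uint64_t addr) { ++ref_count[region_of(addr)]; }

    void line_evicted(uint64_t addr) {
        auto it = ref_count.find(region_of(addr));
        if (it != ref_count.end() && --it->second == 0)
            ref_count.erase(it);  // last cached line of the region is gone
    }
};

int main() {
    NodeDirectory node0;
    std::set<uint64_t> memory_directory;  // regions with an entry in any node directory

    node0.line_cached(0x1000);
    node0.line_cached(0x1040);  // second line in the same 2 KiB region
    memory_directory.insert(region_of(0x1000));

    std::cout << "region ref count: " << node0.ref_count[region_of(0x1000)] << '\n';  // 2

    node0.line_evicted(0x1000);
    node0.line_evicted(0x1040);
    if (node0.ref_count.count(region_of(0x1000)) == 0)
        memory_directory.erase(region_of(0x1000));  // no node caches the region any longer
    std::cout << "memory directory entries: " << memory_directory.size() << '\n';  // 0
}
```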

    SCALABLE REGION-BASED DIRECTORY
    Invention Application

    Publication Number: US20220100672A1

    Publication Date: 2022-03-31

    Application Number: US17033212

    Application Date: 2020-09-25

    Abstract: Disclosed are one or more cache directories configurable to interchange, within each cache directory entry, at least one bit between a first field and a second field to change the size of the region of memory represented and the number of cache lines tracked in the cache subsystem.
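
    A minimal C++ sketch of the bit-interchange idea described above: a directory entry with a fixed bit budget split between a region tag field and a line-presence field, where reassigning bits from one field to the other rescales the region represented and the number of lines tracked. The 48-bit budget, 64 B lines, and all identifiers are illustrative assumptions, not details from the patent.

```cpp
// Configurable region-entry sketch (illustrative, not the patented encoding).
#include <iostream>

struct RegionEntryConfig {
    unsigned total_bits = 48;     // fixed storage budget per directory entry (assumed)
    unsigned presence_bits = 16;  // one presence bit per tracked cache line

    unsigned tag_bits() const { return total_bits - presence_bits; }
    unsigned lines_tracked() const { return presence_bits; }
    unsigned region_bytes() const { return presence_bits * 64; }  // 64 B lines (assumed)

    // Interchange bits between the two fields: a larger region needs fewer tag
    // bits, and the freed bits track more lines (and vice versa).
    void move_tag_bits_to_presence(unsigned bits) { presence_bits += bits; }
    void move_presence_bits_to_tag(unsigned bits) { presence_bits -= bits; }
};

int main() {
    RegionEntryConfig cfg;
    std::cout << "region " << cfg.region_bytes() << " B, " << cfg.lines_tracked()
              << " lines tracked, tag " << cfg.tag_bits() << " bits\n";

    cfg.move_tag_bits_to_presence(16);  // reassign 16 bits from the tag field
    std::cout << "region " << cfg.region_bytes() << " B, " << cfg.lines_tracked()
              << " lines tracked, tag " << cfg.tag_bits() << " bits\n";
}
```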

    Accelerating accesses to private regions in a region-based cache directory scheme

    Publication Number: US10922237B2

    Publication Date: 2021-02-16

    Application Number: US16129022

    Application Date: 2018-09-12

    Abstract: Systems, apparatuses, and methods for accelerating accesses to private regions in a region-based cache directory scheme are disclosed. A system includes multiple processing nodes, one or more memory devices, and one or more region-based cache directories to manage cache coherence among the nodes' cache subsystems. Region-based cache directories track coherence on a region basis rather than on a cache line basis, wherein a region includes multiple cache lines. The cache directory entries for regions that are only accessed by a single node are cached locally at the node. Updates to the reference count for these entries are made locally rather than sending updates to the cache directory. When a second node accesses a first node's private region, the region is now considered shared, and the entry for this region is transferred from the first node back to the cache directory.
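
    A minimal C++ sketch of the private-region flow described above: while a region is touched by a single node, its directory entry is held at that node and reference-count updates stay local; the first access by a second node marks the region shared and returns the entry to the central directory. All identifiers and the simplified promotion rule are illustrative assumptions, not details from the patent.

```cpp
// Private-region directory sketch (illustrative, not the patented design).
#include <cstdint>
#include <iostream>
#include <unordered_map>

struct RegionEntry {
    int owner = -1;          // node holding the entry while the region is private
    unsigned ref_count = 0;  // cache lines cached from this region
    bool shared = false;
};

class RegionDirectory {
public:
    // Called when a node caches a line; returns true if the reference-count
    // update stayed local to the owning node (no directory traffic needed).
    bool access(uint64_t region, int node) {
        RegionEntry& e = entries_[region];
        if (e.owner == -1) e.owner = node;  // first toucher: region starts private
        if (!e.shared && e.owner == node) {
            ++e.ref_count;                  // private region: update locally
            return true;
        }
        e.shared = true;                    // a second node touched the region:
        ++e.ref_count;                      // entry now lives at the central directory
        return false;
    }

private:
    std::unordered_map<uint64_t, RegionEntry> entries_;
};

int main() {
    RegionDirectory dir;
    std::cout << dir.access(/*region=*/5, /*node=*/0) << '\n';  // 1: private, local update
    std::cout << dir.access(5, 0) << '\n';                      // 1: still private
    std::cout << dir.access(5, 1) << '\n';                      // 0: promoted to shared
}
```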

    ACCELERATING ACCESSES TO PRIVATE REGIONS IN A REGION-BASED CACHE DIRECTORY SCHEME

    Publication Number: US20200081844A1

    Publication Date: 2020-03-12

    Application Number: US16129022

    Application Date: 2018-09-12

    Abstract: Systems, apparatuses, and methods for accelerating accesses to private regions in a region-based cache directory scheme are disclosed. A system includes multiple processing nodes, one or more memory devices, and one or more region-based cache directories to manage cache coherence among the nodes' cache subsystems. Region-based cache directories track coherence on a region basis rather than on a cache line basis, wherein a region includes multiple cache lines. The cache directory entries for regions that are only accessed by a single node are cached locally at the node. Updates to the reference count for these entries are made locally rather than sending updates to the cache directory. When a second node accesses a first node's private region, the region is now considered shared, and the entry for this region is transferred from the first node back to the cache directory.
