DYNAMIC CACHE COHERENCE PROTOCOL BASED ON RUNTIME INTERCONNECT UTILIZATION

    Publication No.: US20240303195A1

    Publication Date: 2024-09-12

    Application No.: US18562743

    Application Date: 2021-12-15

    CPC classification number: G06F12/0835 G06F12/084 G06F12/0891

    Abstract: In one embodiment, a processor includes interconnect circuitry, processing circuitry, a first cache, and cache controller circuitry. The interconnect circuitry communicates over a processor interconnect with a second processor that includes a second cache. The processing circuitry generates a memory read request for a corresponding memory address of a memory. Based on the memory read request, the cache controller circuitry detects a cache miss in the first cache, which indicates that the first cache does not contain a valid copy of data for the corresponding memory address. Based on the cache miss, the cache controller circuitry requests the data from the second cache or the memory based on a current bandwidth utilization of the processor interconnect.
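The miss-handling policy described above can be sketched as follows. This is a minimal illustration of the idea, not the patented implementation; the function names, the caches-as-dicts model, and the utilization threshold are all assumptions.

```python
# Assumed cutoff above which the processor interconnect is "too busy"
# to service a cache-to-cache transfer (hypothetical value).
MAX_UTILIZATION = 0.75

def resolve_cache_miss(address, remote_cache, memory, interconnect_utilization):
    """On a local-cache miss, fetch from the peer processor's cache only
    while the processor interconnect has headroom; otherwise fall back
    to memory even if the peer holds a valid copy of the line."""
    if interconnect_utilization < MAX_UTILIZATION and address in remote_cache:
        return ("remote_cache", remote_cache[address])
    return ("memory", memory[address])
```

The key design point is that the source of the fill (peer cache vs. memory) is chosen dynamically per miss, based on the current interconnect load, rather than being fixed by the coherence protocol.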

    SYSTEM CACHE ARCHITECTURE FOR SUPPORTING MULTIPROCESSOR ARCHITECTURE, AND CHIP

    Publication No.: US20240289277A1

    Publication Date: 2024-08-29

    Application No.: US18385812

    Application Date: 2023-10-31

    Inventor: Sheau Jiung Lee

    CPC classification number: G06F12/0835 G06F12/0842 G06F12/0848

    Abstract: A system cache architecture supporting a multiprocessor architecture includes a snooping pipeline switch, at least two cache segments, a memory request arbiter, and a coherent interconnect snooping requester. The snooping pipeline switch is connected to a last-level memory bus of at least two processors of the multiprocessor architecture, and either forwards a memory read or write request from any processor to a memory system by means of the memory request arbiter, or sends the request to any one of the at least two cache segments. The coherent interconnect snooping requester sends a snooping read or write request from a DMA master to any one of the at least two cache segments. The at least two cache segments are configured to, in response to concurrent read or write requests from the snooping pipeline switch or from the coherent interconnect snooping requester, feed back or update the stored cached data.
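A toy model of the routing described above might look like the following. The segment-selection rule (address hashing), the dict-based caches, and all method names are illustrative assumptions, not details from the patent.

```python
class SystemCache:
    """Toy model: processor and snoop requests are routed to exactly one
    of N cache segments per address, so requests for different addresses
    can be serviced concurrently by different segments."""

    def __init__(self, num_segments, memory):
        self.segments = [dict() for _ in range(num_segments)]
        self.memory = memory  # stands in for the memory system behind the arbiter

    def _segment_for(self, address):
        # The switch deterministically picks one segment per address.
        return self.segments[address % len(self.segments)]

    def processor_read(self, address):
        seg = self._segment_for(address)
        if address not in seg:
            # Miss: the request is forwarded to the memory system
            # (via the memory request arbiter in the real design).
            seg[address] = self.memory[address]
        return seg[address]

    def snoop_write(self, address, value):
        # Coherent-interconnect snooping requester path (DMA master):
        # update the cached copy so later processor reads observe new data.
        self._segment_for(address)[address] = value
        self.memory[address] = value
```

Routing every address to a single fixed segment is what lets the segments respond to concurrent requests without coordinating with each other.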

    CACHING TECHNIQUES USING A TWO-LEVEL READ CACHE

    Publication No.: US20240176741A1

    Publication Date: 2024-05-30

    Application No.: US18071798

    Application Date: 2022-11-30

    CPC classification number: G06F12/0835 G06F12/0882 G06F12/0891

    Abstract: Techniques for processing a read I/O operation that reads first content stored at a target logical address can include: determining, using the target logical address as a first key to index into a first cache, whether the first cache includes a first cache entry caching first metadata used to access a first physical storage location including the first content stored at the target logical address; responsive to determining the first cache includes the first cache entry, determining, using the first metadata as a second key to index into a second cache, whether the second cache includes a second cache entry caching the first content stored at the target logical address; and responsive to determining the second cache includes the second entry, returning the first content from the second entry of the second cache in response to the read I/O operation.
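The two-level lookup above can be condensed into a short sketch. The dict-based caches, the key shapes, and the miss-handling are assumptions for illustration only; the patent does not specify this code.

```python
def two_level_read(logical_addr, metadata_cache, content_cache, storage):
    """First level: logical address -> metadata locating the physical block.
    Second level: that metadata, used as the key, -> the cached content.
    Returns None on a first-level miss (caller takes the slow path)."""
    metadata = metadata_cache.get(logical_addr)
    if metadata is None:
        return None  # first-level miss: metadata must be resolved elsewhere
    content = content_cache.get(metadata)
    if content is None:
        # Second-level miss: read the physical storage location the
        # metadata points to, then populate the content cache.
        content = storage[metadata]
        content_cache[metadata] = content
    return content
```

Note that the second cache is keyed by the metadata rather than by the logical address, so multiple logical addresses that resolve to the same physical location share one content entry.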

    Coherency Domain Cacheline State Tracking

    Publication No.: US20240037038A1

    Publication Date: 2024-02-01

    Application No.: US18478621

    Application Date: 2023-09-29

    CPC classification number: G06F12/0835 G06F12/0817 G06F13/4234

    Abstract: Circuitry, systems, and methods are provided for an integrated circuit including an acceleration function unit to provide hardware acceleration for a host device. The integrated circuit may also include interface circuitry including a cache coherency bridge/agent with a device cache to resolve coherency with a host cache of the host device. The interface circuitry may also include cacheline state tracker circuitry to track the states of cachelines of the device cache and the host cache. The cacheline state tracker circuitry provides insights into expected state changes based on the states of the cachelines of the device cache and the host cache, and on the type of operation performed.
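A state tracker of this kind can be sketched with a small transition table. The MESI-like states and the particular transitions below are assumptions chosen for illustration; the patent does not disclose its table.

```python
# (current device-cacheline state, observed operation) -> expected next state.
# States follow an assumed MESI-like convention: M/S/I.
EXPECTED_NEXT = {
    ("I", "device_read"):  "S",  # device fills the line, shared with host
    ("S", "device_write"): "M",  # device takes exclusive ownership
    ("M", "host_read"):    "S",  # host snoop downgrades the modified line
    ("S", "host_write"):   "I",  # host write invalidates the device copy
}

class CachelineTracker:
    """Tracks per-cacheline state and reports the expected state change
    for each operation, mirroring the tracker's role in the abstract."""

    def __init__(self):
        self.state = {}  # cacheline address -> state; default is Invalid

    def observe(self, line, op):
        cur = self.state.get(line, "I")
        nxt = EXPECTED_NEXT.get((cur, op), cur)  # unknown pairs: no change
        self.state[line] = nxt
        return nxt
```

Because the tracker knows both the current state and the operation type, it can predict the next state before the coherence traffic completes, which is the "insight" the abstract refers to.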

    INTERLEAVED CACHE PREFETCHING

    Publication No.: US20230205701A1

    Publication Date: 2023-06-29

    Application No.: US18117820

    Application Date: 2023-03-06

    Abstract: A method includes receiving, at a direct memory access (DMA) controller of a memory device, a first command from a first cache controller coupled to the memory device to prefetch first data from the memory device, and sending the prefetched first data, in response to receiving the first command, to a second cache controller coupled to the memory device. The method can further include receiving a second command from the second cache controller to prefetch second data from the memory device, and sending the prefetched second data, in response to receiving the second command, to a third cache controller coupled to the memory device.
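The interleaved flow above — one controller issues the prefetch command, a different controller receives the data — can be sketched as follows. The class and method names and the dict-based interfaces are assumptions for illustration.

```python
class DMAPrefetcher:
    """Toy model of the memory device's DMA controller: it accepts a
    prefetch command from one cache controller but delivers the fetched
    lines to a *different* (downstream) cache controller."""

    def __init__(self, memory):
        self.memory = memory  # backing store of the memory device

    def handle_command(self, addresses, destination_cache):
        # Prefetch the requested lines and push them to the destination
        # controller's cache rather than back to the requester.
        for addr in addresses:
            destination_cache[addr] = self.memory[addr]
```

Chaining calls (controller 1's command fills controller 2's cache, controller 2's command fills controller 3's) reproduces the interleaving the abstract describes.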
