Preemptive Flushing of Processing-in-Memory Data Structures

    Publication Number: US20250110887A1

    Publication Date: 2025-04-03

    Application Number: US18375018

    Application Date: 2023-09-29

    Abstract: Preemptive flushing of data involved in executing a processing-in-memory command, from a cache system to main memory that is accessible by a processing-in-memory component, is described. In one example, a system includes an asynchronous flush controller that receives an indication of a subsequent processing-in-memory command to be executed as part of performing a computational task. While earlier commands of the computational task are executed, the asynchronous flush controller evicts or invalidates data elements involved in executing the subsequent processing-in-memory command from the cache system, such that the processing-in-memory command can proceed without stalling.
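
    The following is a minimal C sketch of the idea described in the abstract, assuming a simple direct-mapped cache model; all identifiers (preflush_for_pim, write_back, the cache geometry) are hypothetical illustrations rather than the patented hardware design.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_LINES  64
    #define LINE_BYTES 64

    typedef struct {
        bool     valid;
        bool     dirty;
        uint64_t tag;
    } cache_line_t;

    static cache_line_t cache[NUM_LINES];

    /* Write a dirty line back to main memory (stubbed as a print). */
    static void write_back(uint64_t line_addr)
    {
        printf("write back 0x%llx to main memory\n",
               (unsigned long long)line_addr);
    }

    /*
     * Called with the operand addresses of a *subsequent* processing-in-memory
     * command. While earlier commands of the task are still executing, dirty
     * copies are written back and the lines are invalidated, so the later PIM
     * command finds current data in main memory and does not stall.
     */
    void preflush_for_pim(const uint64_t *addrs, size_t count)
    {
        for (size_t i = 0; i < count; i++) {
            uint64_t line  = addrs[i] / LINE_BYTES;
            size_t   index = (size_t)(line % NUM_LINES);
            uint64_t tag   = line / NUM_LINES;

            cache_line_t *cl = &cache[index];
            if (cl->valid && cl->tag == tag) {
                if (cl->dirty)
                    write_back(line * LINE_BYTES);
                cl->valid = false;  /* the PIM component now owns the data */
                cl->dirty = false;
            }
        }
    }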

    Bypassing Cache Directory Lookups for Processing-in-Memory Instructions

    Publication Number: US20250110878A1

    Publication Date: 2025-04-03

    Application Number: US18374969

    Application Date: 2023-09-29

    Abstract: Selectively bypassing cache directory lookups for processing-in-memory instructions is described. In one example, a system maintains information describing the status, clean or dirty, of a memory address, where a dirty status indicates that the data at the memory address has been modified in a cache and thus differs from its representation in system memory. A processing-in-memory request involving the memory address is assigned a cache directory bypass bit based on the status of the memory address. The cache directory bypass bit for a processing-in-memory request controls whether a cache directory lookup is performed after the processing-in-memory request is issued by a processor core and before the processing-in-memory request is executed by a processing-in-memory component.
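
    As a hedged illustration, the C sketch below shows one way the clean/dirty tracking and the bypass-bit assignment could look; the table layout and the names (line_is_dirty, make_pim_request, note_store) are assumptions made for the example, not details taken from the patent.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define TRACK_ENTRIES 1024
    #define LINE_BYTES    64

    /* true = some cache holds a modified (dirty) copy of this line */
    static bool line_is_dirty[TRACK_ENTRIES];

    typedef struct {
        uint64_t addr;             /* memory address the PIM op targets    */
        uint8_t  opcode;           /* PIM operation to perform near memory */
        bool     bypass_directory; /* skip the cache directory lookup?     */
    } pim_request_t;

    static size_t track_index(uint64_t addr)
    {
        return (size_t)((addr / LINE_BYTES) % TRACK_ENTRIES);
    }

    /* A store to addr marks the line dirty; a write-back makes it clean again. */
    void note_store(uint64_t addr)     { line_is_dirty[track_index(addr)] = true;  }
    void note_writeback(uint64_t addr) { line_is_dirty[track_index(addr)] = false; }

    /*
     * When the core issues a PIM request, the bypass bit is derived from the
     * tracked status: a clean address needs no coherence action, so the
     * directory lookup can be skipped.
     */
    pim_request_t make_pim_request(uint64_t addr, uint8_t opcode)
    {
        pim_request_t req = {
            .addr             = addr,
            .opcode           = opcode,
            .bypass_directory = !line_is_dirty[track_index(addr)],
        };
        return req;
    }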

    Condensed Coherence Directory Entries for Processing-in-Memory

    Publication Number: US20240211402A1

    Publication Date: 2024-06-27

    Application Number: US18146904

    Application Date: 2022-12-27

    CPC classification number: G06F12/0817 G06F2212/1016

    Abstract: In accordance with the described techniques for condensed coherence directory entries for processing in memory, a computing device includes a core that includes a cache, a memory that includes multiple banks, a coherence directory that includes a condensed entry indicating that data associated with a memory address across the multiple banks is not stored in the cache, and a cache coherence controller. The cache coherence controller receives a processing-in-memory command directed to the memory address and performs a single lookup in the coherence directory for the processing-in-memory command based on inclusion of the condensed entry in the coherence directory.
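
    To make the single-lookup behavior concrete, here is a small C sketch under the assumption that one condensed entry records an in-bank offset as uncached for every bank; the structure and names (condensed_entry_t, pim_lookup) are illustrative guesses, not the patented implementation.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_BANKS   16
    #define DIR_ENTRIES 256

    typedef struct {
        bool     valid;
        uint64_t offset;             /* in-bank offset the entry describes      */
        bool     uncached_all_banks; /* condensed: no bank's copy is in a cache */
    } condensed_entry_t;

    static condensed_entry_t directory[DIR_ENTRIES];

    /*
     * Coherence path for a PIM command that fans out to `offset` in every bank.
     * If a condensed entry covers the offset, a single directory lookup resolves
     * coherence for all NUM_BANKS per-bank addresses at once.
     */
    bool pim_lookup(uint64_t offset)
    {
        const condensed_entry_t *e = &directory[offset % DIR_ENTRIES];
        if (e->valid && e->offset == offset && e->uncached_all_banks) {
            printf("single lookup: offset 0x%llx is uncached in all %d banks\n",
                   (unsigned long long)offset, NUM_BANKS);
            return true;   /* the PIM command may proceed immediately */
        }
        /* Otherwise fall back to per-bank coherence checks (not shown). */
        return false;
    }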

    Speculative Cache Invalidation for Processing-in-Memory Instructions

    Publication Number: US20250110886A1

    Publication Date: 2025-04-03

    Application Number: US18374951

    Application Date: 2023-09-29

    Abstract: Speculative cache invalidation techniques for processing-in-memory instructions are described. In one example, a system includes a cache system including a plurality of cache levels and a cache coherence controller. The cache coherence controller is configured to perform a cache directory lookup using a cache directory. The cache directory lookup is configured to indicate whether data associated with a memory address specified by a processing-in-memory request is valid in memory. The system employs speculative evaluation logic to identify whether the data associated with the processing-in-memory request is stored in the cache system before the processing-in-memory request is transmitted to the cache coherence controller. If the data is stored in the cache system, the cache system locally invalidates or flushes the data to avoid stalling the processing-in-memory request during a cache directory lookup.
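
    The C sketch below illustrates the speculative step under the assumption of a few direct-mapped cache levels probed before the request is sent onward; the function names (speculative_invalidate, issue_pim_request) and the cache model are hypothetical.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_LEVELS      3
    #define LINES_PER_LEVEL 64
    #define LINE_BYTES      64

    typedef struct {
        bool     valid;
        bool     dirty;
        uint64_t tag;
    } line_t;

    static line_t caches[NUM_LEVELS][LINES_PER_LEVEL];

    /* Stub for writing a dirty line back to main memory. */
    static void write_back(uint64_t line_addr) { (void)line_addr; }

    /* Probe every cache level for addr; flush dirty hits and invalidate all hits. */
    static void speculative_invalidate(uint64_t addr)
    {
        uint64_t line = addr / LINE_BYTES;

        for (int lvl = 0; lvl < NUM_LEVELS; lvl++) {
            line_t *l = &caches[lvl][line % LINES_PER_LEVEL];
            if (l->valid && l->tag == line / LINES_PER_LEVEL) {
                if (l->dirty)
                    write_back(line * LINE_BYTES);
                l->valid = false;
                l->dirty = false;
            }
        }
    }

    /* Issue path: run the speculative step locally, then hand the request on. */
    void issue_pim_request(uint64_t addr)
    {
        speculative_invalidate(addr);
        /* send_to_coherence_controller(addr): the later directory lookup now
           finds the data valid in memory and does not stall the request. */
    }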

    Bypassing cache directory lookups for processing-in-memory instructions

    Publication Number: US12265470B1

    Publication Date: 2025-04-01

    Application Number: US18374969

    Application Date: 2023-09-29

    Abstract: Selectively bypassing cache directory lookups for processing-in-memory instructions is described. In one example, a system maintains information describing the status, clean or dirty, of a memory address, where a dirty status indicates that the data at the memory address has been modified in a cache and thus differs from its representation in system memory. A processing-in-memory request involving the memory address is assigned a cache directory bypass bit based on the status of the memory address. The cache directory bypass bit for a processing-in-memory request controls whether a cache directory lookup is performed after the processing-in-memory request is issued by a processor core and before the processing-in-memory request is executed by a processing-in-memory component.
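
    As a companion to the bypass-bit sketch given earlier, the following C fragment shows the consuming side: the coherence path checks the bit and skips the directory lookup for requests flagged as clean. The stubs (directory_lookup_and_invalidate, dispatch_to_pim) are illustrative only.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t addr;
        uint8_t  opcode;
        bool     bypass_directory;
    } pim_request_t;

    /* Stub for the (potentially slow) cache directory lookup. */
    static void directory_lookup_and_invalidate(uint64_t addr)
    {
        printf("directory lookup for 0x%llx\n", (unsigned long long)addr);
    }

    /* Stub that hands the request to the processing-in-memory component. */
    static void dispatch_to_pim(const pim_request_t *req)
    {
        printf("PIM op %u at 0x%llx\n",
               (unsigned)req->opcode, (unsigned long long)req->addr);
    }

    /*
     * Coherence controller entry point: the lookup runs only when the request
     * was not marked clean at issue time; clean requests go straight to memory.
     */
    void handle_pim_request(const pim_request_t *req)
    {
        if (!req->bypass_directory)
            directory_lookup_and_invalidate(req->addr);
        dispatch_to_pim(req);
    }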

    Cache Directory Lookup Address Augmentation

    Publication Number: US20240330186A1

    Publication Date: 2024-10-03

    Application Number: US18192925

    Application Date: 2023-03-30

    CPC classification number: G06F12/0817

    Abstract: Cache directory lookup address augmentation techniques are described. In one example, a system includes a cache system including a plurality of cache levels and a cache coherence controller. The cache coherence controller is configured to perform a cache directory lookup using a cache directory. The cache directory lookup is configured to indicate whether data associated with a memory address specified by a memory request is valid in memory. The cache directory lookup is augmented to include an additional memory address based on the memory address.
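
    A brief C sketch of the augmentation follows, assuming for illustration that the additional address is the next cache line after the requested one; the abstract does not fix the derivation rule, so that choice and the names (augmented_lookup, directory_lookup) are assumptions.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define LINE_BYTES 64

    /* Stub: reports whether memory holds the valid copy for this address. */
    static bool directory_lookup(uint64_t addr)
    {
        printf("lookup 0x%llx\n", (unsigned long long)addr);
        return true;
    }

    /*
     * One memory request triggers lookups for both the specified address and a
     * derived companion address, so a second, related line is checked for
     * validity in the same pass instead of in a later, separate lookup.
     */
    bool augmented_lookup(uint64_t addr)
    {
        uint64_t derived = addr + LINE_BYTES; /* hypothetical derivation rule */
        bool primary_ok   = directory_lookup(addr);
        bool companion_ok = directory_lookup(derived);
        return primary_ok && companion_ok;
    }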
