Combining write transactions of a large write

    Publication Number: US12210767B2

    Publication Date: 2025-01-28

    Application Number: US17032217

    Filing Date: 2020-09-25

    Abstract: A system for combining write transactions of a large write includes a processor having at least a first die and a second die, and a link coupling the first die and the second die. When a link interface on one die transmits packets to the other die over the link, the link interface identifies, from a queue containing a plurality of write transactions, two or more write transactions in the queue that are candidates for combination based on one or more attributes of each write transaction. The link interface determines whether two or more candidate write transactions are combinable based on a set of conditions. When two or more candidate write transactions are combinable, the link interface combines the candidate write transactions into a single combined write transaction and transmits the combined write transaction. A link interface on the receiving die decodes the combined write transaction and iteratively regenerates the individual write transactions using control information in the combined write transaction.
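
    Below is a minimal Python sketch of the combining flow described in the abstract. The address-contiguity and same-destination-die combinability conditions, and the use of a per-write length list as the control information, are illustrative assumptions rather than details from the patent; the names (Write, combine, regenerate) are hypothetical.

```python
# Behavioral sketch of write-transaction combining, assuming (hypothetically)
# that address contiguity and a shared destination die are the combinability
# conditions and that per-write lengths serve as the control information.
from dataclasses import dataclass

@dataclass
class Write:
    dest_die: int
    addr: int
    data: bytes

def combine(queue):
    """Greedily merge contiguous same-destination writes into combined writes."""
    combined = []
    for w in sorted(queue, key=lambda w: (w.dest_die, w.addr)):
        if combined:
            dest, base, payload, lengths = combined[-1]
            if dest == w.dest_die and base + len(payload) == w.addr:
                combined[-1] = (dest, base, payload + w.data, lengths + [len(w.data)])
                continue
        combined.append((w.dest_die, w.addr, w.data, [len(w.data)]))
    return combined

def regenerate(combined_write):
    """Receiver side: iteratively rebuild individual writes from control info."""
    dest, addr, payload, lengths = combined_write
    writes, offset = [], 0
    for length in lengths:          # control information drives the split
        writes.append(Write(dest, addr + offset, payload[offset:offset + length]))
        offset += length
    return writes

if __name__ == "__main__":
    q = [Write(1, 0x1000, b"\xaa" * 64), Write(1, 0x1040, b"\xbb" * 64),
         Write(0, 0x2000, b"\xcc" * 64)]
    for cw in combine(q):
        print([(w.dest_die, hex(w.addr), len(w.data)) for w in regenerate(cw)])
```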

    Region based split-directory scheme to adapt to large cache sizes

    Publication Number: US12158845B2

    Publication Date: 2024-12-03

    Application Number: US17721809

    Filing Date: 2022-04-15

    Abstract: Systems, apparatuses, and methods for maintaining region-based cache directories split between node and memory are disclosed. The system with multiple processing nodes includes cache directories split between the nodes and memory to help manage cache coherency among the nodes' cache subsystems. In order to reduce the number of entries in the cache directories, the cache directories track coherency on a region basis rather than on a cache line basis, wherein a region includes multiple cache lines. Each processing node includes a node-based cache directory to track regions which have at least one cache line cached in any cache subsystem in the node. The node-based cache directory includes a reference count field in each entry to track the aggregate number of cache lines that are cached per region. The memory-based cache directory includes entries for regions which have an entry stored in any node-based cache directory of the system.
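
    A minimal sketch of the reference-count bookkeeping described above, assuming (hypothetically) 2 KiB regions of 64-byte lines and simple dictionary-backed directories; the class and method names are illustrative, not from the patent.

```python
# Behavioral sketch of a region-based split directory. The region size
# (2 KiB = 32 lines of 64 B) and the dict-based bookkeeping are assumptions.
REGION_BITS = 11                         # 2 KiB regions of 64-byte lines

def region_of(addr):
    return addr >> REGION_BITS

class MemoryDirectory:
    """Holds an entry only for regions present in some node-based directory."""
    def __init__(self):
        self.sharers = {}                # region -> set of node ids

    def add_sharer(self, region, node_id):
        self.sharers.setdefault(region, set()).add(node_id)

    def remove_sharer(self, region, node_id):
        self.sharers[region].discard(node_id)
        if not self.sharers[region]:
            del self.sharers[region]

class NodeDirectory:
    """Tracks regions with at least one line cached in this node; the per-region
    reference count is the number of cached lines in that region."""
    def __init__(self, memory_dir, node_id):
        self.refcount = {}
        self.memory_dir, self.node_id = memory_dir, node_id

    def line_cached(self, addr):
        r = region_of(addr)
        if r not in self.refcount:                         # first line in region
            self.memory_dir.add_sharer(r, self.node_id)
        self.refcount[r] = self.refcount.get(r, 0) + 1

    def line_evicted(self, addr):
        r = region_of(addr)
        self.refcount[r] -= 1
        if self.refcount[r] == 0:                          # last line left region
            del self.refcount[r]
            self.memory_dir.remove_sharer(r, self.node_id)

if __name__ == "__main__":
    mem = MemoryDirectory()
    n0 = NodeDirectory(mem, 0)
    n0.line_cached(0x1000); n0.line_cached(0x1040)
    print(mem.sharers)                   # region 2 tracked by node 0
    n0.line_evicted(0x1000); n0.line_evicted(0x1040)
    print(mem.sharers)                   # empty again
```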

    CROSS-CHIPLET PERFORMANCE DATA STREAMING
    Invention Publication

    Publication Number: US20230315657A1

    Publication Date: 2023-10-05

    Application Number: US17710413

    Filing Date: 2022-03-31

    CPC classification number: G06F13/20 G06F2213/40

    Abstract: Methods and systems are disclosed for cross-chiplet performance data streaming. Techniques disclosed include accumulating, by a subservient chiplet, event data associated with an event indicative of a performance aspect of the subservient chiplet; sending, by the subservient chiplet, the event data over a chiplet bus to a master chiplet; and adding, by the master chiplet, the received event data to an event record, where the event record contains event data associated with the event that was previously received from the subservient chiplet over the chiplet bus.
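
    A toy Python model of the accumulate/send/add flow; the in-process queue standing in for the chiplet bus, the event names, and the explicit flush step are assumptions made for illustration.

```python
# Toy model of cross-chiplet performance-event streaming. The Queue stands in
# for the chiplet bus; event names and flush policy are illustrative only.
from collections import defaultdict
from queue import Queue

class SubservientChiplet:
    def __init__(self, bus: Queue):
        self.bus = bus
        self.accum = defaultdict(int)        # locally accumulated event counts

    def count_event(self, event, n=1):
        self.accum[event] += n

    def flush(self):
        """Send accumulated deltas over the bus, then reset local counters."""
        for event, delta in self.accum.items():
            self.bus.put((event, delta))
        self.accum.clear()

class MasterChiplet:
    def __init__(self, bus: Queue):
        self.bus = bus
        self.event_record = defaultdict(int)  # running totals per event

    def drain(self):
        while not self.bus.empty():
            event, delta = self.bus.get()
            self.event_record[event] += delta  # add to previously received data

if __name__ == "__main__":
    bus = Queue()
    sub, master = SubservientChiplet(bus), MasterChiplet(bus)
    sub.count_event("l2_miss", 17); sub.flush(); master.drain()
    sub.count_event("l2_miss", 5);  sub.flush(); master.drain()
    print(dict(master.event_record))          # {'l2_miss': 22}
```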

    SPECULATIVE HINT-TRIGGERED ACTIVATION OF PAGES IN MEMORY

    Publication Number: US20220404978A1

    Publication Date: 2022-12-22

    Application Number: US17895357

    Filing Date: 2022-08-25

    Abstract: Systems, apparatuses, and methods for performing efficient memory accesses for a computing system are disclosed. In various embodiments, a computing system includes a computing resource and a memory controller coupled to a memory device. The computing resource selectively generates a hint that includes a target address of a memory request generated by the processor. The hint is sent outside the primary communication fabric to the memory controller. The hint conditionally triggers a data access in the memory device. When no page in a bank targeted by the hint is open, the memory controller processes the hint by opening a target page of the hint without retrieving data. The memory controller drops the hint if there are other pending requests that target the same page or the target page is already open.
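
    The hint-handling decision can be sketched roughly as follows, assuming a simplified bank/row address decode and an explicit pending-request list; none of these specifics come from the abstract.

```python
# Sketch of hint handling in a memory controller. The address decode (bank/row
# bits) and the pending-request model are simplified assumptions.
BANK_BITS, ROW_SHIFT = 2, 14             # 4 banks, hypothetical row granularity

def decode(addr):
    return (addr >> 12) & ((1 << BANK_BITS) - 1), addr >> ROW_SHIFT  # (bank, row)

class MemoryController:
    def __init__(self):
        self.open_row = {}               # bank -> currently open row (page)
        self.pending = []                # addresses of queued demand requests

    def process_hint(self, hint_addr):
        bank, row = decode(hint_addr)
        same_page_pending = any(decode(a) == (bank, row) for a in self.pending)
        if bank in self.open_row or same_page_pending:
            # A page is already open in the bank, or a pending demand request
            # targets the same page: drop the hint.
            return "hint dropped"
        self.open_row[bank] = row        # activate the page, no data transfer
        return f"activated row {row} in bank {bank}"

if __name__ == "__main__":
    mc = MemoryController()
    print(mc.process_hint(0x40_0000))    # no open page in the bank: activate
    print(mc.process_hint(0x40_0040))    # same page already open: drop
```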

    DIRECT MAPPING MODE FOR ASSOCIATIVE CACHE

    Publication Number: US20210406177A1

    Publication Date: 2021-12-30

    Application Number: US17033287

    Filing Date: 2020-09-25

    Abstract: A method of controlling a cache is disclosed. The method comprises receiving a request to allocate a portion of memory to store data. The method also comprises directly mapping the portion of memory to an assigned contiguous portion of the cache memory when the allocation request includes a cache residency request indicating that the data is to reside continuously in the cache memory. The method further comprises mapping the portion of memory to the cache memory using associative mapping when the allocation request does not include such a cache residency request.
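
    A rough sketch of the two mapping modes, assuming a hypothetical cache geometry and a contiguous set-reservation scheme for residency-requested allocations; only set indexing is modeled, not individual ways.

```python
# Illustrative sketch of a cache supporting associative indexing plus a
# direct-mapped residency mode. Geometry and reservation scheme are assumptions.
LINE = 64
NUM_SETS = 1024                          # set-index granularity only

class Cache:
    def __init__(self):
        self.next_reserved_set = 0       # grows as residency requests arrive
        self.regions = []                # (base_addr, size, first_set) per resident buffer

    def allocate(self, base_addr, size, cache_resident=False):
        if not cache_resident:
            return                       # associative mapping, nothing to reserve
        lines = (size + LINE - 1) // LINE
        self.regions.append((base_addr, size, self.next_reserved_set))
        self.next_reserved_set += lines  # assign a contiguous slice of the cache

    def set_index(self, addr):
        for base, size, first_set in self.regions:
            if base <= addr < base + size:               # direct-mapped region
                return (first_set + (addr - base) // LINE) % NUM_SETS
        return (addr // LINE) % NUM_SETS                 # normal associative index

if __name__ == "__main__":
    c = Cache()
    c.allocate(0x10_0000, 4096, cache_resident=True)        # pinned buffer
    print(c.set_index(0x10_0000), c.set_index(0x10_0040))   # contiguous reserved sets
    print(c.set_index(0x55_5540))                           # associatively indexed
```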
