Scalable cache coherency protocol
    Granted Patent

    Publication Number: US11868258B2

    Publication Date: 2024-01-09

    Application Number: US18160575

    Filing Date: 2023-01-27

    Applicant: Apple Inc.

    CPC classification number: G06F12/0815 G06F12/0831 G06F2212/1032

    Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
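
    The expected-state mechanism can be illustrated with a small sketch. The following minimal C++ example (not from the patent; the names CacheState, Snoop, and snoop_matches_expectation are invented for illustration) shows how a snoop carrying the directory's expected cache state lets the receiving agent notice that its own state has diverged, i.e. that a race with another in-flight message has occurred.

        #include <iostream>

        // Illustrative cache states for this sketch only.
        enum class CacheState { Invalid, Shared, Exclusive, Modified };

        struct Snoop {
            unsigned long address;
            CacheState expected_state;  // directory's view when the request was processed
        };

        // True when the local state matches what the directory expected, meaning no
        // conflicting message for this block is still in flight.
        bool snoop_matches_expectation(const Snoop& s, CacheState local_state) {
            return local_state == s.expected_state;
        }

        int main() {
            Snoop s{0x1000, CacheState::Exclusive};
            CacheState local = CacheState::Invalid;  // block was already evicted or downgraded
            if (!snoop_matches_expectation(s, local)) {
                std::cout << "race detected: resolve before absorbing the snoop\n";
            }
            return 0;
        }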

    Scalable Cache Coherency Protocol
    Patent Application

    Publication Number: US20230083397A1

    Publication Date: 2023-03-16

    Application Number: US18058105

    Filing Date: 2022-11-22

    Applicant: Apple Inc.

    Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
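
    To illustrate the primary/secondary shared distinction described above, here is a minimal C++ sketch (invented for this summary, not taken from the patent): among several sharers, only the agent holding the block in the primary shared state is responsible for forwarding a copy to a new requester, while secondary sharers merely retain read permission.

        #include <cstdio>
        #include <vector>

        enum class ShareState { Invalid, SecondaryShared, PrimaryShared };

        struct Agent {
            int id;
            ShareState state;
        };

        // Pick the agent responsible for supplying data for a shared block, if any.
        const Agent* data_supplier(const std::vector<Agent>& sharers) {
            for (const Agent& a : sharers) {
                if (a.state == ShareState::PrimaryShared) return &a;
            }
            return nullptr;  // no sharer is responsible; memory supplies the data
        }

        int main() {
            std::vector<Agent> sharers = {
                {0, ShareState::SecondaryShared},
                {1, ShareState::PrimaryShared},
                {2, ShareState::SecondaryShared},
            };
            if (const Agent* a = data_supplier(sharers)) {
                std::printf("agent %d forwards the cache block to the requester\n", a->id);
            }
            return 0;
        }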

    Scalable cache coherency protocol
    Granted Patent

    Publication Number: US11544193B2

    Publication Date: 2023-01-03

    Application Number: US17315725

    Filing Date: 2021-05-10

    Applicant: Apple Inc.

    Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
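
    The two snoop types named in the abstract can be sketched as follows (an illustrative C++ fragment, not the patent's implementation; SnoopKind and SnoopMsg are invented names): a snoop forward asks the holding agent to send the block directly to the requesting agent, while a snoop back asks it to return the block to the memory controller.

        #include <cstdio>

        enum class SnoopKind { Forward, Back };

        struct SnoopMsg {
            SnoopKind kind;
            unsigned long address;
            int requester_id;  // used by Forward: destination agent for the data
        };

        void handle_snoop(const SnoopMsg& msg) {
            switch (msg.kind) {
            case SnoopKind::Forward:
                std::printf("send block %#lx directly to agent %d\n",
                            msg.address, msg.requester_id);
                break;
            case SnoopKind::Back:
                std::printf("write block %#lx back to the memory controller\n",
                            msg.address);
                break;
            }
        }

        int main() {
            handle_snoop({SnoopKind::Forward, 0x2000, 3});
            handle_snoop({SnoopKind::Back, 0x2000, -1});
            return 0;
        }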

    I/O Agent
    Patent Application

    Publication Number: US20220318136A1

    Publication Date: 2022-10-06

    Application Number: US17648071

    Filing Date: 2022-01-14

    Applicant: Apple Inc.

    Abstract: Techniques are disclosed relating to an I/O agent circuit of a computer system. The I/O agent circuit may receive, from a peripheral component, a set of transaction requests to perform a set of read transactions that are directed to one or more of a plurality of cache lines. The I/O agent circuit may issue, to a first memory controller circuit configured to manage access to a first one of the plurality of cache lines, a request for exclusive read ownership of the first cache line, such that the data of the first cache line is not cached in a valid state outside of the memory and the I/O agent circuit. The I/O agent circuit may receive exclusive read ownership of the first cache line, including receiving the data of the first cache line. The I/O agent circuit may then perform the set of read transactions with respect to the data.
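
    A rough C++ sketch of this flow (illustrative only; acquire_exclusive_read_ownership is an invented stand-in for the request to the memory controller): the I/O agent obtains exclusive read ownership of the target cache line once, then services a batch of peripheral reads that fall within that line.

        #include <cstdint>
        #include <cstdio>
        #include <vector>

        struct CacheLine {
            uint64_t address;
            std::vector<uint8_t> data;
        };

        // Stand-in for the exclusive-read-ownership request to the memory controller.
        CacheLine acquire_exclusive_read_ownership(uint64_t address) {
            return CacheLine{address, std::vector<uint8_t>(64, 0)};  // 64-byte line, zero-filled
        }

        int main() {
            // Three peripheral reads that all fall within one 64-byte cache line.
            std::vector<uint64_t> read_addresses = {0x1000, 0x1008, 0x1010};
            CacheLine line = acquire_exclusive_read_ownership(0x1000);
            for (uint64_t addr : read_addresses) {
                size_t offset = static_cast<size_t>(addr - line.address);
                std::printf("read at %#llx -> byte value %u\n",
                            static_cast<unsigned long long>(addr),
                            static_cast<unsigned>(line.data[offset]));
            }
            return 0;
        }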

    Parallel coherence and memory cache processing pipelines

    Publication Number: US11138111B2

    Publication Date: 2021-10-05

    Application Number: US16129527

    Filing Date: 2018-09-12

    Applicant: Apple Inc.

    Abstract: Systems, apparatuses, and methods for performing coherence processing and memory cache processing in parallel are disclosed. A system includes a communication fabric and a plurality of dual-processing pipelines. Each dual-processing pipeline includes a coherence processing pipeline and a memory cache processing pipeline. The communication fabric forwards a transaction to a given dual-processing pipeline, with the communication fabric selecting the given dual-processing pipeline, from the plurality of dual-processing pipelines, based on a hash of the address of the transaction. The given dual-processing pipeline performs a duplicate tag lookup in parallel with a memory cache tag lookup for the transaction. By performing the duplicate tag lookup and the memory cache tag lookup in a parallel fashion rather than in a serial fashion, latency and power consumption are reduced while performance is enhanced.
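
    The routing and parallel-lookup structure can be sketched in a few lines of C++ (illustrative assumptions only; the hash, the pipeline count, and the lookup functions are invented for this example): the fabric hashes the transaction address to pick a pipeline, and that pipeline runs the duplicate-tag lookup and the memory-cache tag lookup concurrently rather than back to back.

        #include <cstdint>
        #include <cstdio>
        #include <future>

        constexpr int kNumPipelines = 4;

        // Simple address hash used to select a dual-processing pipeline.
        int select_pipeline(uint64_t address) {
            return static_cast<int>((address >> 6) % kNumPipelines);  // drop cache-line offset bits
        }

        // Placeholder lookups standing in for the duplicate-tag and memory-cache tag arrays.
        bool duplicate_tag_lookup(uint64_t address) { return (address & 0x40) != 0; }
        bool memory_cache_tag_lookup(uint64_t address) { return (address & 0x80) != 0; }

        int main() {
            uint64_t address = 0x12C0;
            int pipe = select_pipeline(address);

            // Launch both lookups concurrently to model the parallel pipelines.
            auto coherence = std::async(std::launch::async, duplicate_tag_lookup, address);
            auto mem_cache = std::async(std::launch::async, memory_cache_tag_lookup, address);

            std::printf("pipeline %d: duplicate-tag hit=%d, memory-cache hit=%d\n",
                        pipe, static_cast<int>(coherence.get()), static_cast<int>(mem_cache.get()));
            return 0;
        }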

    Method and apparatus for ensuring real-time snoop latency

    Publication Number: US10795818B1

    Publication Date: 2020-10-06

    Application Number: US16418811

    Filing Date: 2019-05-21

    Applicant: Apple Inc.

    Abstract: Various systems and methods for ensuring real-time snoop latency are disclosed. A system includes a processor and a cache controller. The cache controller receives, via a channel, cache snoop requests from the processor; the snoop requests include latency-sensitive and non-latency-sensitive requests. Requests are not prioritized by type within the channel. The cache controller limits the number of non-latency-sensitive snoop requests that can be processed ahead of an incoming latency-sensitive snoop request. Limiting this number includes the cache controller determining that the number of received non-latency-sensitive snoop requests has reached a predetermined value and responsively prioritizing latency-sensitive requests over non-latency-sensitive requests.
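
    One way to picture the limiting policy (a hedged sketch; the queue layout, the threshold of 2, and the counter-reset behaviour are assumptions, not details from the patent): the controller counts the non-latency-sensitive snoops it has handled and, once the count reaches the threshold, pulls the next latency-sensitive snoop out of order.

        #include <cstdio>
        #include <deque>

        struct SnoopReq {
            int id;
            bool latency_sensitive;
        };

        constexpr int kMaxNonLatencyAhead = 2;  // assumed predetermined value

        int main() {
            std::deque<SnoopReq> channel = {
                {0, false}, {1, false}, {2, false}, {3, true}, {4, false}};
            int non_latency_processed = 0;

            while (!channel.empty()) {
                auto it = channel.begin();
                // Once the limit is hit, service the next latency-sensitive request first.
                if (non_latency_processed >= kMaxNonLatencyAhead) {
                    for (auto cand = channel.begin(); cand != channel.end(); ++cand) {
                        if (cand->latency_sensitive) { it = cand; break; }
                    }
                }
                SnoopReq req = *it;
                channel.erase(it);
                if (req.latency_sensitive) non_latency_processed = 0;  // assumed reset on service
                else ++non_latency_processed;
                std::printf("processed snoop %d (%s)\n", req.id,
                            req.latency_sensitive ? "latency-sensitive" : "non-latency-sensitive");
            }
            return 0;
        }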

    Coherence flows for dual-processing pipelines

    Publication Number: US10678691B2

    Publication Date: 2020-06-09

    Application Number: US16124713

    Filing Date: 2018-09-07

    Applicant: Apple Inc.

    Abstract: Systems, apparatuses, and methods for implementing coherence flows for dual-processing coherence and memory cache pipelines are disclosed. A dual-processing pipeline includes a coherence processing pipeline and a memory cache processing pipeline. When a transaction is issued to the dual-processing pipeline, the coherence processing pipeline performs a duplicate tag lookup in parallel with the memory cache processing pipeline performing a memory cache tag lookup for the transaction. If the duplicate tag lookup is a hit, then the coherence processing pipeline locks the matching entry, the memory cache processing pipeline discards the original transaction, and a copyback request is sent to a coherent agent identified by the matching entry. When the copyback response is received by a communication fabric, the copyback response is issued to the memory cache processing pipeline. When the copyback response passes the global ordering point, the coherence processing pipeline clears the lock on the matching entry.
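
    The hit path described above can be outlined in C++ (an illustrative sketch; the types DuplicateTagEntry and Pipeline are invented, and the global-ordering-point handling is reduced to a single callback): a duplicate-tag hit locks the matching entry, the original transaction is discarded on the memory-cache side, a copyback is sent to the owning agent, and the lock is cleared once the copyback response passes the global ordering point.

        #include <cstdio>
        #include <optional>

        struct DuplicateTagEntry {
            int owning_agent;
            bool locked;
        };

        struct Pipeline {
            std::optional<DuplicateTagEntry> lookup_hit;  // result of the duplicate-tag lookup
        };

        void issue_transaction(Pipeline& p) {
            if (p.lookup_hit) {
                p.lookup_hit->locked = true;  // lock the matching entry
                std::printf("discard original transaction; send copyback to agent %d\n",
                            p.lookup_hit->owning_agent);
            } else {
                std::printf("miss: transaction proceeds through the memory cache pipeline\n");
            }
        }

        void copyback_response_passed_ordering_point(Pipeline& p) {
            if (p.lookup_hit) p.lookup_hit->locked = false;  // clear the lock
        }

        int main() {
            Pipeline p{DuplicateTagEntry{/*owning_agent=*/2, /*locked=*/false}};
            issue_transaction(p);
            copyback_response_passed_ordering_point(p);
            return 0;
        }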
