SYSTEMS AND METHODS FOR PROVIDING DISTRIBUTED GLOBAL ORDERING

    Publication Number: US20200081837A1

    Publication Date: 2020-03-12

    Application Number: US16125494

    Application Date: 2018-09-07

    Applicant: Apple Inc.

    Abstract: Systems, apparatuses, and methods for implementing a distributed global ordering point are disclosed. A system includes at least a communication fabric, sequencing logic, and a plurality of coherence point pipelines. Each coherence point pipeline receives transactions from the communication fabric and then performs coherence operations and a memory cache lookup for the received transactions. The global ordering point of the system is distributed across the outputs of the separate coherence point pipelines. Device-ordered transactions travelling upstream toward memory are assigned sequence numbers by the sequencing logic. The transactions are speculatively issued from the communication fabric to the coherence point pipelines. Speculatively issuing the transactions to the coherence point pipelines may cause the transactions to pass through the distributed global ordering point out of order. Control logic on the downstream path reorders the transactions based on the assigned sequence numbers.
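
    As a rough illustration of the reordering step, the following Python sketch models how sequence numbers assigned upstream allow downstream control logic to release transactions in order even when they pass the distributed global ordering point out of order. It is a software analogy of the described hardware, not the patented design, and the names (SequencingLogic, DownstreamReorderer) are hypothetical.

```python
import heapq

class SequencingLogic:
    """Assigns monotonically increasing sequence numbers to device-ordered
    transactions travelling upstream toward memory (hypothetical model)."""
    def __init__(self):
        self.next_seq = 0

    def assign(self, txn):
        txn["seq"] = self.next_seq
        self.next_seq += 1
        return txn

class DownstreamReorderer:
    """Buffers transactions that crossed the distributed global ordering
    point out of order and releases them strictly by sequence number."""
    def __init__(self):
        self.pending = []       # min-heap keyed on sequence number
        self.expected_seq = 0   # next sequence number eligible for release

    def receive(self, txn):
        heapq.heappush(self.pending, (txn["seq"], txn))
        released = []
        # Drain the contiguous in-order prefix from the heap.
        while self.pending and self.pending[0][0] == self.expected_seq:
            released.append(heapq.heappop(self.pending)[1])
            self.expected_seq += 1
        return released

# Transactions issued speculatively to different coherence point pipelines
# may arrive downstream out of order; they are still released in order.
seq = SequencingLogic()
a, b, c = (seq.assign({"id": name}) for name in "abc")
reorderer = DownstreamReorderer()
for txn in (c, a, b):  # out-of-order arrival
    for released in reorderer.receive(txn):
        print("release", released["id"], "seq", released["seq"])
```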

    Resource access management

    Invention Grant

    Publication Number: US11275616B2

    Publication Date: 2022-03-15

    Application Number: US16439920

    Application Date: 2019-06-13

    Applicant: Apple Inc.

    Abstract: A system and method for efficiently allocating resources of destinations to sources conveying requests to the destinations. In various embodiments, a computing system includes multiple sources that generate requests and multiple destinations that service the requests. One or more transaction tables store requests received from the multiple sources. Arbitration logic selects requests and stores them in a processing table. When the logic selects a given request from the processing table and determines that resources for the corresponding destination are unavailable, the logic removes the given request from the processing table and allocates the request to a retry handling queue. When the retry handling queue has no storage available for the request, the logic updates the corresponding transaction table entry and maintains a count of such occurrences. When the count exceeds a threshold, the logic stalls requests for that source. Requests in the retry handling queue have priority over requests in the transaction tables.
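
    The retry and stall policy in this abstract can be summarized in a few lines of Python. This is a simplified sketch under assumed names (Arbiter, STALL_THRESHOLD) and a credit-based resource check, not the actual arbitration circuit.

```python
from collections import defaultdict, deque

STALL_THRESHOLD = 4  # hypothetical per-source limit

class Arbiter:
    """Requests whose destination resources are unavailable move to a
    bounded retry handling queue; when that queue is full, a per-source
    counter is bumped, and the source is stalled once the counter
    exceeds the threshold (simplified model)."""
    def __init__(self, retry_capacity=2):
        self.retry_queue = deque()          # has priority over the tables
        self.retry_capacity = retry_capacity
        self.retry_fail_count = defaultdict(int)
        self.stalled_sources = set()

    def select(self, request, dest):
        src = request["source"]
        if src in self.stalled_sources:
            return "stalled"
        if dest["credits"] > 0:             # destination resources free
            dest["credits"] -= 1
            return "issued"
        if len(self.retry_queue) < self.retry_capacity:
            self.retry_queue.append(request)
            return "retry-queued"
        # Retry queue full: the request stays in its transaction table
        # entry and the occurrence is counted toward the stall threshold.
        self.retry_fail_count[src] += 1
        if self.retry_fail_count[src] > STALL_THRESHOLD:
            self.stalled_sources.add(src)
        return "table-retained"

arb = Arbiter()
busy_dest = {"credits": 0}  # destination with no free resources
for i in range(8):
    print(arb.select({"source": "cpu0", "id": i}, busy_dest))
```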

    Real-time resource handling in resource retry queue

    Publication Number: US10417146B1

    Publication Date: 2019-09-17

    Application Number: US15980713

    Application Date: 2018-05-15

    Applicant: Apple Inc.

    Abstract: An embodiment of an apparatus includes a retry queue circuit, a transaction arbiter circuit, and a plurality of transaction buffers. The retry queue circuit may store one or more entries corresponding to one or more memory transactions. A position in the retry queue circuit of an entry of the one or more entries may correspond to a priority for processing a memory transaction corresponding to the entry. The transaction arbiter circuit may receive a real-time memory transaction from a particular transaction buffer. In response to a determination that the real-time memory transaction is unable to be processed, the transaction arbiter circuit may create an entry for the real-time memory transaction in the retry queue circuit. In response to a determination that a bulk memory transaction is scheduled for processing prior to the real-time memory transaction, the transaction arbiter circuit may upgrade the bulk memory transaction to use real-time memory resources.
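
    A small Python sketch of the upgrade behavior described above, with hypothetical names (RetryQueue, upgrade_bulk_ahead_of); queue position stands in for retry priority, as in the abstract.

```python
from collections import deque

class RetryQueue:
    """Entries nearer the head are retried first: queue position encodes
    retry priority (a simplified model of the described circuit)."""
    def __init__(self):
        self.entries = deque()

    def enqueue(self, txn):
        self.entries.append(txn)

    def upgrade_bulk_ahead_of(self, rt_txn):
        """If a bulk transaction is scheduled for processing before the
        real-time transaction, upgrade it to use real-time memory
        resources so it cannot hold up the real-time transaction."""
        for entry in self.entries:
            if entry is rt_txn:
                break
            if entry["class"] == "bulk":
                entry["class"] = "real-time"  # upgraded

rq = RetryQueue()
bulk = {"id": "b0", "class": "bulk"}
rt = {"id": "r0", "class": "real-time"}
rq.enqueue(bulk)              # bulk entry queued first, so retried first
rq.enqueue(rt)                # real-time transaction could not be processed
rq.upgrade_bulk_ahead_of(rt)  # bulk ahead of it now uses real-time resources
print(bulk["class"])          # real-time
```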

    PARALLEL COHERENCE AND MEMORY CACHE PROCESSING PIPELINES

    Publication Number: US20200081838A1

    Publication Date: 2020-03-12

    Application Number: US16129527

    Application Date: 2018-09-12

    Applicant: Apple Inc.

    Abstract: Systems, apparatuses, and methods for performing coherence processing and memory cache processing in parallel are disclosed. A system includes a communication fabric and a plurality of dual-processing pipelines. Each dual-processing pipeline includes a coherence processing pipeline and a memory cache processing pipeline. The communication fabric forwards a transaction to a given dual-processing pipeline, with the communication fabric selecting the given dual-processing pipeline, from the plurality of dual-processing pipelines, based on a hash of the address of the transaction. The given dual-processing pipeline performs a duplicate tag lookup in parallel with a memory cache tag lookup for the transaction. By performing the duplicate tag lookup and the memory cache tag lookup in a parallel fashion rather than in a serial fashion, latency and power consumption are reduced while performance is enhanced.
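
    The pipeline-selection and parallel-lookup flow can be sketched in Python as follows. Thread-based concurrency here merely stands in for the parallel hardware pipelines, and all names (select_pipeline, duplicate_tag_lookup, memory_cache_tag_lookup) are hypothetical.

```python
import concurrent.futures

NUM_PIPELINES = 4  # hypothetical pipeline count

def select_pipeline(address):
    """Fabric-side selection: hash the transaction address so the same
    cache line always maps to the same dual-processing pipeline."""
    return hash(address >> 6) % NUM_PIPELINES  # 64-byte line granularity

def duplicate_tag_lookup(address):
    # Placeholder coherence check against the duplicate tags.
    return {"snoop_needed": False}

def memory_cache_tag_lookup(address):
    # Placeholder memory cache tag check.
    return {"hit": False}

def process_transaction(address):
    """Issue the duplicate tag lookup and the memory cache tag lookup
    concurrently, mirroring the parallel (rather than serial) pipelines."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        coherence = pool.submit(duplicate_tag_lookup, address)
        cache = pool.submit(memory_cache_tag_lookup, address)
        return coherence.result(), cache.result()

addr = 0x1000
print("pipeline", select_pipeline(addr), process_transaction(addr))
```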

    ESTABLISHING DEPENDENCY IN A RESOURCE RETRY QUEUE

    Publication Number: US20200050548A1

    Publication Date: 2020-02-13

    Application Number: US16102542

    Application Date: 2018-08-13

    Applicant: Apple Inc.

    Abstract: A memory cache controller includes a transaction arbiter circuit and a retry queue circuit. The transaction arbiter circuit may determine whether a received memory transaction can currently be processed by a transaction pipeline. The retry queue circuit may queue memory transactions that the transaction arbiter circuit determines cannot be processed by the transaction pipeline. In response to receiving a memory transaction that is a cache management transaction, the retry queue circuit may establish a dependency from the cache management transaction to a previously stored memory transaction in response to a determination that both the previously stored memory transaction and the cache management transaction target a common address. Based on the dependency, the retry queue circuit may initiate a retry, by the transaction pipeline, of one or more of the queued memory transactions in the retry queue circuit.
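
    A compact Python sketch of the common-address dependency rule, under assumed names (RetryQueue, retryable); clearing a dependency when its predecessor completes is omitted for brevity.

```python
class RetryQueue:
    """When a cache management transaction is queued, link it behind any
    previously stored transaction targeting the same address, so it is
    only retried after its predecessor (simplified model)."""
    def __init__(self):
        self.entries = []  # each entry: {"txn": ..., "depends_on": ...}

    def enqueue(self, txn):
        entry = {"txn": txn, "depends_on": None}
        if txn["kind"] == "cache_management":
            for prior in self.entries:
                if prior["txn"]["addr"] == txn["addr"]:
                    entry["depends_on"] = prior  # common-address dependency
        self.entries.append(entry)

    def retryable(self):
        # Only entries with no outstanding dependency may be retried.
        return [e["txn"] for e in self.entries if e["depends_on"] is None]

rq = RetryQueue()
rq.enqueue({"kind": "write", "addr": 0x40})
rq.enqueue({"kind": "cache_management", "addr": 0x40})
print([t["kind"] for t in rq.retryable()])  # ['write']
```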

    Parallel coherence and memory cache processing pipelines

    Publication Number: US11138111B2

    Publication Date: 2021-10-05

    Application Number: US16129527

    Application Date: 2018-09-12

    Applicant: Apple Inc.

    Abstract: Systems, apparatuses, and methods for performing coherence processing and memory cache processing in parallel are disclosed. A system includes a communication fabric and a plurality of dual-processing pipelines. Each dual-processing pipeline includes a coherence processing pipeline and a memory cache processing pipeline. The communication fabric forwards a transaction to a given dual-processing pipeline, with the communication fabric selecting the given dual-processing pipeline, from the plurality of dual-processing pipelines, based on a hash of the address of the transaction. The given dual-processing pipeline performs a duplicate tag lookup in parallel with a memory cache tag lookup for the transaction. By performing the duplicate tag lookup and the memory cache tag lookup in a parallel fashion rather than in a serial fashion, latency and power consumption are reduced while performance is enhanced.
