I/O coherent request node for data processing network with improved handling of write operations

Publication No.: US11119961B2

Publication Date: 2021-09-14

Application No.: US16655403

Filing Date: 2019-10-17

    Applicant: Arm Limited

Abstract: A method and apparatus for data transfer in a data processing network uses both ordered and optimized write requests. A first write request, received at a first node of the data processing network, is directed to a first address and has a first stream identifier. The first node determines whether any previous write request with the same first stream identifier is pending. When a previous write request is pending, a request for an ordered write is sent to a Home Node of the data processing network associated with the first address. When no previous write request with the first stream identifier is pending, a request for an optimized write is sent to the Home Node. The Home Node and the first node are configured to complete a sequence of ordered write requests before the associated data is made available to other elements of the data processing network.
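The request-type selection described in the abstract can be sketched as follows. This is an illustrative model, not the patented implementation; the names `RequestNode`, `ORDERED`, and `OPTIMIZED` are assumptions.

```python
# Sketch (assumed names, not the Arm design): a request node chooses
# between an "ordered" and an "optimized" write based on whether an
# earlier write with the same stream identifier is still pending.

ORDERED = "ordered"
OPTIMIZED = "optimized"

class RequestNode:
    def __init__(self):
        # stream_id -> count of writes issued but not yet completed
        self.pending = {}

    def issue_write(self, stream_id, address):
        """Return the request type sent to the Home Node for this write."""
        if self.pending.get(stream_id, 0) > 0:
            kind = ORDERED     # an earlier write in this stream is pending
        else:
            kind = OPTIMIZED   # no ordering hazard for this stream
        self.pending[stream_id] = self.pending.get(stream_id, 0) + 1
        return kind

    def complete_write(self, stream_id):
        """The Home Node acknowledged completion of one write in the stream."""
        self.pending[stream_id] -= 1
```

A stream whose writes always complete before the next is issued would therefore use only optimized writes.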

    Apparatus and method for processing an ownership upgrade request for cached data that is issued in relation to a conditional store operation

Publication No.: US10761987B2

Publication Date: 2020-09-01

Application No.: US16202171

Filing Date: 2018-11-28

    Applicant: Arm Limited

Abstract: An apparatus and method are provided for processing ownership upgrade requests in relation to cached data. The apparatus has a plurality of processing units, at least some of which have associated cache storage. A coherent interconnect couples the plurality of processing units with memory, the coherent interconnect having a snoop unit used to implement a cache coherency protocol when a request received by the coherent interconnect identifies a cacheable memory address within the memory. Contention management circuitry is provided to control contended access to a memory address by two or more processing units within the plurality of processing units. The coherent interconnect may receive, from a first processing unit having an associated cache storage, an ownership upgrade request specifying a target memory address, the ownership upgrade request indicating that a copy of data at the target memory address, as held in a shared state in the first processing unit's associated cache storage at the time the ownership upgrade request was issued, is required to have its state changed from the shared state to a unique state prior to the first processing unit performing a write operation to the data. The coherent interconnect is arranged to process the ownership upgrade request by referencing the snoop unit in order to determine whether the first processing unit's associated cache storage is identified as still holding a copy of the data at the target memory address at the time the ownership upgrade request is processed. In that event, a pass condition is identified for the ownership upgrade request independent of information held by the contention management circuitry for the target memory address.
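The pass/fail decision for an ownership upgrade can be sketched as below. The directory representation and function names are assumptions for illustration only.

```python
# Illustrative sketch of the ownership-upgrade decision described above.
# snoop_directory maps address -> set of unit IDs whose caches are
# recorded as holding a copy; names are assumptions, not the Arm design.

def process_upgrade(snoop_directory, requester, address):
    """Return 'pass' if the requester's cache is still recorded as holding
    the line, in which case it may move the line shared -> unique without
    consulting the contention management circuitry; otherwise 'fail'."""
    holders = snoop_directory.get(address, set())
    if requester in holders:
        # Other copies are invalidated; the requester takes the line unique.
        snoop_directory[address] = {requester}
        return "pass"
    # The copy was lost (e.g. invalidated by a contending writer), so the
    # upgrade cannot succeed and a full request would be needed instead.
    return "fail"
```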

    Apparatus and method for managing snoop operations

Publication No.: US10657055B1

Publication Date: 2020-05-19

Application No.: US16218962

Filing Date: 2018-12-13

    Applicant: Arm Limited

Abstract: An apparatus and method are provided for managing snoop operations. The apparatus has an interface for receiving access requests from any of N master devices that have associated cache storage, each access request specifying a memory address within memory associated with the apparatus. Snoop filter storage is provided that has a plurality of snoop filter entries, where each snoop filter entry identifies a memory portion and snoop control information indicative of the master devices that have accessed that memory portion. When an access request received at the interface specifies a memory address that is within the memory portion associated with a snoop filter entry, snoop control circuitry uses the snoop control information in that snoop filter entry to determine which master devices to subject to a snoop operation. The snoop control circuitry maintains master indication data used to identify a first subset of the plurality of master devices whose accesses to the memory are to be precisely tracked within the snoop filter storage. The first subset comprises up to M master devices, where M is less than N. Each snoop filter entry has a precise tracking field and an imprecise tracking field. When multiple master devices have accessed the memory portion associated with a snoop filter entry, the precise tracking field is used to precisely identify each master device of those multiple master devices that is within the first subset. When the multiple master devices include at least one master device that is not in the first subset, a generic indication is set in the imprecise tracking field.
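The precise/imprecise tracking split can be sketched as follows. This is a simplified model under assumed names (`SnoopFilter`, set-based fields in place of hardware bit-vectors), not the patented circuit.

```python
# Sketch: each entry has a precise field (which tracked masters accessed
# the line) and one generic imprecise bit (some untracked master did).

class SnoopFilterEntry:
    def __init__(self):
        self.precise = set()     # tracked masters that accessed the line
        self.imprecise = False   # any untracked master accessed the line

class SnoopFilter:
    def __init__(self, tracked_subset):
        self.tracked = set(tracked_subset)   # up to M of the N masters
        self.entries = {}

    def record_access(self, master, address):
        entry = self.entries.setdefault(address, SnoopFilterEntry())
        if master in self.tracked:
            entry.precise.add(master)
        else:
            entry.imprecise = True

    def snoop_targets(self, requester, address, all_masters):
        """Masters that must be snooped for an access to `address`."""
        entry = self.entries.get(address)
        if entry is None:
            return set()
        targets = set(entry.precise)
        if entry.imprecise:
            # Imprecise: snoop every master outside the tracked subset.
            targets |= set(all_masters) - self.tracked
        targets.discard(requester)
        return targets
```

The trade-off shown here is the one the abstract describes: accesses by tracked masters cost a targeted snoop, while any access by an untracked master forces snoops to all untracked masters.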

    Snoop filter for cache coherency in a data processing system

Publication No.: US10157133B2

Publication Date: 2018-12-18

Application No.: US14965131

Filing Date: 2015-12-10

    Applicant: ARM Limited

Abstract: A data processing system having two or more processors that access a shared data resource, and a method of operation thereof. Data stored in a local cache is marked as being in a ‘UniqueDirty’, ‘SharedDirty’, ‘UniqueClean’, ‘SharedClean’ or ‘Invalid’ state. A snoop filter monitors access by the processors to the shared data resource, and includes snoop filter control logic and a snoop filter cache configured to maintain cache coherency. The snoop filter cache does not identify any local cache that stores a block of data in a ‘SharedDirty’ state, resulting in a smaller snoop filter cache size and simpler snoop control logic. The data processing system may be defined by instructions of a Hardware Description Language.
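The simplification the abstract describes, tracking presence but not the ‘SharedDirty’ owner, can be sketched as below. The class and field names are illustrative assumptions.

```python
# Sketch: the snoop filter records only which caches hold a line, never
# which one (if any) holds it 'SharedDirty' -- the simplification that
# shrinks the snoop filter cache in the abstract above.

STATES = {"UniqueDirty", "SharedDirty", "UniqueClean", "SharedClean", "Invalid"}

class SnoopFilterCache:
    def __init__(self):
        self.presence = {}   # address -> set of caches holding the line

    def record(self, cache_id, address, state):
        assert state in STATES
        if state == "Invalid":
            self.presence.setdefault(address, set()).discard(cache_id)
        else:
            # No per-cache state is stored, so the SharedDirty owner is
            # not identified; a snoop must query the holders to find it.
            self.presence.setdefault(address, set()).add(cache_id)

    def holders(self, address):
        return self.presence.get(address, set())
```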

    Method and apparatus for efficient chip-to-chip data transfer

Publication No.: US12079132B2

Publication Date: 2024-09-03

Application No.: US18101806

Filing Date: 2023-01-26

    Applicant: Arm Limited

    CPC classification number: G06F12/0888 G06F2212/1024

    Abstract: Data transfer between caching domains of a data processing system is achieved by a local coherency node (LCN) of a first caching domain receiving a read request for data associated with a second caching domain, from a requesting node of the first caching domain. The LCN requests the data from the second caching domain via a transfer agent. In response to receiving a cache line containing the data from the second caching domain, the transfer agent sends the cache line to the requesting node, bypassing the LCN and, optionally, sends a read-receipt indicating the state of the cache line to the LCN. The LCN updates a coherency state for the cache line in response to receiving the read-receipt from the transfer agent and a completion acknowledgement from the requesting node. Optionally, the transfer agent may send the cache line via the LCN when congestion is detected in a response channel of the data processing system.
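The LCN's two-part completion condition (read-receipt from the transfer agent plus completion acknowledgement from the requesting node) can be sketched as below; class and method names are assumptions for illustration.

```python
# Sketch: the LCN finalizes the coherency state for a cache line only
# after receiving BOTH the transfer agent's read-receipt (carrying the
# line state) and the requester's completion acknowledgement, in either
# order. Names are illustrative, not the Arm design.

class LCN:
    def __init__(self):
        self.got_receipt = {}   # address -> line state from read-receipt
        self.got_ack = {}       # address -> True once requester acked
        self.state = {}         # address -> finalized coherency state

    def _try_finalize(self, address):
        if address in self.got_receipt and self.got_ack.get(address):
            self.state[address] = self.got_receipt[address]

    def read_receipt(self, address, line_state):
        """Transfer agent reports the state of the forwarded cache line."""
        self.got_receipt[address] = line_state
        self._try_finalize(address)

    def completion_ack(self, address):
        """Requesting node confirms it received the cache line."""
        self.got_ack[address] = True
        self._try_finalize(address)
```

Because the cache line bypasses the LCN on the fast path, the read-receipt and the acknowledgement can arrive in either order, which is why the finalize check runs on both events.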

    Cache for storing coherent and non-coherent data

Publication No.: US11599467B2

Publication Date: 2023-03-07

Application No.: US17331806

Filing Date: 2021-05-27

    Applicant: Arm Limited

    Abstract: The present disclosure advantageously provides a system cache and a method for storing coherent data and non-coherent data in a system cache. A transaction is received from a source in a system, the transaction including at least a memory address, the source having a location in a coherent domain or a non-coherent domain of the system, the coherent domain including shareable data and the non-coherent domain including non-shareable data. Whether the memory address is stored in a cache line is determined, and, when the memory address is not determined to be stored in a cache line, a cache line is allocated to the transaction including setting a state bit of the allocated cache line based on the source location to indicate whether shareable or non-shareable data is stored in the allocated cache line, and the transaction is processed.
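The allocation path with the per-line shareability state bit can be sketched as follows; the `SystemCache` class and its field names are illustrative assumptions, not the disclosed design.

```python
# Sketch: on a miss, a line is allocated and tagged with one state bit
# recording whether the source sits in the coherent (shareable) or
# non-coherent (non-shareable) domain; on a hit, the line is reused.

class SystemCache:
    def __init__(self):
        self.lines = {}   # address -> {"shareable": bool, "data": ...}

    def handle_transaction(self, address, source_is_coherent, data=None):
        """Process a transaction: allocate on a miss, tagging the line by
        the source's domain; return the line in either case."""
        line = self.lines.get(address)
        if line is None:
            line = {"shareable": source_is_coherent, "data": data}
            self.lines[address] = line
        return line
```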
