Abstract:
An apparatus and method are provided for handling ordered transactions. The apparatus has a plurality of completer elements to process transactions, a requester element to issue a sequence of ordered transactions, and an interconnect providing, for each completer element, a communication channel for the transfer of signals between that completer element and the requester element in either direction. A given completer element that is processing a given transaction in the sequence is arranged to issue, over its associated communication channel, a response signal to the requester element that comprises an ordered channel indication identifying whether the associated communication channel has an ordered channel property. The ordered channel property guarantees that processing of transactions issued by the requester element over the associated communication channel in a given order will be completed by the given completer element in that same order. The requester element is then responsive to the ordered channel indication to control the timing of issuance, from the requester element, of at least one signal relating to one or more transactions after the given transaction in the sequence. By such an approach, the ordering flow adopted for ordered transactions can be varied by the requester element in dependence on the presence or absence of an ordered channel, whilst enabling interconnect-agnostic requester element designs to be utilised.
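A minimal behavioural sketch in Python of the flow described above, assuming illustrative names (Channel, Requester, Response) that are not taken from the abstract, and modelling completion as instantaneous for brevity:

```python
from dataclasses import dataclass

@dataclass
class Response:
    txn_id: int
    ordered_channel: bool  # the completer's ordered channel indication

class Channel:
    """Toy communication channel; ordered=True models the ordered channel property."""
    def __init__(self, ordered):
        self.ordered = ordered
        self.completed = set()

    def send(self, txn_id):
        self.completed.add(txn_id)       # completion is modelled as instant
        return Response(txn_id, self.ordered)

    def wait_for_completion(self, txn_id):
        assert txn_id in self.completed  # stand-in for a real wait

class Requester:
    def __init__(self, channel):
        self.channel = channel

    def issue_sequence(self, txn_ids):
        for i, txn in enumerate(txn_ids):
            resp = self.channel.send(txn)
            if not resp.ordered_channel and i + 1 < len(txn_ids):
                # No ordering guarantee: hold back the next ordered
                # transaction until this one is confirmed complete.
                self.channel.wait_for_completion(resp.txn_id)
            # On an ordered channel the next transaction is issued at once,
            # since same-order completion is guaranteed by the channel.

Requester(Channel(ordered=True)).issue_sequence([1, 2, 3])
```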
Abstract:
Apparatuses, methods of operating apparatuses, interconnects for connecting apparatuses to one another, and methods of operating the interconnects are disclosed. A master apparatus can issue to the interconnect an individual all-zero-data write transaction specifying a data storage location, and the interconnect conveys the individual all-zero-data write transaction to a target device which writes all-zero data at the data storage location. No write data is conveyed with the individual all-zero-data write transaction, so that the individual all-zero-data write transaction may be used to clear the data storage location without adding to congestion of a write data channel in the interconnect.
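A hedged sketch of the target-side handling, assuming an invented opcode name "WRITE_ZERO" and a byte-addressed toy memory; the point is that the zero payload is synthesised locally rather than conveyed over the write data channel:

```python
memory = bytearray(64)   # toy target device storage

def handle_write(addr, length, opcode, payload=None):
    if opcode == "WRITE_ZERO":
        # No payload travelled over the write data channel; the target
        # synthesises the all-zero data locally.
        memory[addr:addr + length] = bytes(length)
    else:
        memory[addr:addr + length] = payload

handle_write(0, 16, "WRITE_ZERO")                  # clears 16 bytes, no data sent
handle_write(16, 4, "WRITE", b"\x01\x02\x03\x04")  # ordinary write, data conveyed
```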
Abstract:
A method, system, and device provide for selective control, in a distributed cache system, of the power state of a number of receiver partitions arranged in a plurality of partition groups. A power control element, coupled to one or more of the receiver partitions, and a coherent interconnect selectively control the transition from a current power state to a new power state by each receiver partition of one or more of the plurality of partition groups.
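A minimal sketch of the group-wise power control, with invented partition and group names; a real power control element would quiesce and flush each receiver partition via the coherent interconnect before any power-down:

```python
partition_groups = {"group0": ["rp0", "rp1"], "group1": ["rp2", "rp3"]}
power_state = {p: "ON" for ps in partition_groups.values() for p in ps}

def transition(selected_groups, new_state):
    """Move every receiver partition in the selected groups to new_state."""
    for group in selected_groups:
        for partition in partition_groups[group]:
            power_state[partition] = new_state

transition(["group1"], "OFF")   # group0 partitions keep their current state
```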
Abstract:
A technique for handling memory access requests is described. An apparatus has an interconnect for coupling a plurality of requester elements with a plurality of slave elements. The requester elements are arranged to issue memory access requests for processing by the slave elements. An intermediate element within the interconnect acts as a point of serialisation to order the memory access requests issued by the requester elements via the intermediate element. The intermediate element has tracking circuitry for tracking handling of the memory access requests accepted by the intermediate element. Further, request acceptance management circuitry is provided to identify, for a given memory access request, a target slave element amongst the plurality of slave elements, and to determine whether the given memory access request is to be accepted by the intermediate element in dependence on an indication of bandwidth capability for the target slave element.
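An illustrative sketch of the acceptance decision, treating the bandwidth capability indication as a cap on outstanding accepted requests per slave; the address map and capacities are invented:

```python
class PointOfSerialisation:
    def __init__(self, slave_capacity):
        self.capacity = slave_capacity                  # per-slave bandwidth indication
        self.outstanding = {s: 0 for s in slave_capacity}

    def route(self, addr):
        return "slave0" if addr < 0x1000 else "slave1"  # toy address map

    def try_accept(self, addr):
        target = self.route(addr)
        if self.outstanding[target] >= self.capacity[target]:
            return False   # reject/retry: no bandwidth headroom at the target
        self.outstanding[target] += 1                   # tracking circuitry takes over
        return True

    def complete(self, addr):
        self.outstanding[self.route(addr)] -= 1

pos = PointOfSerialisation({"slave0": 4, "slave1": 2})
assert pos.try_accept(0x10)
```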
Abstract:
A system comprises a number of master devices and an interconnect for managing coherency between the master devices. In response to a read-with-overridable-invalidate transaction, received by the interconnect from a requesting master device, requesting that target data associated with a target address be provided to the requesting master device, when the target data is stored by a cache the interconnect issues a snoop request to said cache triggering invalidation of the target data from the cache, except when the interconnect or the cache determines to override the invalidation and retain the target data in the cache. This enables greater efficiency in cache usage, since data which the requesting master considers unlikely to be needed again can be invalidated from caches located outside the master device itself.
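A sketch of the snoop decision, with caches modelled as dictionaries and the override expressed as a retain-policy callback; all names are illustrative:

```python
def read_with_overridable_invalidate(addr, caches, retain_policy, memory):
    """Snoop any cache holding addr; invalidate the snooped copy unless
    the interconnect/cache policy decides to retain it."""
    for cache in caches:
        if addr in cache:
            data = cache[addr]
            if not retain_policy(addr):
                del cache[addr]    # requester hinted the data will not be reused
            return data
    return memory.get(addr, 0)

caches = [{0x40: 123}]
value = read_with_overridable_invalidate(0x40, caches, lambda a: False, {})
assert value == 123 and 0x40 not in caches[0]   # copy was invalidated
```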
Abstract:
An interconnect has coherency control circuitry for performing coherency control operations and a snoop filter for identifying which devices coupled to the interconnect have cached data from a given address. When an address is looked up in the snoop filter and misses, and there is no spare snoop filter entry available, then the snoop filter selects a victim entry corresponding to a victim address, and issues an invalidate transaction for invalidating locally cached copies of the data identified by the victim address. The coherency control circuitry for performing coherency control operations for data access transactions is reused for performing coherency control operations for the invalidate transaction issued by the snoop filter. This greatly reduces the circuit complexity of the snoop filter.
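A sketch of the back-invalidation path, assuming a simple eviction order and invented names; the key point is that the victim's invalidate transaction is routed through the same coherency control logic as ordinary data access transactions:

```python
class CoherencyControl:
    def invalidate(self, addr, sharers):
        # The same pipeline used for coherency control of data accesses.
        print(f"invalidate {hex(addr)} at {sorted(sharers)}")

class SnoopFilter:
    def __init__(self, capacity, coherency):
        self.entries = {}         # address -> set of devices caching the line
        self.capacity = capacity
        self.coherency = coherency

    def record(self, addr, device):
        if addr not in self.entries and len(self.entries) >= self.capacity:
            # Miss with no spare entry: select a victim and reuse the
            # coherency control circuitry to invalidate its cached copies.
            victim, sharers = self.entries.popitem()
            self.coherency.invalidate(victim, sharers)
        self.entries.setdefault(addr, set()).add(device)

sf = SnoopFilter(capacity=1, coherency=CoherencyControl())
sf.record(0x100, "cpu0")
sf.record(0x200, "cpu1")   # evicts 0x100 and issues its invalidate transaction
```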
Abstract:
A request node is provided comprising request circuitry to issue write requests to write data to storage circuitry. The write requests are issued to the storage circuitry via a coherency node. Status receiving circuitry receives, from the coherency node, a write status regarding write operations at the storage circuitry, and throttle circuitry throttles the rate at which the write requests are issued to the storage circuitry in dependence on the write status. A coherency node is also provided, comprising access circuitry to receive a write request from a request node to write data to storage circuitry, and to access the storage circuitry to write the data. Receive circuitry receives, from the storage circuitry, an incoming write status regarding write operations at the storage circuitry, and transmit circuitry transmits an outgoing write status to the request node based on the incoming write status.
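A sketch of the throttle behaviour as a credit window adjusted by the write status; the status values and window bounds are assumptions, in the spirit of additive-increase/multiplicative-decrease:

```python
class ThrottledRequestNode:
    def __init__(self):
        self.window = 8        # max in-flight write requests (credit window)
        self.in_flight = 0

    def can_issue(self):
        return self.in_flight < self.window

    def on_write_status(self, status):
        # Status forwarded by the coherency node from the storage circuitry.
        if status == "CONGESTED":
            self.window = max(1, self.window // 2)   # back off quickly
        else:
            self.window = min(32, self.window + 1)   # recover gradually

node = ThrottledRequestNode()
node.on_write_status("CONGESTED")
assert node.window == 4
```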
Abstract:
An apparatus and method are provided for processing ownership upgrade requests in relation to cached data. The apparatus has a plurality of processing units, at least some of which have associated cache storage. A coherent interconnect couples the plurality of processing units with memory, the coherent interconnect having a snoop unit used to implement a cache coherency protocol when a request received by the coherent interconnect identifies a cacheable memory address within the memory. Contention management circuitry is provided to control contended access to a memory address by two or more of the processing units. The coherent interconnect may receive, from a first processing unit having an associated cache storage, an ownership upgrade request specifying a target memory address, the ownership upgrade request indicating that a copy of the data at the target memory address, held in a shared state in the first processing unit's associated cache storage at the time the ownership upgrade request was issued, is required to have its state changed from the shared state to a unique state prior to the first processing unit performing a write operation on the data. The coherent interconnect is arranged to process the ownership upgrade request by referencing the snoop unit in order to determine whether the first processing unit's associated cache storage is identified as still holding a copy of the data at the target memory address at the time the ownership upgrade request is processed. If so, a pass condition is identified for the ownership upgrade request, independent of any information held by the contention management circuitry for the target memory address.
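An illustrative sketch of the pass condition, with the snoop unit modelled as an address-to-sharers map; note that the contention state is deliberately not consulted when the requester still holds the line:

```python
def process_ownership_upgrade(addr, requester, snoop_unit, caches, contention_state):
    """Handle a shared-to-unique ownership upgrade at the interconnect."""
    if requester in snoop_unit.get(addr, set()):
        # Requester still holds the line: pass, independent of whatever
        # the contention management circuitry records for this address.
        for device, cache in caches.items():
            if device != requester:
                cache.pop(addr, None)       # invalidate the other copies
        snoop_unit[addr] = {requester}      # line is now held unique
        return "PASS"
    return "FAIL"   # copy was lost meanwhile; requester must re-fetch the line

snoop_unit = {0x80: {"cpu0", "cpu1"}}
caches = {"cpu0": {0x80: 7}, "cpu1": {0x80: 7}}
assert process_ownership_upgrade(0x80, "cpu0", snoop_unit, caches, {}) == "PASS"
```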
Abstract:
A data processing system having two or more processors that access a shared data resource, and a method of operation thereof. Data stored in a local cache is marked as being in a ‘UniqueDirty’, ‘SharedDirty’, ‘UniqueClean’, ‘SharedClean’ or ‘Invalid’ state. A snoop filter monitors access by the processors to the shared data resource, and includes snoop filter control logic and a snoop filter cache configured to maintain cache coherency. The snoop filter cache does not identify any local cache that stores a block of data in the ‘SharedDirty’ state, resulting in a smaller snoop filter cache and simpler snoop filter control logic. The data processing system may be defined by instructions of a Hardware Description Language.
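A small sketch of the state set and of a snoop filter cache entry that omits any SharedDirty-owner field, which is the source of the size and logic savings described; the entry layout is illustrative:

```python
from enum import Enum

class LineState(Enum):
    UNIQUE_DIRTY = "UD"
    SHARED_DIRTY = "SD"
    UNIQUE_CLEAN = "UC"
    SHARED_CLEAN = "SC"
    INVALID = "I"

# One snoop filter cache entry per block: it records which local caches
# hold the block, but deliberately carries no field naming a SharedDirty
# owner, keeping the entry small and the control logic simple.
snoop_entry = {"tag": 0x7F, "sharers": {0, 2}}
```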
Abstract:
Aspects of the present disclosure relate to an interconnect comprising interfaces to communicate with respective requester and receiver node devices, and home nodes. Each home node is configured to: receive requests from one or more requester nodes, each request comprising a target address corresponding to a target receiver node; and transmit each said request to the corresponding target receiver node. Mapping circuitry is configured to: associate each of said plurality of home nodes with a given home node cluster; perform a first hashing of the target address of a given request, to determine a target cluster; perform a second hashing of the target address, to determine a target home node within said target cluster; and direct the given request to the target home node.
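A sketch of the two-level mapping, using a keyed software hash in place of the hardware hash functions; cluster shapes and names are invented:

```python
import hashlib

def _h(addr, salt):
    # Stand-in for the hash functions in the mapping circuitry.
    return int(hashlib.blake2b(f"{salt}:{addr}".encode()).hexdigest(), 16)

def route(addr, clusters):
    cluster = clusters[_h(addr, "cluster") % len(clusters)]  # first hashing
    return cluster[_h(addr, "node") % len(cluster)]          # second hashing

clusters = [["HN0", "HN1"], ["HN2", "HN3"], ["HN4", "HN5"]]
print(route(0x8000_0040, clusters))  # a given address always maps to one home node
```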