Abstract:
Data processing apparatus comprises a data access requesting node; data access circuitry to receive a data access request from the data access requesting node and to route the data access request for fulfilment by one or more data storage nodes selected from a group of two or more data storage nodes; and indication circuitry to provide a source indication to the data access requesting node, to indicate an attribute of the one or more data storage nodes which fulfilled the data access request; the data access requesting node being configured to vary its operation in response to the source indication.
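By way of an illustrative sketch only (not the claimed circuitry), the following C fragment models a requesting node that varies its operation, here a hypothetical prefetch depth, according to the source indication returned with each response; the enum values and the adjustment policy are assumptions introduced for the example.

```c
#include <stdio.h>

typedef enum { SRC_NEAR_CACHE, SRC_FAR_CACHE, SRC_MAIN_MEMORY } source_ind_t;

typedef struct {
    unsigned data;
    source_ind_t source;   /* attribute of the node that fulfilled the request */
} access_response_t;

typedef struct {
    int prefetch_depth;    /* example of an operation the requester varies */
} requester_t;

static void on_response(requester_t *rq, access_response_t rsp)
{
    /* Vary operation in response to the source indication: fetch further
     * ahead when data has been coming from slower storage. */
    switch (rsp.source) {
    case SRC_NEAR_CACHE:  rq->prefetch_depth = 1; break;
    case SRC_FAR_CACHE:   rq->prefetch_depth = 2; break;
    case SRC_MAIN_MEMORY: rq->prefetch_depth = 4; break;
    }
    printf("data=%u source=%d prefetch_depth=%d\n",
           rsp.data, rsp.source, rq->prefetch_depth);
}

int main(void)
{
    requester_t rq = { .prefetch_depth = 1 };
    on_response(&rq, (access_response_t){ .data = 42, .source = SRC_MAIN_MEMORY });
    return 0;
}
```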
Abstract:
Apparatus and a corresponding method of operating a hub device, and a target device, in a coherent interconnect system are presented. A cache pre-population request of a set of coherency protocol transactions in the system is received from a requesting master device specifying at least one data item, and the hub device responds by causing a cache pre-population trigger of the set of coherency protocol transactions specifying the at least one data item to be transmitted to a target device. This trigger can cause the target device to request that the specified at least one data item be retrieved and brought into cache. Since the target device can therefore decide whether or not to respond to the trigger, it does not receive cached data unsolicited, simplifying its configuration, whilst still allowing some data to be pre-cached.
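A minimal sketch of the request/trigger flow, assuming simple message structures and an illustrative accept/decline policy in the target device (neither of which is specified by the abstract), might look as follows.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct { unsigned long addr; } prepop_request_t;  /* from requesting master */
typedef struct { unsigned long addr; } prepop_trigger_t;  /* hub -> target */

/* Hub: respond to a pre-population request by causing a trigger to be
 * sent towards the target device; no data is pushed at this point. */
static prepop_trigger_t hub_handle_request(prepop_request_t req)
{
    return (prepop_trigger_t){ .addr = req.addr };
}

/* Target: free to ignore the trigger, so it never receives unsolicited
 * cached data.  Here it accepts only if it has spare cache capacity. */
static bool target_handle_trigger(prepop_trigger_t trig, unsigned free_lines)
{
    if (free_lines == 0)
        return false;                    /* decline: do nothing */
    printf("target issues its own read for 0x%lx to pre-fill its cache\n",
           trig.addr);
    return true;
}

int main(void)
{
    prepop_trigger_t t = hub_handle_request((prepop_request_t){ .addr = 0x1000 });
    target_handle_trigger(t, 8);
    return 0;
}
```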
Abstract:
A device for selecting requests to be serviced in a data processing apparatus has an arbitration stage for selecting an arbitrated request from a plurality of candidate requests and a hazard detection stage for performing hazard detection to predict whether the arbitrated request selected by the arbitration stage meets a hazard condition. If the arbitrated request meets the hazard condition, the hazard detection stage returns the arbitrated request to the arbitration stage for a later arbitration and sets a hazard indication for the returned request. The hazard detection stage also controls at least one other arbitrated request to be returned if it conflicts with a candidate request having the hazard indication set. This approach prevents denial of service to requests that have been hazarded.
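The two-stage selection can be sketched as follows; the oldest-first arbitration, the hazard test on matching line addresses and the field names are assumptions chosen for illustration, not the claimed logic.

```c
#include <stdbool.h>

#define N_REQ 4

typedef struct {
    bool valid;
    bool hazard_ind;        /* set when the request was previously hazarded */
    unsigned long line;     /* address the request operates on */
} request_t;

/* Stub hazard condition: pretend line 0x80 has a conflicting access outstanding. */
static bool line_is_hazarded(unsigned long line)
{
    return line == 0x80;
}

/* Arbitration stage: oldest-first selection among valid candidates
 * (index order stands in for age here). */
static int arbitrate(const request_t reqs[N_REQ])
{
    for (int i = 0; i < N_REQ; i++)
        if (reqs[i].valid)
            return i;
    return -1;
}

/* Hazard detection stage: either accept the arbitrated request, or return
 * it with its hazard indication set.  A winner that conflicts with a
 * candidate whose hazard indication is already set is also returned, so
 * it cannot repeatedly overtake the hazarded request. */
static bool hazard_stage(request_t reqs[N_REQ], int win)
{
    if (line_is_hazarded(reqs[win].line)) {
        reqs[win].hazard_ind = true;          /* return for later arbitration */
        return false;
    }
    for (int i = 0; i < N_REQ; i++)
        if (i != win && reqs[i].valid && reqs[i].hazard_ind &&
            reqs[i].line == reqs[win].line)
            return false;                     /* return the conflicting winner too */
    reqs[win].valid = false;                  /* accepted: request is serviced */
    return true;
}

int main(void)
{
    request_t reqs[N_REQ] = {
        { .valid = true, .line = 0x80 },      /* will be hazarded and returned */
        { .valid = true, .line = 0x40 },
    };
    int win = arbitrate(reqs);
    hazard_stage(reqs, win);
    return 0;
}
```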
Abstract:
Interconnect systems and methods of operating such systems are disclosed. A plurality of nodes coupled via a packet transport path form an interconnect, and the nodes provide ingress points to the interconnect for a plurality of packet sources. A central controller holds permitted rate indications for each of the plurality of packet sources, in accordance with which each packet source sends packets via the interconnect. The nodes each respond to a packet collision event at that node by sending a collision report to the central controller. In response, the central controller decreases, in respect of the collision pair of packet sources associated with the packet collision, the permitted rate indication of whichever packet source of the pair currently has the higher permitted rate indication. Periodically, the permitted rate indications of all of the packet sources are increased, subject to a maximum permitted rate indication for each.
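The central controller's adjustment policy can be sketched as below; the unit decrement and increment amounts and the data types are assumptions, the abstract only fixing which source of a colliding pair is slowed and that all rates are periodically raised towards a per-source maximum.

```c
#include <stdio.h>

#define N_SRC 4

static unsigned rate[N_SRC]     = { 8, 8, 8, 8 };     /* permitted rate indications */
static unsigned max_rate[N_SRC] = { 16, 16, 16, 8 };  /* per-source maxima */

/* A node reports a collision between two packet sources. */
static void on_collision_report(int a, int b)
{
    int victim = (rate[a] >= rate[b]) ? a : b;  /* the higher-rate source */
    if (rate[victim] > 1)
        rate[victim]--;                          /* decrease its permitted rate */
}

/* Run periodically: raise every source's permitted rate, capped at its maximum. */
static void periodic_increase(void)
{
    for (int i = 0; i < N_SRC; i++)
        if (rate[i] < max_rate[i])
            rate[i]++;
}

int main(void)
{
    on_collision_report(0, 1);
    periodic_increase();
    for (int i = 0; i < N_SRC; i++)
        printf("source %d permitted rate %u\n", i, rate[i]);
    return 0;
}
```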
Abstract:
An apparatus is provided for use with an interconnect comprising a home node. The apparatus includes general-purpose storage circuitry and specialised storage circuitry. Transfer circuitry performs a non-forwardable transfer of a data item from the general-purpose storage circuitry to the specialised storage circuitry. Transmit circuitry transmits an offer to the home node, at the time of the non-forwardable transfer, to transfer the data item to the home node. The apparatus is inhibited from forwarding the data item from the specialised storage circuitry to the home node.
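A rough sketch of the transfer-time offer, assuming the specialised storage is a structure from which data can never subsequently be supplied to the interconnect, is given below; the structure and function names are illustrative, not taken from the abstract.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct { unsigned long addr; unsigned value; bool valid; } line_t;

static line_t general_purpose;   /* e.g. ordinary cache storage */
static line_t specialised;       /* storage the apparatus may not forward from */

/* Offer sent to the home node at the time of the transfer; the home node
 * may accept (take a copy now) or decline (the data then stays unreachable
 * to it via this apparatus). */
static bool offer_to_home_node(const line_t *line)
{
    printf("offer addr 0x%lx to home node\n", line->addr);
    return true;   /* stand-in for the home node's own accept/decline logic */
}

static void non_forwardable_transfer(void)
{
    specialised = general_purpose;        /* move the data item */
    general_purpose.valid = false;
    offer_to_home_node(&specialised);     /* only opportunity to pass it on */
    /* From here on, no path exists to forward `specialised` to the home node. */
}

int main(void)
{
    general_purpose = (line_t){ .addr = 0x2000, .value = 7, .valid = true };
    non_forwardable_transfer();
    return 0;
}
```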
Abstract:
A requester issues a request specifying a target address indicating an addressed location in a memory system. A completer responds to the request. Tag error checking circuitry performs a tag error checking operation when the request issued by the requester is a tag-error-checking request specifying an address tag. The tag error checking operation comprises determining whether the address tag matches an allocation tag stored in the memory system associated with a block of one or more addresses comprising the target address specified by the tag-error-checking request. The requester and the completer communicate via a memory interface having at least one data signal path to exchange read data or write data between the requester and the completer; and at least one tag signal path, provided in parallel with the at least one data signal path, to exchange address tags or allocation tags between the requester and the completer.
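The tag error checking operation itself (not the physical signal paths) can be sketched as follows; the 16-byte tag granule and the flat table of allocation tags are assumptions chosen to make the example concrete.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TAG_GRANULE 16u                     /* one allocation tag per 16-byte block */
static uint8_t allocation_tag[1024];        /* allocation tags held in the memory system */

static uint8_t allocation_tag_for(uintptr_t target_addr)
{
    return allocation_tag[(target_addr / TAG_GRANULE) % 1024];
}

/* Tag-error-checking request: the address tag travels with the request
 * (over the tag signal path) alongside the read or write data. */
static bool tag_check(uintptr_t target_addr, uint8_t address_tag)
{
    return address_tag == allocation_tag_for(target_addr);
}

int main(void)
{
    allocation_tag[0x40 / TAG_GRANULE] = 0x5;
    printf("match=%d\n", tag_check(0x40, 0x5));   /* 1: tags match */
    printf("match=%d\n", tag_check(0x40, 0x3));   /* 0: tag error  */
    return 0;
}
```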
Abstract:
An interconnect apparatus comprises first node circuitry for performing first node operations to service data access requests in respect of a first range of memory addresses and second node circuitry for performing second node operations to service data access requests in respect of a second range of memory addresses. The interconnect apparatus also comprises interface circuitry to: receive a retry indication from the first node circuitry in respect of a data access request issued by requester circuitry and forward the retry indication to the requester circuitry; responsive to determining that the interface circuitry has capacity for the data access request, transmit a reissue capacity message to the requester circuitry; receive a reissued data access request from the requester circuitry; and issue the reissued data access request to the second node circuitry. The second node circuitry is responsive to receiving the reissued data access request to service the data access request.
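Seen from the interface circuitry, the retry/reissue handshake might be modelled as below; the single-slot capacity model and the function names are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct { unsigned long addr; } access_req_t;

static bool slot_free = false;                 /* interface capacity for a reissue */

static void send_retry_to_requester(access_req_t r) { printf("retry 0x%lx\n", r.addr); }
static void send_reissue_credit(void)               { printf("reissue capacity granted\n"); }
static void issue_to_second_node(access_req_t r)    { printf("second node services 0x%lx\n", r.addr); }

/* First node circuitry could not service the request: pass the retry on. */
static void on_retry_indication(access_req_t r)
{
    send_retry_to_requester(r);
}

/* Capacity becomes available in the interface circuitry. */
static void on_capacity_available(void)
{
    slot_free = true;
    send_reissue_credit();           /* tell the requester it may reissue */
}

/* Requester reissues: hand the request to the second node circuitry. */
static void on_reissued_request(access_req_t r)
{
    if (slot_free) {
        slot_free = false;
        issue_to_second_node(r);
    }
}

int main(void)
{
    access_req_t r = { .addr = 0x3000 };
    on_retry_indication(r);
    on_capacity_available();
    on_reissued_request(r);
    return 0;
}
```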
Abstract:
An apparatus for handling resets corresponding to multiple reset domains comprises a transport network interconnecting elements to enable data to be transferred from one element to another, ingress circuitry to couple elements to the transport network, and egress circuitry to couple the transport network to the elements. The ingress circuitry couples source elements to the transport network, and is responsive to receiving data from a source element to generate at least one transport packet in order to send that data over the transport network. Each transport packet comprises a reset domain indicator indicative of the reset domain in which the source element operates. The egress circuitry couples the transport network to destination elements and, whilst a reset of a particular reset domain is asserted, discards transport packets for which the reset domain indicator indicates the particular reset domain.
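The ingress tagging and egress filtering can be sketched as follows; the packet layout and the two-domain example are assumptions made for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    unsigned reset_domain;    /* reset domain of the source element */
    unsigned payload;
} transport_pkt_t;

static bool reset_asserted[4];   /* one flag per reset domain */

/* Ingress circuitry: wrap source data in a transport packet tagged with
 * the source element's reset domain. */
static transport_pkt_t ingress(unsigned src_domain, unsigned data)
{
    return (transport_pkt_t){ .reset_domain = src_domain, .payload = data };
}

/* Egress circuitry: whilst a domain's reset is asserted, discard packets
 * carrying that domain's indicator instead of delivering them. */
static void egress(transport_pkt_t pkt)
{
    if (reset_asserted[pkt.reset_domain]) {
        printf("discard packet from domain %u\n", pkt.reset_domain);
        return;
    }
    printf("deliver payload %u\n", pkt.payload);
}

int main(void)
{
    reset_asserted[1] = true;                 /* reset of domain 1 is asserted */
    egress(ingress(0, 11));                   /* delivered */
    egress(ingress(1, 22));                   /* discarded */
    return 0;
}
```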
Abstract:
An apparatus and method are provided for managing a cache. The cache is arranged to comprise a plurality of cache sections, where each cache section is powered independently of the other cache sections in the plurality of cache sections, and the apparatus has power control circuitry to control power to each of the cache sections. The power control circuitry is responsive to a trigger condition, indicative of an ability to operate the cache in a power saving mode, to perform a latency evaluation process to determine a latency indication for each of the cache sections, and to select which subset of the cache sections to power off in dependence on the latency indication. This allows the power saving realised by turning off one or more cache sections to be optimised taking into account the current system state.
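The selection step alone might be sketched as below; treating a lower latency indication as marking the section that is cheapest to lose is an assumption made for illustration, as the abstract does not fix the policy.

```c
#include <stdio.h>

#define N_SECTIONS 4

/* Stand-in for the latency evaluation process, e.g. measured or estimated
 * access latency attributable to each cache section. */
static unsigned latency_indication(int section)
{
    static const unsigned sample[N_SECTIONS] = { 3, 9, 2, 7 };
    return sample[section];
}

/* Power off `count` sections, preferring those with the lowest latency
 * indication, so the saving costs the least added latency. */
static void power_off_sections(int count)
{
    unsigned lat[N_SECTIONS];
    int off[N_SECTIONS] = { 0 };

    for (int i = 0; i < N_SECTIONS; i++)
        lat[i] = latency_indication(i);

    for (int n = 0; n < count && n < N_SECTIONS; n++) {
        int best = -1;
        for (int i = 0; i < N_SECTIONS; i++)
            if (!off[i] && (best < 0 || lat[i] < lat[best]))
                best = i;
        off[best] = 1;
        printf("power off cache section %d (latency indication %u)\n",
               best, lat[best]);
    }
}

int main(void)
{
    power_off_sections(2);    /* trigger condition: power saving mode entered */
    return 0;
}
```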
Abstract:
There is provided a data processing apparatus comprising processing circuitry to speculatively execute an instruction referencing a virtual address. Lookup circuitry receives the virtual address from the processing circuitry. The lookup circuitry comprises storage circuitry to store at least one virtual address and page walking circuitry to perform a page walk on further storage circuitry, in dependence on the virtual address being absent from the storage circuitry, to determine whether a correspondence between a physical address and the virtual address exists. The lookup circuitry signals an error when the correspondence cannot be found and, in response to the error being signalled, the storage circuitry stores an entry comprising the virtual address.
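A sketch of the lookup, page walk and error handling is given below; the use of a small array as the storage circuitry for faulting virtual addresses and the stub page walk are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

#define N_ENTRIES 8

typedef struct { bool valid; unsigned long va; } bad_va_entry_t;

static bad_va_entry_t stored_va[N_ENTRIES];   /* storage circuitry: known-faulting VAs */
static int next_slot;

/* Page walking circuitry: search the further storage circuitry (page
 * tables) for a physical address corresponding to `va`.  Stub: only
 * VA 0x1000 has a mapping. */
static bool page_walk(unsigned long va, unsigned long *pa)
{
    if (va == 0x1000) { *pa = 0x8000; return true; }
    return false;
}

/* Lookup for a speculatively executed instruction referencing `va`. */
static void lookup(unsigned long va)
{
    for (int i = 0; i < N_ENTRIES; i++)
        if (stored_va[i].valid && stored_va[i].va == va) {
            printf("0x%lx already known to fault: no page walk performed\n", va);
            return;
        }

    unsigned long pa;
    if (page_walk(va, &pa)) {
        printf("0x%lx -> 0x%lx\n", va, pa);
        return;
    }

    /* No correspondence found: signal the error and record the VA so a
     * repeated speculative access does not walk the tables again. */
    printf("error signalled for 0x%lx\n", va);
    stored_va[next_slot] = (bad_va_entry_t){ .valid = true, .va = va };
    next_slot = (next_slot + 1) % N_ENTRIES;
}

int main(void)
{
    lookup(0x1000);   /* translates */
    lookup(0x2000);   /* faults, VA recorded */
    lookup(0x2000);   /* hits the stored VA, no second walk */
    return 0;
}
```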