Abstract:
A data processing apparatus (2) comprises a first protocol domain A configured to operate under a write progress protocol and a second protocol domain B configured to operate under a snoop progress protocol. A deadlock condition is detected if a write target address for a pending write request issued from the first domain A to the second domain B is the same as a snoop target address of a pending snoop request issued from the second domain B to the first domain A. When the deadlock condition is detected, a bridge (4) between the domains may issue an early response to a selected one of the deadlocked write and snoop requests without waiting for the selected request to be serviced. The early response indicates to the domain that issued the selected request that the selected request has been serviced, enabling the other request to be serviced by the issuing domain.
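The bridge behaviour can be pictured with a small software model. The Python sketch below is illustrative only: the names (Bridge, PendingRequest, target_addr) are assumptions made for the example and do not appear in the abstract, and the early response is arbitrarily given to the write side.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PendingRequest:
    kind: str          # "write" (domain A -> B) or "snoop" (domain B -> A)
    target_addr: int   # address the request operates on

class Bridge:
    def __init__(self):
        self.pending_write: Optional[PendingRequest] = None
        self.pending_snoop: Optional[PendingRequest] = None

    def deadlocked(self) -> bool:
        # Deadlock: both requests are pending and hit the same address,
        # so each domain waits for the other before it will respond.
        return (self.pending_write is not None
                and self.pending_snoop is not None
                and self.pending_write.target_addr == self.pending_snoop.target_addr)

    def resolve(self) -> Optional[str]:
        # Issue an early response to one selected request (here the write)
        # without waiting for it to actually be serviced; domain A then
        # treats the write as done and can service the snoop from domain B.
        if self.deadlocked():
            return f"early WRITE response for addr {self.pending_write.target_addr:#x}"
        return None

# Example: a write and a snoop colliding on the same cache line.
bridge = Bridge()
bridge.pending_write = PendingRequest("write", 0x80)
bridge.pending_snoop = PendingRequest("snoop", 0x80)
print(bridge.resolve())
```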
Abstract:
A centralised synchronising device for determining progress of at least a subset of transaction requests that are transmitted through a data processing system. A system synchronising request is a request generated by one of a plurality of transaction-generating devices and queries the progress of a subset of the transaction requests. The synchronising device includes at least one port to and from the data processing system and multicast circuitry configured to output a plurality of synchronising requests for multicast to at least some of the devices within the data processing system, the synchronising requests querying the progress of the subset of the transaction requests. Gather circuitry collects responses to the synchronising requests confirming that the queried progress has occurred at the respective devices. The gather circuitry determines when responses to all of the synchronising requests have been received and then outputs a response to the system synchronising request.
Abstract:
A centralised synchronising device for determining progress of at least a subset of transaction requests that are transmitted through a data processing system in response to receipt of a system synchronising request, the data processing system having a plurality of devices including a plurality of transaction request generating devices for generating the transaction requests, a plurality of recipient devices for receiving the transaction requests, the synchronising device, and at least one interconnect for interconnecting at least some of the devices; wherein the system synchronising request comprises a request generated by one of the plurality of transaction generating devices and querying progress of the at least a subset of the transaction requests; the synchronising device comprising: at least one port for receiving requests from, and outputting requests and responses to, the data processing system; multicast circuitry configured to generate a plurality of synchronising requests in response to receipt of the system synchronising request and to output the plurality of synchronising requests for multicast to at least some of the devices within the data processing system, the synchronising requests querying the progress of the at least a subset of the transaction requests at each of the respective devices; and gather circuitry for collecting responses to the plurality of synchronising requests, the responses confirming that the queried progress has occurred at the respective device, the gather circuitry being configured to determine when responses to all of the plurality of synchronising requests have been received and, in response to determining that all of the responses have been received, to output a response to the system synchronising request.
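The multicast and gather behaviour lends itself to a short software model. The Python sketch below is purely illustrative, assuming a fixed list of recipient devices and a simple callable transport; names such as SyncDevice and on_sync_response are invented for the example and are not taken from the text above.

```python
class SyncDevice:
    def __init__(self, device_ids):
        self.device_ids = list(device_ids)   # devices to be queried
        self.outstanding = set()             # devices still to respond

    def on_system_sync_request(self, send):
        # Multicast: one synchronising request per device, each querying
        # whether the tracked subset of transactions has progressed there.
        self.outstanding = set(self.device_ids)
        for dev in self.device_ids:
            send(dev, "sync-query")

    def on_sync_response(self, dev) -> bool:
        # Gather: a response confirms the queried progress at that device.
        # Only when every response has arrived is the original system
        # synchronising request answered.
        self.outstanding.discard(dev)
        return not self.outstanding   # True -> respond to the requester

# Example with three recipient devices.
sync = SyncDevice(["dev0", "dev1", "dev2"])
sync.on_system_sync_request(lambda dev, msg: print(f"-> {dev}: {msg}"))
for dev in ["dev0", "dev1", "dev2"]:
    if sync.on_sync_response(dev):
        print("all responses gathered: respond to system synchronising request")
```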
Abstract:
An integrated circuit 2 includes a plurality of transaction sources 6, 8, 10, 12, 14, 16, 18, 20 communicating via a ring-based interconnect 30 with shared caches 22, 24 each having an associated POC/POS 30, 34 and serving as a request servicing circuit. The request servicing circuits have a set of processing resources 36 that may be allocated to different transactions. These processing resources may be allocated either dynamically or statically. Static allocation can be made in dependence upon a selection algorithm. This selection algorithm may use a quality of service value/priority level as one of its input variables. A starvation ratio may also be defined such that lower priority levels are forced to be selected if they are starved of allocation for too long. A programmable mapping may be made between quality of service values and priority levels. The maximum number of processing resources allocated to each priority level may also be programmed.
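As an illustration of how the programmable QoS-to-priority mapping, the per-level resource caps and the starvation ratio could interact, the following Python sketch models one possible selection algorithm. The table contents, the counter scheme and the convention that level 0 is the highest priority are all assumptions made for the example, not details taken from the abstract.

```python
QOS_TO_PRIORITY = {0: 0, 1: 0, 2: 1, 3: 2}       # programmable QoS value -> priority level
MAX_RESOURCES_PER_PRIORITY = {0: 4, 1: 2, 2: 2}  # programmable per-level resource cap
STARVATION_RATIO = 8   # force a lower-priority level after this many losses

in_use = {0: 0, 1: 0, 2: 0}       # processing resources currently allocated per level
starved_for = {0: 0, 1: 0, 2: 0}  # consecutive times each level lost arbitration

def select(candidate_qos_values):
    # Map each candidate transaction's QoS value to a priority level
    # (level 0 treated as the highest priority here).
    levels = sorted({QOS_TO_PRIORITY[q] for q in candidate_qos_values})
    # A level starved for too long pre-empts normal highest-first selection.
    forced = [lvl for lvl in levels if starved_for[lvl] >= STARVATION_RATIO]
    for lvl in forced + levels:
        if in_use[lvl] < MAX_RESOURCES_PER_PRIORITY[lvl]:
            in_use[lvl] += 1
            starved_for[lvl] = 0
            for other in levels:
                if other != lvl:
                    starved_for[other] += 1
            return lvl
    return None   # no processing resource available this cycle

print(select([0, 2, 3]))   # normally picks the highest-priority level with room
```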
Abstract:
A data processing system that manages data hazards at a coherency controller and not at an initiator device is disclosed. The data processing system processes write requests in a two-part form, such that a first part is transmitted and, when the coherency controller has space to accept the data, it responds to the first part; the data and the state of the data prior to the write are then sent as a second part of the write request. When there are copending reads and writes to the same address, the writes are stalled by the coherency controller by not responding to the first part of the write, and the initiator device proceeds to process any snoop requests received to the address of the write regardless of the fact that the write is pending. When the pending read has completed, the coherency controller will respond to the first part of the write and the initiator device will complete the write by sending the data and an indicator of the state of the data following the snoop. The coherency controller can then avoid any potential data hazard by using this information to update memory as required.
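One way to picture the two-part write handshake and the stall on a copending read is the Python model below. It is a sketch only: the class names and the simplification that the controller's response to the first part immediately triggers the second part are assumptions made for illustration.

```python
class CoherencyController:
    def __init__(self):
        self.pending_reads = set()    # addresses with an outstanding read/snoop
        self.stalled_writes = []      # first parts not yet acknowledged

    def write_part1(self, initiator, addr):
        # Part 1 of a write is only answered when no read to the same
        # address is in flight; otherwise the write is stalled. The
        # response is modelled here as directly triggering part 2.
        if addr in self.pending_reads:
            self.stalled_writes.append((initiator, addr))
        else:
            initiator.send_write_part2(addr)

    def read_complete(self, addr):
        # Once the copending read finishes, release any stalled write so
        # the initiator sends the data plus its post-snoop state.
        self.pending_reads.discard(addr)
        for initiator, a in [w for w in self.stalled_writes if w[1] == addr]:
            self.stalled_writes.remove((initiator, a))
            initiator.send_write_part2(a)

class Initiator:
    def send_write_part2(self, addr):
        # Part 2 carries the data and an indicator of its coherency state
        # after any snoops processed while the write was pending.
        print(f"part 2 for {addr:#x}: data + post-snoop state")

controller = CoherencyController()
cpu = Initiator()
controller.pending_reads.add(0x40)
controller.write_part1(cpu, 0x40)   # stalled: a read to 0x40 is still pending
controller.read_complete(0x40)      # part 1 is now answered, part 2 is sent
```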
Abstract:
A data processing system that manages data hazards at a coherency controller and not at an initiator device is disclosed. Write requests are processed in a two-part form, such that a first part is transmitted and, when the coherency controller has space to accept the data, the data and a state of the data prior to the write are sent as a second part of the write request. When there are copending reads and writes to the same address, writes are stalled by not responding to the first part of a write request, and snoop requests received to the address are processed regardless of the fact that the write is pending. When the pending read has completed, the coherency controller will respond to the first part of the write request and the initiator device will complete the write by sending the data and a state indicator following the snoop.
Abstract:
Initiator devices for generating transaction requests and recipient devices for receiving them are disclosed. The recipient devices accept transaction requests when there is buffer storage available for the transaction request. If there is no storage space available, an acknowledgement signal generator generates and outputs a reject acknowledgement signal indicating that a request has been received but has not been accepted by the recipient device. A credit generator can reserve at least one available storage location in the buffer and generate a credit grant for an initiator device that sent one of the transaction requests that was not accepted by the recipient device. The credit grant indicates to the initiator device that there is at least one reserved storage location, such that a subsequent transaction request from the initiator device will be accepted by the recipient device. Thus, the initiator device does not retransmit the transaction request until it has received the credit grant, whereupon it may transmit the request along with a credit grant indicator in the knowledge that it will be accepted.
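The reject-acknowledge and credit-grant handshake can be modelled in a few lines of Python. The sketch below is illustrative only; the buffer size, the reserved-slot bookkeeping and the method names are assumptions made for the example, not features recited in the abstract.

```python
from collections import deque

class Recipient:
    def __init__(self, capacity):
        self.buffer = deque()
        self.capacity = capacity
        self.reserved = 0   # slots held back for credited initiators

    def receive(self, initiator, request, has_credit=False):
        free = self.capacity - len(self.buffer) - self.reserved
        if has_credit:
            # A credited retry consumes its reserved slot and must succeed.
            self.reserved -= 1
            self.buffer.append(request)
            return "accept"
        if free > 0:
            self.buffer.append(request)
            return "accept"
        # No space: reject-acknowledge so the initiator knows the request
        # was received but not accepted, and will wait for a credit.
        initiator.rejected(request)
        return "reject-ack"

    def grant_credit(self, initiator):
        # Reserve a storage location and tell the initiator it may retry.
        if self.capacity - len(self.buffer) - self.reserved > 0:
            self.reserved += 1
            initiator.credit_granted()

class Initiator:
    def __init__(self):
        self.retry_queue = deque()
        self.has_credit = False
    def rejected(self, request):
        self.retry_queue.append(request)   # hold until a credit arrives
    def credit_granted(self):
        # With the credit held, a resent request is guaranteed acceptance.
        self.has_credit = True

recipient = Recipient(capacity=1)
sender = Initiator()
print(recipient.receive(sender, "req-A"))    # accept (buffer now full)
print(recipient.receive(sender, "req-B"))    # reject-ack, no space
recipient.buffer.popleft()                   # a slot frees up later
recipient.grant_credit(sender)               # reserve the slot, grant credit
print(recipient.receive(sender, sender.retry_queue.popleft(), has_credit=True))
```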