Abstract:
A computer system comprising a number of processing nodes operates either in a loop configuration or off of a common bus with high-speed, high-performance, wide-bandwidth characteristics. The processing nodes in the system are also interconnected by a separate narrow-bandwidth, low-frequency recovery bus. When problems arise with node operations on the high-speed bus, operations are transferred to the low-frequency recovery bus and continue there at a slower rate for recovery operations. The recovery technique may be used to increase system speed and performance dynamically.
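For illustration only, the following Python sketch (all class and method names are hypothetical, not taken from the patent) models the failover behavior described above: traffic prefers the wide high-speed bus and falls back to the narrow recovery bus when the high-speed bus reports a problem.

# Hypothetical sketch of high-speed/recovery bus failover; names are illustrative only.

class Bus:
    def __init__(self, name, bandwidth):
        self.name = name
        self.bandwidth = bandwidth   # arbitrary units per cycle
        self.healthy = True

    def transfer(self, payload):
        if not self.healthy:
            raise IOError(f"{self.name} unavailable")
        return f"{len(payload)} bytes sent on {self.name} at bandwidth {self.bandwidth}"


class Node:
    def __init__(self, node_id, main_bus, recovery_bus):
        self.node_id = node_id
        self.main_bus = main_bus          # wide, high-frequency bus
        self.recovery_bus = recovery_bus  # narrow, low-frequency bus

    def send(self, payload):
        # Prefer the high-speed bus; fall back to the recovery bus on failure.
        try:
            return self.main_bus.transfer(payload)
        except IOError:
            return self.recovery_bus.transfer(payload)


main = Bus("wide-bus", bandwidth=128)
recovery = Bus("recovery-bus", bandwidth=8)
node = Node(0, main, recovery)
print(node.send(b"data"))      # uses the high-speed bus
main.healthy = False           # simulate a problem on the high-speed bus
print(node.send(b"data"))      # operations continue on the recovery bus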
Abstract:
A data interconnect and routing mechanism reduces data communication latency, supports dynamic route determination based upon processor activity level/traffic, and implements an architecture that supports scalable improvements in communication frequencies. In one application, a data processing system includes first and second processor books, each including at least first and second processing units. Each of the first and second processing units has a respective first output data bus. The first output data bus of the first processing unit is coupled to the second processing unit, and the first output data bus of the second processing unit is coupled to the first processing unit. At least the first processing unit of the first processor book and the second processing unit of the second processor book each have a respective second output data bus. The second output data bus of the first processing unit of the first processor book is coupled to the first processing unit of the second processor book, and the second output data bus of the second processing unit of the second processor book is coupled to the second processing unit of the first processor book.
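The wiring described above can be pictured with a small Python sketch; the book and unit identifiers and the breadth-first route helper are illustrative assumptions, not part of the claimed design.

from collections import deque

# Illustrative (book, unit) wiring tables for the first and second output
# data buses described above; identifiers are hypothetical.
first_bus = {
    ("book0", "unit0"): ("book0", "unit1"),   # intra-book links
    ("book0", "unit1"): ("book0", "unit0"),
    ("book1", "unit0"): ("book1", "unit1"),
    ("book1", "unit1"): ("book1", "unit0"),
}
second_bus = {
    ("book0", "unit0"): ("book1", "unit0"),   # inter-book links
    ("book1", "unit1"): ("book0", "unit1"),
}

def route(src, dst):
    """Breadth-first search over the unidirectional buses above."""
    edges = {}
    for table in (first_bus, second_bus):
        for s, d in table.items():
            edges.setdefault(s, []).append(d)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route(("book0", "unit0"), ("book1", "unit1")))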
Abstract:
A processor book is designed to support both commercial workloads and technical workloads based on a dynamic or static mechanism for reconfiguring the external wiring interconnect. The processor book is configured as a building block for commercial-workload processing systems with external connector buses (ECBs). The processor book is also provided with routing logic that enables the ECBs to be utilized for either book-to-book routing or routing within the same processor book. A table-specific wiring scheme is provided for coupling the ECBs running off the chips of one MCM to the chips of the second MCM on the processor book, so that the chips of the first MCM are connected directly to the chips of the second MCM that are logically furthest away, and vice versa. Once the wiring of the ECBs is completed according to the wiring scheme, the operational and functional characteristics reflect those of a processor book configured for technical workloads.
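As a rough illustration, the sketch below models one possible reading of the wiring scheme, pairing each chip of the first MCM with the mirror-position ("logically furthest") chip of the second MCM; the chip count and pairing rule are assumptions made for illustration only.

# Hypothetical sketch of the ECB wiring scheme described above.
CHIPS_PER_MCM = 4

def ecb_wiring(chips_per_mcm=CHIPS_PER_MCM):
    """Return a list of (MCM0 chip, MCM1 chip) ECB connections."""
    wiring = []
    for chip in range(chips_per_mcm):
        # "Logically furthest" is modeled here as the mirror-image position.
        partner = chips_per_mcm - 1 - chip
        wiring.append((("MCM0", chip), ("MCM1", partner)))
    return wiring

for a, b in ecb_wiring():
    print(f"{a} <-> {b}")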
Abstract:
A data interconnect and routing mechanism reduces data communication latency, supports dynamic route determination based upon processor activity level/traffic, and implements an architecture that supports scalable improvements in communication frequencies. In one implementation, a data processing system includes at least first through third processing units, data storage coupled to the plurality of processing units, and an interconnect fabric. The interconnect fabric includes at least a first data bus coupling the first processing unit to the second processing unit and a second data bus coupling the third processing unit to the second processing unit so that the first and third processing units can transmit data traffic to the second processing unit. The data processing system further includes a control channel coupling the first and third processing units. The first processing unit requests approval from the third processing unit via the control channel to transmit a data communication to the second processing unit, and the third processing unit approves or delays transmission of the data communication in a response transmitted via the control channel.
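A minimal Python sketch of the control-channel handshake described above (the class names and the simple busy/idle policy are assumptions): the sender asks the other upstream unit for approval before transmitting to the shared downstream unit, and the reply either approves or delays the transfer.

class ControlChannel:
    def __init__(self, arbiter):
        self.arbiter = arbiter

    def request_approval(self, sender_id):
        return self.arbiter.decide(sender_id)


class ProcessingUnit:
    def __init__(self, unit_id, busy=False):
        self.unit_id = unit_id
        self.busy = busy           # e.g. its own traffic toward the shared unit

    def decide(self, sender_id):
        # Approve immediately if idle; otherwise ask the sender to delay.
        return "approve" if not self.busy else "delay"

    def send_data(self, data, destination, channel):
        if channel.request_approval(self.unit_id) == "approve":
            destination.receive(data)
            return True
        return False               # caller retries later


class Sink:
    def receive(self, data):
        print("received", data)


third = ProcessingUnit("PU3", busy=False)
first = ProcessingUnit("PU1")
second = Sink()
channel = ControlChannel(third)
first.send_data("cache line", second, channel)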
Abstract:
A data processing system includes an interconnect and first and second nodes, coupled to the interconnect, that each include at least one agent. Each agent within the first and second nodes outputs a snoop response in response to snooping a transaction on the interconnect. Utilizing the snoop response of each agent within the first node, first response logic within the first node produces a first cumulative combined response. This first cumulative combined response is then combined by second response logic in the second node with the snoop response of each agent in the second node to produce a second cumulative combined response. After a complete combined response is obtained in this manner, the complete combined response is distributed to all nodes so that each agent can determine its response, if any, to the transaction.
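The node-by-node accumulation can be sketched as follows; the response names and priority ordering are illustrative assumptions rather than the encoding used in the system.

# Illustrative combined-response accumulation across nodes.
PRIORITY = {"null": 0, "shared": 1, "retry": 2}   # higher value wins

def combine(resp_a, resp_b):
    return resp_a if PRIORITY[resp_a] >= PRIORITY[resp_b] else resp_b

def node_combined_response(agent_snoop_responses, incoming="null"):
    """Response logic in one node: fold the node's snoop responses into the
    cumulative combined response received from the preceding node."""
    result = incoming
    for resp in agent_snoop_responses:
        result = combine(result, resp)
    return result

# Node 1 combines its agents' responses; node 2 folds its own on top.
first = node_combined_response(["null", "shared"])
complete = node_combined_response(["null", "null"], incoming=first)
print(complete)          # the complete combined response is then distributed to all nodes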
Abstract:
A data processing system includes a plurality of nodes, which each contain at least one agent and each have an associated node identifier, and memory distributed among the plurality of nodes. The data processing system further includes an interconnect containing a segmented data channel, where each node contains a segment of the segmented data channel and each segment is coupled to at least one other segment by destination logic. In response to snooping a write request of a master agent on the interconnect, a target agent that will service the write request places its node identifier in a snoop response. When the master agent receives the combined response, which contains the node identifier of the target agent, the master agent issues on the segmented data channel a write data transaction specifying the node identifier of the target agent as a destination identifier. In response to receipt of the write data transaction, the destination logic transmits the write data transaction to the next segment only if the destination identifier does not match the node identifier of the node containing the current segment.
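The forwarding rule applied by the destination logic can be sketched in a few lines of Python; the ring of four segments and the node identifiers are illustrative.

class Segment:
    def __init__(self, node_id):
        self.node_id = node_id
        self.next = None           # destination logic toward the next segment
        self.delivered = []

    def accept(self, dest_id, payload):
        if dest_id == self.node_id:
            self.delivered.append(payload)     # consume locally; do not forward
        elif self.next is not None:
            self.next.accept(dest_id, payload) # forward to the next segment

# Build a small ring of segments, one per node.
segments = [Segment(i) for i in range(4)]
for i, seg in enumerate(segments):
    seg.next = segments[(i + 1) % len(segments)]

# The master on node 0 issues write data whose combined response named node 2 as target.
segments[0].accept(dest_id=2, payload="write data")
print(segments[2].delivered)       # ['write data']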
Abstract:
A data processing system includes an interconnect, a plurality of nodes coupled to the interconnect that each include at least one agent, response logic within each node, and a queue. In response to snooping a transaction on the interconnect, each agent outputs a snoop response. In addition, the queue, which has an associated agent, allocates an entry to service the transaction. The response logic within each node accumulates a partial combined response of its node and any preceding node until a complete combined response for all of the plurality of nodes is obtained. However, prior to the associated agent receiving the complete combined response, the queue speculatively deallocates the entry if the partial combined response indicates that an agent other than the associated agent will service the transaction.
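A small Python sketch of the speculative deallocation described above (the queue structure and transaction tags are hypothetical): the entry is freed as soon as the partial combined response shows that another agent will service the transaction, without waiting for the complete combined response.

class SnoopQueue:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.entries = {}                      # transaction tag -> state

    def allocate(self, tag):
        self.entries[tag] = "pending"

    def observe_partial_response(self, tag, servicing_agent):
        # Speculatively deallocate if some other agent will service the
        # transaction, before the complete combined response arrives.
        if servicing_agent is not None and servicing_agent != self.agent_id:
            self.entries.pop(tag, None)


q = SnoopQueue(agent_id="A0")
q.allocate(tag=17)
q.observe_partial_response(tag=17, servicing_agent="A3")
print(q.entries)       # {} -- entry freed before the complete combined response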
Abstract:
A symmetric multiprocessor system includes multiple processing nodes, each with multiple agents, connected to each other via an interconnect. A request transaction is initiated by a master agent in a master node to all receiving nodes. A write counter number is generated and associated with the request transaction. The master agent then waits for a combined response from the receiving nodes. After receipt of the combined response, a data packet is sent from the master agent to an intended one of the receiving nodes according to the combined response. After the data packet has been sent, the master agent in the master node is ready to send another request transaction along with a new write counter number, without needing to wait for an acknowledgement from the receiving node.
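The write-counter behavior can be sketched as follows; the fabric interface and the returned combined-response fields are assumptions made for the illustration.

class MasterAgent:
    def __init__(self):
        self.write_counter = 0

    def issue_write(self, fabric, data):
        tag = self.write_counter
        self.write_counter += 1                  # new number for the next write
        combined = fabric.broadcast_request(tag) # wait only for the combined response
        fabric.send_data(combined["target"], tag, data)
        return tag                               # no data acknowledgement is awaited


class Fabric:
    def broadcast_request(self, tag):
        # Receiving nodes snoop the request; one of them claims the write.
        return {"target": "node2", "tag": tag}

    def send_data(self, target, tag, data):
        print(f"data packet (write counter {tag}) -> {target}: {data}")


master = MasterAgent()
fabric = Fabric()
master.issue_write(fabric, "payload A")
master.issue_write(fabric, "payload B")   # issued immediately after the first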
Abstract:
A data processing system includes a plurality of nodes, which each contain at least one agent, and data storage accessible to agents within the nodes. The plurality of nodes are coupled by a non-hierarchical interconnect including multiple non-blocking uni-directional address channels and at least one uni-directional data channel. The agents, which are each coupled to and snoop transactions on all of the plurality of address channels, can only issue transactions on an associated address channel. The uni-directional channels employed by the present non-hierarchical interconnect architecture permit high-frequency pumped operation not possible with conventional bi-directional shared system buses. In addition, access latencies to remote (cache or main) memory incurred following local cache misses are greatly reduced as compared with conventional hierarchical systems because of the absence of inter-level (e.g., bus acquisition) communication latency. The non-hierarchical interconnect architecture also permits design flexibility in that the segment of the interconnect within each node can be independently implemented as a set of buses or as a switch, depending upon cost and performance considerations.
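The address-channel rule stated above, in which every agent snoops all address channels but issues transactions only on its own channel, can be sketched as follows (class and identifier names are illustrative assumptions).

class AddressChannel:
    def __init__(self, owner_id):
        self.owner_id = owner_id
        self.snoopers = []

    def issue(self, issuer_id, transaction):
        # Only the owning agent may issue on this channel.
        assert issuer_id == self.owner_id, "agents issue only on their own channel"
        for agent in self.snoopers:
            agent.snoop(transaction)


class Agent:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.channel = AddressChannel(agent_id)   # the channel this agent owns

    def attach(self, all_channels):
        for ch in all_channels:
            ch.snoopers.append(self)              # snoop every address channel

    def snoop(self, transaction):
        print(f"{self.agent_id} snooped {transaction}")


agents = [Agent(f"A{i}") for i in range(3)]
channels = [a.channel for a in agents]
for a in agents:
    a.attach(channels)

agents[0].channel.issue("A0", "read 0x1000")      # all agents snoop the transaction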