Abstract:
A computer system has a number of processing nodes that operate either in a loop configuration or off a common bus with high-speed, high-performance, wide-bandwidth characteristics. The processing nodes in the system are also interconnected by a separate narrow-bandwidth, low-frequency recovery bus. When problems arise with node operations on the high-speed bus, operations are transferred to the low-frequency recovery bus and continue there at a slower rate for recovery operations. The recovery technique may be used to increase system speed and performance on a dynamic basis.
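A minimal Python sketch of the fallback behavior described above, assuming a hypothetical ProcessingNode class and an invented cycle-count model; none of these names or numbers come from the abstract:

    class Bus:
        def __init__(self, name, words_per_cycle):
            self.name = name
            self.words_per_cycle = words_per_cycle

        def transfer(self, payload):
            # Cycles the transfer occupies on this bus (toy model).
            return max(1, len(payload) // self.words_per_cycle)

    class ProcessingNode:
        def __init__(self, node_id):
            self.node_id = node_id
            self.high_speed_bus = Bus("wide", words_per_cycle=16)
            self.recovery_bus = Bus("narrow", words_per_cycle=1)
            self.active_bus = self.high_speed_bus

        def detect_fault(self):
            return False  # placeholder fault detector

        def send(self, payload):
            try:
                if self.active_bus is self.high_speed_bus and self.detect_fault():
                    raise IOError("high-speed bus fault")
                return self.active_bus.transfer(payload)
            except IOError:
                # Problem on the high-speed bus: continue at a slower rate
                # on the separate low-frequency recovery bus.
                self.active_bus = self.recovery_bus
                return self.active_bus.transfer(payload)

    node = ProcessingNode(0)
    print(node.send(list(range(64))))   # 4 cycles on the wide bus while healthy
    node.detect_fault = lambda: True    # inject a fault on the high-speed bus
    print(node.send(list(range(64))))   # 64 cycles after falling back to the recovery bus
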
Abstract:
A data interconnect and routing mechanism reduces data communication latency, supports dynamic route determination based upon processor activity level/traffic, and implements an architecture that supports scalable improvements in communication frequencies. In one application, a data processing system includes first and second processing books, each including at least first and second processing units. Each of the first and second processing units has a respective first output data bus. The first output data bus of the first processing unit is coupled to the second processing unit, and the first output data bus of the second processing unit is coupled to the first processing unit. At least the first processing unit of the first processing book and the second processing unit of the second processing book each have a respective second output data bus. The second output data bus of the first processing unit of the first processing book is coupled to the first processing unit of the second processing book, and the second output data bus of the second processing unit of the second processing book is coupled to the second processing unit of the first processing book.
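The bus topology in this abstract can be written down as a small adjacency table. The following Python sketch uses invented book/unit labels and a toy neighbors() helper purely for illustration:

    # Each entry maps (book, unit) -> the (book, unit) its output data bus drives.
    first_output_bus = {
        ("book1", "unit1"): ("book1", "unit2"),  # intra-book cross-connections
        ("book1", "unit2"): ("book1", "unit1"),
        ("book2", "unit1"): ("book2", "unit2"),
        ("book2", "unit2"): ("book2", "unit1"),
    }

    second_output_bus = {
        ("book1", "unit1"): ("book2", "unit1"),  # inter-book connections
        ("book2", "unit2"): ("book1", "unit2"),
    }

    def neighbors(unit):
        """Units directly reachable from `unit` over its output data buses."""
        out = []
        if unit in first_output_bus:
            out.append(first_output_bus[unit])
        if unit in second_output_bus:
            out.append(second_output_bus[unit])
        return out

    print(neighbors(("book1", "unit1")))
    # [('book1', 'unit2'), ('book2', 'unit1')]
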
Abstract:
A processor book designed to support both commercial workloads and technical workloads based on a dynamic or static mechanism for reconfiguring the external wiring interconnect. The processor book is configured as a building block for commercial workload processing systems with external connector buses (ECBs). The processor book is also provided with routing logic that enables the ECBs to be utilized for either book-to-book routing or routing within the same processor book. A table-specific wiring scheme is provided for coupling the ECBs running off the chips of one MCM to the chips of the second MCM on the processor book, so that the chips of the first MCM are connected directly to the chips of a second MCM that is logically furthest away and vice versa. Once the wiring of the ECBs is completed according to the wiring scheme, the operational and functional characteristics reflect those of a processor book configured for technical workloads.
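As a rough illustration of the wiring scheme, the Python sketch below models "logically furthest away" as a reversed chip index and invents all function names and chip counts; it is not the wiring table from the abstract:

    def intra_book_ecb_wiring(chips_per_mcm=4):
        """Map (MCM, chip) -> (MCM, chip) for ECBs rewired within one processor book."""
        wiring = {}
        for chip in range(chips_per_mcm):
            furthest = chips_per_mcm - 1 - chip  # "logically furthest" as a reversed index
            wiring[("MCM0", chip)] = ("MCM1", furthest)
            wiring[("MCM1", chip)] = ("MCM0", furthest)
        return wiring

    def route_ecb(wiring, src, technical_mode):
        # Routing-logic sketch: in technical mode the ECB is used within the book;
        # otherwise it is left free to carry book-to-book traffic.
        if technical_mode:
            return ("intra-book", wiring[src])
        return ("book-to-book", src)

    table = intra_book_ecb_wiring()
    print(route_ecb(table, ("MCM0", 0), technical_mode=True))   # ('intra-book', ('MCM1', 3))
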
Abstract:
A data interconnect and routing mechanism reduces data communication latency, supports dynamic route determination based upon processor activity level/traffic, and implements an architecture that supports scalable improvements in communication frequencies. In one implementation, a data processing system includes at least first through third processing units, data storage coupled to the plurality of processing units, and an interconnect fabric. The interconnect fabric includes at least a first data bus coupling the first processing unit to the second processing unit and a second data bus coupling the third processing unit to the second processing unit so that the first and third processing units can transmit data traffic to the second processing unit. The data processing system further includes a control channel coupling the first and third processing units. The first processing unit requests approval from the third processing unit via the control channel to transmit a data communication to the second processing unit, and the third processing unit approves or delays transmission of the data communication in a response transmitted via the control channel.
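A toy Python model of the control-channel handshake described above; the ProcessingUnit class and its approve/delay policy are assumptions made for illustration only:

    class ProcessingUnit:
        def __init__(self, name):
            self.name = name
            self.busy_transmitting = False

        def request_approval(self, peer):
            # Ask `peer` over the control channel whether we may transmit now.
            return peer.respond()

        def respond(self):
            # Approve unless this unit is itself driving the shared data path,
            # in which case the requester must delay and retry.
            return "approve" if not self.busy_transmitting else "delay"

    first, second, third = ProcessingUnit("PU1"), ProcessingUnit("PU2"), ProcessingUnit("PU3")
    third.busy_transmitting = True
    print(first.request_approval(third))   # 'delay' -> PU1 retries before sending to PU2
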
Abstract:
A method for informing a processor of a selected order of transmission of data to the processor. The method comprises the steps of coupling system components via a data bus to the processor to effectuate data transfer, determining at the system component logic the order in which to transmit data to the processor, and issuing to the data bus a selected order bit concurrent with the data, wherein the selected order bit alerts the processor of the order and the data is transmitted in that order. In a preferred embodiment, the system component is a cache, and the method may involve receiving at the cache a preference of ordering for a read address/request from the processor. The preference order logic of the cache controller or a preference order logic component evaluates the preference of ordering desired by comparing the processor preference with other preferences, including the cache order preference. One preference order is selected, and the data is then retrieved from a cache line of the cache in the selected order.
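The following Python sketch illustrates resolving competing order preferences and driving a selected-order bit alongside the data; the tie-breaking policy and the critical-word-first ordering are examples chosen here, not details taken from the abstract:

    def critical_word_first(line, word):
        # One possible transmission order: rotate so the requested word goes out first.
        return line[word:] + line[:word]

    def transmit_with_selected_order(cache_line, processor_pref, cache_pref):
        # Preference-order logic (invented policy): the cache's own preference wins
        # when it conflicts with the processor's preference.
        selected = cache_pref if cache_pref is not None else processor_pref
        order_bit = 1 if selected == "critical_word_first" else 0
        data = critical_word_first(cache_line, word=2) if order_bit else cache_line
        # The selected-order bit is issued on the data bus concurrently with the data.
        return order_bit, data

    print(transmit_with_selected_order([0, 1, 2, 3], "sequential", "critical_word_first"))
    # (1, [2, 3, 0, 1])
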
Abstract:
A method for preferentially ordering the retrieval of data from a system component, such as a cache line of a cache. The method includes the steps of first encoding a set of bits with a processor-preferred order of data retrieval. In the cache embodiment, the set of bits is then sent along with the read request via the address bus to the cache. A modified cache controller having preference order logic or a preference order logic component interprets the set of bits and directs the retrieval of the requested data from the cache line according to the preferred order. In one embodiment, a hierarchical preference order is utilized. The preference order logic attempts to retrieve the data according to the highest preference order. If that preference order cannot be utilized due to other considerations, the next highest preference order is attempted.
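A minimal Python sketch of the hierarchical fallback described above, representing the decoded preference bits as a ranked list; the rankings and order names are invented:

    def select_order(ranked_preferences, order_is_usable):
        # Try the highest-ranked preference order first; if it cannot be used
        # due to other considerations, fall back to the next one.
        for order in ranked_preferences:
            if order_is_usable(order):
                return order
        return "default_sequential"

    preferences = ["reverse", "interleaved", "sequential"]
    print(select_order(preferences, lambda order: order != "reverse"))   # 'interleaved'
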
Abstract:
A method for selecting an order of data transmittal based on system bus utilization of a data processing system. The method comprises the steps of coupling system components to a processor within the data processing system to effectuate data transfer, dynamically determining, based on current system bus loading, an order in which to retrieve and transmit data from the system component to the processor, and informing the processor of the selected order by issuing to the data bus a plurality of selected order bits concurrent with the transmittal of the data, wherein the selected order bits alert the processor of the order and the data is transmitted in that order. In a preferred embodiment, the system component is a cache and a system monitor monitors the system bus usage/loading. When a read request appears at the cache, the modified cache controller preference order logic or a preference order logic component determines the order in which to transmit the data, wherein the order is selected to substantially optimize data bandwidth when system bus usage is high and to substantially optimize data latency when system bus usage is low.
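A Python sketch of the utilization-driven policy; the threshold value and the two order labels are assumptions, not figures from the abstract:

    HIGH_UTILIZATION_THRESHOLD = 0.75  # assumed cutoff between "low" and "high" loading

    def choose_transmit_order(bus_utilization):
        if bus_utilization >= HIGH_UTILIZATION_THRESHOLD:
            # Heavily loaded bus: pick an order that maximizes usable bandwidth,
            # e.g. streaming the whole line in back-to-back beats.
            return "bandwidth_optimized"
        # Lightly loaded bus: pick an order that minimizes latency to the word
        # the processor is waiting on.
        return "latency_optimized"

    print(choose_transmit_order(0.9))   # 'bandwidth_optimized'
    print(choose_transmit_order(0.2))   # 'latency_optimized'
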
Abstract:
A method for preferentially ordering the retrieval of data from a cache line of a cache within a vertical cache configuration. The method includes the steps of first encoding a set of bits with a processor-preferred order of data retrieval based on the cache configuration. The set of bits is then sent along with the read request via the address bus to the first cache. The cache directory is checked to see if a “hit” occurs (i.e., the data is present in that cache). If the data is present, a modified cache controller having preference order logic or a preference order logic component interprets the set of bits and directs the retrieval of the requested data from the cache line according to the preferred order for that cache. If no hit occurs (i.e., a miss), the read request and the preferred order set of bits are sent to the next level cache. In one embodiment, a single set of bits is utilized. The preference order logic encodes the set of bits with the preference order of the next level cache when a miss occurs, prior to sending the read request and the set of bits to the next level cache. When all levels of cache result in a miss, the read request is sent over the system bus with the preference order set of bits encoded for the system-wide preference.
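A toy Python walk of the vertical-cache lookup described above; the cache contents, the re-encoding rule, and the system-wide preference value are all invented for illustration:

    class Cache:
        def __init__(self, data, next_level_pref):
            self.data = data                      # address -> cache line
            self.next_level_pref = next_level_pref

        def retrieve(self, address, pref):
            return (self.data[address], pref)     # data returned in the preferred order

    def read_with_preference(address, caches, processor_pref, system_pref="system_default"):
        pref = processor_pref
        for cache in caches:
            if address in cache.data:             # directory check: a hit at this level
                return cache.retrieve(address, pref)
            pref = cache.next_level_pref          # miss: re-encode for the next level
        return ("system_bus_read", address, system_pref)   # all levels missed

    l1 = Cache({}, next_level_pref="l2_order")
    l2 = Cache({0x40: [1, 2, 3, 4]}, next_level_pref="l3_order")
    print(read_with_preference(0x40, [l1, l2], processor_pref="l1_order"))
    # ([1, 2, 3, 4], 'l2_order')
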