Abstract:
A data processing system includes a first processing node and a second processing node coupled by an interconnect fabric. The first processing node includes a plurality of first processing units coupled to each other for communication, and the second processing node includes a plurality of second processing units coupled to each other for communication. The first processing units in the first processing node have a first mode in which the first processing units broadcast operations with a first scope limited to the first processing node and a second mode in which the first processing units of the first processing node broadcast operations with a second scope including the first processing node and the second processing node.
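To make the two broadcast modes concrete, here is a minimal Python sketch (not the patented interconnect logic; the class, attribute, and mode names are illustrative) of a processing unit whose broadcast scope is either limited to its own node or spans every node:

```python
# Illustrative sketch only: a processing unit that broadcasts an operation
# either within its own node (first mode) or system-wide (second mode).
from enum import Enum

class Scope(Enum):
    NODE = "node"        # first mode: limited to the local processing node
    SYSTEM = "system"    # second mode: spans the first and second nodes

class ProcessingUnit:
    def __init__(self, unit_id, node_id):
        self.unit_id = unit_id
        self.node_id = node_id
        self.mode = Scope.NODE

    def broadcast(self, op, all_units):
        """Return the units that receive 'op' under the current scope."""
        if self.mode == Scope.NODE:
            return [u for u in all_units if u.node_id == self.node_id]
        return list(all_units)

# Two nodes, two units each.
units = [ProcessingUnit(i, i // 2) for i in range(4)]
master = units[0]
print(len(master.broadcast("read", units)))   # 2: node-scoped broadcast
master.mode = Scope.SYSTEM
print(len(master.broadcast("read", units)))   # 4: system-scoped broadcast
```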
Abstract:
A data processing system includes a first processing node and a second processing node coupled by an interconnect fabric. The first processing node includes a plurality of first processing units coupled to each other for communication, and the second processing node includes a plurality of second processing units coupled to each other for communication. A first processing unit in the first processing node includes interconnect logic that processes a plurality of concurrently pending broadcast operations of differing broadcast scope. At least a first of the plurality of concurrently pending broadcast operations has a first scope limited to the first processing node, and at least a second of the plurality of concurrently pending broadcast operations has a second scope including the first processing node and the second processing node.
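As a rough illustration of concurrently pending broadcasts of differing scope, the following hedged Python sketch (the PendingOp/InterconnectLogic names are assumptions, not the patent's structures) tracks node-scoped and system-scoped operations in a single pending table:

```python
# Illustrative sketch only: a pending-operation table in which broadcasts
# of different scopes are outstanding at the same time.
from dataclasses import dataclass

@dataclass
class PendingOp:
    tag: int
    scope: str   # "node" or "system"
    op: str

class InterconnectLogic:
    def __init__(self):
        self.pending = {}

    def issue(self, tag, scope, op):
        self.pending[tag] = PendingOp(tag, scope, op)

    def complete(self, tag):
        return self.pending.pop(tag)

fabric = InterconnectLogic()
fabric.issue(1, "node", "read")      # scope limited to the first node
fabric.issue(2, "system", "rwitm")   # scope spans the first and second nodes
assert {p.scope for p in fabric.pending.values()} == {"node", "system"}
```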
Abstract:
A method, apparatus, and computer instructions are provided by the present invention to automatically recover from a failed node concurrent maintenance operation. Control logic is provided to send a first test command to the processors of a new node. If the first test command is successful, a second test command is sent to all processors, or to the remaining nodes if nodes are removed. If the second command is successful, system operation is resumed with the newly configured topology, with nodes either added or removed. If the response is incorrect or a timeout has occurred, the control logic restores values to the current mode register and sends a third test command to check for an error. A fatal system attention is sent to a service processor or system software if an error is encountered. If no error is encountered, system operation is resumed with the previously configured topology.
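The recovery sequence can be read as a small decision procedure. The sketch below is only a hedged restatement of that flow in Python; the command and callback names are stand-ins, not an actual firmware interface:

```python
# Hedged sketch of the recovery flow; all callables are assumed stand-ins.
def recover(send_test1, send_test2, restore_mode_register, send_test3,
            raise_fatal_attention):
    """Return which topology the system resumes with ("new" or "previous")."""
    if send_test1():                      # first test: processors of the new node
        if send_test2():                  # second test: all / remaining processors
            return "new"                  # resume with the newly configured topology
    # Incorrect response or timeout: back out the configuration change.
    restore_mode_register()               # restore current mode register values
    if send_test3():                      # third test command found no error
        return "previous"                 # resume with the previous topology
    raise_fatal_attention()               # error: notify service processor/software
    return "halted"

# Example: the second test fails, the third succeeds -> previous topology.
print(recover(lambda: True, lambda: False, lambda: None,
              lambda: True, lambda: None))
```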
Abstract:
A data processing system includes a first processing node and a second processing node. The first processing node includes a plurality of first processing units coupled to each other for communication, and the second processing node includes a plurality of second processing units coupled to each other for communication. Each of the plurality of first processing units is coupled to a respective one of the plurality of second processing units in the second processing node by a respective one of a plurality of point-to-point links.
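A tiny illustrative sketch (unit names assumed) of the one-to-one pairing of units across the two nodes over dedicated point-to-point links:

```python
# Illustrative only: each first-node unit couples to its counterpart in the
# second node over its own point-to-point link.
first_node_units = ["A0", "A1", "A2", "A3"]
second_node_units = ["B0", "B1", "B2", "B3"]

links = list(zip(first_node_units, second_node_units))
print(links)   # [('A0', 'B0'), ('A1', 'B1'), ('A2', 'B2'), ('A3', 'B3')]
```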
Abstract:
A distributed system structure for a large-way, symmetric multiprocessor system using a bus-based cache-coherence protocol is provided. The distributed system structure contains an address switch, multiple memory subsystems, and multiple master devices, either processors, I/O agents, or coherent memory adapters, organized into a set of nodes supported by a node controller. The node controller receives commands from a master device, communicates with a master device as another master device or as a slave device, and queues commands received from a master device. Due to pin limitations that may be caused by large buses, e.g., buses that support a high number of data pins, the node controller may be implemented such that the functionality for its address paths and data paths is implemented in physically separate components, chips, or circuitry, such as a node data controller or a node address controller. In this case, commands may be sent from the node address controller to the node data controller to control the flow of data through a node.
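The split between address-path and data-path components can be pictured as two cooperating objects, with the address controller issuing data-flow commands to the data controller. This is a hedged Python illustration under assumed class and command names, not the actual node controller design:

```python
# Illustrative sketch only: separate address-path and data-path components,
# with the address controller directing how data flows through the node.
class NodeDataController:
    def __init__(self):
        self.buffers = {}

    def execute(self, command):
        # command is an assumed tuple: (action, source_port, dest_port, tag)
        action, src, dst, tag = command
        self.buffers[dst] = (tag, src)
        return f"{action}: port {src} -> port {dst} (tag {tag})"

class NodeAddressController:
    def __init__(self, data_controller):
        self.data_controller = data_controller
        self.queue = []

    def receive(self, master_id, address):
        # Queue the command from a master device, then tell the data-path
        # component how the associated data should move through the node.
        self.queue.append((master_id, address))
        return self.data_controller.execute(("move", master_id, "memory", address))

ndc = NodeDataController()
nac = NodeAddressController(ndc)
print(nac.receive(master_id=2, address=0x1000))
```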
Abstract:
A method and system for high speed data access of a banked cache memory. In accordance with the method and system of the present invention, during a first cycle, in response to receipt of a request address at an access controller, the request address is speculatively transmitted to a banked cache memory, where the speculative transmission has at least one cycle of latency. Concurrently, the request address is snooped in a directory associated with the banked cache memory. Thereafter, during a second cycle the speculatively transmitted request address is distributed to each of multiple banks of memory within the banked cache memory. In addition, the banked cache memory is provided with a bank indication indicating which bank of memory among the multiple banks of memory contains the request address, in response to a bank hit from snooping the directory. Thereafter, data associated with the request address is output from the banked cache memory, in response to the bank indication, such that access time to a high latency remote banked cache memory is minimized.
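The two-cycle speculative access can be approximated by the following Python sketch; the cycle structure, directory, and bank representations are simplifications chosen for illustration, not the actual hardware pipeline:

```python
# Simplified, hedged model of the pipelined banked-cache access.
def banked_cache_access(request_addr, directory, banks):
    # Cycle 1: the address is sent speculatively toward the banked cache
    # (at least one cycle of latency) while the directory is snooped in parallel.
    speculative_addr = request_addr
    bank_hit = directory.get(request_addr)   # concurrent directory snoop

    # Cycle 2: the speculative address is distributed to every bank; the bank
    # indication from the directory selects which bank drives out data.
    if bank_hit is None:
        return None                          # miss: no bank indication
    return banks[bank_hit].get(speculative_addr)

directory = {0x40: 1}                        # address 0x40 lives in bank 1
banks = [{}, {0x40: "cache line data"}]
print(banked_cache_access(0x40, directory, banks))
```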
Abstract:
A communications system includes a plurality of network nodes including a first node having an interface unit. The interface unit includes a scheduler operable to schedule periodic transmission of a plurality of frames at respective designated times. Each of the plurality of frames includes a plurality of slots. The interface unit also includes an access system operable to designate which of the plurality of slots of a frame are accessible by each of the plurality of nodes and to access the designated slots.
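A minimal sketch of the frame/slot access designation, assuming an invented slot count and node names; it only illustrates that each node fills the slots designated to it within a periodically scheduled frame:

```python
# Illustrative sketch only: slot ownership within one periodically scheduled frame.
SLOTS_PER_FRAME = 8

# Access designation (assumed): slot index -> node allowed to use it.
slot_owner = {0: "node1", 1: "node1", 2: "node2", 3: "node3",
              4: "node2", 5: "node1", 6: "node3", 7: "node2"}

def build_frame(node_id, payloads):
    """Fill only the slots designated for node_id; other slots stay empty."""
    frame = [None] * SLOTS_PER_FRAME
    for slot, owner in slot_owner.items():
        if owner == node_id and payloads:
            frame[slot] = payloads.pop(0)
    return frame

# node1 transmits only in its designated slots of the frame.
print(build_frame("node1", ["pkt-a", "pkt-b", "pkt-c"]))
```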
Abstract:
A multi-chip processor apparatus includes multiple processor chips on a substrate. At least one of the multiple processor chips includes a die with a primary interconnect trunk that communicates information between multiple compute elements situated along the primary interconnect trunk. That processor chip includes a secondary interconnect trunk that may be oriented perpendicular to the primary interconnect trunk. The secondary interconnect trunk communicates information off-chip via a number of I/O interfaces at the perimeter of that processor chip. The secondary interconnect trunk intersects the primary interconnect trunk at an intersection at which a bus control element is located. The bus control element includes a primary trunk interface that couples to the primary interconnect trunk at the intersection to enable the bus control element to control on-chip communication among the compute elements via coherency signals on the primary interconnect trunk. The bus control element includes a secondary trunk interface coupled to the secondary interconnect trunk.
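As a loose illustration of the bus control element's two interfaces, the sketch below (names assumed) steers traffic either along the primary trunk to on-chip compute elements or onto the secondary trunk toward off-chip I/O:

```python
# Illustrative only: the bus control element sits where the trunks cross and
# routes traffic on-chip (primary trunk) or off-chip (secondary trunk).
ON_CHIP_COMPUTE_ELEMENTS = {"core0", "core1", "cache0", "cache1"}

def route(destination):
    if destination in ON_CHIP_COMPUTE_ELEMENTS:
        return "primary trunk"       # stays on-chip under coherency control
    return "secondary trunk"         # heads off-chip via a perimeter I/O interface

print(route("core1"))        # primary trunk
print(route("remote_node"))  # secondary trunk
```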
Abstract:
A symmetric multi-processing (SMP) processor includes a primary interconnect trunk for communication of information between multiple compute elements situated along the primary interconnect trunk. The processor also includes a secondary interconnect trunk that may be oriented perpendicular to the primary interconnect trunk. The secondary interconnect trunk communicates information off-chip via a number of I/O interfaces at the perimeter of the processor chip. The I/O interfaces may be distributed uniformly along portions of the perimeter.
Abstract:
A data processing system includes an interconnect fabric, a protected resource having a plurality of banks each associated with a respective one of a plurality of address sets, a snooper that controls access to the resource, one or more masters that initiate requests, and interconnect logic coupled to the one or more masters and to the interconnect fabric. The interconnect logic regulates a rate of delivery to the snooper, via the interconnect fabric, of requests that target any one of the plurality of banks of the protected resource.
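The per-bank rate regulation can be pictured as a simple throttle keyed by the address set a request falls into. The bank-selection rule and in-flight cap below are assumptions for illustration only:

```python
# Hedged sketch of per-bank delivery-rate regulation; not the actual logic.
BANKS = 4
MAX_IN_FLIGHT_PER_BANK = 2               # assumed cap on requests per bank

in_flight = [0] * BANKS

def bank_of(address):
    # Each bank is associated with one address set; here, assumed low-order bits.
    return (address >> 6) % BANKS

def try_issue(address):
    """Deliver the request to the snooper only if its bank is under the cap."""
    b = bank_of(address)
    if in_flight[b] >= MAX_IN_FLIGHT_PER_BANK:
        return False                     # interconnect logic holds the request back
    in_flight[b] += 1
    return True

def complete(address):
    in_flight[bank_of(address)] -= 1     # snooper finished; free a slot

# All three addresses map to bank 0, so the third is held back.
print(try_issue(0x000), try_issue(0x100), try_issue(0x200))  # True True False
```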