Abstract:
A method, apparatus, and program for systematically testing the functionality of all connections in a multi-tiered bus system that connects a large number of processors. Each bus controller is instructed to send a test version of a snoop request to all of the other processors and to wait for the replies. If a connection is bad, the port associated with that connection will time out. Detection of a time-out will cause the initialization process to be halted until the problem can be isolated and resolved.
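As a purely illustrative aid, the following Python sketch models the described test flow in software. The controller interface (send_test_snoop, reply_received), the timeout value, and the mock controller class are assumptions made for the sketch, not details taken from the abstract.

```python
# Illustrative sketch only; the controller interface and timeout value
# are assumptions, not taken from the patent text.
import time

TIMEOUT_S = 0.5  # assumed per-port reply timeout

class PortTimeoutError(Exception):
    """Raised when a port fails to answer a test snoop in time."""

def test_all_connections(controllers):
    """Have every bus controller send a test snoop request to every
    other processor and wait for replies; a missing reply within the
    timeout marks that connection as bad and halts initialization."""
    for src in controllers:
        for dst in controllers:
            if dst is src:
                continue
            start = time.monotonic()
            src.send_test_snoop(dst)
            while not src.reply_received(dst):
                if time.monotonic() - start > TIMEOUT_S:
                    raise PortTimeoutError(f"{src.name} -> {dst.name} timed out")
                time.sleep(0.01)
    return True

if __name__ == "__main__":
    class MockController:
        """Stand-in controller whose links always reply immediately."""
        def __init__(self, name):
            self.name = name
            self._replies = set()
        def send_test_snoop(self, dst):
            self._replies.add(dst.name)
        def reply_received(self, dst):
            return dst.name in self._replies

    print(test_all_connections([MockController(f"bc{i}") for i in range(4)]))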
Abstract:
A data processing system includes a first plane including a first plurality of processing nodes, each including multiple processing units, and a second plane including a second plurality of processing nodes, each including multiple processing units. The data processing system also includes a plurality of point-to-point first tier links. Each processing node of the first and second pluralities includes one or more of the first tier links, each of which connects a pair of processing units within that same processing node for communication. The data processing system further includes a plurality of point-to-point second tier links. At least a first of the second tier links connects processing units in different ones of the first plurality of processing nodes, at least a second connects processing units in different ones of the second plurality of processing nodes, and at least a third connects a processing unit in the first plane to a processing unit in the second plane.
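For illustration only, the sketch below builds this two-plane, two-tier topology as plain data so the link structure can be inspected; the node and unit counts and the single cross-plane link are assumptions chosen for the example, not limits stated in the abstract.

```python
# Illustrative topology sketch; node/link counts are assumptions,
# not drawn from the patent claims.
from itertools import combinations

def build_topology(planes=2, nodes_per_plane=2, units_per_node=2):
    """Return (units, first_tier, second_tier) for a two-plane system.

    First tier links connect processing units inside the same node;
    second tier links connect units in different nodes, including at
    least one link between the two planes."""
    units = [(p, n, u) for p in range(planes)
                       for n in range(nodes_per_plane)
                       for u in range(units_per_node)]

    # First tier: connect the units within each processing node.
    first_tier = [(a, b) for a, b in combinations(units, 2)
                  if a[:2] == b[:2]]

    # Second tier: links between different nodes of the same plane ...
    second_tier = [(a, b) for a, b in combinations(units, 2)
                   if a[0] == b[0] and a[1] != b[1] and a[2] == b[2]]
    # ... plus at least one link joining a unit in plane 0 to plane 1.
    second_tier.append(((0, 0, 0), (1, 0, 0)))
    return units, first_tier, second_tier

if __name__ == "__main__":
    units, t1, t2 = build_topology()
    print(len(units), "units,", len(t1), "first tier,", len(t2), "second tier links")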
Abstract:
A data processing system includes a first processing node and a second processing node. The first processing node includes a plurality of first processing units coupled to each other for communication, and the second processing node includes a plurality of second processing units coupled to each other for communication. Each of the plurality of first processing units is coupled to a respective one of the plurality of second processing units in the second processing node by a respective one of a plurality of point-to-point links.
Abstract:
A data processing system includes an interconnect fabric, a protected resource having a plurality of banks each associated with a respective one of a plurality of address sets, a snooper that controls access to the resource, one or more masters that initiate requests, and interconnect logic coupled to the one or more masters and to the interconnect fabric. The interconnect logic regulates the rate at which requests targeting any one of the plurality of banks of the protected resource are delivered to the snooper via the interconnect fabric.
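A minimal sketch of per-bank rate regulation follows, using a token bucket per bank; the token-bucket policy, the address-to-bank mapping, and the class name BankRateRegulator are assumptions, since the abstract only states that the delivery rate to the snooper is regulated.

```python
# Sketch of per-bank request-rate regulation with a token bucket;
# the bucket mechanism itself is an assumption for illustration.
import time

class BankRateRegulator:
    def __init__(self, num_banks, rate_per_s, burst=1):
        self.rate = rate_per_s
        self.burst = burst
        self.tokens = [float(burst)] * num_banks
        self.last = [time.monotonic()] * num_banks

    def bank_for(self, address):
        """Map an address to a bank (one address set per bank)."""
        return address % len(self.tokens)

    def try_deliver(self, address):
        """Return True if a request targeting this address may be
        forwarded to the snooper now, False to hold it back."""
        bank = self.bank_for(address)
        now = time.monotonic()
        self.tokens[bank] = min(self.burst,
                                self.tokens[bank] + (now - self.last[bank]) * self.rate)
        self.last[bank] = now
        if self.tokens[bank] >= 1.0:
            self.tokens[bank] -= 1.0
            return True
        return False

if __name__ == "__main__":
    reg = BankRateRegulator(num_banks=4, rate_per_s=10.0, burst=2)
    print([reg.try_deliver(0x80) for _ in range(4)])  # later requests held back
```

Requests that come back False would simply be retried later, which is one plausible reading of "regulates a rate of delivery".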
Abstract:
An apparatus for scheduling transmission of a plurality of frames in a network having a plurality of nodes, each frame identified by a type designation, includes a schedule memory and a sequencer. The schedule memory stores a transmission time for each frame type and a list of frames to be transmitted. The sequencer is operable to access the schedule memory and initiate transmission of the frames in the list.
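The sketch below models the schedule memory and sequencer in Python; the dictionary-based schedule, the heap ordering by transmission time, and the callback used to initiate transmission are assumptions made for the illustration.

```python
# Behavioural sketch; data layout and names are assumptions.
import heapq

class ScheduleMemory:
    def __init__(self):
        self.tx_time_by_type = {}   # frame type -> transmission time
        self.frame_list = []        # (frame_id, frame_type) to transmit

    def set_time(self, frame_type, tx_time):
        self.tx_time_by_type[frame_type] = tx_time

    def add_frame(self, frame_id, frame_type):
        self.frame_list.append((frame_id, frame_type))

class Sequencer:
    def __init__(self, schedule, transmit):
        self.schedule = schedule
        self.transmit = transmit   # callback that actually sends a frame

    def run(self):
        """Order the listed frames by their type's transmission time
        and initiate transmission in that order."""
        queue = [(self.schedule.tx_time_by_type[ftype], fid, ftype)
                 for fid, ftype in self.schedule.frame_list]
        heapq.heapify(queue)
        while queue:
            tx_time, fid, ftype = heapq.heappop(queue)
            self.transmit(fid, ftype, tx_time)

if __name__ == "__main__":
    mem = ScheduleMemory()
    mem.set_time("sync", 10)
    mem.set_time("data", 20)
    mem.add_frame("f1", "data")
    mem.add_frame("f2", "sync")
    Sequencer(mem, lambda fid, ftype, t: print(t, ftype, fid)).run()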
Abstract:
A method and system for dynamically selecting the accessible banks of memory per cycle within a banked cache memory. In accordance with the method and system of the present invention, the power applied to each bank of the banked cache memory is monitored in order to determine the maximum number of bank accesses that can be selected per cycle without degrading the power applied to any bank. No more than that maximum number of bank accesses is permitted in subsequent cycles, such that the number of accessible banks is dynamically selected to maximize bank accesses per cycle while maintaining acceptable power to each bank of memory.
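One way to picture the control loop, assuming a simple monitor that merely reports whether power was degraded last cycle and a step-wise adjustment policy (both assumptions, not claimed details):

```python
# Sketch of the control loop; the power-monitor interface and the
# adjustment policy are assumptions used only for illustration.
class BankAccessSelector:
    def __init__(self, num_banks):
        self.num_banks = num_banks
        self.max_per_cycle = num_banks  # start optimistic

    def observe_power(self, power_degraded):
        """Lower the per-cycle limit if the monitored power to any bank
        degraded last cycle; otherwise cautiously probe one higher."""
        if power_degraded and self.max_per_cycle > 1:
            self.max_per_cycle -= 1
        elif not power_degraded and self.max_per_cycle < self.num_banks:
            self.max_per_cycle += 1

    def select(self, requested_banks):
        """Permit no more than max_per_cycle accesses this cycle."""
        return requested_banks[:self.max_per_cycle]

if __name__ == "__main__":
    sel = BankAccessSelector(num_banks=4)
    sel.observe_power(power_degraded=True)   # monitor flags degradation
    sel.observe_power(power_degraded=True)   # limit drops from 4 to 2
    print(sel.select(["bank0", "bank1", "bank3"]))  # ['bank0', 'bank1']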
Abstract:
An apparatus for accessing a banked embedded dynamic random access memory (DRAM) device is disclosed. The apparatus comprises general functional control logic and a bank RAS controller. The general functional control logic is coupled to each bank of the banked embedded DRAM device. The bank RAS controller, coupled to the general functional control logic, includes a rotating shift register having multiple bits, each corresponding to a respective bank of the banked embedded DRAM device. A first value within a bit of the rotating shift register allows accesses to the associated bank, and a second value denies accesses to the associated bank.
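A behavioural sketch of the rotating shift register follows; the register width, initial bit pattern, and one-position-per-cycle rotation are assumptions used only to show how a set bit permits access to its bank while a cleared bit denies it.

```python
# Sketch only; width, pattern, and rotation step are assumptions.
class BankRASController:
    def __init__(self, num_banks, initial_pattern):
        assert len(initial_pattern) == num_banks
        self.bits = list(initial_pattern)  # e.g. [1, 0, 0, 0]

    def access_allowed(self, bank):
        """A first value (1) allows access to the corresponding bank;
        a second value (0) denies it."""
        return self.bits[bank] == 1

    def rotate(self):
        """Rotate the register by one position each cycle."""
        self.bits = [self.bits[-1]] + self.bits[:-1]

if __name__ == "__main__":
    ctrl = BankRASController(4, [1, 0, 0, 0])
    for cycle in range(4):
        allowed = [b for b in range(4) if ctrl.access_allowed(b)]
        print("cycle", cycle, "banks allowed:", allowed)
        ctrl.rotate()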
Abstract:
An apparatus for providing concurrent communications between multiple memory devices and a processor is disclosed. Each of the memory devices includes a driver, a phase/cycle adjust sensing circuit, and bus alignment communication logic. Each phase/cycle adjust sensing circuit detects an occurrence of a cycle adjustment from the corresponding driver within its memory device. If a cycle adjustment is detected, the bus alignment communication logic communicates the occurrence to the processor and to the bus alignment communication logic in the other memory devices. The processor contains multiple receivers, each designed to receive data from a respective driver in a memory device, and each receiver includes a cycle delay block. The receiver that received notice of a cycle adjustment informs the other receivers, which did not, to use their cycle delay blocks to delay the incoming data by at least one cycle.
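The following sketch mimics the cycle-adjustment handshake among the receivers in software; the class and method names are invented for illustration, and the hardware construction of the cycle delay block is not modeled.

```python
# Behavioural sketch of the cycle-adjustment handshake; names are
# invented for illustration, not taken from the patent text.
class Receiver:
    def __init__(self, name):
        self.name = name
        self.delay_cycles = 0

    def note_peer_adjustment(self):
        """A peer receiver saw a cycle adjustment that this receiver did
        not; delay incoming data by at least one cycle to stay aligned."""
        self.delay_cycles = max(self.delay_cycles, 1)

def propagate_cycle_adjustment(adjusted_receiver, all_receivers):
    """The receiver that observed the cycle adjustment informs every
    other receiver to engage its cycle delay block."""
    for rx in all_receivers:
        if rx is not adjusted_receiver:
            rx.note_peer_adjustment()

if __name__ == "__main__":
    receivers = [Receiver(f"rx{i}") for i in range(3)]
    propagate_cycle_adjustment(receivers[1], receivers)
    print([(rx.name, rx.delay_cycles) for rx in receivers])
    # rx0 and rx2 now delay by one cycle; rx1 does not.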
Abstract:
A method and apparatus for forwarding data in a hierarchical cache memory architecture is disclosed. A cache memory hierarchy includes multiple levels of cache memories, each level having a different size and speed. A command is initially issued from a processor to the cache memory hierarchy. If the command is a Demand Load command, data corresponding to the Demand Load command is immediately forwarded from the cache holding the data to the processor. Otherwise, if the command is a Prefetch Load command, data corresponding to the Prefetch Load command is held in a cache reload buffer within the cache memory preceding the processor.
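A small sketch of the forwarding decision follows, assuming dictionary stand-ins for the cache levels and the cache reload buffer (assumptions for illustration, not the patented hardware design).

```python
# Sketch of the demand/prefetch forwarding decision; the cache and
# buffer objects are stand-ins, not the patented hardware.
DEMAND_LOAD = "demand"
PREFETCH_LOAD = "prefetch"

def handle_load(command, address, cache_levels, processor, reload_buffer):
    """Search the hierarchy for the line; forward it straight to the
    processor on a Demand Load, but park it in the cache reload buffer
    of the cache preceding the processor on a Prefetch Load."""
    for cache in cache_levels:              # L1, L2, L3, ... in order
        data = cache.get(address)
        if data is None:
            continue
        if command == DEMAND_LOAD:
            processor.append((address, data))      # immediate forward
        else:                                       # PREFETCH_LOAD
            reload_buffer[address] = data           # hold near the CPU
        return data
    return None                                     # miss in all levels

if __name__ == "__main__":
    caches = [dict(), {0x40: b"line"}]   # L1 miss, L2 hit
    cpu, crb = [], {}
    handle_load(DEMAND_LOAD, 0x40, caches, cpu, crb)
    handle_load(PREFETCH_LOAD, 0x40, caches, cpu, crb)
    print(cpu, crb)
```

Holding prefetched data in the reload buffer rather than forwarding it keeps speculative traffic from displacing data the processor explicitly demanded.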