Abstract:
A method for avoiding data loss due to cancelled transactions within a non-uniform memory access (NUMA) data processing system is disclosed. A NUMA data processing system includes a node interconnect to which at least a first node and a second node are coupled. The first and second nodes each include a local interconnect, a system memory coupled to the local interconnect, and a node controller interposed between the local interconnect and the node interconnect. The node controller detects certain situations which, due to the nature of a NUMA data processing system, can lead to data loss. These situations share the common feature that a node controller ends up with the only copy of a modified cache line, while the original transaction that requested the modified cache line may not be issued again with the same tag, or may not be issued again at all. The node controller corrects these situations by issuing its own write transaction to the system memory for that modified cache line, using its own tag, and then supplying the modified cache line data that it is holding. This ensures that the modified data will be written to the system memory.
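The following is a minimal C sketch of the behavior described above, not the patented implementation: all type names, fields, and the write-transaction hook are assumptions introduced for illustration. It shows a node controller that, upon finding it holds the only copy of a modified cache line whose original requester will not retry with the same tag, issues its own write-back under its own tag so the modified data reaches system memory.

/* Hedged sketch: node controller write-back to avoid data loss.
 * All names here are hypothetical, not taken from the patent. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define LINE_BYTES 64

typedef struct {
    uint64_t address;                 /* cache-line-aligned address        */
    uint8_t  data[LINE_BYTES];        /* modified data held by controller  */
    bool     modified_copy_held;      /* controller has the only M copy    */
    bool     original_tag_reusable;   /* original request may reappear     */
} held_line_t;

/* Hypothetical hook standing in for a write transaction on the local bus. */
static void issue_write_transaction(uint32_t tag, uint64_t addr,
                                    const uint8_t *data)
{
    (void)tag; (void)addr; (void)data;   /* placeholder for bus signaling */
}

/* If the original transaction cannot be relied upon to collect the data,
 * the node controller writes the line back under its own tag. */
static void avoid_data_loss(held_line_t *line, uint32_t controller_tag)
{
    if (line->modified_copy_held && !line->original_tag_reusable) {
        issue_write_transaction(controller_tag, line->address, line->data);
        line->modified_copy_held = false;   /* system memory now has the data */
    }
}

int main(void)
{
    held_line_t line = { .address = 0x1000, .modified_copy_held = true,
                         .original_tag_reusable = false };
    memset(line.data, 0xAB, sizeof line.data);
    avoid_data_loss(&line, /*controller_tag=*/42);
    return 0;
}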
Abstract:
A non-uniform memory access (NUMA) data processing system includes a node interconnect to which at least a first processing node and a second processing node are coupled. The first and the second processing nodes each include a local interconnect, a processor coupled to the local interconnect, a system memory coupled to the local interconnect, and a node controller interposed between the local interconnect and the node interconnect. In order to reduce communication latency, the node controller of the first processing node speculatively transmits request transactions received from the local interconnect of the first processing node to the second processing node via the node interconnect. In one embodiment, the node controller of the first processing node subsequently transmits a status signal to the node controller of the second processing node in order to indicate how the request transaction should be processed at the second processing node.
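Below is a small C sketch of the speculative forwarding idea, under assumptions not stated in the abstract: the status signal is modeled as a simple two-valued proceed/cancel indication, and all names are hypothetical. It illustrates the remote node holding a speculatively forwarded request until the follow-up status tells it how to process the transaction.

/* Hedged sketch: speculative request forwarding followed by a status signal.
 * The two-valued status and all identifiers are assumptions. */
#include <stdint.h>
#include <stdio.h>

typedef enum { STATUS_PROCEED, STATUS_CANCEL } status_t;

typedef struct { uint32_t tag; uint64_t address; } request_t;

/* Remote node controller: holds the speculative request until status arrives. */
static void remote_handle(const request_t *req, status_t status)
{
    if (status == STATUS_PROCEED)
        printf("remote node services tag %u at 0x%llx\n",
               req->tag, (unsigned long long)req->address);
    else
        printf("remote node drops speculative tag %u\n", req->tag);
}

int main(void)
{
    request_t req = { .tag = 7, .address = 0x2000 };

    /* Step 1: the request is forwarded speculatively, before the local
     *         interconnect result is known, to hide interconnect latency. */
    /* Step 2: once the local outcome resolves, the status signal follows. */
    status_t local_outcome = STATUS_PROCEED;   /* e.g., no local cache satisfied it */
    remote_handle(&req, local_outcome);
    return 0;
}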
Abstract:
A queue includes a data multiplexer, having an output and at least two inputs, and a plurality of data latches. The data latches include at least a first data latch and a second data latch, which each have a data input and a data output. The data output of the first data latch is coupled to a first input of the data multiplexer, and the output of the data multiplexer is coupled to the data input of the second data latch. A data value to be stored in the queue is received at a second input to the data multiplexer. In response to one or more control signals, the data value is latched into at least one of the first and second data latches, thereby storing the data value in the queue. Depending upon the design of the control logic, the queue can implement either first in, first out (FIFO) or last in, first out (LIFO) behavior.
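The following is a behavioral C model of such a latch queue, assuming a particular control-logic design rather than reproducing the patented circuit: each latch can capture either newly presented data or its neighbor's output, and the control logic chooses which, yielding FIFO- or LIFO-style storage. Depth, field names, and the shift direction are illustrative assumptions.

/* Hedged behavioral sketch of a latch-and-mux queue with FIFO or LIFO control. */
#include <stdint.h>
#include <stdio.h>

#define DEPTH 4

typedef struct {
    uint32_t latch[DEPTH];   /* data latches                            */
    int      count;          /* entries currently held (control state)  */
} latch_queue_t;

/* FIFO-style store: each occupied latch captures its neighbor's output
 * (shift toward higher indices) while latch 0 captures the new data. */
static void fifo_store(latch_queue_t *q, uint32_t value)
{
    for (int i = q->count; i > 0; i--)
        q->latch[i] = q->latch[i - 1];
    q->latch[0] = value;
    q->count++;
}

/* FIFO-style remove: the oldest value sits in the highest occupied latch. */
static uint32_t fifo_remove(latch_queue_t *q)
{
    return q->latch[--q->count];
}

/* LIFO-style store: the next free latch captures the new data directly
 * through the data input of its multiplexer; no shifting is needed. */
static void lifo_store(latch_queue_t *q, uint32_t value)
{
    q->latch[q->count++] = value;
}

/* LIFO-style remove: the newest value is in the last-written latch. */
static uint32_t lifo_remove(latch_queue_t *q)
{
    return q->latch[--q->count];
}

int main(void)
{
    latch_queue_t q = { .count = 0 };
    fifo_store(&q, 1); fifo_store(&q, 2);
    printf("FIFO removes %u then %u\n", fifo_remove(&q), fifo_remove(&q)); /* 1, 2 */

    q.count = 0;
    lifo_store(&q, 1); lifo_store(&q, 2);
    printf("LIFO removes %u then %u\n", lifo_remove(&q), lifo_remove(&q)); /* 2, 1 */
    return 0;
}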
Abstract:
A non-uniform memory access (NUMA) computer system includes a node interconnect and a plurality of processing nodes that each contain at least one processor, a local interconnect, a local system memory, and a node controller coupled to both a respective local interconnect and the node interconnect. According to the method of the present invention, a communication transaction is transmitted on the node interconnect from a local processing node to a remote processing node. In response to receipt of the communication transaction by the remote processing node, a response including a coherency response field is transmitted on the node interconnect from the remote processing node to the local processing node. In response to receipt of the response at the local processing node, a request is issued on the local interconnect of the local processing node concurrently with a determination of a coherency response indicated by the coherency response field.
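A short C sketch of the concurrency point follows; the coherency response values and all identifiers are assumptions for illustration only. It shows the local node controller issuing the request on the local interconnect at the same point that it decodes the coherency response field, rather than serializing decode before issue.

/* Hedged sketch: issue on the local interconnect concurrently with decoding
 * the coherency response field. Names and enum values are assumptions. */
#include <stdint.h>
#include <stdio.h>

typedef enum { COH_NULL, COH_RETRY, COH_MODIFIED_INTERVENTION } coh_resp_t;

typedef struct {
    uint32_t   tag;
    coh_resp_t coherency_field;   /* coherency response carried in the reply */
} response_t;

static void issue_on_local_interconnect(uint32_t tag)
{
    printf("issuing request tag %u on local interconnect\n", tag);
}

static void handle_response(const response_t *rsp)
{
    /* Both actions are triggered by receipt of the response; in hardware
     * they proceed concurrently, hiding the decode latency. */
    issue_on_local_interconnect(rsp->tag);        /* does not wait on decode */
    coh_resp_t decoded = rsp->coherency_field;    /* decode in parallel      */

    if (decoded == COH_RETRY)
        printf("decode result: retry; the issued request is cancelled or reissued\n");
    else
        printf("decode result: %d; the issued request proceeds\n", (int)decoded);
}

int main(void)
{
    response_t rsp = { .tag = 3, .coherency_field = COH_MODIFIED_INTERVENTION };
    handle_response(&rsp);
    return 0;
}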
Abstract:
A method and apparatus for detecting and identifying the attributes of level-2 (L2) memory cache modules in a computer system. An ID module containing memory attribute information, such as size, presence or absence of parity, synchronous or asynchronous access capability, and electrical timing, is attached to each L2 cache memory module. The information is accessible using a parallel or serial interface.
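The following C sketch illustrates one possible encoding of such attribute information; the field layout, bit positions, and units are assumptions, not taken from the patent. It decodes a raw word as it might be read over a parallel interface or shifted in over a serial one.

/* Hedged sketch: a hypothetical encoding of L2 ID-module attribute data. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t size_kbytes;     /* cache size                           */
    bool     has_parity;      /* parity present on the data array     */
    bool     synchronous;     /* synchronous vs. asynchronous access  */
    uint8_t  access_time_ns;  /* electrical timing                    */
} l2_id_info_t;

/* Decode a raw ID word (assumed layout: size in 64 KB units in bits 15:0,
 * parity flag in bit 16, sync flag in bit 17, access time in bits 31:24). */
static l2_id_info_t decode_id_word(uint32_t raw)
{
    l2_id_info_t info = {
        .size_kbytes    = (raw & 0xFFFFu) * 64u,
        .has_parity     = (raw >> 16) & 1u,
        .synchronous    = (raw >> 17) & 1u,
        .access_time_ns = (uint8_t)((raw >> 24) & 0xFFu),
    };
    return info;
}

int main(void)
{
    l2_id_info_t info = decode_id_word(0x0F030008u);  /* 512 KB, sync + parity, 15 ns */
    printf("%u KB, parity=%d, sync=%d, %u ns\n",
           info.size_kbytes, info.has_parity, info.synchronous,
           info.access_time_ns);
    return 0;
}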
Abstract:
A system and a method are provided for implementing a power-saving sleep mode in a synchronous circuit core having multiple clock domains including primary and secondary clock domains. The primary clock domain has states of awake, asleep, doze, and waking. The doze and waking states are transient states between the awake and asleep states. One or more secondary clock domains each have states of secondary awake and secondary asleep. The doze and waking states are used to eliminate race conditions between the primary and secondary clock domains. If the core has two or more secondary clock domains, the secondary clock domains each have an additional state of sleep-pending. The sleep-pending state is a transient state between the secondary awake and secondary asleep states. One or more synchronization logics are coupled between the primary and secondary clock domains.
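The C sketch below enumerates the state sets described above; the transition triggers are assumptions added for illustration and do not reproduce the patented control logic. It models the primary domain passing through the transient doze and waking states, and a secondary domain passing through sleep-pending.

/* Hedged sketch of the primary and secondary sleep-mode state machines.
 * Transition conditions are illustrative assumptions. */
#include <stdio.h>

typedef enum { P_AWAKE, P_DOZE, P_ASLEEP, P_WAKING } primary_state_t;
typedef enum { S_AWAKE, S_SLEEP_PENDING, S_ASLEEP } secondary_state_t;

/* Primary domain: doze and waking are the transient states bridging awake
 * and asleep, giving secondary domains time to quiesce and avoiding races
 * across the clock-domain boundary. */
static primary_state_t primary_next(primary_state_t s,
                                    int sleep_req, int wake_req,
                                    int all_secondaries_asleep,
                                    int all_secondaries_awake)
{
    switch (s) {
    case P_AWAKE:  return sleep_req ? P_DOZE : P_AWAKE;
    case P_DOZE:   return all_secondaries_asleep ? P_ASLEEP : P_DOZE;
    case P_ASLEEP: return wake_req ? P_WAKING : P_ASLEEP;
    case P_WAKING: return all_secondaries_awake ? P_AWAKE : P_WAKING;
    }
    return s;
}

/* Secondary domain: sleep-pending is the transient state used when two or
 * more secondary domains must drain outstanding work before sleeping. */
static secondary_state_t secondary_next(secondary_state_t s,
                                        int primary_dozing, int work_drained,
                                        int primary_waking)
{
    switch (s) {
    case S_AWAKE:         return primary_dozing ? S_SLEEP_PENDING : S_AWAKE;
    case S_SLEEP_PENDING: return work_drained ? S_ASLEEP : S_SLEEP_PENDING;
    case S_ASLEEP:        return primary_waking ? S_AWAKE : S_ASLEEP;
    }
    return s;
}

int main(void)
{
    primary_state_t p = P_AWAKE;
    p = primary_next(p, 1, 0, 0, 0);   /* sleep requested   -> doze   */
    p = primary_next(p, 0, 0, 1, 0);   /* secondaries asleep -> asleep */
    printf("primary state: %d (2 == asleep)\n", (int)p);

    secondary_state_t s = S_AWAKE;
    s = secondary_next(s, 1, 0, 0);    /* primary dozing -> sleep-pending */
    printf("secondary state: %d (1 == sleep-pending)\n", (int)s);
    return 0;
}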
Abstract:
An electronic system is disclosed, including multiple initiators and one or more targets coupled to a bus, and a request mask control unit (RMCU). The initiators are configured to initiate requests (e.g., read requests and write requests) via the bus, and the targets are configured to receive requests from the initiators via the bus. The targets are also configured to produce multiple MaskEnable signals, wherein each of the MaskEnable signals is generated following an initial request received via the bus, and dependent on a corresponding “masking situation” within the target. The RMCU receives the MaskEnable signals and produces multiple RequestMask signals dependent upon the MaskEnable signals. One or more of the initiators are permitted to repeat requests via the bus dependent upon one or more of the RequestMask signals. This mechanism provides additional bus bandwidth for carrying out successful data transfers.
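Below is an illustrative C sketch of one way an RMCU might combine MaskEnable signals into RequestMask signals; the combining rule (mask an initiator while the target it last addressed still reports a masking situation), the signal widths, and all names are assumptions, not the patented logic.

/* Hedged sketch: deriving per-initiator RequestMask bits from per-target
 * MaskEnable signals. The mapping rule is an assumption. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_INITIATORS 4
#define NUM_TARGETS    2

/* Initiator i is masked while the target it last addressed asserts MaskEnable. */
static uint8_t rmcu_request_mask(const bool mask_enable[NUM_TARGETS],
                                 const int last_target[NUM_INITIATORS])
{
    uint8_t request_mask = 0;
    for (int i = 0; i < NUM_INITIATORS; i++)
        if (last_target[i] >= 0 && mask_enable[last_target[i]])
            request_mask |= (uint8_t)(1u << i);
    return request_mask;
}

int main(void)
{
    bool mask_enable[NUM_TARGETS]    = { true, false };   /* target 0 is busy   */
    int  last_target[NUM_INITIATORS] = { 0, 1, -1, 0 };   /* -1: no prior request */

    uint8_t mask = rmcu_request_mask(mask_enable, last_target);
    for (int i = 0; i < NUM_INITIATORS; i++)
        printf("initiator %d may %srepeat its request\n",
               i, (mask >> i) & 1u ? "not yet " : "");
    return 0;
}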
Abstract:
A method, system, and apparatus for maintaining the contents of a self-refreshable memory device during periods of data processing system reset are provided. In one embodiment, a refresh controller receives an indication that the data processing system is being reset. If necessary, the refresh controller modifies the signal from a memory controller to the memory device such that the memory device is placed in a self-refresh mode. The refresh controller keeps the memory device in the self-refresh mode until the data processing system re-enables external refresh signals.
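The following C sketch models this behavior under assumed signal names (a single clock-enable output and an external-refresh-resumed indication): on the reset indication, the refresh controller overrides the memory controller's interface so the memory device enters and stays in self-refresh until external refresh is re-enabled.

/* Hedged sketch: refresh controller holding a DRAM in self-refresh across reset.
 * Signal names are assumptions. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool mc_clock_enable;       /* clock enable as driven by the memory controller */
    bool external_refresh_on;   /* memory controller's refresh has been re-enabled */
} mem_ctrl_signals_t;

typedef struct { bool in_self_refresh; } refresh_ctrl_t;

/* Value of the clock-enable signal actually presented to the memory device. */
static bool refresh_ctrl_cke(refresh_ctrl_t *rc, bool system_reset,
                             const mem_ctrl_signals_t *mc)
{
    if (system_reset && !rc->in_self_refresh)
        rc->in_self_refresh = true;            /* force self-refresh entry */
    if (rc->in_self_refresh && mc->external_refresh_on)
        rc->in_self_refresh = false;           /* hand control back        */

    /* While in self-refresh, hold the enable low regardless of the controller. */
    return rc->in_self_refresh ? false : mc->mc_clock_enable;
}

int main(void)
{
    refresh_ctrl_t rc = { false };
    mem_ctrl_signals_t mc = { .mc_clock_enable = true, .external_refresh_on = false };

    printf("enable during reset: %d\n", refresh_ctrl_cke(&rc, true,  &mc));  /* 0 */
    printf("enable still held:   %d\n", refresh_ctrl_cke(&rc, false, &mc));  /* 0 */
    mc.external_refresh_on = true;
    printf("enable after resume: %d\n", refresh_ctrl_cke(&rc, false, &mc));  /* 1 */
    return 0;
}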
Abstract:
An apparatus and method for passing messages through a bus-to-bus bridge while maintaining ordering. The method comprises passing a message into a message container in the bus bridge without using the bridge buffer; setting a flag that tracks all of the writes already in the write queue when the message was placed into the message container; blocking the receiving device on the bus connected to the bridge from accessing the message container until the flag is cleared; and clearing the flag once all of the writes queued ahead of the flag have been written to local memory on the receiving bus, at which point the intended recipient on the receiving bus is allowed to receive the message.
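A minimal C sketch of this ordering mechanism follows; the flag is modeled as a count of writes queued ahead of the message, and all structure and function names are assumptions rather than the patented design. The recipient is blocked until those writes have drained to memory on the receiving bus.

/* Hedged sketch: message container gated by a flag that tracks writes
 * queued ahead of the message. Names are assumptions. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t message;            /* message container contents               */
    bool     message_valid;
    unsigned writes_ahead;       /* writes queued before the message          */
    unsigned writes_completed;   /* of those, how many have drained to memory */
    bool     flag_set;
} bridge_t;

static void post_message(bridge_t *b, uint32_t msg, unsigned writes_in_queue)
{
    b->message = msg;
    b->message_valid = true;
    b->writes_ahead = writes_in_queue;   /* snapshot of the write queue */
    b->writes_completed = 0;
    b->flag_set = (writes_in_queue > 0);
}

/* Called as each queued write completes to local memory on the receiving bus. */
static void write_completed(bridge_t *b)
{
    if (b->flag_set && ++b->writes_completed >= b->writes_ahead)
        b->flag_set = false;             /* ordering requirement satisfied */
}

/* Recipient access is blocked while the flag is still set. */
static bool read_message(const bridge_t *b, uint32_t *out)
{
    if (!b->message_valid || b->flag_set)
        return false;
    *out = b->message;
    return true;
}

int main(void)
{
    bridge_t b = {0};
    uint32_t msg = 0;

    post_message(&b, 0xCAFE, 2);
    printf("readable yet? %d\n", read_message(&b, &msg));   /* 0: still blocked */
    write_completed(&b);
    write_completed(&b);
    bool ok = read_message(&b, &msg);
    printf("readable now? %d -> 0x%X\n", ok, msg);          /* 1 -> 0xCAFE */
    return 0;
}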
Abstract:
A method and apparatus for receiving and transmitting programming data through an application specific integrated circuit are provided. In a first embodiment, the application specific integrated circuit comprises a main circuit, at least two input/output (I/O) mechanisms connected to the main circuit for transferring data into and out of the main circuit, and a mechanism for receiving and transmitting the programming data. The mechanism for transmitting the programming data includes a tri-state buffer that is activated by a programming enable signal. In a second embodiment, the input and output of the buffer are multiplexed with the two I/O mechanisms connected to the main circuit.
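A brief behavioral C sketch of the pin-sharing idea follows; the tri-state buffer is modeled as a simple multiplexer and all names are assumptions for illustration. When the programming enable signal is asserted, programming data is driven onto the shared I/O path; otherwise the pin carries normal main-circuit traffic.

/* Hedged sketch: shared output pin selecting between programming data and
 * the main circuit's normal output. Names are assumptions. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    bool    prog_enable;   /* programming enable signal          */
    uint8_t prog_data;     /* programming data to pass through   */
    uint8_t main_out;      /* main circuit's normal output value */
} asic_io_t;

/* Value seen on the shared output pin (tri-state buffer modeled as a mux). */
static uint8_t shared_pin_value(const asic_io_t *io)
{
    return io->prog_enable ? io->prog_data : io->main_out;
}

int main(void)
{
    asic_io_t io = { .prog_enable = false, .prog_data = 0x5A, .main_out = 0x01 };
    printf("normal mode pin = 0x%02X\n", shared_pin_value(&io));   /* 0x01 */
    io.prog_enable = true;
    printf("programming pin = 0x%02X\n", shared_pin_value(&io));   /* 0x5A */
    return 0;
}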