Abstract:
The present invention relates to a memory control method and a memory control device suitable for information processing systems, such as multiprocessing systems, in which plural data processing units concurrently execute operating processes, and particularly to a memory control method and device that control the data holding state of a buffer memory unit (7), arranged in each data processing unit, on a store-in basis to gain high-speed access to the main storage unit. The memory control device issues to the data processing unit (2) a predetermined process command to be sent to the buffer memory unit (7), and sets a flag, indicating a process under request, on the portion of the tag copying unit (5) to be processed by that command. This structure reduces the amount of hardware and improves port use efficiency, thereby lowering the system construction cost and increasing the processing speed.
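A minimal C sketch of the flag-setting step described above, assuming a hypothetical tag-copy table with one pending bit per entry; the identifiers (TagCopy, issue_process_command, command_completed) are illustrative and not taken from the patent:

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_TAGS 8

/* Hypothetical model of the tag copying unit (5): the memory control
 * device keeps a copy of each buffer memory tag, plus a flag marking
 * entries with a process command outstanding. */
typedef struct {
    unsigned address;   /* block address held by the buffer memory unit */
    bool     valid;
    bool     pending;   /* flag showing a process under request */
} TagCopy;

static TagCopy tag_copy[NUM_TAGS];

/* Issue a process command (e.g., an invalidate or write-back) for the
 * tag entry covering `address`, and mark that entry as pending. */
static void issue_process_command(unsigned address) {
    for (int i = 0; i < NUM_TAGS; i++) {
        if (tag_copy[i].valid && tag_copy[i].address == address) {
            tag_copy[i].pending = true;
            printf("command sent to buffer memory for 0x%x\n", address);
            return;
        }
    }
}

/* A completion report from the data processing unit clears the flag. */
static void command_completed(unsigned address) {
    for (int i = 0; i < NUM_TAGS; i++) {
        if (tag_copy[i].valid && tag_copy[i].address == address)
            tag_copy[i].pending = false;
    }
}

int main(void) {
    tag_copy[0] = (TagCopy){ .address = 0x100, .valid = true };
    issue_process_command(0x100);
    command_completed(0x100);
    return 0;
}
```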
Abstract:
An improved memory system for a shared memory multiprocessor computer system in which one or more processor modules and/or input/output modules have cache memories (50, 52, 54). The main memory controller (14, 16) for each main memory (15, 17) of the system maintains a duplicate cache tag array (44, 46) containing current information on the status of data lines from that main memory that are stored in the cache memories (50, 52, 54). Thus, coherency checks can be performed directly by the main memory controller (14, 16). This eliminates the need for each processor having a cache memory to perform a separate coherency check and to communicate the results of its coherency checks to the main memory controller, and thereby reduces delays associated with processing coherent transactions.
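The duplicate tag array idea can be sketched as follows; the table layout and the coherency_check helper are illustrative assumptions, not the patent's actual hardware design:

```c
#include <stdio.h>

#define NUM_CACHES 3
#define NUM_LINES  16

typedef enum { INVALID, SHARED, DIRTY } LineState;

/* Duplicate cache tag array kept by the main memory controller:
 * one row per cache module, indexed by line. */
static LineState dup_tags[NUM_CACHES][NUM_LINES];

/* On a memory request, the controller consults only its duplicate tags;
 * no cache is interrogated unless one holds the line dirty. */
static int coherency_check(int line) {
    for (int c = 0; c < NUM_CACHES; c++)
        if (dup_tags[c][line] == DIRTY)
            return c;   /* this cache must supply the current data */
    return -1;          /* the memory copy is current */
}

int main(void) {
    dup_tags[1][5] = DIRTY;
    int owner = coherency_check(5);
    if (owner >= 0)
        printf("recall line 5 from cache %d\n", owner);
    else
        printf("serve line 5 from main memory\n");
    return 0;
}
```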
Abstract:
A mechanism prioritizes cross-interrogate requests between multiple requestors in a multi-processor system where the delay due to the length of the cables interconnecting requestors results in requests not being received within one machine cycle. Local and remote cross-interrogate (XI) requests are latched in storage control element (SCE) temporary registers before being prioritized. The local request is staged in a local delay register. The local request is selected by synchronization control logic from the local delay register, instead of the temporary register, when the remote request is issued one cycle earlier than the local request, or when both local and remote requests are issued at the same time but the remote request is from a master SCE. The staging of the local request can be extended to multiple cycles, corresponding to the length of the cables between SCEs.
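A simplified C model of the selection rule, assuming a single-cycle stage; the XIRequest fields and the select_request helper are illustrative stand-ins for the synchronization control logic:

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool valid;
    int  issue_cycle;
    bool from_master;   /* request originated at the master SCE */
} XIRequest;

/* Temporary latches and the one-cycle local delay register. */
static XIRequest local_tmp, remote_tmp, local_delay;

/* Pick which latched request wins priority. The local request is taken
 * from the delay register (held back one cycle) when the remote request
 * was issued a cycle earlier, or on a tie when the remote side is the
 * master SCE. */
static const char *select_request(void) {
    if (remote_tmp.valid && local_tmp.valid) {
        bool remote_first = remote_tmp.issue_cycle < local_tmp.issue_cycle;
        bool tie_master   = remote_tmp.issue_cycle == local_tmp.issue_cycle
                            && remote_tmp.from_master;
        if (remote_first || tie_master) {
            local_delay = local_tmp;     /* stage the local request */
            local_tmp.valid  = false;
            remote_tmp.valid = false;    /* remote wins this cycle */
            return "remote";
        }
    }
    if (local_delay.valid) { local_delay.valid = false; return "local (staged)"; }
    if (local_tmp.valid)   { local_tmp.valid   = false; return "local"; }
    if (remote_tmp.valid)  { remote_tmp.valid  = false; return "remote"; }
    return "none";
}

int main(void) {
    local_tmp  = (XIRequest){ true, 10, false };
    remote_tmp = (XIRequest){ true, 10, true };   /* tie; remote is master */
    printf("cycle 1: grant %s\n", select_request());
    printf("cycle 2: grant %s\n", select_request());
    return 0;
}
```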
Abstract:
A hierarchical memory control system comprising N central processing units (1) each including a store-in type buffer storage unit (2); a main storage unit (3) commonly used by the N central processing units (1); a global buffer storage unit (5) of a store-in type connected between the central processing units (1) and the main storage unit (3), for storing a data block transferred from the main storage unit (3), each entry of the global buffer storage unit (5) being larger than each entry of the buffer storage unit (2), the data block in each entry of the global buffer storage unit (5) being divided into M divided blocks; a tag unit (7) for managing the entries of the global buffer storage unit (5), including tags respectively corresponding to the entries of the global buffer storage unit (5), each tag including managing data for managing the data block; and a buffer control unit (8) for controlling the managing data in the tag unit (7), the buffer control unit (8) controlling the tag unit (7) and the global buffer storage unit (5) in such a way that, when the data stored in the buffer storage unit (2) is modified, the modified data is reflected at the global buffer storage unit (5) in accordance with the managing data in the tag unit (7), and when the data stored in the global buffer storage unit (5) is modified, the modified data is reflected at the main storage unit (3) in accordance with the managing data in the tag unit (7).
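A rough C sketch of the per-divided-block managing data, with hypothetical names; in the patent this bookkeeping lives in the tag unit (7) and buffer control unit (8) rather than in software:

```c
#include <stdbool.h>
#include <stdio.h>

#define M 4   /* divided blocks per global buffer entry */

/* One tag entry managing a global buffer storage entry: per divided
 * block, a modified bit records whether it must still be reflected
 * at main storage. */
typedef struct {
    unsigned block_address;
    bool     valid;
    bool     modified[M];   /* managing data, one bit per divided block */
} GlobalTag;

/* Store-in write-back from a CPU's buffer storage unit (2): the
 * modified data is reflected at the global buffer, and the tag records
 * which divided block now differs from main storage. */
static void reflect_from_cpu(GlobalTag *t, int divided_block) {
    t->modified[divided_block] = true;
}

/* On replacement of a global buffer entry, only divided blocks marked
 * modified are written back to the main storage unit (3). */
static void reflect_to_main(GlobalTag *t) {
    for (int i = 0; i < M; i++) {
        if (t->modified[i]) {
            printf("write divided block %d of 0x%x to main storage\n",
                   i, t->block_address);
            t->modified[i] = false;
        }
    }
}

int main(void) {
    GlobalTag t = { .block_address = 0x2000, .valid = true };
    reflect_from_cpu(&t, 2);
    reflect_to_main(&t);
    return 0;
}
```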
Abstract:
A method and apparatus for increasing cache concurrency in a multiprocessor system. In a multiprocessor system having a plurality of processors, each having a local cache, the directory entry for a line in a local cache is assigned an LCH bit recording locally changed status in order to increase concurrency. If the last cache to hold the line made a change to it, this bit is set on. If not, the bit is off, thereby allowing the receiving or requesting cache to make changes to the line without requiring a main storage castout.
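The LCH rule can be illustrated with a small C sketch; the directory layout and function names are assumptions for illustration only:

```c
#include <stdbool.h>
#include <stdio.h>

/* Directory entry for a line, carrying the LCH (locally changed) bit. */
typedef struct {
    int  holder;   /* id of the cache currently holding the line */
    bool lch;      /* set if the last holder changed the line */
} DirEntry;

/* Transfer a line to a requesting cache that intends to modify it.
 * Only when the previous holder changed the line (LCH on) is a castout
 * to main storage required before the new cache may change it. */
static void transfer_line(DirEntry *e, int requester) {
    if (e->lch)
        printf("castout line from cache %d to main storage\n", e->holder);
    else
        printf("cache %d may change the line without a castout\n", requester);
    e->holder = requester;
    e->lch    = false;   /* the new holder has not changed it yet */
}

int main(void) {
    DirEntry e = { .holder = 0, .lch = false };
    transfer_line(&e, 1);   /* no castout needed */
    e.lch = true;           /* holder 1 modified the line */
    transfer_line(&e, 2);   /* castout before transfer */
    return 0;
}
```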
Abstract:
A tightly coupled multi-processor (MP) system is provided with large-granularity locking of exclusivity in multi-processor caches. The unique access right for a processor P_i is enforced by giving the other central processors (CPs) a temporarily invalid (TI) state on a block (B), even though some lines in the block (B) may still be resident in the cache. Any CP trying to access a block in the temporarily invalid (TI) state must talk to the storage control element (SCE) to obtain proper authorization (e.g., the RO or EX state) on the block (B). Assuming that a central processor (CP) may have three states on a block (B), temporarily invalid TI_B, read only RO_B, and exclusive EX_B, TI_B is the initial state for all blocks (B) at all central processors (CPs).
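A toy C model of the three block states, with a simple software stand-in for the SCE; names and the grant policy details are illustrative:

```c
#include <stdio.h>

#define NUM_CPS    4
#define NUM_BLOCKS 8

typedef enum { TI, RO, EX } BlockState;   /* TI_B, RO_B, EX_B */

/* Per-CP state on each block; TI (== 0) is the initial state everywhere. */
static BlockState state[NUM_CPS][NUM_BLOCKS];

/* Granting EX to one CP forces every other CP back to TI on that block,
 * even though some of its lines may still sit in their caches. */
static void sce_grant(int cp, int block, BlockState want) {
    if (want == EX)
        for (int other = 0; other < NUM_CPS; other++)
            if (other != cp)
                state[other][block] = TI;
    state[cp][block] = want;
}

/* A CP touching a block in the TI state must ask the SCE first. */
static void access_block(int cp, int block, BlockState want) {
    if (state[cp][block] == TI || (want == EX && state[cp][block] == RO)) {
        printf("CP%d asks SCE for block %d\n", cp, block);
        sce_grant(cp, block, want);
    }
    printf("CP%d proceeds on block %d\n", cp, block);
}

int main(void) {
    access_block(0, 3, RO);   /* TI -> RO via the SCE */
    access_block(1, 3, EX);   /* CP1 gets EX; CP0 drops back to TI */
    access_block(0, 3, RO);   /* CP0 must ask again */
    return 0;
}
```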
Abstract:
In a data processing system having multiple processors with individual cache stores, a cross-interrogation is made in the other caches if requested data is not found in the local associated cache. Data paths (67, 74, 83, 86, 201, 202, 203, 207, 208) and communication controls (20, 21, 22, 23, 50, 76, 77) are provided to enable direct cache-to-cache and cache-to-channel data transfers, thus saving main storage accesses and eliminating the need to wait for main storage availability.
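A minimal C sketch of the miss path, assuming a flat presence table in place of real cache directories and data paths:

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_CACHES 4
#define NUM_LINES  16

static bool present[NUM_CACHES][NUM_LINES];   /* which cache holds which line */

/* On a miss in the local cache, cross-interrogate the other caches; on a
 * hit, move the line over the cache-to-cache data path instead of
 * accessing main storage (and waiting for its availability). */
static void fetch_line(int local, int line) {
    if (present[local][line]) {
        printf("hit in local cache %d\n", local);
        return;
    }
    for (int c = 0; c < NUM_CACHES; c++) {
        if (c != local && present[c][line]) {
            printf("cache-to-cache transfer: cache %d -> cache %d\n", c, local);
            present[local][line] = true;
            return;
        }
    }
    printf("fetch line %d from main storage\n", line);
    present[local][line] = true;
}

int main(void) {
    present[2][7] = true;
    fetch_line(0, 7);   /* satisfied by cache 2, no main storage access */
    return 0;
}
```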
Abstract:
A method for managing caches, including: broadcasting, by a first cache agent operatively connected to a first cache and using a first physical network, a first peer-to-peer (P2P) request for a memory address; issuing, by a second cache agent operatively connected to a second cache and using a second physical network, a first response to the first P2P request based on a type of the first P2P request and a state of a cacheline in the second cache corresponding to the memory address; issuing, by a third cache agent operatively connected to a third cache, a second response to the first P2P request; and upgrading, by the first cache agent and based on the first response and the second response, a state of a cacheline in the first cache corresponding to the memory address.
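A highly simplified single-threaded C sketch of the request/response exchange; the state and response encodings are illustrative, and the patent's two physical networks and three separate cache agents are collapsed into plain function calls here:

```c
#include <stdio.h>

typedef enum { I, S, E } CacheState;   /* invalid, shared, exclusive */
typedef enum { ACK, DATA } Response;

/* A peer cache agent replies to a P2P request for exclusive access
 * based on the state of its own copy of the cacheline. */
static Response respond(CacheState *peer) {
    Response r = (*peer == I) ? ACK : DATA;   /* supply data if held */
    *peer = I;                                /* invalidate own copy */
    return r;
}

/* The requesting agent broadcasts the P2P request, collects one
 * response per peer, and upgrades its cacheline state once all peers
 * have answered. */
static CacheState request_exclusive(CacheState peers[], int n) {
    for (int i = 0; i < n; i++) {
        Response r = respond(&peers[i]);
        printf("peer %d responds: %s\n", i, r == ACK ? "ack" : "data");
    }
    return E;   /* every peer acknowledged or supplied data */
}

int main(void) {
    CacheState first = I;
    CacheState peers[2] = { S, I };   /* second and third cache agents */
    first = request_exclusive(peers, 2);
    printf("first agent now holds the line in state %c\n", "ISE"[first]);
    return 0;
}
```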