Abstract:
The addresses of all dirty blocks of a cache memory are stored, by an update address registering section, in one of plural regions of an update address memory. When a certain cache block is brought to a dirty state and is later released from the dirty state, an update address removing section removes its address from the region. When a cache flush is performed, a flush executing section sequentially fetches the addresses of the dirty blocks from each region and issues, to the system bus, a command for writing back the data indicated by each address into the main memory, so that the contents of all dirty blocks are written back into the main memory. Therefore, the cache flush apparatus according to the present invention is able to shorten the time required to perform the cache flush procedure and to improve the performance of a computer system having the cache flush apparatus.
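The bookkeeping described above can be pictured with a small sketch. The C code below is illustrative only; the number of regions, the per-region capacity, the address-based region selection, and the write_back_command() bus primitive are assumptions, not details taken from the abstract.

```c
/* Minimal sketch of the update address memory; sizes and helpers are assumed. */
#include <stddef.h>
#include <stdint.h>

#define NUM_REGIONS      4
#define REGION_CAPACITY  1024

typedef struct {
    uint64_t addr[REGION_CAPACITY];  /* addresses of dirty cache blocks */
    size_t   count;
} Region;

static Region update_address_memory[NUM_REGIONS];

/* Hypothetical bus primitive: ask the cache to write the block back. */
extern void write_back_command(uint64_t block_addr);

/* Update address registering section: record a block that became dirty. */
void register_dirty(uint64_t block_addr)
{
    Region *r = &update_address_memory[block_addr % NUM_REGIONS];
    if (r->count < REGION_CAPACITY)
        r->addr[r->count++] = block_addr;
}

/* Update address removing section: the block left the dirty state. */
void remove_dirty(uint64_t block_addr)
{
    Region *r = &update_address_memory[block_addr % NUM_REGIONS];
    for (size_t i = 0; i < r->count; i++) {
        if (r->addr[i] == block_addr) {
            r->addr[i] = r->addr[--r->count];  /* swap-remove */
            return;
        }
    }
}

/* Flush executing section: write back every recorded dirty block. */
void cache_flush(void)
{
    for (int g = 0; g < NUM_REGIONS; g++) {
        Region *r = &update_address_memory[g];
        for (size_t i = 0; i < r->count; i++)
            write_back_command(r->addr[i]);
        r->count = 0;
    }
}
```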
Abstract:
Log memories for recording the update history of a main memory are provided. Each CPU records the update history of the main memory to one of the log memories and, at checkpoint acquisition, writes its context and the contents of its cache memory to the main memory. A CPU that has finished its checkpoint processing switches its recording of the update history of the main memory to the other log memory, which is not being used to record the update history. Normal processing is then restarted without waiting for the other CPUs to finish their checkpoint acquisition.
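Viewed from a single CPU, the log switching amounts to double buffering. The C sketch below illustrates that idea only; the two fixed-size logs, the entry layout, and the flush_cache_and_context() helper are assumptions rather than details from the abstract.

```c
/* Minimal sketch of double-buffered update-history logs; layout is assumed. */
#include <stddef.h>
#include <stdint.h>

#define LOG_CAPACITY 4096

typedef struct {
    uint64_t addr;      /* main-memory address that was updated */
    uint64_t old_data;  /* previous contents, kept for rollback */
} LogEntry;

typedef struct {
    LogEntry entry[LOG_CAPACITY];
    size_t   count;
} LogMemory;

static LogMemory log_mem[2];
static int       active_log = 0;   /* log currently recording update history */

extern void flush_cache_and_context(void);  /* hypothetical checkpoint write-back */

/* Record one main-memory update into the active log. */
void record_update(uint64_t addr, uint64_t old_data)
{
    LogMemory *l = &log_mem[active_log];
    if (l->count < LOG_CAPACITY)
        l->entry[l->count++] = (LogEntry){ addr, old_data };
}

/* Checkpoint acquisition: write back context and cache contents, then switch
 * recording to the other log so normal processing can resume without waiting
 * for the remaining CPUs. */
void acquire_checkpoint(void)
{
    flush_cache_and_context();
    active_log ^= 1;                 /* switch to the log not in use */
    log_mem[active_log].count = 0;   /* start a fresh update history */
}
```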
Abstract:
In a shared memory type multiprocessing system in which the data stored in each processing element is managed by a directory method, a plurality of processing elements are grouped in advance. A directory memory is provided along with a data memory mounted in the shared memory. Directory information held in the directory memory indicates which of the groups holds a copy of a data block. In response to a request from a processing element, the shared memory executes a process of finding the processing elements from a processing group, or a process of finding the processing group from a processing element.
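The group-granular directory can be pictured as a bit vector per data block, one bit per group. The C sketch below is illustrative only; the group size, the bit-vector entry format, and the function names are assumptions, not details from the abstract.

```c
/* Minimal sketch of group-granular directory lookup; sizes are assumed. */
#include <stdint.h>

#define PES_PER_GROUP  4        /* processing elements grouped in advance */
#define NUM_GROUPS     8

typedef struct {
    uint8_t group_mask;         /* bit g set => some PE in group g holds a copy */
} DirectoryEntry;

/* Processing element -> processing group. */
static int group_of(int pe_id)
{
    return pe_id / PES_PER_GROUP;
}

/* Record that a processing element obtained a copy of the block. */
void directory_add_copy(DirectoryEntry *e, int pe_id)
{
    e->group_mask |= (uint8_t)(1u << group_of(pe_id));
}

/* On a request, visit every group that may hold a copy; the shared memory
 * then probes the processing elements belonging to each such group. */
void directory_for_each_sharing_group(const DirectoryEntry *e,
                                      void (*probe_group)(int group))
{
    for (int g = 0; g < NUM_GROUPS; g++)
        if (e->group_mask & (1u << g))
            probe_group(g);
}
```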
Abstract:
A compiling method compiles a source program into an object program for a CPU that has multiple functional units allowing concurrent operation and that supports predicated execution, generating an object program that can be executed on the CPU at high speed. The source program is analyzed and intermediate codes are generated, and the intermediate codes are then analyzed. Based on this analysis, an execution mode set instruction is generated to set an execution mode managed within the CPU, and instructions are allocated from the intermediate codes such that whether they are executed or not depends on the execution mode set by the execution mode set instruction. Instructions whose values in their respective specific fields are identical form a block for every value of the specific field. For each block, the ending part of the block, in which its last instruction is allocated, is found. When the ending part of a certain block comes earlier in the object program than the ending part of another block, an unconditional branch instruction having the same specific field value as the instructions in the certain block is generated and allocated so as to be executed in the ending part of the certain block or as soon as possible after that ending part.
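The branch-insertion step can be illustrated in isolation. The C sketch below is a simplified rendering; the instruction record, the treatment of the specific field as an execution-mode value, and the emit_predicated_branch() hook are assumptions, not the method's actual intermediate representation.

```c
/* Minimal sketch of inserting mode-predicated branches; data layout is assumed. */
#include <stddef.h>

typedef struct {
    int position;     /* where the instruction is allocated in the object program */
    int mode_field;   /* "specific field": the execution mode that enables it */
} Instr;

/* Ending part of a block = the latest position among its instructions. */
static int block_end(const Instr *code, size_t n, int mode_value)
{
    int end = -1;
    for (size_t i = 0; i < n; i++)
        if (code[i].mode_field == mode_value && code[i].position > end)
            end = code[i].position;
    return end;
}

/* Hypothetical code generator hook: emit an unconditional branch predicated
 * on mode_value, allocated at or as soon as possible after position. */
extern void emit_predicated_branch(int mode_value, int position);

/* If one block ends earlier in the object program than another, append a
 * predicated unconditional branch at the earlier block's ending part so that
 * execution under that mode skips the remainder of the later block. */
void insert_mode_branches(const Instr *code, size_t n,
                          const int *mode_values, size_t num_modes)
{
    for (size_t a = 0; a < num_modes; a++) {
        int end_a = block_end(code, n, mode_values[a]);
        for (size_t b = 0; b < num_modes; b++) {
            if (a != b && end_a >= 0 &&
                end_a < block_end(code, n, mode_values[b])) {
                emit_predicated_branch(mode_values[a], end_a);
                break;  /* one branch per block is enough */
            }
        }
    }
}
```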
Abstract:
A bus interface is connected to a system bus for monitoring a bus command indicating that data is updated on a cache memory of a processor. If the data is updated on the cache memory, an external tag storage device stores state information indicating the update of the data together with a physical address corresponding to the updated data. An external tag reading device reads the state information stored in the external tag storage device when the updated data on the cache memory is stored in a main memory. A bus command for flushing the updated data from the cache memory to the main memory is generated based on the state of the tag read out from the external tag storage device. An invalid bus command generation device outputs an invalid bus command to the system bus through a FIFO.
Abstract:
Disclosed is a very large instruction word (VLIW) type parallel processing computer architecture. The VLIW is divided into operation field groups which are made up of operands. Each operand in the VLIW is executed by a different processor. The computer contains independent register files for each of the respective operation field groups in a single instruction word. Data transfer between the registers is executed via signal lines which connect to each other the registers that are designated as operands in each operation field group. The data transfer between register files is directed by a command which is included as an operand for one of the processors. This command eliminates the need for a destination field in each operand, simplifying the VLIW.
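The transfer command can be pictured as a small record naming source and destination register files. The C sketch below is illustrative only; the number of operation field groups, the register file size, and the command fields are assumptions, not details from the abstract.

```c
/* Minimal sketch of an inter-register-file transfer command; sizes are assumed. */
#include <stdint.h>

#define NUM_GROUPS     4     /* operation field groups in one VLIW */
#define REGS_PER_FILE  32

typedef struct {
    uint32_t reg[REGS_PER_FILE];
} RegisterFile;

/* One independent register file per operation field group. */
static RegisterFile file[NUM_GROUPS];

/* The transfer command is carried as an operand of one processor; it names
 * the source and destination register files and registers, so the other
 * operands need no destination field of their own. */
typedef struct {
    int src_group, src_reg;
    int dst_group, dst_reg;
} TransferCommand;

void execute_transfer(const TransferCommand *cmd)
{
    file[cmd->dst_group].reg[cmd->dst_reg] =
        file[cmd->src_group].reg[cmd->src_reg];
}
```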
Abstract:
A before-image buffer controller is arranged separately from a memory controller and is connected to a system bus. When there is a write access request from a CPU to the cache memory corresponding to this CPU, the before-image buffer controller is automatically started in response to a command issued from this cache memory onto the system bus, and issues a command for reading the previous data from a main memory. Since the before-image buffer controller, which operates independently of the memory controller, is arranged in this way, a memory state restore function can easily be realized using an existing computer system as it is, without changing the memory controller.
Abstract:
In a memory state recovering apparatus, processors process data and a main memory holds data necessary for the data processing at the processors. Caches are provided corresponding to the processors and have the function of issuing an invalidation transaction that specifies the invalidation of data in order to maintain data consistency. A before-image buffer combines an address in the main memory with the data held in the location indicated by the address and stores the combination. A memory access control section stores, in the before-image buffer, the address targeted in the main memory and the data stored in the location indicated by that address, in accordance with the invalidation transaction issued from the caches. With this configuration, the time required for a checkpoint process can be shortened, thereby improving the system performance.
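The logging step performed by the memory access control section can be sketched as follows. The C code is illustrative only; the buffer layout, its capacity, and the read_main_memory()/write_main_memory() helpers are assumptions, not details from the abstract.

```c
/* Minimal sketch of before-image logging and restoration; layout is assumed. */
#include <stddef.h>
#include <stdint.h>

#define BIB_CAPACITY 8192

typedef struct {
    uint64_t addr;       /* main-memory address targeted by the transaction */
    uint64_t before;     /* data held at that address before the update */
} BeforeImage;

static BeforeImage before_image_buffer[BIB_CAPACITY];
static size_t      bib_count;

extern uint64_t read_main_memory(uint64_t addr);  /* hypothetical memory read */

/* Memory access control section: on an invalidation transaction from a cache,
 * pair the target address with the data currently stored there and save it. */
void on_invalidation_transaction(uint64_t addr)
{
    if (bib_count < BIB_CAPACITY) {
        before_image_buffer[bib_count].addr   = addr;
        before_image_buffer[bib_count].before = read_main_memory(addr);
        bib_count++;
    }
}

/* Rollback: restore the saved images in reverse order, then discard them. */
void restore_memory_state(void (*write_main_memory)(uint64_t addr, uint64_t data))
{
    while (bib_count > 0) {
        bib_count--;
        write_main_memory(before_image_buffer[bib_count].addr,
                          before_image_buffer[bib_count].before);
    }
}
```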
Abstract:
A path trace apparatus according to this invention, for tracing a path having specified start, middle, and end points in a network, comprises means for tracing a path in a forward direction from the middle point to the end point based on data representing the coupling relationship between elements constituting the network, and means for tracing a path in a reverse direction from the middle point to the start point based on the same data. In tracing a path in a network by specifying three points, signal paths are traced in the forward and reverse directions with the middle point as the center, thus eliminating the wasteful tracing of paths which do not pass through the middle point and thereby significantly increasing the tracing efficiency.
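The two-direction trace can be sketched as two chain walks that meet at the middle point. The C code below assumes a linear, acyclic representation of the coupling relationship (one forward and one backward neighbor per element); real network data would be richer, so this is an illustration only.

```c
/* Minimal sketch of forward/reverse tracing from the middle point. */
#include <stdbool.h>

#define MAX_ELEMENTS 256

typedef struct {
    int forward_next;    /* next element in the signal direction, -1 if none */
    int backward_next;   /* previous element against the signal direction    */
} Element;

/* Trace forward from the middle point until the end point (or a dead end). */
static bool trace_forward(const Element *net, int middle, int end)
{
    for (int cur = middle; cur != -1; cur = net[cur].forward_next)
        if (cur == end)
            return true;
    return false;
}

/* Trace in reverse from the middle point until the start point. */
static bool trace_reverse(const Element *net, int middle, int start)
{
    for (int cur = middle; cur != -1; cur = net[cur].backward_next)
        if (cur == start)
            return true;
    return false;
}

/* A path with the specified start, middle, and end points exists only if both
 * partial traces succeed; paths not passing through the middle point are
 * never examined. */
bool trace_path(const Element *net, int start, int middle, int end)
{
    return trace_forward(net, middle, end) && trace_reverse(net, middle, start);
}
```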