Abstract:
A multiprocessor system of the present invention has an address bus, a data bus, first and second processors, four access queues, first and second arbiters, and a shared memory divided into four banks. The four access queues are constituted by first-in first-out memories for buffering a plurality of access-request addresses transmitted through the address bus. When a processor requires data from a memory bank, the processor sends a processor ID with the data access request. When the memory bank sends data in return, the memory bank outputs the processor ID of the request originator together with the required data. Even if continuous access requests are addressed to one bank of the shared memory, a succeeding access request need not wait for a previous access request to be finished. Accordingly, the throughput of the system can be improved greatly. The first and second arbiters serve to decide ownership of the buses.
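As an illustration only (the name BankQueue and the four-way address interleaving are assumptions, not taken from the abstract), the per-bank FIFO queues and the processor-ID tagging can be sketched behaviorally as follows:

from collections import deque

class BankQueue:
    """FIFO access queue placed in front of one memory bank (sketch)."""
    def __init__(self, bank_data):
        self.pending = deque()      # buffered access-request addresses
        self.data = bank_data       # the bank's storage

    def enqueue(self, processor_id, address):
        # A later request is accepted even while earlier ones are unfinished.
        self.pending.append((processor_id, address))

    def service_one(self):
        # Serve the oldest request and return the originator's ID with the
        # data, so the result can be routed back to the right processor.
        processor_id, address = self.pending.popleft()
        return processor_id, self.data[address]

# Four banks, here interleaved on the low two address bits (an assumption).
banks = [BankQueue({a: "word%d" % a for a in range(b, 16, 4)}) for b in range(4)]
banks[1].enqueue(processor_id=0, address=1)
banks[1].enqueue(processor_id=1, address=5)   # queued, does not have to wait
print(banks[1].service_one())                 # (0, 'word1') back to processor 0
print(banks[1].service_one())                 # (1, 'word5') back to processor 1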
Abstract:
A data processor is provided with a first register storing a first half word of one instruction; a second register storing a second half word of the instruction; a first decoder decoding the first half word and at the same time detecting whether there exists an addressing extension portion between the first half word and the second half word; a second decoder decoding the second half word; and a decode result generating circuit, to which a detection signal of the first decoder, indicating whether the addressing extension portion exists, is supplied. A decode result of the first decoder and a decode result of the second decoder are also supplied to the decode result generating circuit. An extension portion register is provided to store the addressing extension portion. When the first decoder detects the addressing extension portion, the decode result generating circuit invalidates the decode result of the second decoder. On the other hand, in the case where no addressing extension portion exists, the decode result generating circuit judges, on the basis of the detection signal, that the decode result of the second decoder is valid.
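A minimal behavioral sketch of the decode-result generation (the function name and the result encoding are assumptions):

def generate_decode_result(first_decode, second_decode, extension_detected,
                           second_half_word):
    # The detection signal from the first decoder decides whether the second
    # half word is an instruction half word or an addressing extension portion.
    if extension_detected:
        # Latch the extension in the extension portion register and
        # invalidate the decode result of the second decoder.
        return {"first": first_decode, "extension_reg": second_half_word,
                "second_valid": False}
    # No extension portion: the second decode result is judged valid.
    return {"first": first_decode, "extension_reg": None,
            "second_valid": True, "second": second_decode}

print(generate_decode_result("MOV", "ADD", extension_detected=True,
                             second_half_word=0x1234))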
Abstract:
Herein disclosed is a multiprocessor system which comprises first and second processors (1001 and 1002), first and second cache memories (100:#1 and #2), an address bus (123), a data bus (126), an invalidating signal line (PURGE:131) and a main memory (1004). The first and second cache memories are operated by the copy-back method. The data of the first cache (100:#1) exists in one state selected from a group consisting of an invalid first state, a valid and non-updated second state, and a valid and updated third state. The second cache (100:#2) is constructed like the first cache. When the write access of the first processor hits the first cache, the state of the data of the first cache is shifted from the second state to the third state, and the first cache outputs the address of the write hit and the invalidating signal to the address bus and the invalidating signal line, respectively. When the write access from the first processor misses the first cache, one block of data is block-transferred from the main memory to the first cache, and the invalidating signal is outputted. After this, the first cache executes the write of the data into the transferred block. In case either of the first and second caches holds the data in the third state relating to the pertinent address when an address of an access request is fed to the address bus (123), the pertinent cache writes the pertinent data back into the main memory.
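The three-state copy-back behavior described above can be sketched roughly as follows (class and method names are assumptions, and the bus is reduced to a simple stub):

from enum import Enum

class State(Enum):
    INVALID = 1   # first state
    CLEAN   = 2   # valid and non-updated (second state)
    DIRTY   = 3   # valid and updated (third state)

class Bus:
    def __init__(self, memory):
        self.memory = memory                    # main memory: address -> data
        self.caches = []

    def broadcast_purge(self, addr, source):
        for c in self.caches:                   # PURGE: other caches drop the line
            if c is not source:
                c.lines.pop(addr, None)

    def write_back_dirty(self, addr):
        for c in self.caches:                   # a DIRTY holder writes back first
            c.write_back_if_dirty(addr)

class Cache:
    def __init__(self, bus):
        self.lines, self.bus = {}, bus
        bus.caches.append(self)

    def write(self, addr, value):
        state, _ = self.lines.get(addr, (State.INVALID, None))
        if state is State.INVALID:                      # write miss
            self.bus.write_back_dirty(addr)             # fetch the latest copy
            self.bus.broadcast_purge(addr, source=self) # invalidating signal
            # One block is transferred from main memory, then written into.
            self.lines[addr] = (State.DIRTY, value)
        else:                                           # write hit
            self.bus.broadcast_purge(addr, source=self) # address + PURGE
            self.lines[addr] = (State.DIRTY, value)     # second -> third state

    def write_back_if_dirty(self, addr):
        state, data = self.lines.get(addr, (State.INVALID, None))
        if state is State.DIRTY:
            self.bus.memory[addr] = data                # copy-back
            self.lines[addr] = (State.CLEAN, data)

bus = Bus(memory={0x100: "old"})
c1, c2 = Cache(bus), Cache(bus)
c1.write(0x100, "new")          # miss in c1: block transfer, PURGE broadcast
c2.write(0x100, "newer")        # c1's dirty copy is written back, then purged
print(bus.memory[0x100], c1.lines.get(0x100))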
Abstract:
A processor system including: a processor core and a controller core connected via an internal bus; and a plurality of synchronous memory chips connected to the processor via an external bus; the controller core including a mode register selected by an address signal from the processor core and written with information by a data signal from the processor core to select the operation mode of the plurality of synchronous memory chips, and a control unit to prescribe the operation mode to the plurality of synchronous memory chips based on the information written in the mode register, wherein the controller core selectively outputs, to the plurality of synchronous memory chips via the external bus, either a mode setting signal based on the information written in the mode register or an access address signal from the processor core; and wherein a clock signal is commonly supplied to the plurality of synchronous memory chips.
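A minimal sketch of the controller-core mode register (the register address and the bus representation are assumptions):

MODE_REG_ADDR = 0xFFFF_0000       # assumed internal address of the mode register

class ControllerCore:
    def __init__(self):
        self.mode_register = 0
        self.external_bus_log = []    # stands in for the shared external bus

    def internal_write(self, address, data):
        if address == MODE_REG_ADDR:
            self.mode_register = data
            # Prescribe the operation mode: the command (and the common clock)
            # goes to every synchronous memory chip on the external bus.
            self.external_bus_log.append(("MODE_SET", self.mode_register))
        else:
            self.external_bus_log.append(("WRITE", address, data))

core = ControllerCore()
core.internal_write(MODE_REG_ADDR, 0b0110011)   # e.g. burst length / latency code
core.internal_write(0x0000_1000, 0xDEADBEEF)    # ordinary access address path
print(core.external_bus_log)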
Abstract:
A chip including: a microprocessor; a control unit coupled to the microprocessor; and interface nodes for coupling to a synchronous dynamic memory; wherein the control unit generates command information and the interface nodes output the command information to the synchronous dynamic memory in synchronism with a clock signal, wherein the command information includes a mode register set function which sets mode information in a mode register in the synchronous dynamic memory, and wherein the control unit outputs the mode information to address signal input terminals of the synchronous dynamic memory.
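To make the mode-register-set path concrete, the sketch below encodes a mode word the way common SDR SDRAM parts do (burst length on A0-A2, burst type on A3, CAS latency on A4-A6); the exact bit assignment is an assumption, not taken from the abstract:

def mode_register_set_word(burst_length, cas_latency, interleaved=False):
    """Encode the mode word the control unit drives onto the address inputs."""
    bl_code = {1: 0b000, 2: 0b001, 4: 0b010, 8: 0b011}[burst_length]
    word = bl_code                           # A0-A2: burst length
    word |= (1 << 3) if interleaved else 0   # A3: burst type
    word |= (cas_latency & 0b111) << 4       # A4-A6: CAS latency
    return word

# The command pins (CS#, RAS#, CAS#, WE#) are all driven low for the mode
# register set, and the word above is sampled from the address terminals
# on the clock edge.
print(bin(mode_register_set_word(burst_length=8, cas_latency=3)))  # 0b110011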
Abstract:
When the leakage current of a circuit block in a non-use state is reduced by means of a power switch, frequent ON/OFF operations of the switch within a short time conversely increase the consumed power. Moreover, because a pre-heating time is necessary from the turn-on of the switch until the circuit block becomes usable, controlling the switch during an operation degrades the processing time of the semiconductor device. In the present invention, the switch is ON/OFF-controlled in units of the task duration time of a CPU core that controls the logic circuits and memory cores. After the switch is turned off, it is turned on again before termination of the task, in consideration of the pre-heating time.
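A rough sketch of the task-level decision (all parameter names and the energy model are assumptions): the switch is turned off only when the idle interval within the task is long enough to cover the pre-heating time and to repay the switching energy, and it is turned back on early enough that pre-heating finishes before the task resumes.

def schedule_power_switch(idle_time_us, preheat_us, switch_energy_uj, leakage_uw):
    # Leakage energy saved while off versus the energy spent toggling the switch.
    saved_uj = leakage_uw * idle_time_us / 1e6
    if idle_time_us <= preheat_us or saved_uj <= switch_energy_uj:
        return None                       # keep the switch on for this task
    # Turn the switch off now, and back on early enough that the block has
    # finished pre-heating before the task needs it again.
    return {"off_at_us": 0, "on_at_us": idle_time_us - preheat_us}

print(schedule_power_switch(idle_time_us=500, preheat_us=50,
                            switch_energy_uj=0.2, leakage_uw=2000))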
Abstract:
The invention allows the execution of a PC-relative branch instruction with displacement to be speeded up without changing the instruction operations of existing processors and without requiring new instructions. The branch target address calculation is made faster by calculating the lower portion of the branch target address prior to storing the instruction word in a cache or buffer, writing the calculation result into the displacement field of the instruction word and into a bit that has been added to the cache or the buffer, so that part of the calculation is performed in advance and can be skipped later, at the time the instruction is executed, by using the calculation result stored in the cache or buffer.
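A sketch of the pre-calculation (the 12-bit split and the field layout are assumptions): the low part of the target is computed at cache-fill time and stored back together with one carry bit, so only the high-order addition remains at execution time.

LOW_BITS = 12
LOW_MASK = (1 << LOW_BITS) - 1

def on_cache_fill(inst_addr, displacement):
    low_sum = (inst_addr & LOW_MASK) + (displacement & LOW_MASK)
    carry = low_sum >> LOW_BITS                 # the extra bit added to the cache
    # The displacement field now holds the precomputed low part of the target.
    return low_sum & LOW_MASK, carry, displacement >> LOW_BITS

def on_execute(inst_addr, cached):
    low_target, carry, disp_high = cached
    # Only the upper addition remains; the low addition was already done.
    high_target = (inst_addr >> LOW_BITS) + disp_high + carry
    return (high_target << LOW_BITS) | low_target

addr, disp = 0x4000_0FF8, 0x10
cached = on_cache_fill(addr, disp)
assert on_execute(addr, cached) == addr + disp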
Abstract:
The present invention comprises: a processor main circuit for executing program instruction strings on a processor chip; a substrate bias switching unit for switching the voltages of the substrate biases applied to a substrate of the processor main circuit; and an operation mode control unit for controlling, in response to the execution of an instruction to proceed to a stand-by mode in the processor main circuit, the substrate bias switching unit in such a way that the biases are switched over to voltages for the stand-by mode, for controlling, in response to a stand-by release interrupt from the outside, the substrate bias switching unit in such a way that the biases are switched over to voltages for a normal mode, and also for releasing, after the switched bias voltages have stabilized, the stand-by of the processor main circuit to restart the operation.
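A behavioral sketch of the operation-mode control sequence (API names and the settling time are assumptions):

import time

class OperationModeControl:
    def __init__(self, bias_switch, stabilize_s=1e-4):
        self.bias_switch = bias_switch   # drives the substrate bias switching unit
        self.stabilize_s = stabilize_s   # assumed settling time of the bias supply

    def on_standby_instruction(self):
        self.bias_switch("standby")      # switch to stand-by bias, cutting leakage

    def on_release_interrupt(self):
        self.bias_switch("normal")       # restore the normal-mode bias voltages
        time.sleep(self.stabilize_s)     # wait until the voltages have stabilized
        return "resume execution"        # release stand-by and restart operation

ctrl = OperationModeControl(bias_switch=lambda mode: print("bias ->", mode))
ctrl.on_standby_instruction()
print(ctrl.on_release_interrupt())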
Abstract:
A small-area associative memory for association by a value resulting from addition, with reduced carry delay, is disclosed. Adjacent 1-bit memory values and a signal depending on two adjacent bits of the addition inputs are inputted into a CAM memory cell constructed using MOS transistors, and a hit line is pulled down or pulled up in accordance with the input values.
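The abstract implies matching on a sum without propagating a carry across the word. One well-known way to do this, used here purely as an illustrative sketch (variable names are assumptions), compares at each bit the carry that would be required locally with the carry produced by the adjacent lower bit:

def sum_matches(a, b, key, width):
    """True iff (a + b) mod 2**width == key, checked bit-locally."""
    for i in range(width):
        a_i, b_i, k_i = (a >> i) & 1, (b >> i) & 1, (key >> i) & 1
        required_carry_in = a_i ^ b_i ^ k_i      # carry needed so sum bit == key bit
        if i == 0:
            produced_carry = 0                   # assumed carry-in of zero
        else:
            a_p, b_p, k_p = (a >> (i - 1)) & 1, (b >> (i - 1)) & 1, (key >> (i - 1)) & 1
            # Carry-out of the lower bit, assuming that bit already matched.
            produced_carry = (a_p & b_p) | ((a_p ^ b_p) & (1 - k_p))
        if required_carry_in != produced_carry:
            return False                         # this cell pulls the hit line down
    return True

# Exhaustive check of the identity for 4-bit operands.
W = 4
assert all(sum_matches(a, b, (a + b) % (1 << W), W)
           for a in range(1 << W) for b in range(1 << W))

Because the produced carry depends only on the adjacent lower bit of the addition inputs, every cell's comparison is local and the hit line can settle without waiting for a ripple carry.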
Abstract:
A data processing system including a processor LSI and a DRAM divided into banks, for increasing the ratio of use of a fast operation mode that omits transfer of a row address to the DRAM and for minimizing the amount of logic external to the processor LSI. The processor LSI includes row address registers for holding the most recent row addresses corresponding to the banks. The contents of the row address registers are compared with an accessed address by a comparator to check, for each bank, whether the fast operation mode is possible. As long as the row address does not change in each bank, the fast operation mode can be used, thus making it possible to speed up operations, for example in block copy processing.
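A sketch of the per-bank row registers and the comparator (the split of the address into column, bank and row bits is an assumption):

COLUMN_BITS, BANK_BITS = 10, 2

class RowAddressTracker:
    def __init__(self):
        self.row_regs = [None] * (1 << BANK_BITS)   # recent row address per bank

    def access(self, address):
        bank = (address >> COLUMN_BITS) & ((1 << BANK_BITS) - 1)
        row = address >> (COLUMN_BITS + BANK_BITS)
        fast = self.row_regs[bank] == row           # row unchanged: skip row transfer
        self.row_regs[bank] = row
        return "fast" if fast else "full"

t = RowAddressTracker()
print(t.access(0x0040_0000))   # full: first access to the bank opens the row
print(t.access(0x0040_0004))   # fast: same bank and row, e.g. during a block copy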