Abstract:
Methods of applying LSI technology and microprocessors to the implementation of mainframe processors are described. A mainframe instruction set is partitioned into two or more subsets, each of which can be implemented by a microprocessor having special on-chip microcode or by a standard off-the-shelf microprocessor running programs written for that purpose. Alternatively, one or more of the subsets can be implemented by a single microprocessor. In addition, a subset of the partitioned instruction set can be implemented by emulation software, by off-chip vertical or horizontal microcode, or by primitives. However the partitioning is carried out, the end result is to keep the critical flow paths associated with the most frequently used instruction subset as short as possible by constraining them to a single chip. Applying this method requires a partitioning that makes each identified high-performance subset executable on one microprocessor in the current state of technology, a way to pass control quickly back and forth between the microprocessors, a suitable way to pass data back and forth between them, and a technology in which it is economically feasible to have several copies of a complex data flow and control store mechanism.
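As a minimal sketch of this partition-and-dispatch idea, the C program below routes a frequently used opcode subset to a handler standing in for the primary microprocessor's on-chip microcode and hands everything else to a routine standing in for the secondary microprocessor or emulation software. The opcode ranges, the partition[] table, and the function names are illustrative assumptions, not details taken from the abstract.

/* Illustrative sketch only: a dispatch table that keeps the high-frequency
 * instruction subset on a "primary" handler and hands the rest to a
 * "secondary" handler.  Opcode values and names are assumptions. */
#include <stdint.h>
#include <stdio.h>

typedef enum { ON_PRIMARY, ON_SECONDARY } subset_t;

/* Hypothetical partition table: which subset each opcode belongs to. */
static subset_t partition[256];

static void execute_on_primary(uint8_t opcode)
{
    /* High-frequency instructions stay on one chip so their critical
     * flow paths remain as short as possible. */
    printf("primary: executing opcode 0x%02X from on-chip microcode\n", opcode);
}

static void hand_off_to_secondary(uint8_t opcode)
{
    /* Less frequent instructions are passed, with control, to a second
     * microprocessor running emulation code or off-chip microcode. */
    printf("secondary: emulating opcode 0x%02X\n", opcode);
}

static void dispatch(uint8_t opcode)
{
    if (partition[opcode] == ON_PRIMARY)
        execute_on_primary(opcode);
    else
        hand_off_to_secondary(opcode);
}

int main(void)
{
    /* Assume, purely for illustration, that opcodes below 0x80 form the
     * frequently used subset implemented on the primary microprocessor. */
    for (int op = 0; op < 256; op++)
        partition[op] = (op < 0x80) ? ON_PRIMARY : ON_SECONDARY;

    dispatch(0x1A);   /* handled on the primary chip            */
    dispatch(0xB2);   /* handed off to the secondary processor  */
    return 0;
}

In a real implementation the partition would follow from the instruction-frequency analysis described above, and the hand-off would also carry control and operand data between the cooperating microprocessors.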
Abstract:
A microprocessor chip capable of executing a specific subset of instructions on behalf of the main storage portion of a computer memory can be made to emulate direct-execution instructions not in that subset while working on behalf of a control storage portion of the memory, in a manner that is transparent to the main storage portion. This is achieved by means of a novel set of operand space selection instructions in the control storage portion and a novel switching circuit on the microprocessor chip which controls the chip's access to the control store portion and the main store portion.
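A minimal sketch of the operand space selection idea follows, assuming simplified byte-addressed storage arrays. The select_operand_space() call stands in for the operand space selection instructions and fetch_operand() for the on-chip switching circuit; the array sizes and names are assumptions made only for illustration.

/* Illustrative sketch only: a software model of a switch that steers
 * operand accesses to main storage or control storage. */
#include <stdint.h>
#include <stdio.h>

#define MAIN_SIZE    1024u
#define CONTROL_SIZE 1024u

static uint8_t main_store[MAIN_SIZE];
static uint8_t control_store[CONTROL_SIZE];

/* State of the switching circuit: which operand space is selected. */
static enum { SPACE_MAIN, SPACE_CONTROL } operand_space = SPACE_MAIN;

/* Stands in for an operand space selection instruction executed by
 * microcode in control storage; the change is invisible to programs
 * that only reference main storage. */
static void select_operand_space(int use_control_store)
{
    operand_space = use_control_store ? SPACE_CONTROL : SPACE_MAIN;
}

/* All operand fetches funnel through the switch. */
static uint8_t fetch_operand(uint32_t address)
{
    if (operand_space == SPACE_CONTROL)
        return control_store[address % CONTROL_SIZE];
    return main_store[address % MAIN_SIZE];
}

int main(void)
{
    main_store[0x10]    = 0xAA;   /* visible to normal instructions    */
    control_store[0x10] = 0x55;   /* visible only while emulating      */

    printf("main space:    0x%02X\n", fetch_operand(0x10));
    select_operand_space(1);      /* emulation routine switches spaces */
    printf("control space: 0x%02X\n", fetch_operand(0x10));
    select_operand_space(0);      /* restore before returning control  */
    return 0;
}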
Abstract:
The performance of a multi-microprocessor data processing system that emulates a mainframe is enhanced by providing a pair of override latches that steer accesses between main and control storage for instruction fetch and operand acquisition in a manner that minimizes the complexity and size of microprocessor interface microcoding. This is achieved by connecting the instruction and operand override latches between a primary microprocessor, a secondary microprocessor, off-chip control storage belonging to the secondary microprocessor (particularly memory-mapped private storage therein), and main storage. The override latches are made responsive, via microcode provided for that purpose, to the type and cause of each memory access. The latches are set or reset by a memory-mapped write to a predefined address in the secondary control store after being enabled by control lines responsive to the particular microprocessor action being taken. When set, the instruction override latch directs all expected primary-processor main storage instruction fetches to control store. When set, the operand override latch directs all expected primary-processor main storage operand accesses to control store. As appropriate for instruction execution, either one or both of the primary and secondary microprocessors can thereby be transparently latched to main or control storage.
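The latch behavior can be sketched as follows, assuming a hypothetical memory-mapped register address and bit assignments; none of the constants or function names are taken from the abstract. A write to the predefined control-store address sets or resets the two latches, and the steering function then redirects accesses that would otherwise go to main storage.

/* Illustrative sketch only: two override latches set by a memory-mapped
 * write, steering instruction fetches and operand accesses. */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define OVERRIDE_REG_ADDR 0xFF00u   /* hypothetical memory-mapped address */
#define IFETCH_OVERRIDE   0x01u     /* bit 0: steer instruction fetches   */
#define OPERAND_OVERRIDE  0x02u     /* bit 1: steer operand accesses      */

static bool ifetch_latch;    /* instruction override latch */
static bool operand_latch;   /* operand override latch     */

typedef enum { ACC_IFETCH, ACC_OPERAND } access_t;
typedef enum { TO_MAIN, TO_CONTROL } target_t;

/* Memory-mapped write into secondary control store: a write to the
 * predefined address sets or resets the latches. */
static void control_store_write(uint16_t addr, uint8_t value)
{
    if (addr == OVERRIDE_REG_ADDR) {
        ifetch_latch  = (value & IFETCH_OVERRIDE)  != 0;
        operand_latch = (value & OPERAND_OVERRIDE) != 0;
    }
    /* otherwise an ordinary control-store write (omitted) */
}

/* Steering logic: an access the primary processor expects to go to main
 * storage is redirected to control storage when its latch is set. */
static target_t steer(access_t kind)
{
    if (kind == ACC_IFETCH)
        return ifetch_latch ? TO_CONTROL : TO_MAIN;
    return operand_latch ? TO_CONTROL : TO_MAIN;
}

int main(void)
{
    printf("ifetch  -> %s\n", steer(ACC_IFETCH)  == TO_MAIN ? "main" : "control");
    control_store_write(OVERRIDE_REG_ADDR, IFETCH_OVERRIDE | OPERAND_OVERRIDE);
    printf("ifetch  -> %s\n", steer(ACC_IFETCH)  == TO_MAIN ? "main" : "control");
    printf("operand -> %s\n", steer(ACC_OPERAND) == TO_MAIN ? "main" : "control");
    return 0;
}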
Abstract:
A bus-to-bus adapter is provided for coupling the input/output bus of a first data processor to the input/output bus of a second and different type of data processor. The adapter enables the transfer of data and messages from the first processor to the second processor and vice versa. The adapter includes a buffer storage unit and control logic that provide multiple data buffers, enabling multiple independent data transfer operations to be performed concurrently. The control logic also includes a mechanism that allows the reading out of data from a data buffer to begin before that buffer has received all of its incoming data. The adapter further includes a programmable service time allocation mechanism for limiting message service time relative to data transfer service time and for providing different amounts of data transfer service time to different ones of the multiple data buffers.
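One aspect of the adapter, read-out beginning before a data buffer has received all of its incoming data, can be sketched as a simple ring buffer; the buffer size and byte-at-a-time interface below are illustrative assumptions rather than the adapter's actual design.

/* Illustrative sketch only: a data buffer whose read side may start
 * draining before the write side has delivered all incoming data. */
#include <stdint.h>
#include <stdio.h>

#define BUF_SIZE 8u

typedef struct {
    uint8_t  data[BUF_SIZE];
    unsigned head;    /* next slot to read        */
    unsigned tail;    /* next slot to write       */
    unsigned count;   /* bytes currently buffered */
} xfer_buf_t;

/* Write side: bytes arriving from the first processor's I/O bus. */
static int buf_put(xfer_buf_t *b, uint8_t byte)
{
    if (b->count == BUF_SIZE)
        return 0;                         /* buffer full, caller retries */
    b->data[b->tail] = byte;
    b->tail = (b->tail + 1) % BUF_SIZE;
    b->count++;
    return 1;
}

/* Read side: the second processor may begin draining as soon as any data
 * is present, without waiting for the whole transfer to arrive. */
static int buf_get(xfer_buf_t *b, uint8_t *byte)
{
    if (b->count == 0)
        return 0;                         /* nothing buffered yet */
    *byte = b->data[b->head];
    b->head = (b->head + 1) % BUF_SIZE;
    b->count--;
    return 1;
}

int main(void)
{
    xfer_buf_t buf = {0};
    uint8_t out;

    buf_put(&buf, 0x41);                  /* partial data arrives...      */
    buf_put(&buf, 0x42);
    while (buf_get(&buf, &out))           /* ...and read-out begins early */
        printf("forwarded 0x%02X\n", out);

    buf_put(&buf, 0x43);                  /* remaining data arrives later */
    while (buf_get(&buf, &out))
        printf("forwarded 0x%02X\n", out);
    return 0;
}

The programmable service time allocation could be modeled in a similar spirit by bounding how many bytes each buffer may forward per service pass, but that is omitted here for brevity.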