Abstract:
Method and apparatus for changing the sequential execution of instructions in a pipelined instruction processor by using a microcode-controlled redirect controller. The execution of a redirect instruction by the pipelined instruction processor provides a number of microcode bits, including a target address, to the redirect controller; a predetermined combination of the microcode bits then causes the redirect controller to redirect the execution sequence of the instructions from the next sequential instruction to a target instruction.
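The decision described above can be pictured with a minimal C sketch. The field names (redirect_op, condition_met, target_address) and the bit combination that triggers the redirect are illustrative assumptions rather than the patented microcode encoding; the sketch only shows a predetermined combination of microcode bits selecting the target address over the next sequential address.

/* Minimal sketch of the redirect decision.  Field names and the bit
 * pattern that triggers a redirect are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  redirect_op;    /* set by the redirect instruction's microcode */
    uint8_t  condition_met;  /* predetermined combination of microcode bits */
    uint32_t target_address; /* target supplied with the microcode bits     */
} microcode_word_t;

/* Return the address of the next instruction to fetch. */
static uint32_t next_fetch_address(uint32_t next_sequential,
                                   const microcode_word_t *uw)
{
    /* Only the predetermined combination causes a redirect; any other
     * combination lets execution continue sequentially. */
    if (uw->redirect_op && uw->condition_met)
        return uw->target_address;
    return next_sequential;
}

int main(void)
{
    microcode_word_t uw = { .redirect_op = 1, .condition_met = 1,
                            .target_address = 0x2000 };
    printf("fetch from 0x%x\n", (unsigned)next_fetch_address(0x1004, &uw));
    return 0;
}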
Abstract:
A method and apparatus for reducing processor response time to selected transfer instructions in a multi-instruction processor. The response time is shortened by using a fast path to generate addresses for selected transfer instructions. In this fast path, a base address retained in a register from a previous instruction is summed with an offset from the current instruction to obtain an absolute address for memory accessing. Before the fast path is entered, determinations are made as to whether the instruction is a transfer instruction of a particular class and subclass, and whether the base address is different from the base address for the previous instruction. Even though the fast path is entered, the usual absolute address generator path is also entered, where the instruction is subjected to both high and low limit tests. If the high and low limit tests determine that a different base is to be used, the absolute address from the main address generator is used instead of the absolute address from the LXJ fast path, and the system is restored to the conditions that would have prevailed if the fast path had not been entered.
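The interplay of the two address paths can be sketched in C as follows. The structures, field names, and the single retained base register are assumptions made for illustration; the point is that the fast-path sum (base + offset) is kept only when the selection tests pass and the limit tests agree, and otherwise the main generator's address is used as if the fast path had never been entered.

/* Illustrative model of fast-path / main-path address selection. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t base, lower_limit, upper_limit;
} base_register_t;

typedef struct {
    bool     is_selected_lxj;  /* transfer instruction of the selected class/subclass */
    uint32_t base_index;       /* base register named by the instruction              */
    uint32_t offset;           /* offset field of the current instruction             */
} instruction_t;

/* 'retained' is the base kept in a register from the previous instruction;
 * 'bank' is the base-register file consulted by the main address generator. */
static uint32_t resolve_address(const instruction_t *inst,
                                const base_register_t *retained,
                                uint32_t retained_index,
                                const base_register_t bank[])
{
    /* The fast path is entered only for the selected transfer instructions
     * whose base matches the one retained from the previous instruction. */
    bool fast_taken = inst->is_selected_lxj &&
                      inst->base_index == retained_index;
    uint32_t fast_addr = retained->base + inst->offset;

    /* The usual absolute-address generator also runs and applies the high
     * and low limit tests to the instruction's own base. */
    const base_register_t *br = &bank[inst->base_index];
    uint32_t main_addr = br->base + inst->offset;
    bool limits_ok = main_addr >= br->lower_limit &&
                     main_addr <= br->upper_limit;

    /* If the limit tests fail (a different base must be used), or the fast
     * path was not taken, the main generator's result is used and the
     * machine is restored as if the fast path had not been entered
     * (restoration itself is elided in this sketch). */
    if (!fast_taken || !limits_ok)
        return main_addr;
    return fast_addr;
}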
Abstract:
A method of and apparatus for efficiently halting the operation of the instruction processor when a cache miss is detected. Generally, this is accomplished by preventing unwanted address incrementation of an instruction address pipeline and by providing a null instruction to an instruction pipeline when a cache miss is detected. Accordingly, the present invention may eliminate a recovery period after a cache miss, thereby enhancing the performance of the data processing system. Further, the present invention may eliminate recovery hardware required to support the recovery process.
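A simplified C model of this stall behaviour is shown below; the NOP encoding and structure names are assumptions. On a cache miss the fetch address is held rather than incremented and a null instruction is handed to the instruction pipeline, so no separate recovery pass is needed once the missing line arrives.

/* Simplified model: hold the instruction address and feed a null
 * instruction to the pipeline on a cache miss.  Names are illustrative. */
#include <stdint.h>
#include <stdbool.h>

#define NOP_INSTRUCTION 0x00000000u   /* assumed encoding of the null instruction */

typedef struct {
    uint32_t fetch_address;   /* head of the instruction address pipeline       */
    uint32_t decode_slot;     /* instruction handed to the instruction pipeline */
} pipeline_state_t;

/* One clock of the fetch stage.  'cache_hit' reports whether the
 * instruction at fetch_address is present in the instruction cache. */
static void fetch_clock(pipeline_state_t *p, bool cache_hit,
                        uint32_t cache_data)
{
    if (cache_hit) {
        p->decode_slot    = cache_data;     /* real instruction advances     */
        p->fetch_address += 1;              /* normal address incrementation */
    } else {
        p->decode_slot = NOP_INSTRUCTION;   /* null instruction fills the slot */
        /* fetch_address is deliberately NOT incremented, so the same
         * address is retried when the cache line arrives; no separate
         * recovery period or recovery hardware is required. */
    }
}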
Abstract:
A synchronous pipeline design is provided that includes a first predetermined number of fetch logic sections, or “stages”, and a second predetermined number of execution stages. Instructions are retrieved from memory and undergo instruction pre-decode and decode operations during the fetch stages of the pipeline. Thereafter, decoded instruction signals are passed to the execution stages of the pipeline, where the signals are dispatched to other execution logic sections to control operand address generation, operand retrieval, any arithmetic processing, and the storing of any generated results. Instructions advance within the various pipeline fetch stages in a manner that may be independent of the way instructions advance within the execution stages. Thus, in certain instances, instruction execution may stall such that the execution stages of the pipeline are not receiving additional instructions to process. This may occur, for example, because an operand required for instruction execution is unavailable. It may also occur for certain instructions that require additional processing cycles. Even though instructions are not entering the execution stages, instructions may continue to enter the fetch stages of the pipeline until all fetch stages are processing a respective instruction. As a result, when normal instruction execution resumes within the execution stages of the pipeline, all fetch stages of the pipeline have been filled, and pre-decode and decode operations have been completed for those instructions awaiting entry into the execution stages of the pipeline.
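The decoupled advance of the fetch and execution stages might be modelled in C roughly as follows. The stage counts, the EMPTY marker, and the single stall flag are assumptions of the sketch; it only illustrates how the fetch stages keep filling while the execution stages hold.

/* Sketch of independently advancing fetch and execution stages. */
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define FETCH_STAGES 4   /* first predetermined number of stages (assumed)  */
#define EXEC_STAGES  4   /* second predetermined number of stages (assumed) */
#define EMPTY        0u

typedef struct {
    uint32_t fetch[FETCH_STAGES];  /* pre-decode / decode stages */
    uint32_t exec[EXEC_STAGES];    /* execution stages           */
} pipeline_t;

/* Advance one clock.  'exec_stalled' is true when, for example, an operand
 * is unavailable or the current instruction needs extra cycles. */
static void pipeline_clock(pipeline_t *p, uint32_t new_instruction,
                           bool exec_stalled)
{
    if (!exec_stalled) {
        /* Normal overlap: execution stages shift, and the oldest decoded
         * instruction leaves the fetch stages for the execution stages. */
        memmove(&p->exec[1], &p->exec[0],
                (EXEC_STAGES - 1) * sizeof p->exec[0]);
        p->exec[0] = p->fetch[FETCH_STAGES - 1];
        p->fetch[FETCH_STAGES - 1] = EMPTY;
    }

    /* Fetch stages advance independently: each instruction moves forward
     * whenever the stage ahead of it is empty, so the fetch side fills up
     * even while execution is stalled. */
    for (int i = FETCH_STAGES - 1; i > 0; i--) {
        if (p->fetch[i] == EMPTY) {
            p->fetch[i]     = p->fetch[i - 1];
            p->fetch[i - 1] = EMPTY;
        }
    }
    if (p->fetch[0] == EMPTY)
        p->fetch[0] = new_instruction;  /* keep fetching from memory */
}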
Abstract:
A method of and apparatus for rapidly modifying the user base registers of an instruction processor. In accordance with the present invention, a load base register user instruction may request an operand from a cache memory, wherein the requested operand may provide a new L field and a new bank descriptor index field. An unconditional compare may be made between the new L,BDI fields and the prior L,BDI fields, regardless of whether the requested operand providing the new L,BDI fields actually resides in a corresponding operand cache. In parallel therewith, the operand cache may determine whether or not the requested operand that provided the new L,BDI fields actually resides in the cache memory. A selector block may then determine if the new L,BDI fields match the previous L,BDI fields, and if the requested operand that provided the new L,BDI fields actually resides in the cache memory. If so, a fast load base register algorithm may be used to load the base register. If not, a slow load base register algorithm may be used.
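The selector described above reduces to a small decision that could be sketched in C as shown below; the field widths and names (l_field, bdi) are assumptions for illustration.

/* Fast/slow load-base-register selection from the L,BDI compare and the
 * parallel operand-cache hit determination. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint16_t l_field;   /* L field of the bank descriptor    */
    uint16_t bdi;       /* bank descriptor index (BDI) field */
} l_bdi_t;

typedef enum { LOAD_BASE_FAST, LOAD_BASE_SLOW } load_path_t;

/* 'cache_hit' is the operand cache's parallel determination that the
 * operand supplying the new L,BDI fields actually resides in the cache. */
static load_path_t select_load_path(l_bdi_t prior, l_bdi_t new_fields,
                                    bool cache_hit)
{
    /* The compare is made unconditionally, regardless of the hit result. */
    bool fields_match = (new_fields.l_field == prior.l_field) &&
                        (new_fields.bdi     == prior.bdi);
    return (fields_match && cache_hit) ? LOAD_BASE_FAST : LOAD_BASE_SLOW;
}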
Abstract:
A method and apparatus to control logic sections of a pipelined instruction processor are disclosed. A state machine is provided that models the flow of instructions through the pipeline. The state machine is capable of modeling execution for all combinations of instruction types that may be present within the pipeline at a given time. The state machine also models various events that affect the way instruction execution is overlapped within the pipeline, and other system occurrences that may cause the termination of some processing activity within the pipeline. The state machine provides signals to control the various logic sections. These signals may be used to determine whether the results of processing activity within the logic sections should be retained or discarded.
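One way to picture such a control state machine is the C sketch below. The particular states, events, and control signals are invented for illustration and are not the patent's actual state set; the sketch only shows a state machine emitting retain/discard and advance signals to the logic sections.

/* Invented states and events standing in for the modeled instruction flow. */
#include <stdbool.h>

typedef enum {
    FLOW_NORMAL,      /* standard one-instruction-per-cycle overlap       */
    FLOW_EXTENDED,    /* current instruction needs extra execution cycles */
    FLOW_ABORT        /* an event terminates some processing activity     */
} flow_state_t;

typedef enum {
    EVT_NONE, EVT_LONG_INSTRUCTION, EVT_ABORT_CONDITION, EVT_COMPLETE
} flow_event_t;

typedef struct {
    bool retain_results;   /* logic sections keep this cycle's results */
    bool advance_pipeline; /* pipeline may accept the next instruction */
} control_signals_t;

static flow_state_t next_state(flow_state_t s, flow_event_t e)
{
    switch (e) {
    case EVT_ABORT_CONDITION:  return FLOW_ABORT;
    case EVT_LONG_INSTRUCTION: return FLOW_EXTENDED;
    case EVT_COMPLETE:         return FLOW_NORMAL;
    default:                   return s;
    }
}

/* Signals distributed to the logic sections for the current state. */
static control_signals_t control_for(flow_state_t s)
{
    control_signals_t c = { .retain_results = true, .advance_pipeline = true };
    if (s == FLOW_EXTENDED)
        c.advance_pipeline = false;   /* hold while extra cycles run */
    if (s == FLOW_ABORT) {
        c.retain_results   = false;   /* discard in-flight results   */
        c.advance_pipeline = false;
    }
    return c;
}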
Abstract:
An apparatus for and method of providing a data processing system that delays the writing of an architectural state change value to a corresponding architectural state register for a predetermined period of time. This may provide the instruction processor with enough time to determine if the architectural state change is valid before the architectural state change is actually written to the appropriate architectural state register.
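A rough C model of the delayed commit, under assumed names and an assumed fixed delay, might look like this:

/* Delay an architectural state change for a fixed number of cycles and
 * commit it only if it is still valid when the delay expires. */
#include <stdint.h>
#include <stdbool.h>

#define COMMIT_DELAY 3   /* predetermined delay, in cycles (assumed) */

typedef struct {
    uint64_t value;        /* proposed architectural state change       */
    int      cycles_left;  /* cycles remaining before the write commits */
    bool     pending;
} staged_change_t;

static void stage_change(staged_change_t *s, uint64_t new_value)
{
    s->value = new_value;
    s->cycles_left = COMMIT_DELAY;
    s->pending = true;
}

/* Called once per cycle.  'change_still_valid' reflects the processor's
 * determination that the change should actually take effect. */
static void commit_clock(staged_change_t *s, uint64_t *arch_register,
                         bool change_still_valid)
{
    if (!s->pending)
        return;
    if (!change_still_valid) {
        s->pending = false;           /* discard: register never changes */
        return;
    }
    if (--s->cycles_left == 0) {
        *arch_register = s->value;    /* delayed write finally lands */
        s->pending = false;
    }
}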
Abstract:
A system and method are provided for selectively injecting interrupts within the instruction stream of a data processing system. The system includes a programmable storage device for storing interrupt injection signals, each of which is associated with a respective machine instruction. When execution of the associated machine instruction is initiated, the stored signal is read from the storage device and is made available to the interrupt logic within the instruction processor. If set to a predetermined logic level, the signal causes an interrupt to be injected within the instruction processor. The system provides the capability to simultaneously inject different types of interrupts, including fault and non-fault interrupts, during the execution of any instruction. The invention further provides a programmable means for injecting errors at predetermined intervals in the instruction stream. Because the current invention allows interrupt injection to be controlled by programmable logic within the instruction processor itself, instead of by stimulus generated and controlled by a simulation program as in prior art systems, there is no need to develop complex simulation programs to generate and control the external stimulus. Any simulation program can utilize the interrupt injection system to test the interrupt logic. Furthermore, the injected interrupts are handled in a manner which is transparent to the system software, which makes development of test-version interrupt handling code unnecessary. Moreover, the interrupt injection system may be used during normal (non-test) situations to place the instruction processor under microcode control. This can be useful to provide temporary fixes to hardware problems in a manner which is transparent to the operating system.
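The programmable storage and per-instruction lookup could be sketched in C along the following lines; the table size, flag layout, and the interval mechanism shown are assumptions for illustration only.

/* Programmable injection table consulted when an instruction starts. */
#include <stdint.h>
#include <stdbool.h>

#define INJECT_TABLE_SIZE 1024u   /* assumed size of the programmable storage */

typedef struct {
    bool inject_fault;      /* raise a fault-type interrupt     */
    bool inject_non_fault;  /* raise a non-fault-type interrupt */
} inject_entry_t;

typedef struct {
    inject_entry_t table[INJECT_TABLE_SIZE];  /* programmable storage device */
    uint32_t interval;       /* optional: inject every Nth instruction */
    uint32_t count;
} injector_t;

static void program_injection(injector_t *inj, uint32_t address,
                              inject_entry_t flags)
{
    inj->table[address % INJECT_TABLE_SIZE] = flags;
}

/* Called when execution of the instruction at 'address' is initiated;
 * returns the injection flags made available to the interrupt logic. */
static inject_entry_t check_injection(injector_t *inj, uint32_t address)
{
    inject_entry_t out = inj->table[address % INJECT_TABLE_SIZE];
    if (inj->interval != 0 && ++inj->count >= inj->interval) {
        inj->count = 0;
        out.inject_fault = true;      /* predetermined interval reached */
    }
    return out;
}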
Abstract:
A system and method are provided for detecting and recovering from errors in an Instruction Cache RAM and/or Operand Cache RAM of an electronic data processing system. In some cases, errors in the Instruction Cache RAM and/or Operand Cache RAM are detected and recovered from without any required interaction with an operating system of the data processing system. Thus, in many cases, errors in the Instruction Cache RAM and/or Operand Cache RAM can be handled seamlessly and efficiently, without requiring a specialized operating system routine or, in some cases, a maintenance technician to help diagnose and/or fix the error.
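One hardware-level recovery policy consistent with this description, sketched here purely as an assumption, is to invalidate the failing cache line and refill it from memory without involving the operating system:

/* Assumed invalidate-and-refill recovery for a detected cache RAM error. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool     valid;
    uint32_t tag;
    uint32_t data[16];   /* one cache line (assumed 16 words) */
} cache_line_t;

/* Stub standing in for the memory interface in this sketch. */
static void refill_from_memory(cache_line_t *line, uint32_t address)
{
    (void)address;
    for (unsigned i = 0; i < 16; i++)
        line->data[i] = 0;   /* real hardware would reload from main memory */
}

/* Returns the requested word, transparently recovering from a detected
 * RAM error by invalidating and refilling the line. */
static uint32_t cache_read(cache_line_t *line, uint32_t address,
                           unsigned word, bool error_detected)
{
    if (error_detected) {
        line->valid = false;               /* discard the corrupted copy  */
        refill_from_memory(line, address); /* hardware/microcode refetch  */
        line->valid = true;
    }
    return line->data[word];               /* software never sees the error */
}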
Abstract:
A method and apparatus are provided for handling parity errors within a data processing system. Each occurrence of a parity error is attributed to an addressable memory location or a block of memory locations that was being accessed when the error occurred. A memory location or a memory block is marked as unusable after a predetermined number of errors is attributed to that location or block, respectively. The predetermined number of errors that is allowed to occur prior to degradation could be two or more. In one embodiment, the predetermined number of errors resulting in memory degradation is programmable.
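The counting-and-degradation policy might be sketched in C as follows, with the block size, table layout, and threshold handling assumed for illustration.

/* Charge each parity error to the block being accessed and degrade the
 * block once its count reaches a programmable threshold. */
#include <stdint.h>
#include <stdbool.h>

#define NUM_BLOCKS  4096u
#define BLOCK_SHIFT 6        /* assumed 64-word blocks */

typedef struct {
    uint8_t error_count[NUM_BLOCKS];
    bool    degraded[NUM_BLOCKS];
    uint8_t threshold;       /* programmable: e.g. 2 errors before degrade */
} degrade_table_t;

/* Called each time a parity error is detected while accessing 'address'.
 * Returns true if the containing block has just been marked unusable. */
static bool record_parity_error(degrade_table_t *t, uint32_t address)
{
    uint32_t block = (address >> BLOCK_SHIFT) % NUM_BLOCKS;
    if (t->degraded[block])
        return false;                     /* already out of service */
    if (++t->error_count[block] >= t->threshold) {
        t->degraded[block] = true;        /* degrade after the predetermined count */
        return true;
    }
    return false;
}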