Abstract:
In a processor, store instructions are divided or cracked into store data and store address generation portions for separate and parallel execution within two execution units. The address generation portion of the store instruction is executed within the load store unit, while the store data portion of the instruction is executed in an execution unit other than the load store unit. If the store instruction is a fixed point store instruction, then the store data portion is executed within the fixed point unit. If the store instruction is a floating point store instruction, then the store data portion of the store instruction is executed within the floating point unit.
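As a rough illustration only (not the hardware described above), the cracked store can be pictured as two micro-operations that fill in a shared store-queue entry: an address-generation portion handled by the load store unit and a data portion handled by the fixed point or floating point unit. The structure and function names in this C sketch are assumptions made for the example.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical store-queue entry shared by the two cracked portions. */
    typedef struct {
        uint64_t addr;      /* filled by the address-generation micro-op (LSU) */
        uint64_t data;      /* filled by the data micro-op (FXU or FPU)        */
        bool     addr_done;
        bool     data_done;
    } StoreQueueEntry;

    /* Executed in the load store unit: base + displacement address generation. */
    static void lsu_store_agen(StoreQueueEntry *e, uint64_t base, int64_t disp)
    {
        e->addr = base + disp;
        e->addr_done = true;
    }

    /* Executed in the fixed point unit: supply the source register's value. */
    static void fxu_store_data(StoreQueueEntry *e, uint64_t reg_value)
    {
        e->data = reg_value;
        e->data_done = true;
    }

    int main(void)
    {
        StoreQueueEntry e = {0};

        /* The two portions may execute in either order and in parallel. */
        fxu_store_data(&e, 0xDEADBEEF);
        lsu_store_agen(&e, 0x1000, 8);

        if (e.addr_done && e.data_done)
            printf("store 0x%llx -> [0x%llx] ready to write to cache\n",
                   (unsigned long long)e.data, (unsigned long long)e.addr);
        return 0;
    }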
Abstract:
A microprocessor and related method and data processing system are disclosed. The microprocessor includes a dispatch unit suitable for issuing an instruction executable by the microprocessor, an execution pipeline configured to receive the issued instruction, and a pending instruction unit. The pending instruction unit includes a set of pending instruction entries. A copy of the issued instruction is maintained in one of the set of pending instruction entries. The execution pipeline is adapted to record, in response to detecting a condition preventing the instruction from successfully completing one of the stages in the pipeline during a current cycle, an exception status with the copy of the instruction in the pending instruction unit and to advance the instruction to a next stage in the pipeline in the next cycle, thereby preventing the condition from stalling the pipeline. Preferably, the dispatch unit, in response to the instruction finishing pipeline execution with an exception status, is adapted to use the copy of the instruction to re-issue the instruction to the execution pipeline in a subsequent cycle. In one embodiment, the dispatch unit is adapted to deallocate the copy of the instruction in the pending instruction unit in response to the instruction successfully completing pipeline execution. The pending instruction unit may detect successful completion of the instruction by detecting when the instruction has been pending for a predetermined number of cycles without recording an exception status. In this embodiment, each entry in the pending instruction unit may include a timer field comprising a set of bits wherein the number of bits in the timer field equals the predetermined number of cycles. The pending instruction unit may set, in successive cycles, successive bits in the timer field such that successful completion of an instruction is indicated when a last bit in the timer field is set. In one embodiment, the pending instruction unit includes a set of copies of instructions corresponding to each of a set of instructions pending in the execution pipeline at any given time. In various embodiments, the execution pipeline may comprise a load/store pipeline, a floating point pipeline, or a fixed point pipeline.
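The timer-field completion test can be sketched in a few lines of C. The pipeline depth, entry layout, and re-issue behavior below are illustrative assumptions consistent with the description above, not the patent's implementation.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define PIPE_DEPTH 5                   /* assumed predetermined cycle count */

    typedef struct {
        uint32_t opcode_copy;              /* copy of the issued instruction    */
        uint8_t  timer;                    /* one bit set per pending cycle     */
        bool     exception;                /* recorded instead of stalling      */
        bool     valid;
    } PendingEntry;

    /* Called once per cycle for each valid pending-instruction entry. */
    static void piq_tick(PendingEntry *e)
    {
        if (!e->valid)
            return;
        /* Set the next successive timer bit. */
        e->timer = (uint8_t)((e->timer << 1) | 1u);

        if (e->timer & (1u << (PIPE_DEPTH - 1))) {
            /* Last bit set: the instruction has been pending PIPE_DEPTH cycles. */
            if (e->exception) {
                printf("re-issue copy of 0x%08x\n", e->opcode_copy);
                e->timer = 0;              /* restart timing for the re-issue   */
                e->exception = false;
            } else {
                e->valid = false;          /* completed cleanly: deallocate     */
            }
        }
    }

    int main(void)
    {
        PendingEntry e = { .opcode_copy = 0x7C001234u, .valid = true };
        e.exception = true;                /* pipeline reported a blocking condition */
        for (int cycle = 0; cycle < 2 * PIPE_DEPTH && e.valid; cycle++)
            piq_tick(&e);
        return 0;
    }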
Abstract:
A superscalar processor and method are disclosed for efficiently recovering from misaligned data addresses. The processor includes a memory device partitioned into a plurality of addressable memory units. Each of the plurality of addressable memory units has a width of a first plurality of bytes. A determination is made regarding whether a data address included within a memory access instruction is misaligned. The data address is misaligned if it includes a first data segment located in a first addressable memory unit and a second data segment located in a second addressable memory unit where the first and second data segments are separated by an addressable memory unit boundary. In response to a determination that the data address is misaligned, a first internal instruction is executed which accesses the first memory unit and obtains the first data segment. A second internal instruction is executed which accesses the second memory unit and obtains the second data segment. The first and second data segments are merged together. All of the instructions executed by the processor are constrained by the memory boundary and do not access memory across the memory boundary.
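A minimal C sketch of the split-and-merge idea follows; the 8-byte addressable-unit width, the two internal unit reads, and the byte-wise merge are assumptions chosen for illustration.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define UNIT 8                               /* assumed addressable-unit width */

    static uint8_t memory[64];                   /* toy memory: 8 aligned units    */

    /* Aligned access: reads exactly one addressable unit, never crossing a boundary. */
    static void read_unit(uint64_t unit_addr, uint8_t out[UNIT])
    {
        memcpy(out, &memory[unit_addr], UNIT);
    }

    /* Load 'len' bytes starting at 'addr'; split into two unit reads if misaligned. */
    static void load_bytes(uint64_t addr, size_t len, uint8_t *dest)
    {
        uint64_t first = addr & ~(uint64_t)(UNIT - 1);
        uint64_t off   = addr - first;

        if (off + len <= UNIT) {                 /* fits within one unit           */
            uint8_t u[UNIT];
            read_unit(first, u);
            memcpy(dest, u + off, len);
            return;
        }

        /* Misaligned: the first internal op fetches the low segment, the second   */
        /* fetches the high segment, and the two segments are merged together.     */
        uint8_t lo[UNIT], hi[UNIT];
        read_unit(first, lo);
        read_unit(first + UNIT, hi);

        size_t lo_len = UNIT - off;
        memcpy(dest, lo + off, lo_len);          /* first data segment             */
        memcpy(dest + lo_len, hi, len - lo_len); /* second data segment            */
    }

    int main(void)
    {
        for (int i = 0; i < 64; i++) memory[i] = (uint8_t)i;
        uint8_t val[4];
        load_bytes(6, sizeof val, val);          /* crosses the 8-byte boundary    */
        printf("%02x %02x %02x %02x\n", val[0], val[1], val[2], val[3]);
        return 0;
    }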
Abstract:
A processor and an associated method and data processing system are disclosed. The processor includes an issue unit (ISU), a completion unit, and a hang detect unit. The ISU is configured to issue instructions to an execution unit. The completion unit is adapted to produce a completion valid signal responsive to the completion of an instruction. The hang detect unit is configured to receive the completion valid signal from the completion unit and adapted to determine the interval since the most recent assertion of the completion valid signal. The hang detect unit is adapted to initiate a hang recovery sequence upon determining that the interval since the most recent assertion of the completion valid signal exceeds a predetermined maximum interval. In one embodiment, the hang recovery sequence includes the hang recovery unit asserting a stop completion signal to the completion unit and a stop dispatch signal to a dispatch unit to suspend instruction completion and dispatch. The hang recovery unit then asserts a force reject signal to an execution unit to reject all instructions pending in the execution unit's pipeline and a flush signal to the execution unit that results in the processor flushing a set of instructions. The hang recovery unit then negates the force reject, stop completion, and stop dispatch signals to resume processor operation. In one embodiment, the recovery sequence includes entering a relaxed execution mode, such as a debug mode, a serial operation mode, or an in-order mode, prior to resuming processor operation. In one embodiment, the processor advances a completion tag upon completing an instruction. In this manner the completion tag indicates the instruction that is next to complete. In one embodiment, the hang recovery sequence includes flushing the processor of an instruction set comprising all instructions with tag information greater than the completion tag. In another embodiment, all instructions with tag information greater than or equal to the completion tag are flushed.
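The interval check at the heart of the hang detect unit can be pictured as a per-cycle counter, as in the C sketch below. The threshold value and the printed recovery steps are illustrative placeholders for the hardware signals named above.

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_IDLE_CYCLES 1000      /* assumed predetermined maximum interval */

    typedef struct {
        unsigned cycles_since_completion;
        bool     in_recovery;
    } HangDetect;

    /* Called every cycle with the completion unit's completion valid signal. */
    static void hang_detect_tick(HangDetect *hd, bool completion_valid)
    {
        if (completion_valid) {
            hd->cycles_since_completion = 0;
            return;
        }
        if (++hd->cycles_since_completion > MAX_IDLE_CYCLES && !hd->in_recovery) {
            hd->in_recovery = true;
            /* Illustrative ordering of the recovery sequence from the abstract: */
            printf("assert stop_completion and stop_dispatch\n");
            printf("assert force_reject; flush instructions beyond the completion tag\n");
            printf("negate signals and resume (optionally in a relaxed execution mode)\n");
            hd->in_recovery = false;
            hd->cycles_since_completion = 0;
        }
    }

    int main(void)
    {
        HangDetect hd = {0};
        for (unsigned cycle = 0; cycle < 1200; cycle++)
            hang_detect_tick(&hd, /* completion_valid = */ cycle < 100);
        return 0;
    }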
Abstract:
A method and system for atomic memory accesses in a processor system, wherein the processor system is able to issue and execute multiple instructions out of order with respect to a particular program order. A first reservation instruction is speculatively issued to an execution unit of the processor system. Upon issuance, instructions queued for the execution unit which occur after the first reservation instruction in the program order are flushed from the execution unit, in response to detecting any previously executed reservation instructions in the execution unit which occur after the first reservation instruction in the program order. The first reservation instruction is speculatively executed by placing a reservation for a particular data address of the first reservation instruction, in response to completion of instructions queued for the execution unit which occur prior to the first reservation instruction in the program order, such that reservation instructions which are speculatively issued and executed in any order are executed in-order with respect to a partnering conditional store instruction.
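One way to picture the ordering rule is a small model of the execution unit's queue, shown below; the queue layout, sequence-number convention, and function names are assumptions made for the sketch, not the processor's actual structures.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        unsigned seq;          /* program-order sequence number          */
        bool     is_reserve;   /* reservation (load-reserve) instruction */
        bool     executed;
        bool     valid;
    } InFlight;

    #define QLEN 8
    static InFlight q[QLEN];

    /* Issue a reservation instruction with sequence number 'seq'. If a      */
    /* logically later reservation instruction has already executed, flush   */
    /* everything younger than 'seq' so reservations remain ordered with     */
    /* respect to their partnering conditional store instructions.           */
    static void issue_reserve(unsigned seq)
    {
        bool younger_reserve_executed = false;
        for (int i = 0; i < QLEN; i++)
            if (q[i].valid && q[i].is_reserve && q[i].executed && q[i].seq > seq)
                younger_reserve_executed = true;

        if (younger_reserve_executed)
            for (int i = 0; i < QLEN; i++)
                if (q[i].valid && q[i].seq > seq) {
                    printf("flush seq %u\n", q[i].seq);
                    q[i].valid = false;
                }
    }

    int main(void)
    {
        q[0] = (InFlight){ .seq = 12, .is_reserve = true,  .executed = true, .valid = true };
        q[1] = (InFlight){ .seq = 13, .is_reserve = false, .executed = true, .valid = true };
        issue_reserve(10);     /* speculative issue of a logically older reservation */
        return 0;
    }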
Abstract:
A processor (100) includes an issue unit (125) having an issue queue (144) for issuing instructions to an execution unit (140). The execution unit (140) may accept and execute the instruction or produce a reject signal. After each instruction is issued, the issue queue (144) retains the issued instruction for a critical period. After the critical period, the issue queue (144) may drop the issued instruction unless the execution unit (140) has generated a reject signal. If the execution unit (140) has generated a reject signal, the instruction is eventually marked in the issue queue (144) as being available to be reissued. The length of time that the rejected instruction is held from reissue may be modified depending upon the nature of the rejection by the execution unit (140). Also, the execution unit (140) may conduct corrective actions in response to certain reject conditions so that the instruction may be fully executed upon reissue.
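The retain/drop/reissue behavior of the issue queue (144) can be sketched per entry as follows; the critical-period length and the reject-dependent hold times are arbitrary illustrative values, not figures from the disclosure.

    #include <stdbool.h>
    #include <stdio.h>

    #define CRITICAL_PERIOD 3          /* assumed cycles an issued entry is retained */

    typedef enum { SLOW_REJECT, FAST_REJECT } RejectKind;

    typedef struct {
        unsigned opcode;
        bool     issued;
        bool     valid;
        unsigned cycles_since_issue;
        unsigned hold_cycles;          /* cycles a rejected entry waits before reissue */
    } IqEntry;

    /* Called every cycle for each issued entry; 'rejected' reflects the       */
    /* execution unit's reject signal for that entry this cycle.               */
    static void iq_tick(IqEntry *e, bool rejected, RejectKind kind)
    {
        if (!e->valid || !e->issued)
            return;

        if (rejected) {
            /* Hold the entry longer for reject causes that take longer to clear. */
            e->hold_cycles = (kind == SLOW_REJECT) ? 8 : 2;
            e->issued = false;         /* marked available to reissue after the hold   */
            return;
        }

        if (++e->cycles_since_issue >= CRITICAL_PERIOD)
            e->valid = false;          /* no reject during the critical period: drop it */
    }

    int main(void)
    {
        IqEntry e = { .opcode = 0x1234, .issued = true, .valid = true };
        iq_tick(&e, true, SLOW_REJECT);
        printf("held for %u cycles before reissue\n", e.hold_cycles);
        return 0;
    }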
Abstract:
Pipelining and parallel execution of multiple load instructions is performed within a load store unit. When a first load instruction incurs a cache miss and proceeds to retrieve the load data from the system memory hierarchy, a second load instruction addressing the same load data will be merged into the first load instruction so that the data returned from the system memory hierarchy is sent to register files associated with both the first and second load instructions. As a result, the second load instruction does not have to wait until the load data has been written and validated in the data cache.
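A small C sketch of the merge follows; the cache-line size, the miss-queue entry layout, and the register-file targets are assumptions used only to illustrate how a second load shares the data returned for the first.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define LINE_MASK (~(uint64_t)0x7F)    /* assumed 128-byte cache line */
    #define MAX_TARGETS 4

    /* One outstanding cache-miss request, possibly shared by several loads. */
    typedef struct {
        uint64_t line_addr;
        unsigned target_regs[MAX_TARGETS]; /* destinations waiting for the returned data */
        int      ntargets;
        bool     valid;
    } MissEntry;

    /* A later load to the same line merges into the pending miss instead of   */
    /* waiting for the line to be written and validated in the data cache.     */
    static bool try_merge(MissEntry *m, uint64_t load_addr, unsigned target_reg)
    {
        if (!m->valid || (load_addr & LINE_MASK) != m->line_addr)
            return false;
        if (m->ntargets < MAX_TARGETS) {
            m->target_regs[m->ntargets++] = target_reg;
            return true;
        }
        return false;
    }

    /* Data returns from the memory hierarchy: forward it to every merged load. */
    static void data_return(MissEntry *m, uint64_t data)
    {
        for (int i = 0; i < m->ntargets; i++)
            printf("write 0x%llx to r%u\n", (unsigned long long)data, m->target_regs[i]);
        m->valid = false;
    }

    int main(void)
    {
        MissEntry m = { .line_addr = 0x2000, .target_regs = {5}, .ntargets = 1, .valid = true };
        try_merge(&m, 0x2008, 9);          /* second load to the same line merges */
        data_return(&m, 0xCAFE);
        return 0;
    }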
Abstract:
A method for optimally issuing instructions that are related to a first instruction in a data processing system is disclosed. The processing system includes a primary and a secondary cache. The method and system comprise speculatively indicating a hit of the first instruction in the secondary cache and releasing the dependent instructions. The method and system include determining if the first instruction is within the secondary cache. The method and system further include providing data related to the first instruction from the secondary cache to the primary cache when the instruction is within the secondary cache. A method and system in accordance with the present invention causes instructions that create dependencies (such as a load instruction) to signal an issue queue (which is responsible for issuing instructions with resolved conflicts) in advance that the instruction will complete in a predetermined number of cycles. In an embodiment, a core interface unit (CIU) signals an execution unit such as the Load Store Unit (LSU) that the instruction is assumed to hit in the L2 cache. The issue queue uses the signal to issue dependent instructions at an optimal time. If the instruction misses in the L2 cache, the cache hierarchy causes the dependent instructions to be abandoned and re-executed when the data is available.
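The speculative-hit timing can be pictured with a small counter model, shown below; the assumed L2 hit latency and the signal handling are illustrative assumptions, not the CIU's actual interface.

    #include <stdbool.h>
    #include <stdio.h>

    #define ASSUMED_L2_HIT_LATENCY 7   /* assumed cycles from load issue to data on an L2 hit */

    typedef struct {
        bool     dependent_released;
        unsigned cycles_since_load_issue;
    } IssueQueueState;

    /* After the "assume L2 hit" signal, the issue queue counts down to the   */
    /* cycle when the dependent instruction's operand would become available. */
    static void iq_tick(IssueQueueState *s)
    {
        if (++s->cycles_since_load_issue == ASSUMED_L2_HIT_LATENCY) {
            s->dependent_released = true;          /* speculatively release dependents */
            printf("dependents released at cycle %u\n", s->cycles_since_load_issue);
        }
    }

    /* Later, the cache hierarchy reports whether the load actually hit. */
    static void l2_result(IssueQueueState *s, bool hit)
    {
        if (!hit && s->dependent_released) {
            printf("L2 miss: abandon dependents and re-execute when data is available\n");
            s->dependent_released = false;
        }
    }

    int main(void)
    {
        IssueQueueState s = {0};
        for (int c = 0; c < ASSUMED_L2_HIT_LATENCY; c++)
            iq_tick(&s);
        l2_result(&s, false);
        return 0;
    }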
Abstract:
To support load instructions which execute out-of-order with respect to store instructions, a mechanism is implemented to detect (and correct) occurrences where a load instruction executed prior to a logically prior store instruction, the load instruction received data from the location before it was modified by the store instruction, and the correct data for the load instruction included bytes from the store instruction. Additionally, to execute store instructions out-of-order with respect to load instructions, a mechanism is implemented to keep a store instruction from destroying data that will be used by a logically earlier load instruction. Further, to support load instructions that are executed out-of-order with respect to each other, a mechanism is implemented to ensure that any pair of load instructions (which access at least one byte in common) return data consistent with executing the load instructions in order.
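The byte-overlap test behind the first mechanism can be sketched directly; the structure fields and sequence-number convention below are assumptions for illustration.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        unsigned seq;                  /* program-order sequence number */
        uint64_t addr;                 /* starting byte address         */
        unsigned len;                  /* access length in bytes        */
    } MemOp;

    /* True if the two accesses touch at least one byte in common. */
    static bool overlaps(const MemOp *a, const MemOp *b)
    {
        return a->addr < b->addr + b->len && b->addr < a->addr + a->len;
    }

    /* A load that executed before a logically prior store and overlaps it    */
    /* has stale data and must be corrected (e.g. re-executed).               */
    static bool load_needs_redo(const MemOp *load, const MemOp *older_store)
    {
        return older_store->seq < load->seq && overlaps(load, older_store);
    }

    int main(void)
    {
        MemOp store = { .seq = 10, .addr = 0x100, .len = 8 };
        MemOp load  = { .seq = 12, .addr = 0x104, .len = 4 }; /* executed before the store */
        if (load_needs_redo(&load, &store))
            printf("load seq %u must be re-executed after store seq %u\n", load.seq, store.seq);
        return 0;
    }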
Abstract:
A microprocessor, data processing system, and method are disclosed for handling parity errors in an address translation facility such as a TLB. The microprocessor includes a load/store unit configured to generate an effective address associated with a load/store instruction. An address translation unit is adapted to translate the effective address to a real address using a translation lookaside buffer (TLB). The address translation unit includes a parity checker configured to verify the parity of the real address generated by the TLB and to signal the load/store unit when the real address contains a parity error. The load/store unit is configured to initiate a TLB parity error interrupt routine in response to the signal from the translation unit. In one embodiment, the TLB interrupt routine selectively invalidates the TLB entry that contained the parity error. The load/store unit preferably includes an effective to real address table (ERAT) containing a set of address translations. In this embodiment, the load/store unit invokes the address translation unit to translate the effective address only if the effective address misses in the ERAT. The LSU may suitably include an ERAT miss queue (EMQ) adapted to retain an effective address that misses in the ERAT until the address translation unit completes the translation process. In this embodiment, the EMQ is configured to issue a TLB parity error interrupt signal to initiate the TLB parity error interrupt routine. In one embodiment, the TLB interrupt routine loads a data address register (DAR) of the microprocessor with the effective address of the instruction that resulted in the parity error. The TLB interrupt routine may further set a data storage interrupt status register (DSISR) to indicate the TLB parity error.
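A minimal sketch of the parity check follows, assuming a single even-parity bit stored per TLB entry; the entry layout and function names are illustrative, not the hardware design.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t real_addr;
        unsigned parity_bit;           /* stored parity bit covering real_addr */
        bool     valid;
    } TlbEntry;

    /* XOR of all 64 address bits. */
    static unsigned parity64(uint64_t v)
    {
        v ^= v >> 32; v ^= v >> 16; v ^= v >> 8;
        v ^= v >> 4;  v ^= v >> 2;  v ^= v >> 1;
        return (unsigned)(v & 1);
    }

    /* Parity checker invoked when the TLB supplies a translation. */
    static bool tlb_translate(TlbEntry *e, uint64_t *real_out)
    {
        if (parity64(e->real_addr) != e->parity_bit) {
            /* Signal the load/store unit; it initiates the parity error interrupt */
            /* routine, which may invalidate the failing entry and load the DAR.   */
            e->valid = false;
            return false;
        }
        *real_out = e->real_addr;
        return true;
    }

    int main(void)
    {
        TlbEntry e = { .real_addr = 0x1F000, .parity_bit = 0, .valid = true }; /* bad parity */
        uint64_t ra;
        if (!tlb_translate(&e, &ra))
            printf("TLB parity error: raise interrupt, entry invalidated\n");
        return 0;
    }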