Abstract:
Intermediate results are passed between constituent instructions of an expanded instruction using register renaming resources and control logic. A first constituent instruction generates intermediate results, is assigned a physical register number (PRN) in a constituent instruction rename table, and writes the intermediate results to the physical register identified by that PRN. A second constituent instruction performs a lookup in the constituent instruction rename table and reads the intermediate results from the physical register. Constituent instruction rename logic tracks the constituent instructions through the pipeline, and deletes the constituent instruction rename table entry and returns the PRN to a free list when the second constituent instruction has read the intermediate results.
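The allocate/write/lookup/release flow can be illustrated with a minimal software sketch. The class, method, and tag names below (ConstituentRenameTable, write_intermediate, read_intermediate) are illustrative assumptions, not terms from the abstract.

```python
# Minimal sketch of passing an intermediate result between two constituent
# instructions via a rename table and a physical register file.
# All names here are illustrative.

class ConstituentRenameTable:
    def __init__(self, num_physical_regs):
        self.free_list = list(range(num_physical_regs))  # PRNs not in use
        self.physical_regs = [0] * num_physical_regs     # physical register file
        self.table = {}                                   # tag -> PRN

    def write_intermediate(self, tag, value):
        """First constituent instruction: allocate a PRN and write the result."""
        prn = self.free_list.pop(0)          # assign a PRN from the free list
        self.table[tag] = prn                # record it in the rename table
        self.physical_regs[prn] = value      # write the intermediate result
        return prn

    def read_intermediate(self, tag):
        """Second constituent instruction: look up the PRN and read the result."""
        prn = self.table[tag]
        value = self.physical_regs[prn]
        # The reader has consumed the value: retire the rename table entry
        # and return the PRN to the free list.
        del self.table[tag]
        self.free_list.append(prn)
        return value


crt = ConstituentRenameTable(num_physical_regs=8)
crt.write_intermediate(tag="expanded_op_0", value=42)    # first constituent instruction
assert crt.read_intermediate(tag="expanded_op_0") == 42  # second constituent instruction
```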
Abstract:
A conditional instruction architected to receive one or more operands as inputs, to output to a target the result of an operation performed on the operands if a condition is satisfied, and not to provide an output if the condition is not satisfied, is executed so that it unconditionally provides an output to the target. The conditional instruction obtains the prior value of the target (that is, the value produced by the most recent instruction preceding the conditional instruction that updated that target). The condition is evaluated. If the condition is satisfied, the operation is performed and its result is output to the target. If the condition is not satisfied, the prior value is output to the target. Subsequent instructions may therefore rely on the target as an operand source (whether written to a register or forwarded to the instruction) prior to the condition evaluation.
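A hypothetical software model of this behavior follows: the conditional instruction always produces a value for the target, either the new result or the prior target value, so a dependent instruction always sees a single producer. Function and register names are assumptions made for illustration.

```python
# Sketch: a conditional instruction that always writes its target.
# If the condition holds, the target gets the operation result; otherwise it
# gets the prior value of the target, so downstream consumers always see an
# unconditional producer. Names here are illustrative only.

def execute_conditional(op, operands, condition, prior_target_value):
    if condition:
        return op(*operands)        # condition satisfied: output the result
    return prior_target_value       # condition not satisfied: re-output the prior value


regs = {"r1": 7, "r2": 5, "r3": 100}

# e.g. a conditional add targeting r3: r3 = r1 + r2 if the condition holds, else old r3
regs["r3"] = execute_conditional(lambda a, b: a + b,
                                 (regs["r1"], regs["r2"]),
                                 condition=True,
                                 prior_target_value=regs["r3"])
assert regs["r3"] == 12

regs["r3"] = execute_conditional(lambda a, b: a + b,
                                 (regs["r1"], regs["r2"]),
                                 condition=False,
                                 prior_target_value=regs["r3"])
assert regs["r3"] == 12  # prior value preserved when the condition fails
```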
摘要:
A processor includes a conditional branch instruction prediction mechanism that generates weighted branch prediction values. For weakly weighted predictions, which tend to be less accurate than strongly weighted predictions, the power associating with speculatively filling and subsequently flushing the cache is saved by halting instruction prefetching. Instruction fetching continues when the branch condition is evaluated in the pipeline and the actual next address is known. Alternatively, prefetching may continue out of a cache. To avoid displacing good cache data with instructions prefetched based on a mispredicted branch, prefetching may be halted in response to a weakly weighted prediction in the event of a cache miss.
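A small sketch of the prefetch-gating policy is given below, using a 2-bit saturating counter as the weighted prediction. The state names, the should_prefetch helper, and the halt_only_on_miss flag are illustrative assumptions, not the patented mechanism itself.

```python
# Sketch of gating instruction prefetch on branch-prediction confidence.
# A 2-bit saturating counter gives strong/weak taken and not-taken states;
# the policy names and thresholds below are illustrative.

STRONG_NOT_TAKEN, WEAK_NOT_TAKEN, WEAK_TAKEN, STRONG_TAKEN = range(4)

def is_weak(counter):
    return counter in (WEAK_NOT_TAKEN, WEAK_TAKEN)

def should_prefetch(counter, hits_in_cache, halt_only_on_miss=False):
    """Decide whether to keep prefetching past a predicted branch."""
    if not is_weak(counter):
        return True                  # strong prediction: prefetch as usual
    if halt_only_on_miss:
        return hits_in_cache         # weak prediction: prefetch only out of the cache
    return False                     # weak prediction: halt until the branch resolves


assert should_prefetch(STRONG_TAKEN, hits_in_cache=False) is True
assert should_prefetch(WEAK_TAKEN, hits_in_cache=False) is False
assert should_prefetch(WEAK_TAKEN, hits_in_cache=True, halt_only_on_miss=True) is True
assert should_prefetch(WEAK_TAKEN, hits_in_cache=False, halt_only_on_miss=True) is False
```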
Abstract:
A renaming register file complex for saving power is described. A mapping unit transforms an instruction register number (IRN) to a logical register number (LRN). The renaming register file maps an LRN to a physical register number (PRN), there being more physical registers than are addressable by direct use of the IRN. The renaming register file uses a content addressable memory (CAM) to provide the mapping function. The renaming register file CAM further uses current processor state information to selectively enable tag comparators, minimizing power in accessing registers. When a tag comparator is not enabled, it remains in a low-power state. A processor using a renaming register file with low-power features is also described.
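The sketch below models the CAM lookup with selectively enabled tag comparators. The gating condition used here (only compare valid entries belonging to the current context) is a stand-in for the processor-state-based enable described above; class names, the context field, and the comparison counter are all assumptions.

```python
# Sketch of a CAM-style renaming register file in which tag comparators are
# enabled selectively; disabled comparators contribute no "comparison" (a
# proxy for staying in a low-power state). All names are illustrative.

class RenamingRegisterFile:
    def __init__(self, num_physical_regs):
        # Each physical register has a tag entry: valid bit, LRN, and context.
        self.tags = [{"valid": False, "lrn": None, "ctx": None}
                     for _ in range(num_physical_regs)]
        self.data = [0] * num_physical_regs
        self.comparisons = 0          # proxy for comparator power

    def map_and_write(self, prn, lrn, ctx, value):
        self.tags[prn] = {"valid": True, "lrn": lrn, "ctx": ctx}
        self.data[prn] = value

    def lookup(self, lrn, ctx):
        """CAM lookup of an LRN; only enabled comparators are exercised."""
        for prn, tag in enumerate(self.tags):
            if not (tag["valid"] and tag["ctx"] == ctx):
                continue              # comparator not enabled: low-power state
            self.comparisons += 1     # comparator enabled: one tag compare
            if tag["lrn"] == lrn:
                return self.data[prn]
        return None


def irn_to_lrn(irn, ctx):
    # Illustrative mapping-unit stand-in: fold context into the register number.
    return (ctx, irn)


rrf = RenamingRegisterFile(num_physical_regs=8)
rrf.map_and_write(prn=3, lrn=irn_to_lrn(5, ctx="user"), ctx="user", value=99)
assert rrf.lookup(irn_to_lrn(5, ctx="user"), ctx="user") == 99
print("tag comparisons performed:", rrf.comparisons)  # only in-context, valid entries compared
```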
Abstract:
Systems and methods are disclosed for maintaining an instruction cache including extended cache lines and page attributes for main cache line portions of the extended cache lines and, at least for one or more predefined potential page-crossing instruction locations, additional page attributes for extra data portions of the corresponding extended cache lines. In addition, systems and methods are disclosed for processing page-crossing instructions fetched from an instruction cache having extended cache lines.
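A minimal data-structure sketch of an extended cache line follows, assuming the main portion and the extra data portion each carry their own page attributes and that a page-crossing fetch must satisfy both. Field names and the executable-permission check are illustrative assumptions.

```python
# Sketch of an instruction-cache line extended with extra bytes from the next
# sequential line, with separate page attributes for the main portion and,
# for potential page-crossing instruction locations, for the extra portion.

from dataclasses import dataclass

@dataclass
class ExtendedCacheLine:
    main_bytes: bytes          # the normal cache-line data
    extra_bytes: bytes         # extra data from the start of the next line
    main_page_attrs: dict      # page attributes covering main_bytes
    extra_page_attrs: dict     # page attributes covering extra_bytes

def fetch_instruction(line, offset, length):
    """Fetch 'length' bytes at 'offset'; a page-crossing fetch must satisfy
    the page attributes of both the main and the extra portions."""
    data = (line.main_bytes + line.extra_bytes)[offset:offset + length]
    crosses = offset + length > len(line.main_bytes)
    attrs = [line.main_page_attrs] + ([line.extra_page_attrs] if crosses else [])
    if not all(a.get("executable", False) for a in attrs):
        raise PermissionError("page attributes forbid execution")
    return data


line = ExtendedCacheLine(main_bytes=bytes(range(32)),
                         extra_bytes=bytes(range(32, 36)),
                         main_page_attrs={"executable": True},
                         extra_page_attrs={"executable": True})
# A 4-byte instruction starting 2 bytes before the end of the main portion
# spills into the extra portion, so both attribute sets are checked.
print(fetch_instruction(line, offset=30, length=4))
```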
Abstract:
The disclosure relates to predicting simple and polymorphic branch instructions. An embodiment of the disclosure detects that a program instruction is a branch instruction, determines whether a program counter for the branch instruction is stored in a program counter filter, and, if the program counter is stored in the program counter filter, prevents the program counter from being stored in a first level predictor.
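The filtering step can be shown in a few lines. How a program counter comes to be in the filter (for example, after a branch is classified as simple) is not modeled here; the variable names and the predictor's initial state are assumptions for illustration.

```python
# Sketch of the filter check: when a branch is detected, its program counter
# (PC) is looked up in a PC filter; a hit prevents the PC from being installed
# in the first-level predictor. All names are illustrative.

pc_filter = set()             # PCs that should bypass the first-level predictor
first_level_predictor = {}    # PC -> prediction state

def on_branch_detected(pc):
    if pc in pc_filter:
        return                                        # filtered: keep out of level 1
    first_level_predictor.setdefault(pc, "weakly_taken")


pc_filter.add(0x1000)          # assume this branch was previously classified as filterable
on_branch_detected(0x1000)     # filtered branch: not installed
on_branch_detected(0x2000)     # unfiltered branch: installed in the first-level predictor
assert 0x1000 not in first_level_predictor
assert 0x2000 in first_level_predictor
```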
Abstract:
A method of processing a plurality of instructions in multiple pipeline stages within a pipeline processor is disclosed. The method partially or wholly executes a stalled instruction in a pipeline stage that precedes the execution stage and whose function is other than instruction execution. Partially or wholly executing the instruction before the execution stage speeds up its execution and allows the processor to utilize its resources more effectively, thus increasing the processor's efficiency.
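A toy two-stage model of the idea follows: while an instruction is stalled in a pre-execute stage and its operands are already available, the result is computed there, and the execute stage merely commits it. The stage names, the operand-readiness test, and restricting the early path to an add are illustrative assumptions.

```python
# Sketch of partially/wholly executing a stalled instruction before the
# execute stage. While an add is stalled in decode with both operands ready,
# the decode stage computes the result; the execute stage just commits it.

def decode_stage(instr, regs, stalled):
    if stalled and instr["op"] == "add" and all(s in regs for s in instr["src"]):
        # Early, pre-execute-stage execution of the stalled instruction.
        instr["result"] = regs[instr["src"][0]] + regs[instr["src"][1]]
    return instr

def execute_stage(instr, regs):
    if "result" not in instr:              # not already executed early
        instr["result"] = regs[instr["src"][0]] + regs[instr["src"][1]]
    regs[instr["dst"]] = instr["result"]
    return regs


regs = {"r1": 3, "r2": 4}
instr = {"op": "add", "src": ("r1", "r2"), "dst": "r3"}
instr = decode_stage(instr, regs, stalled=True)   # result computed while stalled
regs = execute_stage(instr, regs)                 # execute stage commits the early result
assert regs["r3"] == 7
```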