Abstract:
A computer system including a microprocessor employing a reorder buffer is provided which stores a last in buffer (LIB) indication corresponding to each instruction. The last in buffer indication indicates whether or not the corresponding instruction is the last, in program order, of the instructions within the buffer to update the storage location defined as the destination of that instruction. The LIB indication is included in the dependency checking comparisons. A dependency is indicated for a given source operand and a destination operand within the reorder buffer if the operand specifiers match and the corresponding LIB indication indicates that the instruction corresponding to the destination operand is the last to update the corresponding storage location. At most one of the dependency comparisons for a given source operand can indicate a dependency. According to one embodiment, the reorder buffer employs a line-oriented configuration. Concurrently decoded instructions are stored into a line of storage, and the concurrently decoded instructions are retired as a unit. A last in line (LIL) indication is stored for each instruction in the line. The LIL indication indicates whether or not the instruction is the last within the line storing that instruction to update the storage location defined as the destination of that instruction. The LIL indications for a line can be used as write enables for the register file.
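A rough functional sketch of the LIB-based dependency check follows; the buffer size, entry layout, and the rob_allocate/rob_dependency helpers are illustrative assumptions, not taken from the abstract:

```c
#include <stdbool.h>
#include <stdio.h>

#define ROB_SIZE 8

/* Hypothetical reorder-buffer entry: a destination register
 * specifier plus the last-in-buffer (LIB) bit. */
typedef struct {
    bool valid;
    int  dest;   /* destination register specifier */
    bool lib;    /* set if this entry is the last, in program order,
                    within the buffer to update 'dest' */
} RobEntry;

static RobEntry rob[ROB_SIZE];
static int rob_tail = 0;

/* Allocate an entry for a newly decoded instruction.  The new entry
 * takes the LIB bit; any older entry with the same destination has
 * its LIB bit cleared, so at most one entry per register carries it. */
void rob_allocate(int dest)
{
    for (int i = 0; i < ROB_SIZE; i++)
        if (rob[i].valid && rob[i].dest == dest)
            rob[i].lib = false;
    rob[rob_tail].valid = true;
    rob[rob_tail].dest  = dest;
    rob[rob_tail].lib   = true;
    rob_tail = (rob_tail + 1) % ROB_SIZE;
}

/* Dependency check for one source operand: a match requires both an
 * equal specifier and an asserted LIB bit, so at most one of the
 * parallel comparisons can indicate a dependency. */
int rob_dependency(int src)
{
    for (int i = 0; i < ROB_SIZE; i++)
        if (rob[i].valid && rob[i].dest == src && rob[i].lib)
            return i;   /* entry that will supply the operand */
    return -1;          /* no dependency within the buffer */
}

int main(void)
{
    rob_allocate(3);   /* older write to r3 */
    rob_allocate(3);   /* newer write to r3 takes the LIB bit */
    printf("source r3 depends on ROB entry %d\n", rob_dependency(3));
    return 0;
}
```

Because allocation clears the LIB bit of any older entry with the same destination, the match condition (specifier equality and LIB asserted) can hold for at most one entry, as the abstract states.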
Abstract:
A set-associative cache memory configured to use multiple portions of a requested address in parallel to quickly access data from a data array based upon stored way predictions is provided. The cache memory comprises a plurality of memory locations, a plurality of storage locations configured to store way predictions, a decoder, a plurality of pass transistors, and a sense amp unit. A subset of the storage locations is selected according to a first portion of the requested address. The decoder is configured to receive and decode a second portion of the requested address. The decoded portion of the address is used to select a first subset of the data array based upon the way predictions stored within the selected subset of storage locations. The pass transistors are configured to select a second subset of the data array according to a third portion of the requested address. The sense amp unit then reads a cache line from the intersection of the first subset and the second subset within the data array.
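A minimal C model of this parallel address decomposition is sketched below; the array dimensions, single predicted way per set, and the predicted_read helper are assumptions for illustration, and the real circuit performs these selections with decoders, pass transistors, and sense amps rather than array indexing:

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_SETS   64
#define NUM_WAYS   4
#define LINE_BYTES 32

/* Stand-ins for the data array and the way-prediction storage. */
static uint8_t data_array[NUM_SETS][NUM_WAYS][LINE_BYTES];
static int     way_prediction[NUM_SETS];   /* predicted way per set */

/* Three address portions are used in parallel: one selects the
 * way-prediction entries, one (after decoding) selects rows of the
 * data array via the stored prediction, and one steers the pass
 * transistors to a column subset; the sense amps read the
 * intersection. */
uint8_t predicted_read(uint32_t addr)
{
    uint32_t offset = addr % LINE_BYTES;               /* column select */
    uint32_t set    = (addr / LINE_BYTES) % NUM_SETS;  /* row select   */
    int      way    = way_prediction[set];             /* stored guess */

    /* Functionally, the read returns the byte at the intersection of
     * the predicted way's row and the selected column.  A tag check
     * (not shown) must later verify the prediction. */
    return data_array[set][way][offset];
}

int main(void)
{
    data_array[5][way_prediction[5]][3] = 0xAB;
    printf("read: 0x%02X\n", predicted_read(5 * LINE_BYTES + 3));
    return 0;
}
```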
Abstract:
An apparatus for prediction of loop instructions is provided. Loop instructions decrement the value in a counter register and branch to a target address (specified by an instruction operand) if the decremented value of the counter register is greater than zero. The apparatus comprises a loop detection unit that detects the presence of a loop instruction in the instruction stream. An indication of the loop instruction is conveyed to a reorder buffer which stores speculative register values. If the apparatus is not currently processing the loop instruction, a compare value, corresponding to the value of the counter register prior to execution of the loop instruction, is conveyed from the reorder buffer to a loop prediction unit. The loop prediction unit also increments a counter value upon receiving each indication of the loop instruction. This counter value is then compared to the compare value conveyed from the reorder buffer. If the counter value is one less than the compare value, a signal is asserted indicating that the loop instruction should be predicted not-taken upon the next iteration of the loop. In this manner, loop prediction accuracy may be increased by correctly predicting the loop instruction not-taken. Because loops are commonly found in a variety of applications, increasing the accuracy of loop prediction, even slightly, may have a beneficial effect on performance. The loop operation is particularly important in scientific applications, where it may be used to perform various digital signal processing routines and to traverse arrays.
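The counting behavior can be sketched as follows; the LoopPredictor structure and predict_next_not_taken helper are invented here for illustration:

```c
#include <stdbool.h>
#include <stdio.h>

/* Functional model of the predictor described above.  The compare
 * value is the counter register's value before the loop instruction
 * first executes (conveyed from the reorder buffer); the predictor
 * counts sightings of the loop instruction. */
typedef struct {
    int  compare_value;
    int  count;
    bool active;
} LoopPredictor;

/* Called on each detection of the loop instruction.  Returns true
 * when the signal should assert, i.e. when the next iteration should
 * be predicted not-taken. */
bool predict_next_not_taken(LoopPredictor *lp, int counter_before_loop)
{
    if (!lp->active) {                 /* not currently processing it */
        lp->active = true;
        lp->compare_value = counter_before_loop;
        lp->count = 0;
    }
    lp->count++;
    return lp->count == lp->compare_value - 1;
}

int main(void)
{
    LoopPredictor lp = {0};
    for (int i = 1; i <= 4; i++)       /* counter register starts at 4 */
        if (predict_next_not_taken(&lp, 4))
            printf("after iteration %d: predict next not-taken\n", i);
    return 0;
}
```

With an initial counter value of 4, the loop instruction is taken on its first three executions and falls through on the fourth; the signal asserts when the count reaches 3, so the fourth execution is correctly predicted not-taken.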
Abstract:
A branch prediction apparatus is provided which stores multiple branch selectors corresponding to instruction bytes within a cache line of instructions or portion thereof. The branch selectors identify a branch prediction to be selected if the corresponding instruction byte is the byte indicated by the offset of the fetch address used to fetch the cache line. Instead of comparing pointers to the branch instructions with the offset of the fetch address, the branch prediction is selected simply by decoding the offset of the fetch address and choosing the corresponding branch selector. The branch prediction apparatus may operate at higher frequencies (i.e., with shorter clock cycles) than if pointers to the branch instructions were compared with the fetch address (a greater-than or less-than comparison). The branch selectors directly determine which branch prediction is appropriate according to the instructions being fetched, thereby decreasing the amount of logic employed to select the branch prediction.
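The following sketch shows why selection reduces to an indexed lookup; the selector encodings, line size, and placement of the example branch are assumed for illustration:

```c
#include <stdio.h>

#define LINE_BYTES 16

/* Hypothetical selector encoding: 0 = sequential, 1 = branch
 * prediction slot one.  Here a predicted-taken branch ends at byte
 * 9, so offsets 0..9 select its prediction and offsets 10..15 fall
 * through to sequential. */
static int branch_selectors[LINE_BYTES] = {
    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0
};

/* Selection decodes the fetch-address offset and indexes the
 * selector array; no greater-than/less-than comparison against
 * branch-instruction pointers is required. */
int select_prediction(unsigned fetch_addr)
{
    return branch_selectors[fetch_addr & (LINE_BYTES - 1)];
}

int main(void)
{
    printf("offset 7 -> selector %d, offset 12 -> selector %d\n",
           select_prediction(7), select_prediction(12));
    return 0;
}
```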
Abstract:
A branch prediction unit stores a set of branch selectors corresponding to each of a group of contiguous instruction bytes stored in an instruction cache. Each branch selector identifies the branch prediction to be selected if a fetch address corresponding to that branch selector is presented. In order to minimize the number of branch selectors stored for a group of contiguous instruction bytes, the group is divided into multiple byte ranges. The largest byte range may include a number of bytes equal to the length of the shortest branch instruction in the instruction set (exclusive of the return instruction). For example, the shortest branch instruction may be two bytes in one embodiment; the largest byte range is therefore two bytes in the example. Since the branch selectors as a group change value (i.e. indicate a different branch instruction) only at the end byte of a predicted-taken branch instruction, fewer branch selectors may be stored than the number of bytes within the group.
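Extending the previous sketch to byte ranges, one selector per two-byte range suffices; the group size, range size, and example selector values below are assumptions:

```c
#include <stdio.h>

#define GROUP_BYTES 16
#define RANGE_BYTES 2   /* length of the shortest branch instruction */
#define NUM_RANGES  (GROUP_BYTES / RANGE_BYTES)

/* One selector per two-byte range instead of per byte: eight
 * selectors cover a 16-byte group, halving the storage.  Example: a
 * predicted-taken branch ends within range 4, so ranges 0..4 select
 * its prediction (slot 1) and ranges 5..7 select sequential (0). */
static int range_selectors[NUM_RANGES] = { 1, 1, 1, 1, 1, 0, 0, 0 };

int select_prediction(unsigned fetch_addr)
{
    unsigned offset = fetch_addr & (GROUP_BYTES - 1);
    return range_selectors[offset / RANGE_BYTES];
}

int main(void)
{
    printf("offset 3 -> selector %d, offset 12 -> selector %d\n",
           select_prediction(3), select_prediction(12));
    return 0;
}
```

The compression works because selectors change value only at the end byte of a predicted-taken branch, and no branch (other than a return) is shorter than one range.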
Abstract:
An apparatus including address generation units, corresponding reservation stations, and a speculative register file is provided. Decode units provide memory operation information to the corresponding reservation stations while the associated instructions are being decoded. The speculative register file stores speculative register values corresponding to previously decoded instructions. The speculative register values are generated prior to execution of the previously decoded instructions. If the register operands included in the address operands of an instruction are stored in the speculative register file, then the memory operation may be passed through the corresponding reservation station to an address generation unit. The address generation unit generates the data address from the address operands and accesses a data cache while register operands corresponding to the instruction are requested from a register file and reorder buffer.
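A simplified model of this early address-generation path follows, assuming a base + index + displacement address form and hypothetical SpecRegFile/early_address names:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_REGS 8

/* Hypothetical speculative register file: values produced ahead of
 * execution for previously decoded instructions. */
typedef struct {
    uint32_t value[NUM_REGS];
    bool     valid[NUM_REGS];
} SpecRegFile;

/* If every register operand among the address operands is
 * speculatively available, form the data address immediately so the
 * memory operation can pass through the reservation station to an
 * address generation unit. */
bool early_address(const SpecRegFile *srf, int base_reg, int index_reg,
                   uint32_t displacement, uint32_t *addr_out)
{
    if (!srf->valid[base_reg] || !srf->valid[index_reg])
        return false;   /* operands pending: wait in the reservation station */
    *addr_out = srf->value[base_reg] + srf->value[index_reg] + displacement;
    return true;        /* data cache may be accessed in parallel with the
                           register file / reorder buffer request */
}

int main(void)
{
    SpecRegFile srf = {0};
    srf.value[1] = 0x1000; srf.valid[1] = true;   /* base  */
    srf.value[2] = 0x20;   srf.valid[2] = true;   /* index */
    uint32_t addr;
    if (early_address(&srf, 1, 2, 8, &addr))
        printf("early data address: 0x%X\n", addr);
    return 0;
}
```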
Abstract:
A superscalar microprocessor is provided that includes a predecode unit configured to predecode variable byte-length instructions prior to their storage within an instruction cache. The predecode unit is configured to generate a plurality of predecode bits for each instruction byte. The plurality of predecode bits associated with each instruction byte include an end bit and an ROP bit that indicates the number of microinstructions required to implement the instruction. The plurality of predecode bits are collectively referred to as a predecode tag. An instruction alignment unit then uses the predecode tags to identify microinstructions. The instruction alignment unit dispatches the microinstructions simultaneously to a plurality of decode units which form fixed issue positions within the superscalar microprocessor. Because the instruction alignment unit identifies microinstructions, the multiplexing of instructions from the instruction alignment unit to the decoders is simplified. Accordingly, relatively fast multiplexing may be attained, and high performance may be accommodated.
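One plausible encoding of the predecode tag and the resulting alignment scan is sketched below; the field widths and the align_instructions helper are assumptions, not the patented layout:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical predecode tag: one per instruction byte, with an end
 * bit marking the last byte of an instruction and a field counting
 * the microinstructions (ROPs) the instruction maps to. */
typedef struct {
    uint8_t end : 1;   /* last byte of an instruction */
    uint8_t rop : 2;   /* number of ROPs required */
} PredecodeTag;

/* Scan a fetched line's tags and record where each instruction ends;
 * alignment reduces to counting end bits rather than parsing
 * variable-length opcodes, simplifying the multiplexing to the
 * fixed issue positions. */
int align_instructions(const PredecodeTag *tags, int nbytes,
                       int *end_offsets, int max_insns)
{
    int n = 0;
    for (int i = 0; i < nbytes && n < max_insns; i++)
        if (tags[i].end)
            end_offsets[n++] = i;
    return n;   /* instructions identified for dispatch this cycle */
}

int main(void)
{
    /* Tags for an 8-byte fetch containing three instructions ending
     * at bytes 2, 3, and 7 (hypothetical). */
    PredecodeTag tags[8] = {
        {0,1},{0,1},{1,1},{1,1},{0,2},{0,2},{0,2},{1,2}
    };
    int ends[4];
    int n = align_instructions(tags, 8, ends, 4);
    printf("%d instructions; last ends at byte %d\n", n, ends[n - 1]);
    return 0;
}
```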
Abstract:
A microprocessor configured to detect a memory operation having a predefined data address is provided. The predefined data address indicates that subsequent instructions belong to an alternate instruction set. In one embodiment, a second memory operation having the predefined data address indicates that instructions subsequent to the second memory operation belong to the original instruction set. The memory operations effectively provide a boundary between the instructions from dissimilar instruction sets. Instructions are routed to an execution unit configured to execute the instruction set indicated by the most recently detected memory operation having the predefined address. Each instruction sequence within the program may be coded using the instruction set which most efficiently executes the function corresponding to the instruction sequence. The program may be executed more quickly than an equivalent program coded entirely in either instruction set. In one embodiment, the microprocessor executes the x86 instruction set and the ADSP 2171 instruction set.
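A toy model of the boundary detection follows; the toggle address value and the observe_memory_op helper are hypothetical:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical predefined data address serving as the boundary
 * marker between instruction sets. */
#define ISA_TOGGLE_ADDR 0xFFFF0000u

typedef enum { ISA_X86, ISA_DSP } IsaSelect;

static IsaSelect current_isa = ISA_X86;   /* instruction routing state */

/* Inspect each memory operation's data address: a match on the
 * predefined address flips which execution unit receives the
 * instructions that follow. */
void observe_memory_op(uint32_t data_addr)
{
    if (data_addr == ISA_TOGGLE_ADDR)
        current_isa = (current_isa == ISA_X86) ? ISA_DSP : ISA_X86;
}

int main(void)
{
    observe_memory_op(0x1234u);          /* ordinary access: no change  */
    observe_memory_op(ISA_TOGGLE_ADDR);  /* boundary: route to DSP unit */
    printf("current ISA: %s\n", current_isa == ISA_DSP ? "DSP" : "x86");
    return 0;
}
```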
Abstract:
A microprocessor employs a local cache for each functional unit, located physically close to that functional unit. The local caches are relatively small as compared to a central cache optionally included in the microprocessor as well. Because the local caches are small, internal interconnection delays within the local caches may be less than those experienced by the central cache. Additionally, the physical proximity of the local cache to the functional unit which accesses the local cache reduces the interconnect delay between the local cache and the functional unit. If a memory operand misses in the functional unit's local cache but hits in a remote cache (either a different local cache or the central cache), the cache line containing the memory operand is transferred to the local cache experiencing the miss. According to one embodiment including multiple symmetrical functional units, the local caches coupled to the symmetrical functional units are restricted to storing different cache lines from each other. For example, a number of bits of the tag address may be used to select which of the local caches is to store the corresponding cache line. A data prediction scheme for predicting the functional unit to which a given instruction should be dispatched may be implemented, wherein the prediction is formed based upon the cache line storing the memory operand during a previous execution of the given instruction.
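A sketch of the tag-bit partitioning rule follows, assuming two symmetrical units and a home_cache helper invented here:

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_LOCAL_CACHES 2   /* one per symmetrical functional unit */

/* Low-order tag-address bits pick which symmetrical local cache may
 * store a given line, so the local caches hold disjoint lines.  The
 * same function can drive the dispatch prediction: send an
 * instruction to the unit whose local cache held its memory operand
 * on the previous execution. */
int home_cache(uint32_t tag_addr)
{
    return (int)(tag_addr & (NUM_LOCAL_CACHES - 1));
}

int main(void)
{
    printf("line tag 0x40 -> local cache %d\n", home_cache(0x40));
    printf("line tag 0x41 -> local cache %d\n", home_cache(0x41));
    return 0;
}
```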
Abstract:
A superscalar microprocessor configured to speculatively generate register values associated with a particular register is provided. Multiple register values are generated in parallel, wherein each speculatively generated register value accounts for modifications of the register value by each of the instructions prior to the instruction for which the register value is generated. Instructions which are dependent upon each other for the register values thus generated may be executed concurrently. In one specific embodiment, the present microprocessor generates register values for the ESP register. The speculatively generated register value resulting from the modifications performed by the instructions decoded during a clock cycle is stored in a speculative register file along with constants used to generate the register value associated with each individual instruction. When a mispredicted branch instruction is detected, the register value generated during the decode of the mispredicted branch instruction may be adjusted using the stored constants. The adjustment performed reflects the value of the register at the execution of the mispredicted branch instruction.
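A sketch of this cumulative-constant scheme for ESP follows, assuming per-instruction deltas (push/pop style) and hypothetical helper names:

```c
#include <stdio.h>

#define ISSUE_WIDTH 4

/* Each concurrently decoded instruction i modifies ESP by delta[i]
 * (e.g. push = -4, pop = +4).  The value instruction i sees is the
 * line's starting ESP plus the sum of the deltas of the instructions
 * before it; those running sums are the stored constants. */
void speculate_esp(int esp_in, const int *delta, int n,
                   int *esp_for_insn, int *constants, int *esp_out)
{
    int sum = 0;
    for (int i = 0; i < n; i++) {
        constants[i]    = sum;            /* stored for recovery */
        esp_for_insn[i] = esp_in + sum;   /* value each insn sees */
        sum += delta[i];
    }
    *esp_out = esp_in + sum;   /* written to the speculative register file */
}

/* On a mispredicted branch at position i, the ESP value at that
 * branch's execution is recovered from the stored constant. */
int recover_esp(int esp_in, const int *constants, int i)
{
    return esp_in + constants[i];
}

int main(void)
{
    int deltas[ISSUE_WIDTH] = { -4, -4, +4, -4 };  /* push, push, pop, push */
    int per_insn[ISSUE_WIDTH], constants[ISSUE_WIDTH], esp_out;
    speculate_esp(1000, deltas, ISSUE_WIDTH, per_insn, constants, &esp_out);
    printf("ESP at insn 2: %d, final ESP: %d\n",
           recover_esp(1000, constants, 2), esp_out);
    return 0;
}
```

Because every per-instruction value is a constant offset from the line's starting ESP, instructions that depend on one another only through ESP can still issue and execute concurrently.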