Abstract:
An instruction fetch unit that employs sequential way prediction is provided. The instruction fetch unit comprises a control unit configured to convey a first index and a first way to an instruction cache in a first clock cycle. The first index and first way select a first group of contiguous instruction bytes within the instruction cache, as well as a corresponding branch prediction block. The branch prediction block is stored in a branch prediction storage, and includes a predicted sequential way value. The control unit is further configured to convey a second index and a second way to the instruction cache in a second clock cycle succeeding the first clock cycle. This second index and second way select a second group of contiguous instruction bytes from the instruction cache. The second way is selected to be the predicted sequential way value stored in the branch prediction block corresponding to the first group of contiguous instruction bytes in response to a branch prediction algorithm employed by the control unit predicting a sequential execution path. Advantageously, a set associative instruction cache utilizing this method of way prediction may operate at higher frequencies (i.e., shorter clock cycle times) than if tag comparison were used to select the correct way.
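As a concrete illustration of the sequential way prediction described above, the following Python sketch models a control unit consulting a predicted sequential way stored alongside each cache entry. The class and field names (SequentialWayPredictor, BranchPredictionBlock, seq_way) and the next-index arithmetic are illustrative assumptions, not structures defined in the abstract.

from dataclasses import dataclass

@dataclass
class BranchPredictionBlock:
    seq_way: int = 0   # predicted way of the next sequential group (assumed field)

class SequentialWayPredictor:
    def __init__(self, num_sets, num_ways):
        self.num_sets = num_sets
        # One branch prediction block per (index, way) cache entry.
        self.blocks = [[BranchPredictionBlock() for _ in range(num_ways)]
                       for _ in range(num_sets)]

    def next_sequential_fetch(self, index, way):
        # Cycle N supplies (index, way); when the branch predictor chooses the
        # sequential path, cycle N+1 uses the next index and the stored way.
        seq_index = (index + 1) % self.num_sets
        seq_way = self.blocks[index][way].seq_way
        return seq_index, seq_way

    def train(self, index, way, actual_way):
        # After a way misprediction, record the way that actually hit.
        self.blocks[index][way].seq_way = actual_way

Because the second way comes from storage rather than from a tag comparison, the cache read for the second group can begin at the start of the second clock cycle, which is the source of the frequency advantage noted above.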
Abstract:
A superscalar microprocessor is provided employing a way prediction unit which predicts the next fetch address as well as the way of the instruction cache that the current fetch address hits in, while the instructions associated with the current fetch are being read from the instruction cache. The microprocessor may achieve high frequency operation while using an associative instruction cache. An instruction fetch can be made every clock cycle using the predicted fetch address from the way prediction unit until an incorrect next fetch address or an incorrect way is predicted. The instructions from the predicted way are provided to the instruction processing pipelines of the superscalar microprocessor each clock cycle.
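A minimal behavioral sketch of the prediction structure implied above, assuming a direct-indexed array that maps a fetch address to a predicted next fetch address and a predicted way; the class name, indexing scheme, and reset values are assumptions made for illustration.

class WayPredictionUnit:
    def __init__(self, num_entries):
        self.num_entries = num_entries
        # Each entry holds (predicted next fetch address, predicted way);
        # the zeros are arbitrary reset values.
        self.entries = [(0, 0)] * num_entries

    def predict(self, fetch_address):
        # Read out the prediction while the cache read for fetch_address is
        # still in progress.
        return self.entries[fetch_address % self.num_entries]

    def update(self, fetch_address, correct_next_address, correct_way):
        # Retrain after an incorrect next fetch address or an incorrect way.
        self.entries[fetch_address % self.num_entries] = (
            correct_next_address, correct_way)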
Abstract:
A superscalar microprocessor configured to speculatively generate register values associated with a particular register is provided. Multiple register values are generated in parallel, wherein each speculatively generated register value accounts for modifications of the register value by each of the instructions prior to the instruction for which the register value is generated. Instructions which are dependent upon each other for the register values thus generated may be executed concurrently. In one specific embodiment, the present microprocessor generates register values for the ESP register. The speculatively generated register value resulting from the modifications performed by the instructions decoded during a clock cycle is stored in a speculative register file along with constants used to generate the register value associated with each individual instruction. When a mispredicted branch instruction is detected, the register value generated during the decode of the mispredicted branch instruction may be adjusted using the stored constants. The adjustment performed reflects the value of the register at the execution of the mispredicted branch instruction.
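The parallel generation can be pictured as a prefix sum over per-instruction constants: each instruction's speculative register value is the incoming value plus the constants of all earlier instructions in the decode group. The sketch below uses stack-style constants (PUSH contributing -4, POP contributing +4) purely as an illustrative assumption.

def speculative_esp_values(esp_before, constants):
    # constants[i] is the signed change instruction i applies to the register.
    # Returns the value each instruction observes plus the final value.
    values = []
    running = esp_before
    for delta in constants:
        values.append(running)   # register value as seen by instruction i
        running += delta         # effect of instruction i
    return values, running

# Example: PUSH EAX; PUSH EBX; POP ECX decoded in one clock cycle.
values, final_esp = speculative_esp_values(0x1000, [-4, -4, +4])
# values == [0x1000, 0x0FFC, 0x0FF8] and final_esp == 0x0FFC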
Abstract:
A data memory unit having a load/store unit and a data cache is provided which allows store instructions that are part of a load-op-store instruction to be executed with one access to a data cache. The load/store unit is configured with a load/store buffer having a checked bit and a way field for each buffer storage location. For load-op-store instructions, the checked bit associated with the store portion of the instruction is set when the load portion of the instruction accesses and hits the data cache. Also, the way field associated with the store portion is set to the way of the data cache in which the load portion hits. The data cache is configured with a locking mechanism for each cache line stored in the data cache. When the load portion of a load-op-store instruction is executed, the associated line is locked such that the line will remain in the data cache until a store instruction executes. In this way, the store portion of the load-op-store instruction is guaranteed to hit the data cache. The store may then store its data into the data cache without first performing a read cycle to determine if the store address hits the data cache.
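The locking behavior might be sketched as follows; the DataCache class, its lock bits, and the usage sequence are illustrative assumptions rather than the patented circuitry, and miss handling is omitted.

class DataCache:
    def __init__(self, num_sets, num_ways):
        self.valid = [[False] * num_ways for _ in range(num_sets)]
        self.locked = [[False] * num_ways for _ in range(num_sets)]

    def load(self, index):
        # Load half of a load-op-store: on a hit, lock the line and report the
        # way, which the load/store buffer would record along with the checked
        # bit described above.
        for way, v in enumerate(self.valid[index]):
            if v:
                self.locked[index][way] = True
                return way
        return None   # miss: fill handling omitted in this sketch

    def store(self, index, way):
        # Store half: the line is guaranteed resident, so no read cycle is
        # needed before the write.
        assert self.valid[index][way] and self.locked[index][way]
        self.locked[index][way] = False

cache = DataCache(num_sets=4, num_ways=2)
cache.valid[1][0] = True    # pretend the line is already resident
hit_way = cache.load(1)     # load hits way 0 and locks the line
cache.store(1, hit_way)     # store writes directly using the saved way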
Abstract:
A predecode unit within a microprocessor predecodes a cache line of instruction bytes for storage within the instruction cache of the microprocessor. The predecode unit produces multiple shift amounts, each of which identifies the beginning of a particular instruction within the instruction cache line. The shift amounts are stored in the instruction cache with the instruction bytes, and are conveyed when the instruction bytes are fetched for execution by the microprocessor. An instruction alignment unit decodes the shift amounts to locate instructions within the fetched instruction bytes. Each shift amount directly identifies a corresponding instruction for dispatch, and therefore decoding the shift amount directly results in controls for shifting the instruction bytes such that the identified instruction is conveyed to a corresponding issue position. The number of shift amounts stored may be equal to the number of issue positions within the microprocessor. The instruction alignment unit scans the start and end byte predecode data (which is also provided by the predecode unit and stored in the instruction cache) to detect any additional instructions within the cache line (e.g., instructions not identified by the shift amounts). Additional shift amounts are generated and used by the instruction alignment unit to dispatch instructions during subsequent clock cycles.
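A rough behavioral model of the shift-amount idea follows; the function names, the stand-in length decoder, and the toy fixed-length encoding used in the example are assumptions for illustration only.

def predecode(line_bytes, instr_length, num_issue_positions):
    # Produce up to num_issue_positions start offsets within the cache line.
    shift_amounts = []
    offset = 0
    while offset < len(line_bytes) and len(shift_amounts) < num_issue_positions:
        shift_amounts.append(offset)
        offset += instr_length(line_bytes, offset)
    return shift_amounts

def align(line_bytes, shift_amounts):
    # Route each identified instruction's bytes toward its issue position
    # (the final group simply carries the remainder of the line).
    groups = []
    for i, start in enumerate(shift_amounts):
        end = shift_amounts[i + 1] if i + 1 < len(shift_amounts) else len(line_bytes)
        groups.append(line_bytes[start:end])
    return groups

# Example with a toy two-byte fixed-length encoding.
line = bytes(range(16))
starts = predecode(line, lambda b, off: 2, num_issue_positions=4)
# starts == [0, 2, 4, 6]; align(line, starts) yields four byte groups.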
Abstract:
A way prediction unit for a superscalar microprocessor is provided which predicts the next fetch address as well as the way of the instruction cache that the current fetch address hits in, while the instructions associated with the current fetch are being read from the instruction cache. The way prediction unit is intended for high frequency microprocessors in which associative caches tend to be clock cycle limiting, causing the instruction fetch mechanism to require more than one clock cycle between fetch requests. Therefore, an instruction fetch can be made every clock cycle using the predicted fetch address until an incorrect next fetch address or an incorrect way is predicted. The instructions from the predicted way are provided to the instruction processing pipelines of the superscalar microprocessor each clock cycle.
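The fetch-every-cycle behavior can be written as a simple loop that consumes predictions and recovers when verification detects a wrong next fetch address or a wrong way; predict and verify here are placeholder callables, not the unit's actual interfaces.

def run_fetch_pipeline(predict, verify, start, cycles):
    # predict(addr) -> (next_addr, next_way)
    # verify(addr, way) -> (ok, correct_addr, correct_way)
    addr, way = start
    delivered = []
    for _ in range(cycles):
        ok, correct_addr, correct_way = verify(addr, way)
        if not ok:
            # Incorrect prediction: discard this fetch and resume from the
            # corrected address and way on the next cycle.
            addr, way = correct_addr, correct_way
            continue
        delivered.append((addr, way))   # instruction bytes go to the pipelines
        addr, way = predict(addr)       # speculative fetch for the next cycle
    return delivered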
Abstract:
A microprocessor including a reorder buffer configured to store speculative register values regarding a particular register is provided. One value is stored for each set of concurrently decoded instructions which are outstanding within the microprocessor, reflecting the updates of each instruction within the set which updates the register. Additionally, the reorder buffer stores a set of constants indicative of the modification of the register by each instruction within the set of concurrently decoded instructions. Recovery from a mispredicted branch instruction (or from an instruction which causes an exception, a TRAP instruction, or an interrupt) may be achieved by utilizing the constants to adjust the result generated for the set of concurrently decoded instructions including the mispredicted branch instruction. The constants generated to indicate the modifications of the particular register may additionally allow multiple instructions having a dependency for the particular register to execute in parallel.
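The recovery arithmetic might look like the following sketch, where a reorder buffer line keeps the register value produced after the whole decode group together with each instruction's constant; the class name and the example numbers are assumptions.

from dataclasses import dataclass

@dataclass
class RobLine:
    value_after_line: int   # register value after the entire decode group
    constants: list         # per-instruction modification constants

    def value_at(self, position):
        # Undo the constants of the instructions that follow `position` to
        # recover the register value as of that instruction's execution.
        undo = sum(self.constants[position + 1:])
        return self.value_after_line - undo

# Example: three instructions modify the register by -4, -4, and +8, and the
# line's final value is 0x2000. A mispredicted branch at position 1 recovers
# 0x2000 - (+8) = 0x1FF8.
line = RobLine(value_after_line=0x2000, constants=[-4, -4, +8])
assert line.value_at(1) == 0x1FF8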
Abstract:
A load/store buffer is provided which allows both load memory operations and store memory operations to be stored within it. Because each storage location may contain either a load or a store memory operation, the number of storage locations available to load memory operations may be as large as the entire buffer; similarly, the number of storage locations available to store memory operations may be as large as the entire buffer. This invention improves use of silicon area for load and store buffers by implementing, in a smaller area, a performance-equivalent alternative to the separate load and store buffer approach previously used in many superscalar microprocessors.
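A minimal sketch of the unified buffer, in which either operation type may occupy any entry and therefore either may use the whole buffer when the other is absent; the entry format is an assumption.

from collections import deque

class LoadStoreBuffer:
    def __init__(self, num_entries):
        self.num_entries = num_entries
        self.entries = deque()   # holds ("load", addr) or ("store", addr, data)

    def allocate(self, op):
        if len(self.entries) == self.num_entries:
            return False         # buffer full: the memory operation must wait
        self.entries.append(op)
        return True

    def retire_oldest(self):
        return self.entries.popleft() if self.entries else None

buf = LoadStoreBuffer(num_entries=8)
# All eight entries may be loads, all may be stores, or any mix:
for i in range(8):
    assert buf.allocate(("load", 0x100 + 4 * i))
assert not buf.allocate(("store", 0x200, 0))   # ninth operation must wait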
Abstract:
A microprocessor system is disclosed that includes a first data cache that is shared by a first group of one or more program threads in a multi-thread mode and used by one program thread in a single-thread mode. A second data cache is shared by a second group of one or more program threads in the multi-thread mode and is used as a victim cache for the first data cache in the single-thread mode.
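One way to picture the mode-dependent cache roles is the routing sketch below; the lookup and evict functions and the dict-based caches are stand-ins chosen for illustration, not the patent's structures.

def lookup(addr, mode, cache1, cache2, thread_group=0):
    if mode == "multi":
        # Multi-thread mode: each thread group is steered to its own cache.
        cache = cache1 if thread_group == 0 else cache2
        return cache.get(addr)
    # Single-thread mode: probe the primary cache, then the victim cache.
    if addr in cache1:
        return cache1[addr]
    if addr in cache2:
        # Victim hit: move the line back into the primary cache.
        cache1[addr] = cache2.pop(addr)
        return cache1[addr]
    return None

def evict(addr, mode, cache1, cache2):
    # In single-thread mode a line evicted from cache1 becomes a victim in cache2.
    line = cache1.pop(addr, None)
    if mode == "single" and line is not None:
        cache2[addr] = line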