Abstract:
A method and logical apparatus for managing processing system resource use for speculative execution reduces the power and performance burden associated with inefficient speculative execution of program instructions. A measure of the efficiency of speculative execution is used to reduce the resources allocated to a thread while its speculation efficiency is low. The resource control applied may be the number of instruction fetches allocated to the thread or the number of execution time slices. Alternatively, or in combination, the size of a prefetch instruction storage allocated to the thread may be limited. The control condition may be a comparison of the number of correct or incorrect speculations to a threshold, a comparison of the number of correct to incorrect speculations, or a more complex evaluator such as the ratio of incorrect speculations to total speculations.
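As an illustration only, the following C++ sketch models one of the control conditions named above: the ratio of incorrect to total speculations compared against a threshold, used to throttle a thread's fetch allocation. The names (SpecStats, fetchSlotsForThread) and the threshold parameters are assumptions for the sketch, not elements of the abstract.

    #include <cstdint>

    // Hypothetical per-thread speculation statistics; names are illustrative only.
    struct SpecStats {
        uint32_t correct   = 0;   // resolved-correct speculations
        uint32_t incorrect = 0;   // resolved-incorrect speculations
    };

    // Returns the number of fetch slots to grant a thread in the next cycle,
    // throttling the thread while its misspeculation ratio stays above a threshold.
    uint32_t fetchSlotsForThread(const SpecStats& s,
                                 uint32_t normalSlots,
                                 uint32_t reducedSlots,
                                 double badRatioThreshold) {
        uint32_t total = s.correct + s.incorrect;
        if (total == 0) return normalSlots;            // no history yet
        double badRatio = static_cast<double>(s.incorrect) / total;
        return (badRatio > badRatioThreshold) ? reducedSlots : normalSlots;
    }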
Abstract:
The invention relates to a method and apparatus for scheduling a program thread of an application program for execution and executing the scheduled program thread on a data processing system. The method includes: providing an application program thread priority to a thread execution scheduler; selecting the program thread for execution from a plurality of program threads inserted into the thread execution queue, wherein the program thread is selected for execution using a round-robin selection scheme, and wherein the round-robin selection scheme selects the program thread based on an execution priority associated with the program thread; placing the program thread in a data processing execution queue within the data processing system; and removing the program thread from the thread execution queue after a successful execution of the program thread by the data processing system.
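A minimal sketch of such a priority-aware round-robin selection, assuming a software queue model; ProgramThread, ThreadExecutionQueue, and the cursor-based scan are illustrative names and choices, not the patent's structures.

    #include <algorithm>
    #include <deque>
    #include <optional>
    #include <string>

    // Illustrative thread descriptor; fields are assumptions, not the patent's.
    struct ProgramThread {
        std::string name;
        int priority;   // higher value = higher execution priority
    };

    class ThreadExecutionQueue {
    public:
        void insert(const ProgramThread& t) { queue_.push_back(t); }

        // Round-robin selection biased by priority: scan from the current cursor,
        // wrap around, and pick the first thread whose priority equals the highest
        // priority currently present in the queue.
        std::optional<ProgramThread> selectForExecution() {
            if (queue_.empty()) return std::nullopt;
            int best = queue_.front().priority;
            for (const auto& t : queue_) best = std::max(best, t.priority);
            for (size_t i = 0; i < queue_.size(); ++i) {
                size_t idx = (cursor_ + i) % queue_.size();
                if (queue_[idx].priority == best) {
                    cursor_ = (idx + 1) % queue_.size();
                    return queue_[idx];
                }
            }
            return std::nullopt;  // unreachable when queue_ is non-empty
        }

        // Remove the thread from the queue after it executed successfully.
        void removeAfterSuccess(const std::string& name) {
            for (size_t i = 0; i < queue_.size(); ++i) {
                if (queue_[i].name == name) { queue_.erase(queue_.begin() + i); break; }
            }
            if (!queue_.empty()) cursor_ %= queue_.size(); else cursor_ = 0;
        }

    private:
        std::deque<ProgramThread> queue_;
        size_t cursor_ = 0;
    };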
Abstract:
An improved method, apparatus, and computer instructions for grouping instructions processed in equal sized sets. A current set of instructions is received in an instruction cache for dispatching. A determination is made, using a history data structure, as to whether any instructions in the current set of instructions are part of a group that includes a prior set of instructions received in the instruction cache, wherein the history data structure contains data regarding instructions in the prior set of instructions. Those instructions are grouped into the group in response to a determination that they are part of the group. Instructions in the groups are dispatched to execution units using the history data structure, whereby invalid instruction dispatch groupings are avoided.
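The grouping step can be sketched in software under assumptions: GroupHistory and formGroups are hypothetical names, and the history is modeled simply as a map from fetch-set address to the number of leading instructions that join the prior set's group.

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    // Minimal, hypothetical model of the grouping history: for each fetch-set
    // address it remembers how many leading instructions of that set were grouped
    // with the previous set the last time the set was dispatched.
    class GroupHistory {
    public:
        void record(uint64_t setAddress, unsigned joinCount) {
            history_[setAddress] = joinCount;
        }
        unsigned joinCountFor(uint64_t setAddress) const {
            auto it = history_.find(setAddress);
            return it == history_.end() ? 0 : it->second;
        }
    private:
        std::unordered_map<uint64_t, unsigned> history_;
    };

    // Splits a fixed-size set of instruction ids into dispatch groups.  The first
    // `join` instructions (per the history) are appended to the still-open group
    // carried over from the prior set; that group is then closed, and the remaining
    // instructions become the new open group, which may be joined by the next set.
    std::vector<std::vector<uint32_t>> formGroups(const std::vector<uint32_t>& currentSet,
                                                  std::vector<uint32_t>& openGroup,
                                                  const GroupHistory& history,
                                                  uint64_t setAddress) {
        std::vector<std::vector<uint32_t>> closed;
        unsigned join = history.joinCountFor(setAddress);
        if (join > currentSet.size()) join = static_cast<unsigned>(currentSet.size());
        for (unsigned i = 0; i < join; ++i) openGroup.push_back(currentSet[i]);
        if (!openGroup.empty()) closed.push_back(openGroup);           // close the joined group
        openGroup.assign(currentSet.begin() + join, currentSet.end()); // new open group
        return closed;
    }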
Abstract:
A system and method for providing error detection in a directory are disclosed. The method and system include comparing a first group of data and a second group of data to provide a comparison signal and detecting errors in response to the comparison signal.
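A minimal sketch, assuming the two groups of data are a directory entry and a redundant copy held alongside it; the structure and function names are illustrative, not the patent's.

    #include <cstdint>

    // Illustrative only: a directory entry held as two groups of data (for
    // example, the entry and a redundant copy).  Comparing the groups yields a
    // comparison signal; a mismatch is reported as a detected error.
    struct DirectoryEntry {
        uint64_t firstGroup;
        uint64_t secondGroup;   // redundant copy written alongside firstGroup
    };

    bool compareSignal(const DirectoryEntry& e) {
        return e.firstGroup == e.secondGroup;   // true = groups match
    }

    bool errorDetected(const DirectoryEntry& e) {
        return !compareSignal(e);               // error when the groups differ
    }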
Abstract:
A method and apparatus for managing register renaming in an information handling system that supports out-of-order and speculative instruction execution. Register entries are stored in architected registers, and rename entries are stored in rename registers. Register and rename entries are transferred between the rename and architected registers in response to dispatch of instructions or upon the occurrence of a canceling event, such as a mispredicted branch instruction or an interrupt condition. Rename entries are stored in the rename registers in round-robin fashion and tagged by age. The age order of the rename entries is determined from the age tags, rather than by keeping the rename entries physically in age order by shifting each rename entry to the next rename register whenever a rename entry is removed. By eliminating this shifting of rename entries to maintain age order, a power savings is realized.
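A rough software model of round-robin allocation with age tags, in which the oldest entry is found by scanning the tags instead of shifting entries; RenameFile and its fields are assumptions for the sketch.

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <optional>

    // Hypothetical rename-register file model: entries are allocated round robin
    // and tagged with a monotonically increasing age, so age order can be derived
    // from the tags instead of physically shifting entries.
    struct RenameEntry {
        bool     valid = false;
        uint64_t age   = 0;      // smaller age = older entry
        uint64_t value = 0;      // renamed register contents (illustrative)
    };

    template <size_t N>
    class RenameFile {
    public:
        // Allocate the next free slot in round-robin order; returns its index.
        std::optional<size_t> allocate(uint64_t value) {
            for (size_t i = 0; i < N; ++i) {
                size_t idx = (head_ + i) % N;
                if (!slots_[idx].valid) {
                    slots_[idx].valid = true;
                    slots_[idx].age   = nextAge_++;
                    slots_[idx].value = value;
                    head_ = (idx + 1) % N;
                    return idx;
                }
            }
            return std::nullopt;   // all rename registers in use
        }

        // Find the oldest valid entry by scanning age tags; no shifting required.
        std::optional<size_t> oldest() const {
            std::optional<size_t> best;
            for (size_t i = 0; i < N; ++i) {
                if (slots_[i].valid && (!best || slots_[i].age < slots_[*best].age))
                    best = i;
            }
            return best;
        }

        // Free an entry, e.g. when it is committed to an architected register or
        // cancelled by a mispredicted branch or interrupt.
        void release(size_t idx) { slots_[idx].valid = false; }

    private:
        std::array<RenameEntry, N> slots_{};
        size_t   head_    = 0;
        uint64_t nextAge_ = 0;
    };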
Abstract:
A method, apparatus, and computer program product are disclosed for testing a data processing system's ability to recover from cache directory errors. A directory entry is stored into a cache directory. The directory entry includes an address tag and directory parity that is associated with that address tag. A cache entry is stored into a cache that is accessed using the cache directory. The cache entry includes information and cache parity that is associated with that information. The directory parity is altered to imply bad parity, indicating that the associated address tag is invalid. The information included in the cache entry is altered to be incorrect. However, the cache parity continues to imply good parity, so the information associated with it appears valid even though it is not. The data processing system's ability to recover from errors is then tested using the directory entry and the cache entry.
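An illustrative model of the two error-injection steps, assuming a simple even-parity scheme and hypothetical entry formats; none of these names come from the abstract.

    #include <cstdint>

    static uint8_t evenParity(uint64_t v) {
        uint8_t p = 0;
        while (v) { p ^= (v & 1u); v >>= 1; }
        return p;
    }

    struct DirectoryEntry { uint64_t addressTag; uint8_t parity; };
    struct CacheEntry     { uint64_t data;       uint8_t parity; };

    // Step 1: force bad parity on the directory entry so its tag appears invalid.
    void injectDirectoryParityError(DirectoryEntry& d) {
        d.parity = evenParity(d.addressTag) ^ 1u;   // deliberately wrong
    }

    // Step 2: corrupt the cached information but keep its parity consistent, so
    // the data still appears valid even though it is not.
    void injectSilentDataCorruption(CacheEntry& c) {
        c.data ^= 0x1;                         // flip a bit of the information
        c.parity = evenParity(c.data);         // parity still implies "good"
    }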
Abstract:
A set-associative I-cache that enables early cache hit prediction and correct way selection when the processor is executing instructions of multiple threads having similar EAs. Each way of the I-cache comprises an EA Directory (EA Dir), which includes a series of thread valid bits that are individually assigned to one of the multiple threads. Particular ones of the thread valid bits are set in each EA Dir to indicate when an instruction block of the thread is cached within the particular way with which the EA Dir is associated. When a cache line request for a particular thread is received, a cache hit is predicted when the EA of the request matches the EA in the EA Dir, and the cache line is selected from the way associated with the EA Dir that has the thread valid bit for that thread set. Early way selection is thus achieved, since the way selection only requires a check of the thread valid bits.
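A sketch of the way-prediction check, assuming an illustrative 4-way, 2-thread configuration; EADirEntry and predictWay are hypothetical names used only for the example.

    #include <array>
    #include <cstdint>
    #include <optional>

    constexpr unsigned WAYS = 4;       // illustrative parameters
    constexpr unsigned THREADS = 2;

    struct EADirEntry {
        uint64_t ea = 0;                              // effective address tag
        std::array<bool, THREADS> threadValid{};      // one valid bit per thread
    };

    // Predict a hit and select the way by matching the EA and checking only the
    // requesting thread's valid bit in each way's EA directory.
    std::optional<unsigned> predictWay(const std::array<EADirEntry, WAYS>& eaDirs,
                                       uint64_t requestEA, unsigned threadId) {
        for (unsigned w = 0; w < WAYS; ++w) {
            if (eaDirs[w].ea == requestEA && eaDirs[w].threadValid[threadId])
                return w;                             // predicted hit in way w
        }
        return std::nullopt;                          // predicted miss
    }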
Abstract:
A method of prefetching addresses includes the step of accessing a stored instruction using a current address. During the access using the current address, a target address is accessed in a branch target address cache. A stored instruction associated with the target address accessed from the branch target address cache is prefetched, and the branch target address cache is indexed with selected bits from the address accessed from the branch target address cache.
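A hypothetical software model of such chained lookups: the BTAC is consulted with the current address, the instruction at the returned target is prefetched, and selected bits of that target index the next lookup. The index function and table organization are assumptions for the sketch.

    #include <cstdint>
    #include <optional>
    #include <unordered_map>

    // Minimal, hypothetical branch target address cache: indexed with selected
    // low-order bits of an address; each entry holds a predicted target address.
    class BranchTargetAddressCache {
    public:
        void insert(uint64_t addr, uint64_t target) { table_[index(addr)] = target; }
        std::optional<uint64_t> lookup(uint64_t addr) const {
            auto it = table_.find(index(addr));
            if (it == table_.end()) return std::nullopt;
            return it->second;
        }
    private:
        static uint64_t index(uint64_t addr) { return (addr >> 2) & 0xFF; }  // selected bits
        std::unordered_map<uint64_t, uint64_t> table_;
    };

    // While the instruction at currentAddress is being fetched, look up the BTAC
    // with the current address, prefetch the instruction at the returned target,
    // and use bits of that target to index the BTAC again for the next prefetch.
    void prefetchChain(const BranchTargetAddressCache& btac, uint64_t currentAddress,
                       void (*prefetchInstruction)(uint64_t), int depth) {
        uint64_t addr = currentAddress;
        for (int i = 0; i < depth; ++i) {
            auto target = btac.lookup(addr);
            if (!target) break;
            prefetchInstruction(*target);   // fetch the instruction at the target
            addr = *target;                 // chain: index BTAC with target bits
        }
    }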
Abstract:
Improved conditional branch instruction prediction by detecting branch aliasing in a branch history table. Each entry in an aliasing table is associated with only one of a plurality of conditional branch instructions tracked by the branch history table. Prior to executing a conditional branch instruction, the outcome of the execution of the conditional branch instruction is predicted utilizing the branch history table entry associated with the conditional branch instruction. The outcome of the execution of the conditional branch instruction is also predicted utilizing the aliasing table entry associated with the conditional branch instruction. Branch aliasing is detected by comparing the prediction made utilizing the branch history table with the prediction made utilizing the aliasing table. In response to the predictions being different, a determination is made that branch aliasing has occurred, and the prediction made utilizing the aliasing table is used for predicting the outcome of the execution of the conditional branch instruction.
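A compact sketch of the aliasing check, assuming 2-bit BHT counters and a small tagged aliasing table; the structures and hashing are illustrative, not the patent's.

    #include <cstdint>
    #include <vector>

    struct AliasEntry { uint64_t branchAddr; bool prediction; bool valid; };

    struct Predictors {
        std::vector<uint8_t>    bht;          // indexed by hashed branch address
        std::vector<AliasEntry> aliasTable;   // each entry maps to exactly one branch

        bool bhtPredict(uint64_t branchAddr) const {
            return bht[branchAddr % bht.size()] >= 2;   // taken if counter >= 2
        }

        // Returns the final prediction; *aliased reports whether the two
        // predictors disagreed, which is taken as evidence of BHT aliasing.
        bool predict(uint64_t branchAddr, bool* aliased) const {
            bool fromBht = bhtPredict(branchAddr);
            for (const auto& e : aliasTable) {
                if (e.valid && e.branchAddr == branchAddr) {
                    *aliased = (e.prediction != fromBht);
                    return *aliased ? e.prediction : fromBht;
                }
            }
            *aliased = false;
            return fromBht;
        }
    };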
Abstract:
A method and apparatus for updating a branch history table (BHT) in a processor which resolves multiple branches in a single cycle is disclosed. The method and apparatus utilize a single write-ported BHT that achieves performance similar to a two write-ported BHT by selecting only the data corresponding to one of the branch instructions for updating the BHT. The data corresponding to the branch instruction for updating the BHT is selected based upon whether the instruction path set by the branch instruction was correctly predicted and upon the state of a corresponding saturating up-down counter in the BHT.
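A sketch of one plausible selection policy consistent with the abstract: prefer a mispredicted branch for the single write port, otherwise pick a correctly predicted branch whose counter is not already saturated. Any policy details beyond the abstract are assumptions.

    #include <cstdint>
    #include <optional>
    #include <vector>

    // Illustrative model only: which of the branches resolved this cycle gets the
    // single BHT write port?
    struct ResolvedBranch {
        uint64_t index;        // BHT index of the branch
        bool     predictedCorrectly;
        uint8_t  counter;      // current 2-bit saturating counter value (0..3)
        bool     taken;        // actual outcome
    };

    std::optional<ResolvedBranch> selectBhtUpdate(const std::vector<ResolvedBranch>& resolved) {
        std::optional<ResolvedBranch> fallback;
        for (const auto& b : resolved) {
            if (!b.predictedCorrectly) return b;           // mispredicted: update it
            bool saturated = (b.taken && b.counter == 3) || (!b.taken && b.counter == 0);
            if (!saturated && !fallback) fallback = b;     // counter would still move
        }
        return fallback;                                   // may be empty: no write needed
    }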