Abstract:
In one embodiment, the present invention includes a method for determining if an entry corresponding to a prediction address is present in a first predictor, and overriding a prediction output from a second predictor corresponding to the prediction address if the entry is present in the first predictor. Other embodiments are described and claimed.
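
A minimal C++ sketch of the override idea described above, not the claimed implementation: the structure names, table sizes, and the choice of a tagged map for the first predictor and 2-bit counters for the second are assumptions made for illustration.

    #include <cstdint>
    #include <unordered_map>

    struct OverridingPredictor {
        // First predictor: tagged entries keyed by the prediction address.
        std::unordered_map<uint64_t, bool> first;   // address -> predicted taken?
        // Second predictor: simple 2-bit saturating counters.
        uint8_t second[1024] = {};

        bool predict(uint64_t addr) const {
            // If the first predictor holds an entry for this address,
            // its prediction overrides the second predictor's output.
            if (auto it = first.find(addr); it != first.end())
                return it->second;
            return second[addr & 1023] >= 2;        // otherwise use the second predictor
        }
    };

Here a hit in the first table simply wins; the abstract leaves the override policy and the two predictor types open.
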
Abstract:
A system and method of managing processor instructions provides enhanced performance. The system and method provide for decoding a first instruction into a plurality of operations with a decoder. A first copy of the operations is passed from the decoder to a build engine associated with a trace cache. The system and method further provide for passing a second copy of the operations from the decoder directly to a back end allocation module such that the operations bypass the build engine when the allocation module is in a decoder reading state.
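
A hedged sketch of the dual-path delivery, assuming simplified Uop, BuildEngine, and Allocator types that stand in for the real pipeline interfaces:

    #include <vector>

    struct Uop { int opcode; };

    struct BuildEngine {                            // builds traces for the trace cache
        std::vector<Uop> pending;
        void accept(const std::vector<Uop>& ops) {
            pending.insert(pending.end(), ops.begin(), ops.end());
        }
    };

    struct Allocator {                              // back end allocation module
        bool decoder_read_state = true;             // true: consuming ops straight from the decoder
        std::vector<Uop> queue;
        void accept(const std::vector<Uop>& ops) {
            queue.insert(queue.end(), ops.begin(), ops.end());
        }
    };

    // One copy of the decoded ops always feeds the build engine; a second copy
    // goes directly to the allocator when it is in the decoder-reading state.
    void dispatch(const std::vector<Uop>& decoded, BuildEngine& be, Allocator& alloc) {
        be.accept(decoded);
        if (alloc.decoder_read_state)
            alloc.accept(decoded);
    }

The bypass lets operations reach allocation without waiting on trace construction while the allocator is reading from the decoder.
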
Abstract:
Embodiments of the present invention provide an apparatus, system, and method of routing a source operand. Some demonstrative embodiments may include replacing a source operand of a micro operation to be executed by an execution unit with a value type representing a source value, e.g., if the source operand corresponds to the source value. Other embodiments are described and claimed.
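
A small illustration of the operand-routing idea under assumptions: the ValueType encodings, the Source layout, and the route_source helper are hypothetical, chosen only to show a register source being replaced by a known-value tag.

    #include <optional>

    enum class ValueType { None, Zero, One, AllOnes };   // assumed encodings

    struct Source {
        int reg = -1;                  // register id, or -1 once replaced by a value type
        ValueType value = ValueType::None;
    };

    struct Uop { Source src0, src1; };

    // If the tracker knows the register currently holds a special value, replace the
    // register reference with the value type so the operand need not be read at all.
    void route_source(Source& s, std::optional<ValueType> known) {
        if (known && *known != ValueType::None) {
            s.value = *known;
            s.reg = -1;
        }
    }
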
Abstract:
Fusing micro-operations (uops) together. Intra-instruction fusing can increase cache memory storage efficiency and computer instruction processing bandwidth within a microprocessor without incurring significant computer system cost. Uops are fused, stored in a cache memory, un-fused, executed in parallel, and retired in order to optimize cost and performance.
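
A toy sketch of the fuse/un-fuse bookkeeping; the field names and the pairing of exactly two uops per fused entry are assumptions rather than the actual cache format.

    #include <array>
    #include <cstdint>

    struct Uop { uint16_t opcode; uint8_t dst, src; };

    // One fused entry occupies a single cache/retirement slot but carries two uops.
    struct FusedUop { Uop first, second; };

    FusedUop fuse(const Uop& a, const Uop& b) { return {a, b}; }

    // Un-fuse before execution so the two uops can issue to separate ports in parallel.
    std::array<Uop, 2> unfuse(const FusedUop& f) { return {f.first, f.second}; }

Storage efficiency comes from the single fused slot; execution bandwidth comes from splitting the pair only at execution time, as the abstract describes.
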
Abstract:
A system and method for compensating for branching instructions in trace caches is disclosed. A branch predictor uses the branching behavior of previous branching instructions to select between several traces beginning at the same linear instruction pointer (LIP) or instruction. The fetching mechanism of the processor selects the trace that most closely matches the previous branching behavior. In one embodiment, a new trace is generated only if a divergence occurs within a predetermined location. A divergence is a branch that was recorded as following one path (e.g., taken) but follows a different path (e.g., not taken) during execution.
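
A sketch of the selection step under assumptions: traces are tagged with the branch history observed when they were built, and "most closely matches" is taken to mean the smallest number of differing history bits; both the structures and the metric are illustrative.

    #include <cstdint>
    #include <vector>

    struct Trace {
        uint64_t lip;        // starting linear instruction pointer
        uint32_t history;    // taken/not-taken bits recorded when the trace was built
        // trace body omitted
    };

    // Pick the trace at `lip` whose recorded history differs from the current
    // branch history in the fewest bit positions.
    const Trace* select_trace(const std::vector<Trace>& traces,
                              uint64_t lip, uint32_t cur_history) {
        const Trace* best = nullptr;
        int best_dist = 33;                      // larger than any 32-bit distance
        for (const Trace& t : traces) {
            if (t.lip != lip) continue;
            int dist = 0;
            for (uint32_t diff = t.history ^ cur_history; diff; diff &= diff - 1)
                ++dist;                          // count differing history bits
            if (dist < best_dist) { best_dist = dist; best = &t; }
        }
        return best;                             // nullptr if no trace starts at this LIP
    }
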
Abstract:
Independent power control of two or more processing cores. More particularly, at least one embodiment of the invention pertains to a technique to place at least one processing core in a power state without coordinating with the power state of one or more other processing cores.
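
A minimal illustration of the independence claim, assuming a hypothetical per-core power-state field and C-state names; the point is only that one core transitions without reading or coordinating with any other core's state.

    #include <array>
    #include <atomic>

    enum class PState { C0_Active, C1_Halt, C3_DeepSleep };   // assumed state names

    struct Core { std::atomic<PState> pstate{PState::C0_Active}; };

    std::array<Core, 4> cores;                   // e.g. a quad-core package

    // Place one core in a power state; no other core's state is consulted.
    void set_core_pstate(int core_id, PState target) {
        cores[core_id].pstate.store(target, std::memory_order_relaxed);
    }
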
Abstract:
Methods and apparatus for restoring a meta predictor system upon detecting a branch or binary misprediction are disclosed. An example apparatus may include a base misprediction history register to store a set of misprediction history values, each indicating whether a previous branch prediction made for a previous branch instruction was correct or incorrect. The apparatus may comprise a meta predictor to detect a branch misprediction of a current branch prediction based at least in part on an output of the base misprediction history register. The meta predictor may restore the base misprediction history register based on the detecting of the branch misprediction. Additional apparatus, systems, and methods are disclosed.
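
A hedged sketch of the restore mechanism: the register width, the single-checkpoint scheme, and the update points are assumptions chosen to show the history being rolled back when a misprediction is detected.

    #include <cstdint>

    struct MetaPredictorState {
        uint32_t mispred_history = 0;   // 1 bit per past prediction: 1 = mispredicted
        uint32_t checkpoint = 0;        // snapshot taken when a prediction is made

        void on_predict() { checkpoint = mispred_history; }   // save state speculatively

        void on_resolve(bool mispredicted) {
            if (mispredicted)
                mispred_history = checkpoint;                  // restore on misprediction
            mispred_history = (mispred_history << 1) | (mispredicted ? 1u : 0u);
        }
    };
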
Abstract:
Method, apparatus, and system for tracking call returns. At least one embodiment maps the locations of a return instruction pointer within a speculative return stack buffer and a committed return stack buffer to determine the return stack buffer from which the return instruction pointer should be retrieved.
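
A rough sketch under assumptions: two small circular buffers with top-of-stack indices, and a selection rule that prefers a still-valid speculative entry; the actual mapping logic of the claims is not reproduced here.

    #include <array>
    #include <cstdint>

    struct ReturnStacks {
        std::array<uint64_t, 16> spec{}, committed{};
        int spec_top = -1, committed_top = -1;

        void on_call_fetched(uint64_t return_ip) {   // speculative push at fetch time
            spec[++spec_top & 15] = return_ip;
        }
        void on_call_retired(uint64_t return_ip) {   // committed push at retirement
            committed[++committed_top & 15] = return_ip;
        }
        uint64_t predict_return() {                  // decide which buffer supplies the target
            if (spec_top >= 0)
                return spec[spec_top-- & 15];        // speculative entry still available
            if (committed_top >= 0)
                return committed[committed_top & 15]; // fall back to the committed copy
            return 0;
        }
    };
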
Abstract:
A method and apparatus for a loop predictor for predicting the end of a loop is disclosed. In one embodiment, the loop predictor may have a predict counter to hold a predict count representing the expected number of times that a predictor stew value will repeat during the execution of a given loop. The loop predictor may also have one or more running counters to hold a count of the times that the stew value has repeated during the execution of the present loop. When the counter values match, the predictor may issue a prediction that the loop will end.
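
A toy model of the counter-matching idea; the field names and the points at which the counters are updated and trained are assumptions made for the illustration.

    #include <cstdint>

    struct LoopPredictor {
        uint32_t predict_count = 0;   // expected number of stew repetitions per loop
        uint32_t running_count = 0;   // repetitions observed in the current execution

        // Called each time the predictor stew value repeats inside the loop.
        // Returns true when the predictor forecasts that the loop will end.
        bool on_stew_repeat() {
            ++running_count;
            return running_count == predict_count;
        }

        void on_loop_exit(uint32_t observed) {   // train the expected repeat count
            predict_count = observed;
            running_count = 0;
        }
    };
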