Abstract:
A cache subsystem is disclosed. The cache subsystem includes a cache configured to store information in cache lines arranged in a plurality of ways. A requestor circuit generates a request to access a particular cache line in the cache. A prediction circuit is configured to generate a prediction of which of the ways includes the particular cache line. A comparison circuit verifies the prediction by comparing a particular address tag associated with the particular cache line to a cache tag corresponding to a predicted one of the ways. Responsive to determining that the prediction was correct, a confirmation indication is stored indicating the correct prediction. For a subsequent request for the particular cache line, the cache is configured to forego verification of the prediction that the particular cache line is included in the predicted one of the ways, based on the confirmation indication.
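To make the confirmation mechanism concrete, the following Python sketch models its behavior. The small two-way geometry, the per-address confirmation table, and all names are illustrative assumptions; the abstract does not specify an implementation.

```python
class WayPredictedCache:
    """Behavioral sketch of way prediction with a confirmation indication."""

    def __init__(self, num_sets=4, num_ways=2):
        self.num_sets, self.num_ways = num_sets, num_ways
        self.lines = [[None] * num_ways for _ in range(num_sets)]  # stored tags
        self.confirmed = {}    # addr -> way verified on an earlier access
        self.tag_compares = 0  # verifications actually performed

    def fill(self, addr, way):
        s, tag = addr % self.num_sets, addr // self.num_sets
        self.lines[s][way] = tag
        self.confirmed.pop(addr, None)  # a new fill invalidates any confirmation

    def access(self, addr):
        s, tag = addr % self.num_sets, addr // self.num_sets
        if addr in self.confirmed:
            # Confirmation indication is set: forego verifying the prediction.
            return ("hit", self.confirmed[addr])
        way = 0  # trivial prediction; a real predictor would use history
        self.tag_compares += 1  # the comparison circuit verifies the prediction
        if self.lines[s][way] == tag:
            self.confirmed[addr] = way  # store the confirmation indication
            return ("hit", way)
        return ("mispredict or miss", way)

cache = WayPredictedCache()
cache.fill(8, way=0)
print(cache.access(8))     # ('hit', 0) -- verified; confirmation stored
print(cache.access(8))     # ('hit', 0) -- verification skipped
print(cache.tag_compares)  # 1
```

In this model the second access to the same line skips the tag comparison entirely, which is the work the abstract's confirmation indication avoids.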
Abstract:
Techniques are disclosed relating to power reduction during execution of instruction loops. Multiple different power saving modes may be used by a processor, such as a first power saving mode after only a few loop iterations (e.g., 2-3) and a second, deeper power saving mode after a greater number of loop iterations. The first power saving mode may include keeping a branch predictor and/or other structures active, but the second power saving mode may include reducing power to the branch predictor and/or other structures. An observation mode and an instruction capture mode may also be used by a processor prior to entering a power saving mode for loop execution. Power saving modes may also be achieved during execution of complex loops having multiple backward branches (e.g., nested loops).
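As a rough illustration, the two-level policy can be modeled as a threshold function over observed loop iterations. The second threshold and the mode names below are assumptions; the abstract specifies only "a few" (e.g., 2-3) iterations for the first mode.

```python
FIRST_MODE_ITERS = 2    # "a few loop iterations (e.g., 2-3)", per the abstract
SECOND_MODE_ITERS = 16  # "a greater number" -- illustrative value

def power_mode(loop_iterations):
    """Map an observed iteration count to a power-saving mode."""
    if loop_iterations >= SECOND_MODE_ITERS:
        # Deeper mode: power to the branch predictor may be reduced.
        return "deep save (branch predictor powered down)"
    if loop_iterations >= FIRST_MODE_ITERS:
        # First mode: branch predictor and other structures stay active.
        return "light save (branch predictor active)"
    return "observation / instruction capture"

for n in (1, 3, 20):
    print(n, "->", power_mode(n))
```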
Abstract:
Techniques are disclosed relating to a cache for patterns of instructions. In some embodiments, an apparatus includes an instruction cache and is configured to detect a pattern of execution of instructions by an instruction processing pipeline. The pattern of execution may involve execution of only instructions in a particular group of instructions. The instructions may include multiple backward control transfers and/or a control transfer instruction that is taken in one iteration of the pattern and not taken in another iteration of the pattern. The apparatus may be configured to store the instructions in the instruction cache and fetch and execute the instructions from the instruction cache. The apparatus may include a branch predictor dedicated to predicting the direction of control transfer instructions for the instruction cache. Various embodiments may reduce power consumption associated with instruction processing.
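One way to picture the pattern detection is sketched below: if fetch addresses stay within a bounded window across several backward control transfers, the group is a candidate for capture into the dedicated instruction cache. The window size and backward-branch threshold are assumptions, not values from the abstract.

```python
WINDOW = 64            # max byte span of a cacheable instruction group
BACKWARD_BRANCHES = 3  # backward transfers observed before capturing

def detect_pattern(fetch_trace):
    """fetch_trace: fetch addresses in program order. Returns the (lo, hi)
    address range of a detected group, or None."""
    lo = hi = prev = None
    backward = 0
    for pc in fetch_trace:
        lo = pc if lo is None else min(lo, pc)
        hi = pc if hi is None else max(hi, pc)
        if hi - lo > WINDOW:
            lo, hi, backward = pc, pc, 0   # span too large; restart detection
        elif prev is not None and pc < prev:
            backward += 1                  # a backward control transfer
            if backward >= BACKWARD_BRANCHES:
                return (lo, hi)            # execution confined to this group
        prev = pc
    return None

# Nested-loop-like trace: inner loop at 0x10..0x18, outer back edge to 0x00.
trace = [0x00, 0x10, 0x14, 0x18, 0x10, 0x14, 0x18, 0x10, 0x14, 0x18, 0x00]
print(detect_pattern(trace))  # (0, 24)
```

Note the example trace contains multiple backward control transfers, including one (the outer back edge) that is not taken on every iteration of the inner pattern, matching the kind of group the abstract describes.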
Abstract:
Various embodiments of a method and apparatus for flushing a cache are disclosed. In a system, a cache memory is accessible by an execution circuit. The execution circuit executes instructions and may utilize data and/or instructions stored in the cache. A flush circuit is also coupled to the cache. Responsive to execution of a power down instruction by the execution circuit, the flush circuit performs a cache flush. If a control state is asserted in a control register, the flush circuit generates a dummy event upon completing the cache flush. Responsive to generating the dummy event, a processor core that includes the execution circuit is inhibited from being powered down.
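A minimal behavioral sketch of the flush-and-dummy-event handshake follows; the class, field, and signal names are hypothetical, and the cache is reduced to a dictionary.

```python
class FlushCircuit:
    """Sketch: flush on power-down, optionally emit a dummy event."""

    def __init__(self):
        self.dummy_event_enable = False  # control state in a control register
        self.events = []

    def on_power_down_instruction(self, cache):
        cache.clear()  # flush: write back / invalidate all cached lines
        if self.dummy_event_enable:
            # The dummy event keeps the core awake after the flush completes.
            self.events.append("dummy")
            return "core power-down inhibited"
        return "core may power down"

cache = {0x100: b"data"}
fc = FlushCircuit()
fc.dummy_event_enable = True
print(fc.on_power_down_instruction(cache))  # core power-down inhibited
```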
Abstract:
A mechanism for reducing power consumption of a cache memory includes a processor with a cache memory that stores instruction information for one or more instruction fetch groups fetched from a system memory. The cache memory may include a number of ways that are each independently controllable. The processor also includes a way prediction unit. In response to a branch taken prediction for a next branch instruction, the way prediction unit may enable, in a next execution cycle, a given way within which instruction information corresponding to a target of the next branch instruction is stored. Also in response to the branch taken prediction, the way prediction unit may enable, one at a time, each corresponding way within which instruction information corresponding to respective sequential instruction fetch groups that follow the next branch instruction is stored.
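The enable sequencing can be sketched as below. The fetch-group size and the table mapping fetch-group addresses to predicted ways are assumptions used only to illustrate one-way-at-a-time enabling.

```python
def way_enables(target_addr, way_of, num_groups=3):
    """Yield (cycle, way) pairs: first the branch target's way, then the way
    of each sequential fetch group that follows, one enable per cycle."""
    GROUP_BYTES = 16  # assumed fetch-group size
    for cycle in range(num_groups):
        addr = target_addr + cycle * GROUP_BYTES
        yield cycle, way_of[addr]  # only this way is powered in this cycle

way_of = {0x40: 1, 0x50: 3, 0x60: 0}  # predicted way for each fetch group
for cycle, way in way_enables(0x40, way_of):
    print(f"cycle {cycle}: enable only way {way}")
```

Because at most one way is enabled per cycle, the remaining ways can stay unpowered, which is where the power saving comes from.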
Abstract:
Disclosed techniques relate to trace cache circuitry configured to identify and cache traces that satisfy certain criteria. Prediction circuitry may track directions of executed control transfer instructions, including a first category of control transfer instructions that meet a first threshold bias level toward a given direction (which may be referred to as “stable”) and a second category of control transfer instructions that do not meet the first threshold bias level (which may be referred to as “unstable”). Trace cache circuitry may identify traces of instructions that satisfy a set of criteria, including that only control transfer instructions of the first category are allowed as internal control transfer instructions of a given trace, and that a control transfer instruction of the second category is allowed only at the end of the trace. Disclosed techniques may advantageously provide the performance and power advantages of trace caching with reduced complexity relative to certain traditional trace caches.
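The trace-formation criteria can be sketched as follows: strongly biased ("stable") branches may appear inside a trace, while an "unstable" branch may only terminate one. The 90% bias threshold and the trace length cap are illustrative assumptions.

```python
BIAS_THRESHOLD = 0.9  # assumed "first threshold bias level"
MAX_TRACE_LEN = 8     # assumed cap on trace length

def is_stable(taken, total):
    """A branch is stable if it is strongly biased in either direction."""
    bias = max(taken, total - taken) / total if total else 0.0
    return bias >= BIAS_THRESHOLD

def form_trace(instrs, stats):
    """instrs: list of (pc, is_branch); stats: pc -> (taken, total) counts.
    Returns the longest prefix satisfying the trace criteria."""
    trace = []
    for pc, is_branch in instrs:
        trace.append(pc)
        if len(trace) >= MAX_TRACE_LEN:
            break
        # Untracked branches default to unstable and must end the trace.
        if is_branch and not is_stable(*stats.get(pc, (0, 0))):
            break  # an unstable branch is allowed only at the end of a trace
    return trace

stats = {0x8: (99, 100), 0xC: (50, 100)}  # 0x8 is stable, 0xC is unstable
instrs = [(0x0, False), (0x8, True), (0xC, True), (0x10, False)]
print(form_trace(instrs, stats))  # [0, 8, 12] -- ends at the unstable branch
```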
Abstract:
An embodiment of an apparatus includes a processing circuit and a system memory. The processing circuit may store a pending request in a buffer, the pending request corresponding to a transaction that includes a write request to the system memory. The processing circuit may also allocate an entry in a write table corresponding to the transaction. After the transaction is sent to the system memory to be processed, the pending request may be removed from the buffer in response to the allocation of the write table entry.
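A sketch of the handoff between the pending-request buffer and the write table follows; the structure and field names are hypothetical.

```python
class WritePath:
    """Sketch: a write table entry, not the memory response, retires the
    pending request from the buffer."""

    def __init__(self):
        self.pending = {}      # request id -> write transaction
        self.write_table = {}  # transaction id -> tracking entry

    def issue_write(self, req_id, txn):
        self.pending[req_id] = txn  # store the pending request in the buffer
        self.write_table[txn["id"]] = {"state": "outstanding"}  # allocate entry
        self.send_to_memory(txn)
        # Remove the pending request in response to the allocation.
        del self.pending[req_id]

    def send_to_memory(self, txn):
        pass  # stand-in for the memory interface

wp = WritePath()
wp.issue_write(req_id=7, txn={"id": 42, "addr": 0x1000, "data": 0xFF})
print(wp.pending, wp.write_table)  # {} {42: {'state': 'outstanding'}}
```

Freeing the buffer slot as soon as the write table entry exists lets the buffer accept new requests without waiting for the memory to complete the write.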