Abstract:
The illustrative embodiments provide a method, apparatus, and computer program product for managing a number of cache lines in a cache. In one illustrative embodiment, it is determined whether activity on a memory bus in communication with the cache exceeds a threshold activity level. A least important cache line is located in the cache responsive to a determination that the threshold activity level is exceeded, wherein the least important cache line is located using a cache replacement scheme. It is determined whether the least important cache line is clean responsive to the determination that the threshold activity level is exceeded. The least important cache line is selected for replacement in the cache responsive to a determination that the least important cache line is clean. A clean cache line is located within a subset of the number of cache lines and selected for replacement responsive to a determination that the least important cache line is not clean, wherein each cache line in the subset is examined in ascending order of importance according to the cache replacement scheme.
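As a rough illustration of the replacement policy this abstract describes, the C sketch below chooses a victim line under bus pressure: when bus activity exceeds the threshold and the least important line is dirty, it scans a small subset of lines, least important first, for a clean victim. The set layout, the subset size, and all names (pick_victim, cache_line_t, and so on) are illustrative assumptions, not the patented implementation.

```c
/* Minimal sketch (not the patented implementation) of choosing a
 * replacement victim that avoids write-backs when the memory bus is
 * busy. All names and the fixed subset size are illustrative. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_LINES   8   /* lines in one cache set */
#define SUBSET_SIZE 4   /* how many lines to examine, least important first */

typedef struct {
    int  tag;
    bool dirty;  /* a dirty line needs a bus write-back on eviction */
} cache_line_t;

/* lines[] is assumed sorted in ascending order of importance,
 * i.e. lines[0] is the least important (e.g. LRU) victim candidate. */
static int pick_victim(const cache_line_t lines[NUM_LINES],
                       int bus_activity, int bus_threshold)
{
    if (bus_activity <= bus_threshold)
        return 0; /* bus is idle enough: plain replacement scheme */

    if (!lines[0].dirty)
        return 0; /* least important line is clean: evict it for free */

    /* Bus is busy and the LRU line is dirty: look for any clean line
     * among the SUBSET_SIZE least important lines. */
    for (int i = 1; i < SUBSET_SIZE; i++)
        if (!lines[i].dirty)
            return i;

    return 0; /* no clean line found: fall back to least important */
}

int main(void)
{
    cache_line_t set[NUM_LINES] = {
        {10, true}, {11, false}, {12, true}, {13, false},
        {14, true}, {15, true}, {16, false}, {17, true}
    };
    printf("victim index: %d\n", pick_victim(set, 90, 50)); /* prints 1 */
    return 0;
}
```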
Abstract:
A cache memory including a plurality of sets of cache lines, and an implementation for increasing the associativity of selected sets of cache lines. The implementation includes providing a group of parameters for determining the worthiness of a cache line stored in a basic set of cache lines, providing a partner set of cache lines in the cache memory associated with the basic set, applying the group of parameters to determine the worthiness level of a cache line in the basic set, and, responsive to a determination that the worthiness level of a cache line exceeds a predetermined level, storing that cache line in the partner set.
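A minimal sketch of the basic-set/partner-set idea follows, assuming a single access-count parameter stands in for the unspecified group of worthiness parameters; on eviction, a line whose worthiness exceeds the predetermined level is retained in the partner set instead of being discarded. All names and the threshold value are hypothetical.

```c
/* Minimal sketch, assuming a simple access-count worthiness metric
 * (the abstract's actual parameter group is unspecified). */
#include <stdbool.h>
#include <stdio.h>

#define SET_WAYS 4

typedef struct {
    int  tag;
    int  access_count;  /* one illustrative worthiness parameter */
    bool valid;
} line_t;

typedef struct {
    line_t basic[SET_WAYS];    /* the basic set */
    line_t partner[SET_WAYS];  /* partner set extending associativity */
} set_pair_t;

/* Worthiness here is just the access count; a real design could
 * combine several parameters (recency, sharing state, etc.). */
static int worthiness(const line_t *l) { return l->access_count; }

/* On eviction from the basic set, a sufficiently worthy line is
 * retained in the partner set instead of being discarded. */
static void evict_from_basic(set_pair_t *sp, int way, int threshold)
{
    line_t *victim = &sp->basic[way];
    if (victim->valid && worthiness(victim) > threshold) {
        for (int i = 0; i < SET_WAYS; i++) {
            if (!sp->partner[i].valid) {
                sp->partner[i] = *victim;  /* promote to partner set */
                break;
            }
        }
    }
    victim->valid = false;
}

int main(void)
{
    set_pair_t sp = { .basic = { {100, 9, true} } };  /* one hot line */
    evict_from_basic(&sp, 0, 5);
    printf("partner way 0 holds tag %d\n", sp.partner[0].tag); /* 100 */
    return 0;
}
```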
Abstract:
An adaptive prediction threshold scheme for dynamically adjusting prediction thresholds of entries in a Pattern History Table (PHT) by observing global tendencies of the branch or branches that index into the PHT entries. A count value of a prediction state counter representing a prediction state of a prediction state machine for a PHT entry is obtained. Count values in a set of counters allocated to the entry in the PHT are changed based on the count value of the entry's prediction state counter. The prediction threshold of the prediction state machine for the entry may then be adjusted based on the changed count values in the set of counters, wherein the prediction threshold is adjusted by changing a count value in a prediction threshold counter in the entry, and wherein adjusting the prediction threshold redefines predictions provided by the prediction state machine.
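The sketch below models one PHT entry under these rules: a saturating prediction state counter, a pair of tendency counters, and a prediction threshold counter that is nudged toward the observed global bias, thereby redefining which state values predict taken. Counter widths, the 3:1 bias test, and the 16-update adjustment window are assumptions.

```c
/* Minimal sketch of a per-entry adaptive prediction threshold; the
 * counter widths and the adjustment rule are illustrative guesses. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int state;          /* 3-bit saturating prediction state counter, 0..7 */
    int threshold;      /* prediction threshold: predict taken if state >= threshold */
    int bias_taken;     /* counters tracking the global tendency of the */
    int bias_not_taken; /* branch or branches indexing into this entry  */
} pht_entry_t;

static bool predict(const pht_entry_t *e) { return e->state >= e->threshold; }

static void update(pht_entry_t *e, bool taken)
{
    /* Update the prediction state counter as usual. */
    if (taken  && e->state < 7) e->state++;
    if (!taken && e->state > 0) e->state--;

    /* Track the observed tendency of branches mapping to this entry. */
    if (taken) e->bias_taken++; else e->bias_not_taken++;

    /* Periodically move the threshold toward the observed bias,
     * which redefines which state values count as "predict taken". */
    if (e->bias_taken + e->bias_not_taken >= 16) {
        if (e->bias_taken > 3 * e->bias_not_taken && e->threshold > 1)
            e->threshold--;  /* strongly taken: predict taken earlier */
        if (e->bias_not_taken > 3 * e->bias_taken && e->threshold < 7)
            e->threshold++;  /* strongly not taken: demand more evidence */
        e->bias_taken = e->bias_not_taken = 0;
    }
}

int main(void)
{
    pht_entry_t e = { .state = 4, .threshold = 4 };
    for (int i = 0; i < 32; i++) update(&e, true);  /* always-taken branch */
    printf("prediction: %s, threshold: %d\n",
           predict(&e) ? "taken" : "not taken", e.threshold);
    return 0;
}
```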
Abstract:
Embodiments that dynamically reorganize data of cache lines in non-uniform cache access (NUCA) caches are contemplated. Various embodiments comprise a computing device having one or more processors coupled with one or more NUCA cache elements. The NUCA cache elements may comprise one or more banks of cache memory, wherein ways of the cache are horizontally distributed across multiple banks. To improve access latency of the data by the processors, the computing device may dynamically propagate cache lines into banks closer to the processors using the cache lines. To accomplish such dynamic reorganization, embodiments may maintain “direction” bits for cache lines. The direction bits may indicate to which processor the data should be moved. Further, embodiments may use the direction bits to make cache line movement decisions.
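A toy model of the direction-bit mechanism might look like the following, assuming a row of banks where bank i sits closest to CPU i; the direction bits record the consuming processor, and a migration pass moves the line one bank toward it. The topology and the single-ID encoding of the direction bits are illustrative.

```c
/* Minimal sketch of "direction" bits guiding cache-line migration in
 * a NUCA cache; the bank topology and bit encoding are assumptions. */
#include <stdio.h>

#define NUM_BANKS 4  /* banks laid out in a row; bank i is closest to CPU i */

typedef struct {
    int tag;
    int bank;       /* bank currently holding the line */
    int direction;  /* ID of the CPU the line should move toward */
} nuca_line_t;

/* Record which processor used the line most recently. */
static void on_access(nuca_line_t *l, int cpu_id) { l->direction = cpu_id; }

/* Periodically migrate each line one bank toward its direction bits,
 * reducing access latency for the processor actually using the data. */
static void migrate(nuca_line_t *l)
{
    if (l->bank < l->direction)      l->bank++;
    else if (l->bank > l->direction) l->bank--;
}

int main(void)
{
    nuca_line_t line = { .tag = 42, .bank = 0, .direction = 0 };
    on_access(&line, 3);        /* CPU 3 starts using the line */
    migrate(&line); migrate(&line);
    printf("line now in bank %d, heading to %d\n", line.bank, line.direction);
    return 0;
}
```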
Abstract:
Reclaiming checkpoints in a system in an order that differs from the order in which the checkpoints were created. Reclaiming the checkpoints includes: creating one or more checkpoints, each of which has an initial state that uses system resources and holds the checkpoint's state; identifying the completion of all the instructions associated with a checkpoint; reassigning all the instructions associated with the identified checkpoint to the immediately preceding checkpoint; and freeing the resources associated with the identified checkpoint. A checkpoint is created when the instruction that is checked is a conditional branch having a direction that cannot be predicted with a predetermined confidence level.
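Reading the abstract as merging a fully completed checkpoint's instruction accounting into its predecessor and then freeing it, a minimal sketch could look like this; the linked-list structure and the pending-count bookkeeping are assumptions made for illustration.

```c
/* Minimal sketch of out-of-order checkpoint reclamation: when every
 * instruction tied to a checkpoint has completed, its instruction
 * bookkeeping folds into the preceding checkpoint and its resources
 * are freed, regardless of creation order. Names are illustrative. */
#include <stdio.h>
#include <stdlib.h>

typedef struct checkpoint {
    int id;
    int pending;              /* instructions not yet completed */
    struct checkpoint *prev;  /* immediately preceding (older) checkpoint */
} checkpoint_t;

static checkpoint_t *create_checkpoint(checkpoint_t *prev, int id, int pending)
{
    checkpoint_t *cp = malloc(sizeof *cp);
    cp->id = id; cp->pending = pending; cp->prev = prev;
    return cp;
}

/* Called when an instruction tagged with checkpoint *cp completes.
 * Returns the checkpoint the caller should track afterwards. */
static checkpoint_t *on_complete(checkpoint_t *cp)
{
    if (--cp->pending == 0 && cp->prev != NULL) {
        checkpoint_t *prev = cp->prev;
        /* All of cp's instructions are accounted to its predecessor
         * (the count is zero here, so the merge is trivial), and cp's
         * resources are released before older checkpoints retire. */
        free(cp);
        return prev;
    }
    return cp;
}

int main(void)
{
    checkpoint_t *a = create_checkpoint(NULL, 1, 3);
    checkpoint_t *b = create_checkpoint(a, 2, 2);  /* created after a */
    b = on_complete(b);
    b = on_complete(b);   /* b reclaimed here, before a */
    printf("tracking checkpoint %d\n", b->id);     /* prints 1 */
    free(a);
    return 0;
}
```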
Abstract:
A processing system includes a memory and a first core configured to process applications. The first core includes a first cache. The processing system also includes a mechanism configured to capture a sequence of addresses of the application that miss the first cache in the first core and to place the sequence of addresses in a storage array, and a second core configured to process at least one software algorithm. The at least one software algorithm utilizes the sequence of addresses from the storage array to generate a sequence of prefetch addresses. The second core issues prefetch requests for the sequence of prefetch addresses to the memory to obtain prefetched data, and the prefetched data is provided to the first core if requested.
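A software model of the two-core arrangement appears below: the first core logs miss addresses into a shared storage array, and the helper core runs a simple stride detector over the log to generate and issue prefetch addresses. The stride heuristic is a stand-in for whatever software algorithm the second core actually runs; the prefetch depth and all names are illustrative.

```c
/* Minimal sketch of the two-core arrangement: core 0 deposits its
 * cache-miss addresses in a storage array, and a helper core runs a
 * simple stride predictor over them to generate prefetch addresses. */
#include <stdint.h>
#include <stdio.h>

#define MISS_LOG_SIZE 16

static uint64_t miss_log[MISS_LOG_SIZE]; /* storage array shared by the cores */
static int miss_count = 0;

/* Core 0 side: record an address that missed its cache. */
static void log_miss(uint64_t addr)
{
    if (miss_count < MISS_LOG_SIZE)
        miss_log[miss_count++] = addr;
}

/* Helper core side: detect a constant stride in the miss stream and
 * emit the next few addresses as prefetch requests to memory. */
static void helper_core_prefetch(void)
{
    if (miss_count < 3) return;
    int64_t s1 = (int64_t)(miss_log[miss_count - 1] - miss_log[miss_count - 2]);
    int64_t s2 = (int64_t)(miss_log[miss_count - 2] - miss_log[miss_count - 3]);
    if (s1 != s2 || s1 == 0) return;  /* no stable stride detected */

    for (int i = 1; i <= 4; i++)      /* prefetch depth of 4 is arbitrary */
        printf("prefetch 0x%llx\n",
               (unsigned long long)(miss_log[miss_count - 1] + (uint64_t)(i * s1)));
}

int main(void)
{
    log_miss(0x1000); log_miss(0x1040); log_miss(0x1080); /* 64B stride */
    helper_core_prefetch();  /* prints 0x10c0 .. 0x1180 */
    return 0;
}
```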
Abstract:
The illustrative embodiments provide a method, a computer program product, and an apparatus for managing a cache. A probability of a future request for data to be stored in a portion of the cache is identified for each thread of a number of threads to form a number of probabilities. Responsive to receiving the future request for the data from a thread in the number of threads, the data is stored with a rank in a number of ranks in the portion of the cache. The rank is selected using the probability in the number of probabilities for the thread.
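One way to picture the rank selection is the sketch below, which assumes the per-thread reuse probabilities are already known and maps each probability linearly onto the available ranks; the linear mapping and the eight-rank stack are illustrative assumptions, not the claimed method.

```c
/* Minimal sketch, assuming per-thread reuse probabilities are already
 * known: incoming data is inserted at a rank proportional to how
 * likely its thread is to request it again. */
#include <stdio.h>

#define NUM_RANKS   8  /* rank 0 evicted first, rank 7 evicted last */
#define NUM_THREADS 4

/* Probability that each thread will re-request data it touches. */
static const double reuse_prob[NUM_THREADS] = { 0.10, 0.45, 0.80, 0.95 };

/* Choose an insertion rank for data requested by `thread`: likely
 * re-used data lands near the protected end of the rank order. */
static int insertion_rank(int thread)
{
    int rank = (int)(reuse_prob[thread] * NUM_RANKS);
    return rank >= NUM_RANKS ? NUM_RANKS - 1 : rank;
}

int main(void)
{
    for (int t = 0; t < NUM_THREADS; t++)
        printf("thread %d -> rank %d\n", t, insertion_rank(t));
    return 0;
}
```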