Abstract:
A processing system [100] selects entries for eviction at one cache [130] based at least in part on the validity status of corresponding entries at a different cache [140]. The processing system includes a memory hierarchy having at least two caches, a higher level cache [140] and a lower level cache [130]. The lower level cache monitors which locations of the higher level cache have been indicated as invalid and, when selecting an entry of the lower level cache for eviction to the higher level cache, bases the selection at least in part on whether the evicted entry will be stored at an invalid cache line of the higher level cache.
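As a minimal C sketch of this selection step, assume the lower level cache keeps a validity shadow of the higher level cache and falls back to LRU; the names (hl_line_valid, pick_victim) and the direct-mapped higher level cache are illustrative assumptions, not details from the patent:

    /* Victim selection that prefers entries whose destination line in the
     * higher level cache is invalid, so the writeback displaces no data. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define HL_SETS 1024                      /* sets in the higher level cache */

    static bool hl_line_valid[HL_SETS];       /* validity shadow kept by the lower cache */

    static size_t hl_set_of(uint64_t addr)    /* assumed 64-byte lines, direct-mapped */
    {
        return (addr >> 6) % HL_SETS;
    }

    size_t pick_victim(const uint64_t *cand_addr, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (!hl_line_valid[hl_set_of(cand_addr[i])])
                return i;                     /* lands on an invalid line: evict this one */
        return 0;                             /* none found: fall back to LRU (index 0) */
    }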
Abstract:
Systems, apparatuses, and methods for generating a measurement of write memory bandwidth are disclosed. A control unit monitors writes to a cache hierarchy. If a write to a cache line is the first time the cache line has been modified since entering the cache hierarchy, then the control unit increments a write memory bandwidth counter. Otherwise, if the write is to a cache line that has already been modified since entering the cache hierarchy, the write memory bandwidth counter is not incremented. The first write to a cache line is a proxy for write memory bandwidth, since it will eventually cause a write to memory. The control unit uses the value of the write memory bandwidth counter to generate a measurement of the write memory bandwidth. The control unit can also maintain multiple counters for different thread classes to calculate the write memory bandwidth per thread class.
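A minimal C sketch of the counting rule, assuming a per-line modified-since-fill bit and a fixed number of thread classes; the field and function names are illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_CLASSES 4

    typedef struct {
        bool modified_since_fill;   /* cleared when the line enters the hierarchy */
        /* ... tag, state, data ... */
    } line_t;

    static uint64_t write_bw_count[NUM_CLASSES];   /* one counter per thread class */

    /* Only the first modification after a fill counts: that line will
     * eventually be written back to memory, so first writes are a proxy
     * for write memory bandwidth. */
    void on_cache_write(line_t *line, unsigned thread_class)
    {
        if (!line->modified_since_fill) {
            line->modified_since_fill = true;
            write_bw_count[thread_class]++;
        }
    }

Dividing the change in a counter over a sampling interval by the interval length, and multiplying by the line size, then yields a bytes-per-second estimate for that thread class.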
Abstract:
A processor [101] applies a transfer policy [111, 112] to a portion [118] of a cache [110] based on access metrics for different test regions [115, 116] of the cache, wherein each test region applies a different transfer policy for data in cache entries that were stored in response to prefetch requests but were not the subject of demand requests. One test region applies a transfer policy under which unused prefetches are transferred to a higher level cache in the cache hierarchy upon eviction from the test region of the cache. The other test region applies a transfer policy under which unused prefetches are replaced without being transferred to a higher level cache (or are transferred to the higher level cache but stored as invalid data) upon eviction from the test region of the cache.
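A C sketch of the two dueling policies, assuming per-entry prefetched/demanded bits and per-region hit counters as the access metric; all names are illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    typedef enum { XFER_ON_EVICT, DROP_ON_EVICT } xfer_policy_t;

    typedef struct {
        bool prefetched;    /* filled in response to a prefetch request */
        bool demanded;      /* later hit by a demand request            */
    } entry_t;

    static uint64_t hits_region_a, hits_region_b;   /* access metrics per test region */

    /* The non-test portion follows whichever test region performs better. */
    xfer_policy_t remainder_policy(void)
    {
        return (hits_region_a >= hits_region_b) ? XFER_ON_EVICT : DROP_ON_EVICT;
    }

    /* On eviction, an unused prefetch is either transferred to the higher
     * level cache or simply replaced, depending on the region's policy. */
    bool transfer_on_eviction(const entry_t *e, xfer_policy_t p)
    {
        bool unused_prefetch = e->prefetched && !e->demanded;
        if (!unused_prefetch)
            return true;               /* ordinary data follows the normal path */
        return p == XFER_ON_EVICT;
    }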
Abstract:
A processing system (100, 300) includes a cache (300) that includes cache lines (315) that are partitioned into a first subset (320) of the cache lines and second subsets (320) of the cache lines. The processing system also includes one or more counters (330) that are associated with the second subsets of the cache lines. The processing system further includes a processor (305) configured to modify the one or more counters in response to a cache hit or a cache miss associated with the second subsets. The one or more counters are modified by an amount determined by one or more characteristics of a memory access request that generated the cache hit or the cache miss.
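A C sketch of request-weighted counter updates, where the request characteristics (here a prefetch flag and a priority) and the weighting are assumptions for illustration:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        bool is_prefetch;   /* example request characteristics */
        int  priority;
    } request_t;

    static int64_t subset_counter;   /* counter tied to the second subsets */

    static int64_t weight_of(const request_t *r)
    {
        return (r->is_prefetch ? 1 : 2) + r->priority;
    }

    /* A hit or miss in a second subset moves the counter by a
     * request-dependent amount rather than a fixed step. */
    void on_subset_access(const request_t *r, bool hit)
    {
        subset_counter += hit ? weight_of(r) : -weight_of(r);
    }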
Abstract:
Systems, apparatuses, and methods for arbitrating threads in a computing system are disclosed. A computing system includes a processor with multiple cores, each capable of simultaneously processing instructions of multiple threads. When a thread throttling unit receives an indication that a shared cache has resource contention, the throttling unit sets a threshold number of cache misses for the cache. If the number of cache misses exceeds this threshold, then the throttling unit notifies a particular upstream computation unit to throttle the processing of instructions for the thread. After a time period elapses, if the cache continues to exceed the threshold, then the throttling unit notifies the upstream computation unit to throttle the thread more restrictively by reducing the selection rate, increasing the time period, or both. Otherwise, the throttling unit notifies the upstream computation unit to throttle the thread less restrictively.
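A C sketch of the escalation logic evaluated once per window; the concrete rates, bounds, and doubling of the window are assumptions:

    #include <stdint.h>

    typedef struct {
        uint64_t miss_threshold;   /* set when the shared cache reports contention     */
        unsigned select_rate;      /* how often the upstream unit may pick the thread  */
        uint64_t period_cycles;    /* length of the evaluation window                  */
    } throttle_t;

    /* If misses still exceed the threshold after the window elapses,
     * throttle more restrictively; otherwise relax the restriction. */
    void evaluate_window(throttle_t *t, uint64_t misses_this_window)
    {
        if (misses_this_window > t->miss_threshold) {
            if (t->select_rate > 1)
                t->select_rate--;      /* reduce the selection rate    */
            t->period_cycles *= 2;     /* and increase the time period */
        } else if (t->select_rate < 8) {
            t->select_rate++;          /* throttle less restrictively  */
        }
    }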
Abstract:
Systems, apparatuses, and methods for routing interrupts on a coherency probe network are disclosed. A computing system includes a plurality of processing nodes, a coherency probe network, and one or more control units. The coherency probe network carries coherency probe messages between coherent agents. Interrupts that are detected by a control unit are converted into messages that are compatible with coherency probe messages and then routed to a target destination via the coherency probe network. Interrupts are generated with a first encoding while coherency probe messages have a second encoding. Cache subsystems determine whether a message received via the coherency probe network is an interrupt message or a coherency probe message based on an encoding embedded in the received message. Interrupt messages are routed to interrupt controller(s), while coherency probe messages are processed in accordance with a coherency probe action field embedded in the message.
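A C sketch of the receive-side demultiplexing, with made-up encodings, message layout, and handler stubs:

    #include <stdint.h>

    enum { ENC_COHERENCY_PROBE = 0x1, ENC_INTERRUPT = 0x2 };

    typedef struct {
        uint8_t  encoding;       /* first vs. second encoding            */
        uint8_t  probe_action;   /* coherency probe action field         */
        uint16_t vector;         /* interrupt vector when ENC_INTERRUPT  */
    } probe_msg_t;

    static void deliver_to_interrupt_controller(uint16_t vector) { (void)vector; }
    static void perform_probe_action(uint8_t action) { (void)action; }

    /* A cache subsystem inspects the embedded encoding to decide whether
     * the message is an interrupt or a coherency probe. */
    void on_probe_network_msg(const probe_msg_t *m)
    {
        if (m->encoding == ENC_INTERRUPT)
            deliver_to_interrupt_controller(m->vector);
        else
            perform_probe_action(m->probe_action);
    }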
Abstract:
A processor (100) includes an operations scheduler (105) to schedule execution of operations at, for example, a set of execution units (110) or a cache of the processor. The operations scheduler periodically adds sets of operations to a tracking array (120), and further identifies when an operation in the tracked set is blocked from execution scheduling in response to, for example, identifying that the operation is dependent on another operation that has not completed execution. The processor further includes a counter (130) that is adjusted each time an operation in the tracking array is blocked from execution, and is reset each time an operation in the tracking array is executed. When the value of the counter exceeds a threshold (135), the operations scheduler prioritizes the remaining tracked operations for execution scheduling.
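A C sketch of the watchdog behavior for the tracked set; the threshold value and the prioritize hook are illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    #define BLOCK_THRESHOLD 64

    static uint32_t blocked_count;   /* adjusted when a tracked operation is blocked */

    static void prioritize_tracked_ops(void) { /* boost remaining tracked operations */ }

    /* Called per scheduling attempt on an operation in the tracking array:
     * executing any tracked operation resets the counter, while repeated
     * blocking past the threshold triggers prioritization. */
    void on_tracked_op(bool executed)
    {
        if (executed) {
            blocked_count = 0;
        } else if (++blocked_count > BLOCK_THRESHOLD) {
            prioritize_tracked_ops();
            blocked_count = 0;
        }
    }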
Abstract:
A cache [120] stores, along with data [170] that is being transferred from a higher level cache [140] to a lower level cache, information [171] indicating the higher level cache location from which the data was transferred. Upon receiving a request for data that is stored at the location in the higher level cache, a cache controller [130] stores the higher level cache location information in a status tag of the data. The cache controller then transfers the data, with the status tag indicating the higher level cache location, to a lower level cache. When the data is subsequently updated or evicted from the lower level cache, the cache controller reads the status tag location information and transfers the data back to the location in the higher level cache from which it was originally transferred.
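A C sketch of carrying the source location in the status tag; the field widths, names, and higher-level write stub are assumptions:

    #include <stdint.h>

    typedef struct {
        uint32_t hl_set;   /* higher level cache set the data came from */
        uint8_t  hl_way;   /* way within that set                       */
    } status_tag_t;

    typedef struct {
        status_tag_t tag;
        uint8_t data[64];
    } ll_entry_t;

    static void hl_cache_write(uint32_t set, uint8_t way, const uint8_t *data)
    {
        (void)set; (void)way; (void)data;   /* stands in for the array write */
    }

    /* On update or eviction from the lower level cache, the recorded
     * location sends the data back to the line it originally came from. */
    void evict_to_higher_cache(const ll_entry_t *e)
    {
        hl_cache_write(e->tag.hl_set, e->tag.hl_way, e->data);
    }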
Abstract:
Systems, apparatuses, and methods for dynamically adjusting cache policies to reduce execution core wait time are disclosed. A processor includes a cache subsystem. The cache subsystem includes one or more cache levels and one or more cache controllers. A cache controller partitions a cache level into two test portions and a remainder portion. The cache controller applies a first policy to the first test portion and applies a second policy to the second test portion. The cache controller determines the amount of time the execution core spends waiting on accesses to the first and second test portions. If the measured wait time is less for the first test portion than for the second test portion, then the cache controller applies the first policy to the remainder portion. Otherwise, the cache controller applies the second policy to the remainder portion.
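A C sketch of the wait-time comparison; the cycle accounting and policy names are illustrative:

    #include <stdint.h>

    typedef enum { POLICY_FIRST, POLICY_SECOND } policy_t;

    static uint64_t wait_cycles[2];   /* core stall cycles per test portion */

    /* Accumulate how long the execution core waited on an access that
     * targeted one of the two test portions. */
    void account_wait(int test_portion, uint64_t stall_cycles)
    {
        wait_cycles[test_portion] += stall_cycles;
    }

    /* Periodically apply the cheaper policy to the remainder portion. */
    policy_t choose_remainder_policy(void)
    {
        return (wait_cycles[0] < wait_cycles[1]) ? POLICY_FIRST : POLICY_SECOND;
    }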
Abstract:
A processing system [100] indicates the pendency of a memory access request [102] for data at the cache entry that is assigned to store the data in response to the memory access request. While executing instructions, the processor issues requests for data to the cache [140] most proximal to the processor. In response to a cache miss, the cache controller identifies an entry [245] of the cache to store the data in response to the memory access request, and stores an indication [147] at the identified cache entry that the memory access request is pending. If the cache controller receives a subsequent memory access request for the data while the original request is pending at the higher level of the memory hierarchy, the cache controller identifies that the request is pending based on the indication stored at the entry.
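A C sketch of the pending indication; the entry layout and tag comparison are simplified assumptions:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t tag;
        bool     valid;
        bool     pending;   /* fill request for this entry still in flight */
    } entry_t;

    /* On a miss, allocate the entry that will receive the data and mark
     * it pending before forwarding the request up the hierarchy. */
    void on_miss(entry_t *e, uint64_t tag)
    {
        e->tag = tag;
        e->valid = false;
        e->pending = true;
    }

    /* A subsequent request for the same data sees the pending bit and
     * waits instead of issuing a duplicate fill. */
    bool request_already_pending(const entry_t *e, uint64_t tag)
    {
        return e->pending && e->tag == tag;
    }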