Adaptive granularity for reducing cache coherence overhead
Abstract:
To reduce cache coherence overhead for shared caches in multi-processor systems, adaptive granularity tracks shared data at a coarse granularity and unshared data at a fine granularity. An adaptive-granularity process selects how large an entry is needed to track a block's coherence based on the block's state. Shared blocks are tracked in coarse-grained region entries that contain a sharer-tracking bit vector and a bit vector indicating which blocks are likely present in the system, but that do not identify a block's owner. Modified/unshared data is tracked in fine-grained entries that permit ownership tracking and exact location and invalidation of cached copies. In large caches where the majority of blocks are shared and unmodified, overhead is reduced because those blocks are tracked in the less costly coarse-grained region entries.
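
The following C sketch illustrates one possible reading of the two entry formats and the state-based granularity selection described above. The structure layouts, field names, and sizes (REGION_BLOCKS, the vector widths) are illustrative assumptions, not taken from the patent.

    /* Minimal sketch of adaptive-granularity directory entries. */
    #include <stdint.h>
    #include <stdbool.h>

    #define REGION_BLOCKS 16   /* assumed number of blocks per coarse region */

    /* Fine-grained entry: covers one block and records the exact owner, so
       modified/unshared data can be located and invalidated precisely. */
    typedef struct {
        uint64_t block_addr;
        uint8_t  owner_id;     /* identifies the owning cache */
        bool     modified;
    } fine_entry_t;

    /* Coarse-grained region entry: covers a whole region of blocks. It holds
       a sharer-tracking bit vector and a bit vector of blocks likely present,
       but identifies no owner for any block. */
    typedef struct {
        uint64_t region_addr;
        uint64_t sharer_vec;   /* one bit per potential sharing cache      */
        uint16_t present_vec;  /* one bit per block likely present (16)    */
    } region_entry_t;

    typedef enum { USE_REGION_ENTRY, USE_FINE_ENTRY } gran_t;

    /* Granularity selection based on block state: modified or unshared data
       needs the precise fine-grained entry; shared, clean data can fall back
       to the cheaper coarse-grained region entry. */
    gran_t select_granularity(bool shared, bool modified)
    {
        if (modified || !shared)
            return USE_FINE_ENTRY;
        return USE_REGION_ENTRY;
    }
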