Abstract:
An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. The L2 cache memory includes dual data banks so that one bank may perform a load operation while the other bank performs a store operation. The cache system provides a single dispatch point into the data flow to the dual cache banks of the L2 cache memory.
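A minimal C++ sketch of this arbitration idea, assuming a simple queued model; the names (DualBankArbiter, cycle, and so on) are hypothetical and not taken from the patent:

```cpp
#include <optional>
#include <queue>
#include <utility>

// Hypothetical model: two data banks, one of which can service a load
// while the other services a store in the same cycle. Draining loads
// first is what lets a load interrupt queued store traffic.
enum class Op { Load, Store };
struct Request { Op op; unsigned long addr; };

class DualBankArbiter {
    std::queue<Request> loadQ, storeQ;
public:
    void submit(const Request& r) {
        (r.op == Op::Load ? loadQ : storeQ).push(r);
    }

    // Single dispatch point into the data flow: each cycle, issue at
    // most one load to one bank and one store to the other bank.
    std::pair<std::optional<Request>, std::optional<Request>> cycle() {
        std::optional<Request> load, store;
        if (!loadQ.empty()) { load = loadQ.front(); loadQ.pop(); }
        if (!storeQ.empty()) { store = storeQ.front(); storeQ.pop(); }
        return {load, store};
    }
};
```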
Abstract:
An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. When the L2 cache memory finishes servicing the interrupting load request, the L2 cache memory may return to servicing the interrupted store request at the point of interruption. The control logic determines the size requirement of each load operation or store operation. When the cache memory system performs a store operation or load operation, the memory system accesses the portion of a cache line it needs to perform the operation instead of accessing an entire cache line.
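A hedged C++ sketch of the resume-at-interruption-point behavior, assuming the cache line is written in fixed-size beats and that a load may be serviced between any two beats; the sizes and names are assumptions, not details from the patent:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

constexpr std::size_t kLineBytes = 128;  // assumed cache-line size
constexpr std::size_t kBeatBytes = 32;   // assumed bank access width

// A store that writes only the beats its operation needs (its "size
// requirement") and remembers its position, so that after servicing an
// interrupting load it resumes where it left off instead of restarting.
struct StoreOp {
    std::array<std::uint8_t, kLineBytes> data{};
    std::size_t firstBeat, lastBeat;  // portion of the line to write
    std::size_t nextBeat;             // resume point after an interrupt

    // Caller must keep lastBeat < kLineBytes / kBeatBytes.
    StoreOp(std::size_t first, std::size_t last)
        : firstBeat(first), lastBeat(last), nextBeat(first) {}

    bool done() const { return nextBeat > lastBeat; }

    // Perform one beat; a load may be dispatched between calls.
    bool step(std::uint8_t* line) {
        if (done()) return false;
        const std::size_t off = nextBeat * kBeatBytes;
        for (std::size_t i = 0; i < kBeatBytes; ++i)
            line[off + i] = data[off + i];
        ++nextBeat;
        return !done();
    }
};
```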
Abstract:
A computer-implemented method, a processor chip, a data processing system, and a computer program product process information in a store cache of a data processing system. The store cache receives a first entry that includes a first address indicating a first segment of a cache line. The store cache then receives a second entry that includes a second address indicating a second segment of the same cache line. Responsive to the first segment not being equal to the second segment, the first entry is chained to the second entry.
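A short C++ sketch of the chaining step, assuming a simple list-backed store cache; the entry layout and names are illustrative guesses, not the patent's structures:

```cpp
#include <cstdint>
#include <list>

// Each entry records which segment of a cache line it writes; a new
// entry for a different segment of the same line is chained to the
// earlier entry rather than merged with it.
struct StoreEntry {
    std::uint64_t lineAddr;          // address of the cache line
    unsigned      segment;           // segment of the line being written
    StoreEntry*   chained = nullptr; // next entry for the same line
};

class StoreCache {
    std::list<StoreEntry> entries;   // std::list keeps pointers stable
public:
    StoreEntry& insert(std::uint64_t lineAddr, unsigned segment) {
        for (auto& e : entries) {
            if (e.lineAddr == lineAddr && e.segment != segment
                && e.chained == nullptr) {
                entries.push_back({lineAddr, segment});
                e.chained = &entries.back();   // chain first to second
                return entries.back();
            }
        }
        entries.push_back({lineAddr, segment});
        return entries.back();
    }
};
```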
Abstract:
A system and method for cache management in a data processing system having a memory hierarchy of upper memory caches and a lower memory cache. Upon receiving a cast-in request from one of the upper memory caches, a lower memory cache controller accesses a coherency state table to determine the replacement policy for the coherency states of cache lines present in the lower memory cache. The coherency state table implements a replacement policy that retains the more valuable of the cache coherency state information held in the upper and lower memory caches for a particular cache line contained in both levels of memory at the time of cast-out from the upper memory cache.
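The table-driven policy might look like the following C++ sketch; the MESI-style state names and the table contents are assumptions chosen to keep the "more valuable" state, not the patent's actual entries:

```cpp
#include <array>

enum class Coh { I = 0, S, E, M };  // assumed MESI-style states

// Indexed by [lower cache's current state][incoming cast-out state];
// each cell is the state the lower cache keeps for the line. This fill
// simply retains the stronger of the two coherency states.
constexpr std::array<std::array<Coh, 4>, 4> kCastInTable = {{
    //             in: I       S       E       M
    /* cur I */ {{Coh::I, Coh::S, Coh::E, Coh::M}},
    /* cur S */ {{Coh::S, Coh::S, Coh::E, Coh::M}},
    /* cur E */ {{Coh::E, Coh::E, Coh::E, Coh::M}},
    /* cur M */ {{Coh::M, Coh::M, Coh::M, Coh::M}},
}};

Coh castInMerge(Coh current, Coh incoming) {
    return kCastInTable[static_cast<int>(current)]
                       [static_cast<int>(incoming)];
}
```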
Abstract:
A cache memory logically partitions a cache array having a single access/command port into at least two slices, using a first cache directory to access the first cache array slice and a second cache directory to access the second, while accesses from both cache directories are managed by a single cache arbiter that controls the single access/command port. In the illustrative embodiment, each cache directory has its own directory arbiter to handle conflicting internal requests, and the directory arbiters communicate with the cache arbiter. An address tag associated with a load request is transmitted from the processor core with a designated bit that associates the tag with only one of the cache array slices, whose corresponding directory determines whether the tag matches a currently valid cache entry. The cache array may be arranged with rows and columns of cache sectors, wherein a given cache line is spread across sectors in different rows and columns, with at least one portion of the given cache line located in a first column having a first latency and another portion located in a second column having a second, greater latency. The cache array outputs different sectors of the given cache line in successive clock cycles based on the latency of a given sector.
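A C++ sketch of the slice-selection and arbitration path, assuming the designated bit sits at a fixed position in the tag; the bit position and all names are hypothetical:

```cpp
#include <cstdint>
#include <unordered_set>

constexpr unsigned kSliceBit = 6;   // assumed position of the slice bit

// Each slice's directory decides hit/miss on its own, but a single
// arbiter grants the one shared access/command port.
struct SliceDirectory {
    std::unordered_set<std::uint64_t> valid;   // currently valid tags
    bool lookup(std::uint64_t tag) const { return valid.count(tag) != 0; }
};

struct SlicedCache {
    SliceDirectory dir[2];

    // The designated bit associates the tag with exactly one slice.
    bool probe(std::uint64_t tag) const {
        return dir[(tag >> kSliceBit) & 1u].lookup(tag);
    }

    // Single cache arbiter: at most one slice's request wins the port.
    static int arbitrate(bool req0, bool req1) {
        return req0 ? 0 : (req1 ? 1 : -1);
    }
};
```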
Abstract:
A cache memory logically partitions a cache array into at least two slices, each having a plurality of cache lines, with a given cache line spread across two or more cache ways of contiguous bytes and a given cache way shared between the two cache slices. If a cache way that is part of a first cache line in the first cache slice and part of a second cache line in the second cache slice is defective, it is disabled while at least one other cache way that is also part of both cache lines remains in use. In the illustrative embodiment, the cache array is set-associative and at least two different cache ways for a given cache line contain different congruence classes for that cache line. The defective cache way can be disabled by preventing an eviction mechanism from allocating any congruence class in the defective way; for example, half of the cache line (i.e., half of the congruence classes) can be disabled. The cache array may be arranged with rows and columns of cache sectors (rows corresponding to the cache ways), wherein a given cache line is further spread across sectors in different rows and columns, with at least one portion of the given cache line located in a first column having a first latency and another portion located in a second column having a second, greater latency. The cache array can also output different sectors of the given cache line in successive clock cycles based on the latency of a given sector.
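The way-disabling idea reduces to masking the victim-selection logic, as in this hedged C++ sketch (the way count and names are assumptions):

```cpp
#include <bitset>
#include <cstddef>

constexpr std::size_t kWays = 8;   // assumed associativity

// The eviction mechanism never allocates into a disabled way, so a
// defective way simply drops out of service while the other ways of
// the same cache lines continue to be used.
class VictimSelector {
    std::bitset<kWays> usable;     // a cleared bit disables that way
public:
    VictimSelector() { usable.set(); }
    void disableWay(std::size_t w) { usable.reset(w); }

    // Pick a victim way for allocation, skipping disabled ways; a real
    // implementation would apply LRU among the usable ways.
    int pickVictim() const {
        for (std::size_t w = 0; w < kWays; ++w)
            if (usable.test(w)) return static_cast<int>(w);
        return -1;                 // every way disabled
    }
};
```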
Abstract:
A method and system are disclosed for managing saved process states in a memory of a data processing system that has multiple partitions executing independent operating systems. A hypervisor manager affords access to any processor in the data processing system for the purpose of storing process states for that processor to the memory, independent of the operating system running on the processor.
Abstract:
A method and system are disclosed for pre-loading a hard architected state of a next process from a pool of idle processes awaiting execution. When an executing process is interrupted on the processor, the hard architected state of a next process, which has been pre-stored in the processor, is loaded into architected storage locations in the processor. The next process to be executed, and thus the corresponding hard architected state to pre-store in the processor, are determined based on priorities assigned to the waiting processes.
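One plausible software model of this selection, sketched in C++ under the assumption that the pool is a priority queue and that only the top process's state is kept staged; the types are illustrative, not the patent's:

```cpp
#include <cstdint>
#include <queue>
#include <vector>

// "Hard" architected state: register values that must be restored
// before the process can run.
struct ArchState { std::vector<std::uint64_t> gprs, sprs; };

struct IdleProcess {
    int priority;
    ArchState state;
    bool operator<(const IdleProcess& o) const { return priority < o.priority; }
};

class Dispatcher {
    std::priority_queue<IdleProcess> pool;
    ArchState prestaged;   // state pre-stored in the processor
public:
    void enqueue(IdleProcess p) {
        pool.push(std::move(p));
        prestaged = pool.top().state;   // keep the likely-next state staged
    }

    // On interrupt of the running process, swap in the pre-staged
    // state and stage the state of the next-highest-priority process.
    ArchState onInterrupt() {
        ArchState next = prestaged;
        if (!pool.empty()) pool.pop();
        if (!pool.empty()) prestaged = pool.top().state;
        return next;
    }
};
```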
Abstract:
A multiprocessor data processing system requires careful management to maintain cache coherency. In conventional systems using a MESI approach, two or more processors often compete for ownership of a common cache line. As a result, ownership of the cache line frequently “bounces” between multiple processors, causing a significant reduction in cache efficiency. The preferred embodiment provides a modified MESI state that holds the status of the cache line static for a fixed period of time, eliminating the bounce effect caused by contention between multiple processors.
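A C++ sketch of the modified state's effect on snooping, assuming a cycle-counted hold window; the state name "Held" and the counter are assumptions, not the patent's terms:

```cpp
// Alongside the usual MESI states, a "held" state pins ownership of a
// contended line for a fixed number of cycles, so ownership cannot
// bounce between processors during that window.
enum class LineState { Invalid, Shared, Exclusive, Modified, Held };

struct CacheLineCtl {
    LineState state = LineState::Invalid;
    unsigned  holdCycles = 0;   // remaining cycles to stay static

    void acquireHeld(unsigned window) {
        state = LineState::Held;
        holdCycles = window;
    }

    // A remote snoop may take the line only once the hold expires.
    bool snoopMayTakeOwnership() {
        if (state == LineState::Held && holdCycles > 0) {
            --holdCycles;
            return false;
        }
        return true;
    }
};
```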
Abstract:
A multiprocessor data processing system requires careful management to maintain cache coherency. Conventional systems using a MESI approach sacrifice some performance with inefficient lock-acquisition and lock-retention techniques. The disclosed system provides additional cache states, indicator bits, and lock-acquisition routines to improve cache performance. In particular, as multiple processors compete for the same cache line, a significant amount of processor time is lost determining if another processor's cache line lock has been released and attempting to reserve that cache line while it is still owned by the other processor. The preferred embodiment provides an additional cache state which specifically indicates that a processor has released its lock on a cache line after it has performed any necessary modifications.
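A C++ sketch of how the extra state could short-circuit lock polling, with an assumed state name (ReleasedModified) standing in for the patent's additional cache state:

```cpp
// The extra state advertises that the owning processor has finished
// its modifications and released its lock on the line, so a waiting
// processor can acquire it immediately instead of repeatedly polling.
enum class LockState { Unlocked, Locked, ReleasedModified };

struct LockedLine {
    LockState s = LockState::Unlocked;

    void acquire() { s = LockState::Locked; }
    void release() { s = LockState::ReleasedModified; }  // mods complete

    // Another processor's acquisition attempt succeeds without retry
    // when the line advertises that the previous lock was released.
    bool tryAcquire() {
        if (s == LockState::Locked) return false;  // would spin otherwise
        s = LockState::Locked;
        return true;
    }
};
```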