Abstract:
A cache architecture for a multiprocessor data processing system. The cache architecture includes multiple first-level caches, two second-level caches, and main storage that is addressable by each of the processors. Each first-level cache is dedicated to a respective one of the processors. Each second-level cache is coupled to the other second-level cache, to the main storage, and to predetermined ones of the first-level caches. The range of cacheable addresses for both second-level caches encompasses the entire address space of the main storage. Each second-level cache may be viewed as dedicated, for write access, to the set of processors associated with its predetermined set of first-level caches, and shared, for read access, with the other set of processors. This combination of dedicated write access and shared read access enhances system efficiency. The cache architecture also includes coherency control that filters invalidation traffic between the second-level caches; the filtering, which further enhances system efficiency, is accomplished by tracking which second-level cache has the most recent version of the cached data.
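A minimal C sketch of the invalidation-filtering idea (the per-line flag, sizes, and function names are assumptions; the abstract does not specify data structures): a second-level cache that already holds the most recent version of a line can suppress the cross-cache invalidation, because the other cache's copy was already invalidated when ownership was first claimed.

    #include <stdbool.h>

    #define SLC_LINES 4096

    /* Hypothetical per-line coherency state for one second-level cache. */
    typedef struct {
        bool valid;
        bool most_recent;   /* this SLC holds the newest version of the line */
    } slc_line_t;

    typedef struct {
        slc_line_t line[SLC_LINES];
    } slc_t;

    /* Called when a local processor writes a line. Returns true when an
     * invalidation must be forwarded to the other second-level cache;
     * returns false when the traffic can be filtered out. */
    bool slc_write_needs_remote_invalidate(slc_t *local, unsigned idx)
    {
        slc_line_t *l = &local->line[idx];
        if (l->valid && l->most_recent) {
            /* We already own the newest copy, so the other SLC cannot
             * hold a valid version: filter the invalidation. */
            return false;
        }
        l->valid = true;
        l->most_recent = true;   /* claim ownership on this write */
        return true;             /* other SLC must invalidate its copy */
    }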
Abstract:
Method and apparatus for reducing address/function transfer pins in a system where cache memories in a system controller are accessed by a number of instruction processors. The pin reduction is obtained by splitting each request into two data transfers. The increase in addressing time that two transfers would otherwise incur is reduced to nearly the time of the transfers themselves, because the controller begins responding to the first data transfer while the second data transfer is still taking place.
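A toy C illustration of the two-transfer scheme (the bit widths and field layout are invented for the example, not taken from the patent): the first beat carries the bits needed to start the lookup, so the set read overlaps the arrival of the second beat.

    #include <stdint.h>

    /* Hypothetical 40-bit address/function word sent over 20 pins in
     * two beats instead of 40 pins in one. */
    typedef struct { uint32_t beat1, beat2; } xfer_t;

    xfer_t split_request(uint64_t addr_func)
    {
        xfer_t x;
        x.beat1 = (uint32_t)(addr_func & 0xFFFFF);          /* low 20 bits */
        x.beat2 = (uint32_t)((addr_func >> 20) & 0xFFFFF);  /* high 20 bits */
        return x;
    }

    /* Receiver side: the set index is fully contained in beat 1, so the
     * cache set can be read while beat 2 (the tag) is still on the pins;
     * only the tag compare remains once the second transfer completes. */
    unsigned set_index_from_beat1(uint32_t beat1)
    {
        return (beat1 >> 6) & 0x3FF;  /* 64-byte lines, 1024 sets (assumed) */
    }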
Abstract:
Method and apparatus for maximizing cache memory throughput in a system where a plurality of requesters may contend for access to the same memory simultaneously. The memory utilizes an interleaved addressing scheme wherein each memory segment is associated with a separate queuing structure, and the memory is mapped noncontiguously within each segment so that all segments are accessed equally. Throughput is maximized because the plurality of requesters are queued evenly throughout the system.
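A small C sketch of the interleave-plus-per-segment-queue arrangement (segment count, queue depth, and line size are assumptions): consecutive lines land in different segments, so sequential streams from many requesters spread evenly over all segment queues.

    #include <stdint.h>

    #define NUM_SEGMENTS  4
    #define QUEUE_DEPTH   16
    #define LINE_SHIFT    6          /* hypothetical 64-byte cache lines */

    typedef struct {
        uint64_t req[QUEUE_DEPTH];   /* pending request addresses */
        unsigned head, tail, count;
    } seg_queue_t;

    static seg_queue_t queues[NUM_SEGMENTS];

    /* Interleave on low-order line-address bits: addresses mapped to a
     * given segment are noncontiguous, so no segment becomes a hotspot. */
    static unsigned segment_of(uint64_t addr)
    {
        return (unsigned)((addr >> LINE_SHIFT) % NUM_SEGMENTS);
    }

    int enqueue_request(uint64_t addr)
    {
        seg_queue_t *q = &queues[segment_of(addr)];
        if (q->count == QUEUE_DEPTH)
            return -1;               /* segment queue full: requester retries */
        q->req[q->tail] = addr;
        q->tail = (q->tail + 1) % QUEUE_DEPTH;
        q->count++;
        return 0;
    }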
Abstract:
Flush apparatus for a dual multi-processing system. Each dual multi-processing system has a number of processors, with each processor having a write-through first-level cache coupled to a store-in second-level cache. A third-level memory is shared by the dual system, with the first-level and second-level caches able to address all of the third-level memory globally. Processors can write through to the local second-level cache and have access to the remote second-level cache via the local storage controller. A coherency scheme for the dual system provides each second-level cache with indicators for each cache line showing which lines are valid and which have been modified, that is, differ from what is reflected in the corresponding third-level memory. The flush apparatus uses these two indicators to transfer back to the remote memory all cache lines that fall within the remote memory address range and have been modified, prior to dynamically removing the local cache resources for either system maintenance or dynamic partitioning. The flush apparatus prevents the loss of system data during such a process, a loss that could otherwise occur due to the inherent nature of a store-in second-level cache.
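A minimal sketch in C of the flush walk this abstract describes, assuming a simple tag array with per-line valid and modified indicators (the structure names, sizes, and writeback callback are hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    #define SLC_LINES 4096

    typedef struct {
        uint64_t addr;      /* line-aligned physical address */
        bool     valid;     /* line holds usable data */
        bool     modified;  /* differs from third-level memory */
    } line_t;

    /* Before the local caches are removed (maintenance or repartitioning),
     * write back to remote memory every valid, modified line that falls
     * within the remote memory address window. */
    void flush_remote_range(line_t cache[SLC_LINES],
                            uint64_t remote_lo, uint64_t remote_hi,
                            void (*writeback)(uint64_t addr))
    {
        for (unsigned i = 0; i < SLC_LINES; i++) {
            line_t *l = &cache[i];
            if (l->valid && l->modified &&
                l->addr >= remote_lo && l->addr < remote_hi) {
                writeback(l->addr);   /* transfer the line back to remote memory */
                l->modified = false;
                l->valid = false;     /* line leaves the departing partition */
            }
        }
    }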
Abstract:
A system and method are provided to selectively flush data from cache memory to a main memory irrespective of the replacement algorithm that is used to manage the cache data. According to one aspect of the invention, novel “page flush” and “cache line flush” instructions are provided to flush a page and a cache line of memory data, respectively, from a cache to a main memory. In one embodiment, these instructions are included within the hardware instruction set of an Instruction Processor (IP). According to another aspect of the invention, flush operations are initiated using a background interface that interconnects the IP with its associated cache memory. A primary interface that also interconnects the IP to the cache memory is used to simultaneously issue higher-priority requests so that processor throughput is increased.
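As a sketch of how a “page flush” could decompose into line flushes issued over the background interface (the function names, page size, and line size are assumptions, not the patent's interface), one instruction expands into one low-priority flush command per cache line of the page while the primary interface stays free for normal requests:

    #include <stdint.h>

    #define PAGE_SIZE  4096u
    #define LINE_SIZE  64u

    /* Hypothetical low-priority flush command carried on the background
     * interface between the IP and its cache. */
    void background_flush_line(uint64_t line_addr);

    /* "Page flush": walk every cache line of the page containing the
     * given address and queue a background flush for each. */
    void page_flush(uint64_t page_addr)
    {
        uint64_t base = page_addr & ~(uint64_t)(PAGE_SIZE - 1);
        for (uint64_t off = 0; off < PAGE_SIZE; off += LINE_SIZE)
            background_flush_line(base + off);
    }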
Abstract:
A cache arrangement of a data processing system provides a cache flush operation initiated by a command from a maintenance processor. The cache arrangement includes a cache memory, a mode register, and a controller. The mode register is settable by the maintenance processor to either a first value or a second value. In response to the command, the controller selectively writes all of the modified information in the cache memory to the system memory. Also in response to the command, all of the information in the cache memory is invalidated if the mode register is set to the second value. In one embodiment, if the mode register is set to the first value, none of the information except the modified data is invalidated. The second value may be utilized to efficiently reassign one or more cache memories to a new partition.
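A rough C model of the mode-controlled flush (the enum names and line structure are hypothetical; the two enum values stand for the abstract's first and second mode-register values):

    #include <stdbool.h>

    typedef struct { bool valid, modified; } line_t;

    typedef enum {
        MODE_FIRST  = 1,   /* invalidate only the flushed (modified) lines */
        MODE_SECOND = 2    /* invalidate everything: cache can be repartitioned */
    } flush_mode_t;

    /* On the maintenance command: write every modified line back to the
     * system memory. Under the second value, every line is invalidated
     * afterward; under the first value, unmodified lines stay valid. */
    void flush_cache(line_t *lines, unsigned n, flush_mode_t mode,
                     void (*writeback)(unsigned idx))
    {
        for (unsigned i = 0; i < n; i++) {
            if (lines[i].valid && lines[i].modified) {
                writeback(i);           /* copy modified data to system memory */
                lines[i].modified = false;
                lines[i].valid = false; /* modified lines are invalidated */
            } else if (mode == MODE_SECOND) {
                lines[i].valid = false; /* second value empties the cache */
            }
        }
    }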
Abstract:
A system and method for increasing computing throughput through execution of data error detection/correction and cache hit detection operations in parallel. In one path, hit detection occurs independently of, and concurrently with, error detection and correction operations; reliance on hit detection in this path is conditioned on the absence of storage errors. A single error correction code (ECC) is used to minimize storage requirements, and the hit comparisons between the cached address and the requested address exclude the ECC bits to minimize the number of bits compared.
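A rough C model of the two parallel paths (the entry layout and the ecc_syndrome helper are hypothetical; in hardware the two computations run concurrently rather than sequentially as C must express them):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical stored tag entry: address tag plus a single ECC field
     * covering the entry. The hit compare never looks at the ECC bits. */
    typedef struct {
        uint32_t tag;
        uint8_t  ecc;
    } tag_entry_t;

    uint8_t ecc_syndrome(const tag_entry_t *e);   /* 0 means no error */

    bool lookup(const tag_entry_t *e, uint32_t req_tag, bool *trusted)
    {
        /* Hardware evaluates these two results in parallel. */
        bool hit   = (e->tag == req_tag);         /* excludes ECC bits */
        bool clean = (ecc_syndrome(e) == 0);

        *trusted = clean;    /* the fast-path hit is relied on only when
                              * no storage error is present */
        return hit;          /* on an error, the corrected path decides */
    }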
Abstract:
Systems and methods are provided for a data processing system and a cache arrangement. The data processing system includes at least one processor, a first-level cache, a second-level cache, and a memory arrangement. The first-level cache bypasses storing data for a memory request when a do-not-cache attribute is associated with the memory request. The second-level cache stores the data for the memory request. The second-level cache also bypasses updating of least-recently-used indicators of the second-level cache when the do-not-cache attribute is associated with the memory request.
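A minimal C sketch of the second-level cache behavior (the 4-way set and age-based LRU encoding are assumptions): the data is stored either way, but the do-not-cache attribute suppresses the LRU update, so streaming data cannot push out the working set; the first-level cache simply declines to store such a line at all.

    #include <stdbool.h>
    #include <stdint.h>

    #define WAYS 4

    typedef struct {
        uint64_t tag[WAYS];
        uint8_t  lru[WAYS];     /* rank 0 = most recently used */
        bool     valid[WAYS];
    } slc_set_t;

    /* Fill a way of a second-level cache set. A do-not-cache request
     * still stores the data but leaves the LRU indicators untouched. */
    void slc_fill(slc_set_t *set, unsigned way, uint64_t tag, bool do_not_cache)
    {
        set->tag[way] = tag;
        set->valid[way] = true;
        if (do_not_cache)
            return;             /* bypass the LRU update */
        for (unsigned w = 0; w < WAYS; w++)
            if (set->lru[w] < set->lru[way])
                set->lru[w]++;  /* age the ways that were newer than this one */
        set->lru[way] = 0;      /* mark this way most recently used */
    }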
Abstract:
The current invention provides a system and method for maintaining memory coherency within a multiprocessor environment that includes multiple requesters such as instruction processors coupled to a shared main memory. Within the system of the current invention, data may be provided from the shared memory to a requester for update purposes before all other read-only copies of this data stored elsewhere within the system have been invalidated. To ensure that this acceleration mechanism does not result in memory incoherency, an instruction is provided for inclusion within the instruction set of the processor. Execution of this instruction causes the executing processor to discontinue execution until all outstanding invalidation activities have completed for any data that has been retrieved and updated by the processor.
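The stall-until-invalidations-complete semantics can be modeled roughly in C with an outstanding-invalidation counter (a hypothetical software analogue; the patent describes a hardware instruction that stalls the pipeline):

    #include <stdatomic.h>

    /* Hypothetical counter of invalidations still outstanding for data
     * this processor has retrieved and updated ahead of invalidation. */
    extern atomic_uint pending_invalidations;

    /* Software model of the proposed instruction: execution does not
     * proceed past this point until every outstanding invalidation for
     * the processor's updated data has completed. */
    void wait_for_invalidations(void)
    {
        while (atomic_load(&pending_invalidations) != 0)
            ;   /* real hardware stalls the pipeline instead of spinning */
    }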