Abstract:
A processor cache memory system utilizes separate cache controllers for independently managing even and odd input address requests with the even and odd address requests being mapped into the respective controllers. Each cache controller includes tag RAM for storing address tags, including a field for storing the least significant address bit, so that the stored tags distinguish between the odd and even addresses. Upon failure of a cache controller, both the even and odd addresses are directed to the operational controller and the stored least significant bit address tag distinguishes between the odd and even input addresses to appropriately generate HIT/MISS signals. The controllers include block address counter logic for generating respective even and odd invalidation addresses for simultaneously performing invalidation cycles thereon when both controllers are operational. When a controller fails, the block address counter logic generates both even and odd block invalidation addresses in the operational controller.
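The following is a minimal C sketch, not from the patent, of the even/odd tag-lookup and failover routing described above; the names (tag_entry, cache_controller, lookup, cache_hit) and the direct-mapped tag-RAM organization are illustrative assumptions.

#include <stdbool.h>
#include <stdint.h>

#define SETS 1024          /* illustrative tag-RAM depth */

/* One stored tag: the upper address bits plus the least significant address
 * bit, so an entry remains distinguishable as "even" or "odd" after failover. */
typedef struct {
    bool     valid;
    uint32_t tag;          /* upper address bits */
    uint32_t lsb;          /* least significant address bit: 0 = even, 1 = odd */
} tag_entry;

typedef struct {
    bool      failed;
    tag_entry ram[SETS];
} cache_controller;

/* HIT/MISS check inside one controller: the stored LSB must also match the
 * incoming address, keeping even and odd requests distinguishable. */
static bool lookup(const cache_controller *c, uint32_t addr)
{
    uint32_t lsb = addr & 1u;
    uint32_t set = (addr >> 1) % SETS;
    uint32_t tag = (addr >> 1) / SETS;
    const tag_entry *e = &c->ram[set];
    return e->valid && e->tag == tag && e->lsb == lsb;
}

/* Route a request: even addresses to controller 0, odd to controller 1,
 * unless one controller has failed, in which case the survivor takes both. */
static bool cache_hit(cache_controller ctl[2], uint32_t addr)
{
    cache_controller *target = &ctl[addr & 1u];
    if (target->failed)
        target = &ctl[(addr & 1u) ^ 1u];
    return lookup(target, addr);
}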
Abstract:
A method and implementation are provided for the synchronous loading and integrity checking of registers located in two different integrated circuit chips. In a computer system having cache memory sliced into two portions, one holding even addresses and the other holding odd addresses, two individual chips are provided, each having a program word address register that is loaded at the same instant and incremented at the same instant. Means are further provided for checking the integrity of the program word address registers in the first and second cache slices to ensure that they are coherent; if they are not coherent, a disable signal prevents use of the address data involved.
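Below is a minimal C sketch, under assumed names (pwar_pair, pwar_load, pwar_increment, pwar_check), of the lockstep loading, incrementing, and mismatch-driven disable signal described above; it models behavior only and is not the patented circuitry.

#include <stdbool.h>
#include <stdint.h>

/* One program word address register per cache slice (even / odd chip). */
typedef struct {
    uint32_t pwar[2];   /* [0] = even slice, [1] = odd slice */
    bool     disable;   /* integrity failure: address data must not be used */
} pwar_pair;

/* Both registers are loaded in the same cycle... */
static void pwar_load(pwar_pair *p, uint32_t addr)
{
    p->pwar[0] = addr;
    p->pwar[1] = addr;
}

/* ...and incremented in the same cycle. */
static void pwar_increment(pwar_pair *p)
{
    p->pwar[0]++;
    p->pwar[1]++;
}

/* Integrity check: if the two slices ever disagree, raise the disable
 * signal so the (possibly corrupt) address is not used. */
static bool pwar_check(pwar_pair *p)
{
    p->disable = (p->pwar[0] != p->pwar[1]);
    return !p->disable;
}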
Abstract:
In a time-shared bus computer system with processors having cache memories, an adjustable invalidation queue is provided for use in the cache memories. The invalidation queue has adjustable upper and lower limit positions that define when the queue is logically full and logically empty, respectively. The queue is flushed down to the lower limit when the contents of the queue attain the upper limit. During the queue flushing operation, WRITE requests on the bus are RETRYed. The computer maintenance system sets the upper and lower limits at system initialization time to optimize system performance under maximum bus traffic conditions.
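A minimal C sketch of the adjustable limits, the flush-to-lower-limit behavior, and the RETRY of bus WRITEs during flushing follows; the structure and function names (inval_queue, iq_init, iq_post, iq_service_one) and the depth of 64 entries are assumptions for illustration only.

#include <stdbool.h>
#include <stdint.h>

#define IQ_CAPACITY 64

typedef struct {
    uint32_t entries[IQ_CAPACITY];
    unsigned count;
    unsigned upper;       /* "logically full" threshold, set at init */
    unsigned lower;       /* flush target / "logically empty" threshold */
    bool     flushing;    /* while true, bus WRITEs are answered with RETRY */
} inval_queue;

/* The maintenance system picks the limits at system initialization time. */
static void iq_init(inval_queue *q, unsigned upper, unsigned lower)
{
    q->count = 0;
    q->upper = upper;
    q->lower = lower;
    q->flushing = false;
}

/* A snooped bus WRITE either enqueues an invalidation address or, once the
 * queue has reached its upper limit, forces a RETRY of that WRITE. */
static bool iq_post(inval_queue *q, uint32_t addr)
{
    if (q->flushing || q->count >= q->upper) {
        q->flushing = true;
        return false;                 /* caller issues RETRY on the bus */
    }
    q->entries[q->count++] = addr;
    return true;
}

/* One invalidation cycle drains an entry; flushing stops at the lower limit. */
static void iq_service_one(inval_queue *q)
{
    if (q->count > 0)
        q->count--;                   /* invalidate the corresponding line */
    if (q->flushing && q->count <= q->lower)
        q->flushing = false;
}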
Abstract:
A "bit-sliced" construction cache module dictates dual TAG RAM Structures and dual invalidation queues, yielding enhanced performance: putting half the TAG array in each of two cache arrays, and allowing each to handle only one-half of the possible address values. Preferably, one half-module handles ZERO least-significant bits and the other handles ONE least-significant bits. Processor operations and invalidation operations can be "overlapped", and even operate simultaneously.
Abstract:
A system and circuitry are provided by which certain selected embedded pins of an integrated circuit gate array may be given dual functions; that is, they may act either as receivers of an externally sourced input signal or as transmitters of an internally generated output signal. Each selected input/output pin is controlled by an associated flip-flop residing in a chain of flip-flops, and that flip-flop determines the condition of the two buffer-drivers attached to the pin. When the first buffer-driver is tri-stated (disabled), the embedded pin operates as an input receiver. When the first buffer-driver is enabled, the embedded I/O pin conveys an output signal from the internal output logic.
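The following C sketch models, in software terms, the flip-flop chain that sets each dual-function pin's direction and the resulting input/output behavior; the names (pin_config_chain, chain_shift_in, pin_value) and the eight-pin width are assumptions for illustration, not the patented circuit.

#include <stdbool.h>
#include <stdint.h>

#define NUM_DUAL_PINS 8

/* One configuration flip-flop per dual-function pin; the flip-flops form a
 * shift chain so all pin directions can be loaded serially. */
typedef struct {
    bool direction[NUM_DUAL_PINS];   /* true: output driver enabled          */
                                     /* false: driver tri-stated, pin is input */
} pin_config_chain;

/* Shift one configuration bit into the chain (oldest bit falls off the end). */
static void chain_shift_in(pin_config_chain *c, bool bit)
{
    for (int i = NUM_DUAL_PINS - 1; i > 0; i--)
        c->direction[i] = c->direction[i - 1];
    c->direction[0] = bit;
}

/* Resolve the value seen at a pin: in output mode the internal logic drives
 * the pad; in input mode the externally sourced signal is received. */
static bool pin_value(const pin_config_chain *c, unsigned pin,
                      bool internal_out, bool external_in)
{
    return c->direction[pin] ? internal_out : external_in;
}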
Abstract:
An Invalidation Queue (IQ) arrangement in a computer system having Main Memory and Cache-Memory with a pair of intermediate main-buses, this IQ arrangement comprising: a pair of split IQ ASIC arrays disposed between each Cache-Memory and the main-buses, adapted to assure identical data in all identical memory addresses in different caches, to "remember" write-operations along the buses, and to execute invalidation sequences for any Cache-Memory unit as dictated by that Cache-Memory unit.
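A minimal C sketch of one IQ slice per main-bus follows, showing how a write observed on a bus is remembered and later drained as an invalidation; iq_slice, iq_snoop_write, iq_next_invalidation, and the 32-entry depth are illustrative assumptions.

#include <stdint.h>

#define IQ_DEPTH 32

/* One IQ slice per main-bus, sitting between that bus and the cache. */
typedef struct {
    uint32_t addr[IQ_DEPTH];
    unsigned head, tail;
} iq_slice;

/* "Remember" a write observed on one of the two main-buses so the cache
 * can later invalidate any stale copy of that address. */
static void iq_snoop_write(iq_slice iq[2], unsigned bus, uint32_t addr)
{
    iq_slice *q = &iq[bus];
    q->addr[q->tail] = addr;
    q->tail = (q->tail + 1) % IQ_DEPTH;
}

/* The cache, when it dictates, drains its queues and invalidates matching lines. */
static int iq_next_invalidation(iq_slice *q, uint32_t *addr_out)
{
    if (q->head == q->tail)
        return 0;                       /* nothing pending */
    *addr_out = q->addr[q->head];
    q->head = (q->head + 1) % IQ_DEPTH;
    return 1;
}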
Abstract:
By expanding the cache address invalidation queue into bit slices that hold odd invalidation addresses and even invalidation addresses, and by providing a more efficient series of transition cycles to accomplish cache address invalidations during both cache hit and cache miss cycles, the present architecture and methodology permit a faster cycle of cache address invalidations when required, and also permit a higher frequency of processor access to the cache without the processor being completely locked out of cache memory access during heavy traffic and high levels of cache invalidation activity.
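One way to picture the "not completely locked out" property is the heavily simplified C arbitration sketch below, which interleaves processor cycles with queued even/odd invalidation cycles; the names (inval_state, next_cycle, GRANT_PROCESSOR, GRANT_INVALIDATE) and the strict alternation policy are assumptions made for illustration and do not reproduce the actual transition-cycle sequence.

#include <stdbool.h>
#include <stdint.h>

/* Pending invalidations in the even and odd queue slices. */
typedef struct {
    unsigned even_pending;
    unsigned odd_pending;
} inval_state;

typedef enum { GRANT_PROCESSOR, GRANT_INVALIDATE } cycle_grant;

/* Decide who gets the next cache cycle. Under heavy invalidation traffic the
 * grants alternate, so the processor still gets every other cycle instead of
 * being completely locked out while the queues drain. */
static cycle_grant next_cycle(const inval_state *s, bool processor_waiting,
                              bool last_was_invalidate)
{
    bool inval_waiting = (s->even_pending + s->odd_pending) > 0;

    if (processor_waiting && inval_waiting)
        return last_was_invalidate ? GRANT_PROCESSOR : GRANT_INVALIDATE;
    if (inval_waiting)
        return GRANT_INVALIDATE;
    return GRANT_PROCESSOR;
}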