Abstract:
A method and system for implementing a cache coherency mechanism that supports a non-inclusive cache memory hierarchy within a data processing system are disclosed. In accordance with the method and system of the invention, the memory hierarchy includes a primary cache memory, a secondary cache memory, and a main memory. The primary cache memory and the secondary cache memory are non-inclusive. Further, a first state bit and a second state bit are provided within the primary cache memory, in association with each cache line of the primary cache. In a preferred embodiment, the first state bit is set only if the corresponding cache line in the primary cache memory has been modified under a write-through mode, while the second state bit is set only if the corresponding cache line also exists in the secondary cache memory. As such, cache coherency between the primary cache memory and the secondary cache memory can be maintained by utilizing the first state bit and the second state bit in the primary cache memory.
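A minimal behavioral sketch in C of how the two per-line state bits described above might be tracked. The surrounding L1/L2 bookkeeping and all names (l1_line_t, l1_castout, and so on) are illustrative assumptions, not taken from the abstract.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical model of one primary (L1) cache line with the two state
 * bits described in the abstract: "wt_modified" is set only when the line
 * has been modified under write-through mode, and "in_l2" is set only when
 * a copy of the line also exists in the secondary (L2) cache.            */
typedef struct {
    unsigned long tag;
    bool valid;
    bool wt_modified;  /* first state bit  */
    bool in_l2;        /* second state bit */
} l1_line_t;

/* Processor store hitting the L1 line while the cache runs write-through. */
static void l1_write_through_store(l1_line_t *line)
{
    line->wt_modified = true;   /* data went toward memory, but an L2 copy */
                                /* (if any) may now be stale               */
}

/* Note whether the non-inclusive L2 currently holds a copy of this line. */
static void l1_note_l2_presence(l1_line_t *line, bool present)
{
    line->in_l2 = present;
}

/* On an L1 castout (or snoop), the two bits together tell us whether a
 * copy in the non-inclusive L2 must be brought back into agreement.      */
static void l1_castout(const l1_line_t *line)
{
    if (line->in_l2 && line->wt_modified)
        printf("refresh stale copy in L2 for tag %#lx\n", line->tag);
    else if (line->in_l2)
        printf("L2 copy of tag %#lx is already current\n", line->tag);
    else
        printf("no L2 copy of tag %#lx to maintain\n", line->tag);
}

int main(void)
{
    l1_line_t line = { .tag = 0x1a40, .valid = true };
    l1_note_l2_presence(&line, true);
    l1_write_through_store(&line);
    l1_castout(&line);
    return 0;
}
```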
Abstract:
A cache has an array for holding data or instruction values, a buffer connected to the array, and means for accessing the buffer to retrieve a value for a processing unit. The accessing means uses wires having a pitch that is substantially equal to the wire pitch of the cache array. Multiplexers can be used with a plurality of such buffers to create a common output path. The cache can be interleaved, with the array being a first subarray and the buffer being a first buffer, and further comprising a second subarray and a second buffer, wherein the first and second buffers separate the first and second subarrays. The invention can be applied to a store-back buffer as well as a reload buffer.
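Wire pitch is a physical-layout property and cannot be expressed in software, but the buffer-per-subarray organization and the multiplexed common output path can be sketched behaviorally. The C names below (subarray_t, cache_read) and the even/odd interleave are illustrative assumptions only.

```c
#include <stdint.h>
#include <stdio.h>

#define LINES_PER_SUBARRAY 4
#define WORDS_PER_LINE     8

/* One subarray plus the buffer that sits beside it (a reload buffer or a
 * store-back buffer in the abstract's terms).  Purely behavioral; the
 * matched wire pitch is a layout property not representable here.        */
typedef struct {
    uint32_t lines[LINES_PER_SUBARRAY][WORDS_PER_LINE];
    uint32_t buffer[WORDS_PER_LINE];   /* holds one line's worth of data  */
    int      buffered_line;            /* which line the buffer mirrors   */
} subarray_t;

/* Read one word, preferring the adjacent buffer when it holds the line. */
static uint32_t subarray_read(const subarray_t *s, int line, int word)
{
    if (s->buffered_line == line)
        return s->buffer[word];        /* short path through the buffer   */
    return s->lines[line][word];       /* normal array access             */
}

/* A 2:1 multiplexer joining the two subarrays into one common output path. */
static uint32_t cache_read(const subarray_t *even, const subarray_t *odd,
                           int line, int word)
{
    int use_odd = line & 1;            /* simple even/odd interleave      */
    return use_odd ? subarray_read(odd,  line >> 1, word)
                   : subarray_read(even, line >> 1, word);
}

int main(void)
{
    subarray_t even = { .buffered_line = -1 }, odd = { .buffered_line = -1 };
    even.lines[1][3] = 0xabcd;
    odd.buffer[3] = 0x1234;            /* reloaded data waiting in buffer */
    odd.buffered_line = 1;             /* overall line 3 maps to odd[1]   */
    printf("%#x %#x\n", (unsigned)cache_read(&even, &odd, 2, 3),
                        (unsigned)cache_read(&even, &odd, 3, 3));
    return 0;
}
```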
Abstract:
A cache memory having a selectable cache-line replacement scheme is described. In accordance with a preferred embodiment of the present invention, the cache memory has a number of cache lines, a number of token registers, a token, and a selection circuit. The token registers are connected to each other in a ring configuration. There is an equal number of token registers and cache lines, and each of the token registers is associated with one of the cache lines. The token is utilized to indicate one of the cache lines as a candidate for replacement, namely the cache line associated with the token register in which the token settles. The selection circuit is associated with all of the token registers and provides at least two methods of controlling the movement of the token within the ring of token registers, selectable during runtime. Each method of token movement represents a cache-line replacement scheme.
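The abstract does not name the selectable token-movement methods, so the C sketch below assumes two plausible ones ("advance every cycle" and "advance only on replacement") purely for illustration; the names token_ring_t and ring_choose_victim are likewise hypothetical.

```c
#include <stdio.h>

#define NUM_LINES 8

/* Replacement scheme selected at runtime, per the abstract.  The abstract
 * does not name the schemes, so two plausible ones are assumed here.     */
typedef enum {
    MOVE_EVERY_CYCLE,      /* token circulates continuously                */
    MOVE_ON_REPLACEMENT    /* token advances only after a line is replaced */
} token_scheme_t;

/* The ring of token registers is modeled as a single index; exactly one
 * register "holds" the token at any time.                                */
typedef struct {
    int            token_pos;
    token_scheme_t scheme;
} token_ring_t;

/* Called once per cache cycle. */
static void ring_tick(token_ring_t *r)
{
    if (r->scheme == MOVE_EVERY_CYCLE)
        r->token_pos = (r->token_pos + 1) % NUM_LINES;
}

/* Pick the replacement victim: the line whose register holds the token. */
static int ring_choose_victim(token_ring_t *r)
{
    int victim = r->token_pos;
    if (r->scheme == MOVE_ON_REPLACEMENT)
        r->token_pos = (r->token_pos + 1) % NUM_LINES;
    return victim;
}

int main(void)
{
    token_ring_t ring = { .token_pos = 0, .scheme = MOVE_ON_REPLACEMENT };
    for (int i = 0; i < 3; i++)
        printf("victim: line %d\n", ring_choose_victim(&ring));
    ring.scheme = MOVE_EVERY_CYCLE;    /* scheme can be switched at runtime */
    ring_tick(&ring);
    printf("after one cycle, token at line %d\n", ring.token_pos);
    return 0;
}
```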
Abstract:
A cache sub-array arbitration circuit is disclosed for receiving a plurality of address operands from a pending line of processor instructions in order to pre-fetch data needed by any memory access request in the pending instructions. The sub-array arbitration circuit compares at least two addresses corresponding to memory locations in the cache and determines in which sub-arrays the memory locations reside. If the two memory locations reside in the same sub-array, the arbitration circuit sends the higher-priority address to the sub-array. If a received address operand is the real address of a cache miss, the arbitration circuit sends the cache-miss address to the sub-array before other pre-fetch memory access requests.
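A short C sketch of the arbitration decision described above. The address-to-sub-array mapping (low-order line-index bits, 64-byte lines) and all names (request_t, arbitrate) are assumptions for illustration, not details from the abstract.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_SUBARRAYS 4

/* One pre-fetch or demand request; field names are illustrative only. */
typedef struct {
    unsigned addr;
    bool     is_miss;      /* real address of an outstanding cache miss */
    int      priority;     /* larger value = higher priority            */
} request_t;

/* Sub-array selection is assumed to come from low-order line-index bits. */
static int subarray_of(unsigned addr)
{
    return (addr >> 6) % NUM_SUBARRAYS;   /* 64-byte lines assumed */
}

/* Decide which of two requests is presented to a contended sub-array:
 * a cache-miss address always goes first, otherwise the higher-priority
 * address is sent.                                                       */
static const request_t *arbitrate(const request_t *a, const request_t *b)
{
    if (subarray_of(a->addr) != subarray_of(b->addr))
        return NULL;                       /* no conflict: both proceed */
    if (a->is_miss != b->is_miss)
        return a->is_miss ? a : b;
    return (a->priority >= b->priority) ? a : b;
}

int main(void)
{
    request_t prefetch = { .addr = 0x1040, .is_miss = false, .priority = 1 };
    request_t miss     = { .addr = 0x2040, .is_miss = true,  .priority = 0 };
    const request_t *w = arbitrate(&prefetch, &miss);
    if (w)
        printf("sub-array %d serves %#x first\n",
               subarray_of(w->addr), w->addr);
    else
        printf("no conflict; both requests proceed\n");
    return 0;
}
```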
Abstract:
A mechanism for cache-line replacement within a cache memory having redundant cache lines is disclosed. In accordance with a preferred embodiment of the present invention, the mechanism comprises a token, multiple token registers, multiple allocation-indicating circuits, multiple bypass circuits, and a circuit for replacing a cache line within the cache memory in response to a location of the token. The token is utilized to indicate a candidate cache line for cache-line replacement. The token registers are connected in a ring configuration, and each of the token registers is associated with a cache line of the cache memory, including all redundant cache lines. Normally, one of these token registers contains the token. Each token register has an allocation-indicating circuit, which is utilized to indicate whether or not an allocation procedure is in progress at the cache line with which it is associated. Each token register also has a bypass circuit, which is utilized to transfer the token from one token register to an adjacent token register in response to an indication from the associated allocation-indicating circuit.
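A minimal C sketch of the token ring with bypass behavior described above: when the token would land on a register whose line is mid-allocation, the bypass forwards it to the next register. The structure names and the assumption that at least one line is always eligible are illustrative, not from the abstract.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_LINES 8   /* includes any redundant cache lines */

/* One token register with its allocation-indicating and bypass state. */
typedef struct {
    bool has_token;
    bool allocation_in_progress;  /* allocation-indicating circuit output */
} token_reg_t;

/* Advance the token one position around the ring; the bypass circuits
 * skip any register whose line is currently being allocated.  Assumes
 * at least one line is not under allocation.                            */
static void advance_token(token_reg_t ring[NUM_LINES])
{
    int cur = 0;
    while (!ring[cur].has_token)
        cur++;
    ring[cur].has_token = false;

    int next = (cur + 1) % NUM_LINES;
    while (ring[next].allocation_in_progress)     /* bypass circuit path  */
        next = (next + 1) % NUM_LINES;
    ring[next].has_token = true;
}

/* The replacement candidate is the line whose register holds the token. */
static int candidate_line(const token_reg_t ring[NUM_LINES])
{
    for (int i = 0; i < NUM_LINES; i++)
        if (ring[i].has_token)
            return i;
    return -1;
}

int main(void)
{
    token_reg_t ring[NUM_LINES] = { { .has_token = true } };
    ring[1].allocation_in_progress = true;   /* line 1 is mid-allocation  */
    advance_token(ring);                     /* token bypasses line 1     */
    printf("replacement candidate: line %d\n", candidate_line(ring));
    return 0;
}
```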
Abstract:
A cache memory having a mechanism for managing offset and aliasing conditions is disclosed. In accordance with a preferred embodiment of the invention, the cache memory comprises a first directory circuit, a second directory circuit, multiple most-recently-used bits, and multiple set/reset circuits. The first directory circuit, having multiple cache lines, is utilized to receive partial effective addresses. The second directory circuit is utilized to receive an output from the first directory circuit. A most-recently-used bit is associated with each cache line within the first directory circuit. A set/reset circuit, coupled to each of the most-recently-used bits, is utilized to set one of the most-recently-used bits to a first state while concurrently resetting the rest of the most-recently-used bits to a second state within a single cycle during an occurrence of an offset or aliasing condition, such that offset and aliasing conditions can be managed more efficiently.
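A brief C sketch of the single-cycle set/reset behavior of the most-recently-used bits. Keeping the bits in one word so that one assignment sets the chosen bit and clears all others is an illustrative modeling choice; the names mru_state_t and mru_update are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_LINES 16

/* The per-line most-recently-used bits are kept in one word so that
 * setting one bit and resetting all of the others is a single update,
 * mirroring the single-cycle set/reset behavior described above.        */
typedef struct {
    uint16_t mru;     /* bit i is the MRU bit of cache line i */
} mru_state_t;

/* Called when an offset or aliasing condition resolves to "line":
 * that line's MRU bit goes to the first state (1) while every other
 * MRU bit is simultaneously reset to the second state (0).              */
static void mru_update(mru_state_t *s, int line)
{
    s->mru = (uint16_t)(1u << line);
}

/* Return the line whose MRU bit is currently set. */
static int mru_line(const mru_state_t *s)
{
    for (int i = 0; i < NUM_LINES; i++)
        if (s->mru & (1u << i))
            return i;
    return -1;
}

int main(void)
{
    mru_state_t s = { .mru = 0x0008 };   /* line 3 was most recently used */
    mru_update(&s, 11);                  /* aliasing resolved to line 11  */
    printf("MRU line is now %d\n", mru_line(&s));
    return 0;
}
```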
Abstract:
An interleaved cache memory having a single-cycle multi-access capability is disclosed. The interleaved cache memory comprises multiple subarrays of memory cells, an arbitration logic circuit for receiving multiple input addresses to those subarrays, and an address input circuit for applying the multiple input addresses to those subarrays. Each of the subarrays includes an even data section, an odd data section, and three content-addressable memories that receive the multiple input addresses for comparison with the tags stored in the three content-addressable memories. The first of the three content-addressable memories is associated with the even data section, and the second is associated with the odd data section. The arbitration logic circuit is utilized to select one of the multiple input addresses to proceed if more than one input address attempts to access the same data section of the same subarray.
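A small C sketch of the conflict test that drives the arbitration described above: two addresses may be serviced in the same cycle unless they target the same data section of the same subarray. The address decoding (which bits pick the section and subarray) is an assumption, and the third content-addressable memory is not modeled.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_SUBARRAYS 4

/* Where an address lands: a subarray and its even or odd data section. */
typedef struct {
    int  subarray;
    bool odd_section;
} target_t;

/* Address decoding is assumed: one low-order bit picks the even/odd
 * data section, the next bits pick the subarray.                         */
static target_t decode(unsigned addr)
{
    target_t t;
    t.odd_section = (addr >> 3) & 1u;
    t.subarray    = (addr >> 4) % NUM_SUBARRAYS;
    return t;
}

/* Two addresses conflict only if they map to the same data section of
 * the same subarray; the arbitration logic then lets one proceed.        */
static bool conflict(unsigned a, unsigned b)
{
    target_t ta = decode(a), tb = decode(b);
    return ta.subarray == tb.subarray && ta.odd_section == tb.odd_section;
}

int main(void)
{
    unsigned load_a = 0x120, load_b = 0x128, load_c = 0x520;
    printf("a vs b: %s\n", conflict(load_a, load_b) ? "arbitrate" : "both proceed");
    printf("a vs c: %s\n", conflict(load_a, load_c) ? "arbitrate" : "both proceed");
    return 0;
}
```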
Abstract:
A compare circuit for a content-addressable memory within a computer system is disclosed. In accordance with a preferred embodiment of the present invention, the compare circuit comprises a pair of storage node lines, a pair of compare lines, and two sets of transistors. The storage node lines are complementary to each other and are connected to a memory cell of the content-addressable memory for determining the state of the memory cell. In a like manner, the compare lines are also complementary to each other. The first set consists of four transistors connected in series that are enabled by a logical one from one of the storage node lines, allowing a signal from one of the compare lines to propagate to an output. The second set also consists of four transistors connected in series, enabled by a logical zero from the same storage node line, allowing a signal from the other compare line to propagate to the output. Under the present invention, a signal from either compare line needs to propagate through only one level of transistors in order to reach the output.
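Behaviorally, the cell acts as a selection: the stored bit chooses which of the two complementary compare lines drives the output. The C sketch below models that selection and a word-wide match; the signal polarities (the true compare line carrying the search bit, a one at the output meaning "match") are assumptions for illustration, not taken from the abstract.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Behavioral model of one CAM compare cell: the stored bit selects which
 * of the two complementary compare lines propagates to the output, so the
 * comparison resolves in a single selection step.  Polarity conventions
 * here are assumed (compare_true carries the search bit).                */
static bool compare_cell(bool stored, bool compare_true, bool compare_comp)
{
    return stored ? compare_true : compare_comp;
}

/* A word matches when every bit cell propagates a one to its output. */
static bool cam_word_match(uint8_t stored, uint8_t search, int width)
{
    for (int i = 0; i < width; i++) {
        bool s  = (stored >> i) & 1u;
        bool ct = (search >> i) & 1u;   /* true compare line       */
        if (!compare_cell(s, ct, !ct))  /* complement compare line */
            return false;
    }
    return true;
}

int main(void)
{
    printf("0x5 vs 0x5: %s\n", cam_word_match(0x5, 0x5, 4) ? "match" : "miss");
    printf("0x5 vs 0x4: %s\n", cam_word_match(0x5, 0x4, 4) ? "match" : "miss");
    return 0;
}
```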
Abstract:
An apparatus for measuring the internal temperatures of food patties includes a drawer sled slidably mounted on a pair of shafts. The drawer sled is formed with a drawer cavity that is sized and shaped to receive a food patty. Moreover, the drawer sled is movable between a loading/unloading position and a temperature-measuring position. In the loading/unloading position, a food patty can be inserted into or removed from the drawer cavity. In the temperature-measuring position, one or more temperature probes are inserted radially into the food patty such that one of the temperature probes detects the temperature of the food patty at its geometric center.