Abstract:
In one embodiment, a memory is delineated into transparent and non-transparent portions. The transparent portion may be controlled by a control unit coupled to the memory, along with a corresponding tag memory. The non-transparent portion may be software controlled, accessed directly via an input address. In an embodiment, the memory may include a decoder configured to decode the address and select a location in either the transparent or the non-transparent portion. Each request may include a non-transparent attribute identifying the request as either transparent or non-transparent. In an embodiment, the size of the transparent portion may be programmable. When the non-transparent attribute indicates a transparent request, the decoder may selectively mask bits of the address based on the programmed size, ensuring that the decoder selects only locations in the transparent portion.
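As a rough illustration of the masking behavior described above (not the patented decoder circuit), the C sketch below assumes the transparent portion starts at offset zero and that both sizes are powers of two; `decode_index` and its parameters are hypothetical names:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical decoder sketch: for transparent requests, mask the index
 * bits so the selected location always falls inside the transparent
 * portion. transparent_size is assumed to be a power of two, programmed
 * by software, and the transparent portion is assumed to start at 0. */
uint32_t decode_index(uint32_t address, bool non_transparent,
                      uint32_t transparent_size, uint32_t memory_size)
{
    uint32_t index = address & (memory_size - 1);  /* raw decoded location */
    if (!non_transparent) {
        /* Transparent request: keep only the low-order bits covered by
         * the programmed size, forcing the index into that region. */
        index &= transparent_size - 1;
    }
    return index;
}
```

With `memory_size = 1024` and `transparent_size = 256`, a transparent request to address 0x3FF decodes to index 0xFF, while a non-transparent request can reach the full 1024-entry range.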
Abstract:
Data operations requiring a lock are batched into a set of operations performed on a per-core basis under a single lock. A Most Recently Used (MRU) list is used to conduct a demotion scan using an MRU flush, a processor identification (ID), and a track-change-characteristic algorithm.
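As a loose sketch of the per-core batching idea (not the patented algorithm, whose demotion-scan details are omitted here), the C fragment below uses a pthread mutex in place of the cache-wide lock; `struct op`, `core_batch`, and `flush_batch` are illustrative names:

```c
#include <pthread.h>

#define MAX_BATCH 64

/* Operations that need the cache lock are queued per core, then applied
 * together under a single lock acquisition. */
struct op {
    int track_id;
    void (*apply)(int track_id);
};

struct core_batch {
    struct op ops[MAX_BATCH];
    int count;
};

static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

void flush_batch(struct core_batch *b)
{
    pthread_mutex_lock(&cache_lock);   /* one lock for the whole batch */
    for (int i = 0; i < b->count; i++)
        b->ops[i].apply(b->ops[i].track_id);
    b->count = 0;
    pthread_mutex_unlock(&cache_lock);
}
```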
Abstract:
The invention provides a cache device and method for performing a cache process at high speed on a high-capacity cache memory. The cache processing section performs a cache process composed of two stages: a query process (P1) and a subsequent process (P2). In the query process (P1), the respective index tables and the identifier table are used to query whether the target identifier is present in the cache memory, at step (S101). If it is present, the data address of the target identifier in the cache memory is transmitted to the CPU. Otherwise, the data address of the previously prepared ultimate-LRU identifier in the cache memory is transmitted to the CPU, at step (S102). In the subsequent process (P2), adjustment operations on the respective tables, covering insertion of the identifier of the new data and deletion of the identifier of the ultimate-LRU data, are performed at step (S201).
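A minimal C sketch of the two-stage flow, assuming a small linear-scan table in place of the patent's index and identifier tables; `query_process`, `ultimate_lru_addr`, and the table layout are illustrative:

```c
#include <stdbool.h>
#include <stddef.h>

#define TABLE_SIZE 8

struct entry { unsigned long id; size_t addr; };

static struct entry table[TABLE_SIZE];
static size_t ultimate_lru_addr;  /* prepared in advance by the previous P2 */

/* P1, query process: return the hit address, or the ultimate-LRU address
 * so the CPU can proceed immediately; table adjustment (P2) runs later. */
size_t query_process(unsigned long target_id, bool *hit)
{
    for (int i = 0; i < TABLE_SIZE; i++) {
        if (table[i].id == target_id) {   /* S101: identifier present */
            *hit = true;
            return table[i].addr;
        }
    }
    *hit = false;
    return ultimate_lru_addr;             /* S102: reuse the LRU slot */
}
```

The point of the split is latency: the CPU receives an address in P1 without waiting for the insertion and deletion bookkeeping, which P2 performs afterward (S201).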
Abstract:
An LRU array and method for tracking accesses to the lines of an associative cache. The most recently accessed lines of the cache are identified in the array, and cache lines can be blocked from being replaced. The LRU array contains a data array having a row of data representing each line of the associative cache that shares a common address portion. A first set of data for the cache line identifies the relative age of the cache line in each way with respect to every other way. A second set of data identifies whether a line of one of the ways is not to be replaced. For cache line replacement, the cache controller selects the least recently accessed line using the contents of the LRU array, considering the value of the first set of data as well as the value of the second set of data indicating whether or not a way is locked. Updates to the LRU array occur after each prefetch or fetch of a line, or when a line replaces another in the cache memory.
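As an illustration of victim selection from one row of such an array (the field names and layout below are assumptions, not the patent's), a C sketch for a four-way cache:

```c
#include <stdint.h>

#define WAYS 4

struct lru_row {
    uint8_t age[WAYS];   /* first data set: relative age of each way   */
    uint8_t locked;      /* second data set: bit w set => way w locked */
};

/* Pick the least recently accessed way that is not locked. */
int select_victim(const struct lru_row *row)
{
    int victim = -1;
    int oldest = -1;
    for (int w = 0; w < WAYS; w++) {
        if (row->locked & (1u << w))
            continue;                 /* never replace a locked way */
        if (row->age[w] > oldest) {   /* higher age = less recently used */
            oldest = row->age[w];
            victim = w;
        }
    }
    return victim;   /* -1 only if every way is locked */
}
```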
Abstract:
A method, system, and computer program product for supporting multiple fetch requests to the same congruence class in an n-way set associative cache. Responsive to receiving an incoming fetch instruction at a load/store unit, outstanding valid fetch entries in the n-way set associative cache that have the same cache congruence class as the incoming fetch instruction are identified. The setIDs in use by these identified outstanding valid fetch entries are determined. A resulting setID is assigned to the incoming fetch instruction based on the identified setIDs, where the resulting setID is one not currently in use by the outstanding valid fetch entries. The resulting setID for the incoming fetch instruction is written in a corresponding entry in the n-way set associative cache.
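A minimal sketch of the assignment step, assuming the in-use setIDs for the congruence class have already been gathered into a bit mask (`used_mask` and `assign_set_id` are illustrative names):

```c
#include <stdint.h>

#define N_WAYS 8

/* Return a setID not currently used by outstanding fetches to the same
 * congruence class; bit s of used_mask is set when setID s is in use. */
int assign_set_id(uint32_t used_mask)
{
    for (int s = 0; s < N_WAYS; s++)
        if (!(used_mask & (1u << s)))
            return s;    /* first free setID in this congruence class */
    return -1;           /* all setIDs busy: the fetch must wait */
}
```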
Abstract:
An apparatus for encoding and decoding an associative cache's set-use history, and a method therefor, are implemented. A five-bit signal fully encodes the use history of a four-way cache. The least recently used (LRU) set is encoded by a first bit pair, and a second bit pair encodes the most recently used (MRU) set. The two sets having intermediate usage are encoded by the remaining single bit. The single bit has a first predetermined value when the sets having intermediate usage have an in-order relationship in accordance with a predetermined ordering of the cache sets, and a second predetermined value when they have an out-of-order relationship.
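A C sketch of one bit assignment consistent with this description (the actual bit positions and the "predetermined ordering" are assumptions): bits 0-1 hold the LRU set, bits 2-3 the MRU set, and bit 4 records whether the two intermediate sets appear in ascending set-number order.

```c
#include <stdint.h>

/* order[0] is the LRU set, order[3] the MRU set; order[1] and order[2]
 * are the intermediate sets, from less to more recently used. */
uint8_t encode_lru(const int order[4])
{
    uint8_t lru = (uint8_t)order[0];              /* bits 0-1 */
    uint8_t mru = (uint8_t)order[3];              /* bits 2-3 */
    uint8_t mid = (order[1] > order[2]) ? 1 : 0;  /* bit 4: out-of-order? */
    return (uint8_t)(lru | (mru << 2) | (mid << 4));
}

void decode_lru(uint8_t code, int order[4])
{
    int lru = code & 3;
    int mru = (code >> 2) & 3;
    int mid = (code >> 4) & 1;

    /* The intermediate sets are whichever two of 0..3 are not LRU/MRU. */
    int a = -1, b = -1;
    for (int s = 0; s < 4; s++) {
        if (s != lru && s != mru) {
            if (a < 0) a = s;
            else       b = s;   /* a < b by construction */
        }
    }
    order[0] = lru;
    order[1] = mid ? b : a;     /* the single bit flips the middle pair */
    order[2] = mid ? a : b;
    order[3] = mru;
}
```

Five bits suffice because the 4! = 24 possible orderings need ⌈log₂ 24⌉ = 5 bits; this encoding spends two bits on each endpoint and one bit on the middle pair.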
Abstract:
A method and apparatus for updating cache memory status bits that depend on the match signals of a multi-way set associative cache are disclosed. Faster updating of the status bits is achieved by exploiting the fact that there is a match in at most one way of the cache during any read cycle. A first status bit retains its previous value if no match occurs in any way of the cache; it is set to a first value if a match occurs in a first way of the cache, and to a second value if a match occurs in a second way. Fewer transistors are needed to implement the update circuits, which, in a preferred embodiment, are realized in complementary metal oxide semiconductor (CMOS) technology. The update circuits may be implemented in the instruction cache translation lookaside buffer of a microprocessor for updating least recently used (LRU) array status bits, or any cache memory status signals that depend on match signals.
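In C-like terms (the signal names are illustrative, and the real circuit is CMOS logic rather than software), the update rule for one status bit of a two-way pair reduces to a three-way select, because at most one way can match per read cycle:

```c
#include <stdbool.h>

bool next_status_bit(bool prev, bool match_way0, bool match_way1)
{
    if (match_way0)
        return true;   /* first value: way 0 was just accessed  */
    if (match_way1)
        return false;  /* second value: way 1 was just accessed */
    return prev;       /* no match in any way: hold the old bit */
}
```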
Abstract:
A data transfer or replacement system for shifting blocks of data, or pages, between a high-speed, low-capacity working memory and a low-speed, high-capacity backup store of a data processing system. Each block in the working memory is associated with an "A" and a "B" single-bit register. Usage bits are initially inserted into the "A" registers as information from the block is utilized. After one-half of the "A" registers have been marked by associated usage bits, the "B" single-bit registers are cleared, and usage bits are inserted into the "B" registers. When one-half of the "B" usage registers are marked, the "A" registers are cleared and usage bits are again inserted into the "A" registers. When additional data must be introduced from the backup store into the high-speed, low-capacity working memory, the least recently used blocks are identified as those whose associated "A" and "B" registers have not been marked. The new blocks of information are transferred from the backup store into one of the spaces in the high-speed store containing such a block of least recently used data.
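A C sketch of the two-bank usage-bit scheme (the bank layout, names, and 16-block size are assumptions for illustration):

```c
#include <stdbool.h>
#include <string.h>

#define BLOCKS 16

static bool bank[2][BLOCKS];  /* bank[0] = "A" registers, bank[1] = "B" */
static int active = 0;        /* bank currently receiving usage bits    */
static int marked = 0;        /* marked count in the active bank        */

/* Record a use of the given block. */
void touch(int block)
{
    if (!bank[active][block]) {
        bank[active][block] = true;
        if (++marked >= BLOCKS / 2) {  /* half the bank is now marked:  */
            active ^= 1;               /* switch banks and clear the    */
            memset(bank[active], 0, sizeof bank[active]); /* new one    */
            marked = 0;
        }
    }
}

/* A block is a replacement candidate if neither register is marked. */
bool is_lru(int block)
{
    return !bank[0][block] && !bank[1][block];
}
```

The alternation gives a coarse two-generation history: any block untouched across both the current and the previous marking period is treated as least recently used.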
Abstract:
A unit page buffer block includes first to fourth page buffer pairs. Each of the page buffer pairs includes a common column decoder block, and an upper page buffer stage and a lower page buffer stage electrically and commonly connected to the common column decoder block. Each of the upper page buffer stages includes an upper selection block, an upper latch block, and an upper cache block. Each of the lower page buffer stages includes a lower selection block, a lower latch block, and a lower cache block. Each of the upper selection blocks includes first to fourth sub-selection blocks. Each of the upper and lower latch blocks includes first to twelfth sub-latch blocks. Each of the upper and lower cache blocks includes first to twelfth sub-cache blocks. Each of the common column decoder blocks includes first to third sub-common column decoder blocks arranged in a row direction.
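Purely as a way to visualize the hierarchy (the structs below are illustrative, with the internal circuitry of each sub-block omitted), the counts from the abstract map onto nested C structures:

```c
/* Leaf blocks: internals of the sub-blocks are omitted. */
struct sub_block { int id; };

struct page_buffer_stage {
    struct sub_block selection[4];  /* first to fourth sub-selection blocks */
    struct sub_block latches[12];   /* first to twelfth sub-latch blocks    */
    struct sub_block caches[12];    /* first to twelfth sub-cache blocks    */
};

struct page_buffer_pair {
    struct sub_block col_decoder[3]; /* first to third sub-common column
                                        decoder blocks, in a row direction */
    struct page_buffer_stage upper;  /* upper page buffer stage */
    struct page_buffer_stage lower;  /* lower page buffer stage */
};

struct unit_page_buffer_block {
    struct page_buffer_pair pairs[4]; /* first to fourth page buffer pairs */
};
```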