Abstract:
A method and apparatus for looking up a key associated with a packet to determine a route through a routing device, the method including, upon receipt of a key, forward traversing one or more nodes which make up a trie stored in a memory by evaluating, at each node traversed, a bit in the key indicated by a bit-to-test indicator associated with that node. The value of the bit in the key determines the path traversed along the trie. The method includes locating an end node having a route and comparing the route to the key. If they match, destination information associated with the end node is output to guide the transfer of the packet through the routing device. If they do not match, the trie is traversed backwards to locate a best match for the key.
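A minimal sketch in C of the lookup this abstract describes, assuming a binary trie over 32-bit keys; the node layout, field names and the lookup() helper are illustrative assumptions, not taken from the abstract itself:

```c
#include <stdint.h>
#include <stddef.h>

struct node {
    int          bit_to_test;   /* which bit of the key this node examines   */
    struct node *child[2];      /* next node for bit value 0 or 1            */
    int          has_route;     /* non-zero if this node carries a route     */
    uint32_t     prefix;        /* route prefix to compare against the key   */
    uint32_t     prefix_mask;   /* mask covering the significant prefix bits */
    int          dest_port;     /* destination information for the packet    */
};

static int bit_of(uint32_t key, int bit) { return (key >> (31 - bit)) & 1; }

/* Walk forward using the bit-to-test indicators, remembering the path so
 * the trie can be traversed backwards if the end node does not match.      */
int lookup(struct node *root, uint32_t key)
{
    struct node *path[32];
    size_t depth = 0;
    struct node *n = root;

    while (n && depth < 32) {
        path[depth++] = n;
        n = n->child[bit_of(key, n->bit_to_test)];
    }

    /* Backtrack from the deepest node toward the root, returning the
     * destination of the first node whose route matches the key.           */
    while (depth-- > 0) {
        struct node *m = path[depth];
        if (m->has_route && (key & m->prefix_mask) == m->prefix)
            return m->dest_port;
    }
    return -1;    /* no matching route */
}
```

The forward pass records the nodes it visits so that, when the end node's route does not match the key, the same path can be walked backwards to find the deepest node whose prefix does match.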
Abstract:
An apparatus and method for enabling a cache controller and address and data buses of a microprocessor with an on-board cache to provide an SRAM test mode for testing the on-board cache. Upon assertion of an SRAM test signal on an SRAM test pin of the microprocessor chip, the cache and bus controllers cease normal functionality and permit data to be written to, and read from, individual addresses within the on-board cache as though the on-board cache were simple SRAM. After the chip is reset, standard SRAM tests can then be implemented by reading and writing data at selected cache memory addresses as though the cache memory were SRAM. Upon completion of the tests, the SRAM test signal is deasserted and the cache and bus controllers resume normal operating functionality. A reset signal is then applied to the microprocessor to reinitialize control logic employed within the microprocessor. In this way, cache memory on board a microprocessor can be tested using standard SRAM testing algorithms and equipment, thereby eliminating the need for specialized test equipment to test cache memory contained on a microprocessor chip.
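A hypothetical, host-side sketch in C of the test sequence; the set_pin(), bus_write() and bus_read() helpers and the 4096-word cache size stand in for whatever tester interface actually drives the part and are not named in the abstract. Here the cache is simulated with an array so the sketch compiles and runs:

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_WORDS 4096u

/* Stand-ins for the tester interface: the "cache" is just an array and the
 * test pin / reset are printed, so the sketch runs on an ordinary host.     */
static uint32_t cache_sram[CACHE_WORDS];
static void set_pin(const char *name, int level) { printf("%s=%d\n", name, level); }
static void bus_write(uint32_t addr, uint32_t data) { cache_sram[addr] = data; }
static uint32_t bus_read(uint32_t addr) { return cache_sram[addr]; }

/* March-style pass: write an address-dependent pattern to every cache word,
 * then read each word back and compare, treating the cache as plain SRAM.   */
int main(void)
{
    int failures = 0;

    set_pin("SRAM_TEST", 1);              /* controllers enter SRAM test mode    */
    for (uint32_t a = 0; a < CACHE_WORDS; a++)
        bus_write(a, a ^ 0xA5A5A5A5u);
    for (uint32_t a = 0; a < CACHE_WORDS; a++)
        if (bus_read(a) != (a ^ 0xA5A5A5A5u))
            failures++;
    set_pin("SRAM_TEST", 0);              /* controllers resume normal operation */
    set_pin("RESET", 1);                  /* reinitialize internal control logic */

    printf("%d miscompares\n", failures);
    return failures != 0;
}
```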
Abstract:
Two independently accessible subdivided cache tag arrays and cache control logic are provided for a set associative cache system. Each tag entry is stored across two subdivided cache tag arrays, a physical tag array and a set tag array, such that each physical tag array entry has a corresponding set tag array entry. Each physical tag array entry stores the tag addresses and control bits for a set of cache lines. The control bits comprise at least one validity bit indicating whether the data stored in the corresponding cache line is valid. Each set tag array entry stores the descriptive bits for a set of cache lines, which consist of the most recently used (MRU) field identifying the most recently used cache lines of the cache set. Each subdivided tag array is provided with its own interface so that each array can be accessed concurrently but independently by the cache control logic, which performs read and write operations against the cache. The cache control logic makes concurrent and independent accesses to the separate tag arrays to read and write the control and descriptive information in the tag entries. The accesses are grouped by the type of operation to be performed, and each type of access is made during predesignated time slots in an optimized manner, enabling the cache control logic to perform certain selected read/write accesses to the physical tag array while concurrently performing other selected independent read/write accesses to the set tag array.
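A minimal data-layout sketch in C of the two subdivided tag arrays, assuming a 4-way set associative cache with 256 sets; the field widths, array sizes and the port helper functions are illustrative assumptions:

```c
#include <stdint.h>

#define WAYS 4
#define SETS 256

/* Physical tag array entry: tag address and control bits (here a single
 * validity bit) for each cache line of the set.                             */
struct phys_tag_entry {
    uint32_t tag[WAYS];
    uint8_t  valid[WAYS];
};

/* Set tag array entry: descriptive bits for the same set, including the MRU
 * field identifying the most recently used line of the set.                 */
struct set_tag_entry {
    uint8_t mru;
};

/* Each subdivided array has its own interface ("port").                     */
static struct phys_tag_entry phys_tags[SETS];
static struct set_tag_entry  set_tags[SETS];

struct phys_tag_entry *phys_tag_port(unsigned set) { return &phys_tags[set]; }
struct set_tag_entry  *set_tag_port(unsigned set)  { return &set_tags[set];  }

/* Example lookup: the physical tag entry is read through one port while the
 * MRU descriptive bits are written through the other, independent port.     */
int lookup_way(unsigned set, uint32_t addr_tag)
{
    struct phys_tag_entry *p = phys_tag_port(set);
    for (int w = 0; w < WAYS; w++)
        if (p->valid[w] && p->tag[w] == addr_tag) {
            set_tag_port(set)->mru = (uint8_t)w;
            return w;
        }
    return -1;
}
```

Because each array has its own interface, the control logic can read or write one array in a given time slot while independently accessing the other, which is the property the abstract exploits when it groups accesses by operation type.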
Abstract:
A caching arrangement which can work efficiently in a superscalar and multiprocessing environment includes separate caches for instructions and data and a single translation lookaside buffer (TLB) shared by them. During each clock cycle, retrievals from both the instruction cache and the data cache may be performed, one on the rising edge of the clock cycle and one on the falling edge. The TLB is capable of translating two addresses per clock cycle. Because accessing the TLB is faster than accessing the tag arrays, which in turn is faster than accessing the cache data arrays, virtual addresses may be supplied concurrently to all three components and the retrieval made in one phase of a clock cycle. When an instruction retrieval is being performed, snooping for snoop broadcasts may be performed for the data cache, and vice versa. Thus, for every clock cycle, an instruction and a data cache retrieval may be performed as well as snooping.
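A toy scheduling sketch in C of one clock cycle under this arrangement; the toy TLB mapping, the addresses and the function names are illustrative assumptions, not part of the described design:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* Toy address translation standing in for the shared TLB.                   */
static uint32_t tlb_translate(uint32_t vaddr) { return vaddr ^ 0x80000000u; }

/* One retrieval: the virtual address is supplied to the TLB, the tag array
 * and the data array at the same time, so the whole access fits in one
 * phase of the clock cycle.                                                 */
static void cache_access(const char *which, uint32_t vaddr)
{
    uint32_t paddr = tlb_translate(vaddr);
    printf("%s cache: vaddr %08x -> paddr %08x\n",
           which, (unsigned)vaddr, (unsigned)paddr);
}

static void snoop(const char *which, bool broadcast_pending)
{
    if (broadcast_pending)
        printf("%s cache: servicing snoop broadcast\n", which);
}

int main(void)
{
    /* Rising edge: instruction retrieval while the data cache snoops.       */
    cache_access("instruction", 0x00001000u);
    snoop("data", true);

    /* Falling edge: data retrieval while the instruction cache snoops.      */
    cache_access("data", 0x00002000u);
    snoop("instruction", true);
    return 0;
}
```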
Abstract:
A cache array, a cache tag and comparator unit and a cache multiplexor are provided to a cache memory. Each cache operation performed against the cache array, read or write, takes only half a clock cycle. The cache tag and comparator unit comprises a cache tag array, a cache miss buffer and control logic. Each cache operation performed against the cache tag array, read or write, also takes only half a clock cycle. The cache miss buffer comprises cache miss descriptive information identifying the current state of a cache fill in progress. The control logic comprises a plurality of combinatorial logic circuits for performing tag match operations. In addition to the standard tag match operations, the control logic also conditionally matches an accessing address against an address tag stored in the cache miss buffer. Depending on the results of the tag match operations, and further depending on the state of the current cache fill if the accessing address is part of the memory block frame of the current cache fill, the control logic provides appropriate signals to the cache array, the cache multiplexor, the main memory and the instruction/data destination. As a result, subsequent instruction/data requests that are part of a current cache fill in progress can be satisfied without having to wait for the completion of the current cache fill, thereby further reducing cache miss penalties and function unit idle time.
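A simplified control-flow sketch in C of the hit/miss decision, assuming a single outstanding fill and a word-granular fill-progress count; the field names and the four outcomes are illustrative assumptions:

```c
#include <stdint.h>
#include <stdbool.h>

#define LINE_WORDS 8

struct miss_buffer {
    bool     fill_in_progress;
    uint32_t block_tag;       /* address tag of the memory block being filled */
    uint32_t words_valid;     /* 0..LINE_WORDS words of the line have arrived */
};

enum outcome { HIT, HIT_IN_FILL, WAIT_FOR_FILL, MISS };

/* Besides the standard tag match against the cache tag array, the address is
 * conditionally matched against the tag held in the miss buffer, so that a
 * request belonging to the fill in progress can be satisfied as soon as its
 * word has arrived, without waiting for the whole fill to complete.          */
enum outcome tag_check(uint32_t tag_array_tag, bool line_valid,
                       uint32_t addr_tag, uint32_t word_in_line,
                       const struct miss_buffer *mb)
{
    if (line_valid && tag_array_tag == addr_tag)
        return HIT;                               /* ordinary cache hit       */

    if (mb->fill_in_progress && mb->block_tag == addr_tag)
        return (word_in_line < mb->words_valid)   /* word already delivered?  */
             ? HIT_IN_FILL                        /* forward to destination   */
             : WAIT_FOR_FILL;                     /* stall until it arrives   */

    return MISS;                                  /* start a new fill         */
}
```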
Abstract:
An apparatus and method for controlling the initialization of shifting circuitry which provides column redundancy for multiple banks of cache memory on board a microprocessor. Upon sensing deassertion of a reset signal, a master controller supplies non-overlapping, two-phase clock signals to one bank controller for each bank of the cache memory. Each bank has a set of fuses which supply a bank shift location to the bank controller, indicating the location of a bad column in the bank. The master controller also activates a pre-loadable counter which provides each bank controller with a signal that counts down to zero from half the maximum number of columns in a bank. Each bank controller then provides the shifting signals necessary to initialize the shifting circuitry for its bank. In this way, defective columns located in different positions in each bank can be replaced by redundant paths, thereby repairing the cache and increasing the manufacturing yield of microprocessors with an on-board cache memory.
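A behavioral sketch in C of the per-bank initialization, assuming that columns at or beyond the fused bad-column location are shifted one position toward a redundant column; the bank size, the pairing of two columns per counter tick and the shift semantics are illustrative assumptions:

```c
#include <stdint.h>
#include <stdio.h>

#define COLS_PER_BANK 64

/* The fuse block for a bank encodes the location of its bad column (or a
 * value past the last column if the bank needs no repair).                   */
static void init_bank_shift(unsigned bad_col, uint8_t shift_enable[COLS_PER_BANK])
{
    /* The pre-loadable counter counts down to zero from half the maximum
     * number of columns; on each two-phase clock tick the bank controller
     * emits the shift signals for the next pair of columns.  Here the loop
     * simply plays that sequence out at once.                                */
    for (unsigned counter = COLS_PER_BANK / 2; counter > 0; counter--) {
        unsigned col = 2 * (COLS_PER_BANK / 2 - counter);
        shift_enable[col]     = (col     >= bad_col);
        shift_enable[col + 1] = (col + 1 >= bad_col);
    }
}

int main(void)
{
    uint8_t shift[COLS_PER_BANK];
    init_bank_shift(13, shift);               /* fuses report column 13 is bad */
    for (unsigned c = 0; c < COLS_PER_BANK; c++)
        printf("col %2u -> %s\n", c, shift[c] ? "shifted to neighbor" : "direct");
    return 0;
}
```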
Abstract:
In a sequential logic design having two domains, each using the opposite clock edge for its flip-flops, an inter-domain latch is provided for establishing a controllable and observable boundary point between the two domains. The inter-domain latch comprises three multiplexors and three latches. The first multiplexor, the first latch, the second multiplexor, the second latch, the third latch and the third multiplexor are coupled serially. Additionally, the output of the first latch is bypassed to the third multiplexor. Each latch is open either when the clock pulse is low or when it is high. The first and third latches are driven by the same clock pulses, and the second latch is driven by an inverted clock pulse. Scan vectors for the first and second domains are scanned in through the first and second multiplexors, respectively. The outputs of the first and second domains are observed at the second latch and the third multiplexor, respectively.
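A behavioral sketch in C that models the latch chain with transparent, level-sensitive latches; the scan-enable and bypass select signals, and the assignment of which latches are transparent on which clock level, are illustrative assumptions:

```c
#include <stdbool.h>

struct inter_domain_latch {
    bool l1, l2, l3;          /* states of the first, second and third latches */
};

static bool mux(bool sel, bool a, bool b) { return sel ? b : a; }

/* Evaluate one clock level.  The first and third latches are modelled as
 * transparent on one level of the clock, the second latch on the other (it
 * is driven by the inverted clock).  The first and second muxes select
 * between functional data and scan-in for the two domains; the third mux
 * selects between the third latch and the bypassed first-latch output.       */
void eval(struct inter_domain_latch *x, bool clk,
          bool d1_data, bool scan1, bool scan_en1,
          bool scan2,   bool scan_en2,
          bool bypass_sel, bool *out)
{
    if (clk) {                                  /* L1 and L3 transparent      */
        x->l1 = mux(scan_en1, d1_data, scan1);  /* first mux -> first latch   */
        x->l3 = x->l2;                          /* second latch -> third      */
    } else {                                    /* L2 transparent             */
        x->l2 = mux(scan_en2, x->l1, scan2);    /* second mux -> second latch */
    }
    *out = mux(bypass_sel, x->l3, x->l1);       /* third mux, with L1 bypass  */
}
```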
Abstract:
In a memory system having a main memory and a faster cache memory, a cache memory replacement scheme with a locking feature is provided. Locking bits associated with each line in the cache are supplied in the tag table. These locking bits are preferably set and reset by the executing application program/process and are utilized in conjunction with cache replacement bits by the cache controller to determine which lines in the cache to replace. The lock bit and replacement bit for a cache line are "ORed" to create a composite bit for the cache line. If the composite bit is set, the cache line is not removed from the cache. When all composite bits are set, which would otherwise result in deadlock, all replacement bits are cleared. One cache line is always maintained as non-lockable. The locking bits "lock" the line of data in the cache until such time as the process resets the lock bit. Because the process controls the state of the lock bits, the intelligence and knowledge the process has regarding the frequency of use of certain memory locations can be utilized to provide a more efficient cache.
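A minimal replacement-policy sketch in C, assuming one lock bit and one replacement bit per line of a 4-way set and treating way 0 as the non-lockable line; the bit layout and the choose_victim() helper are illustrative assumptions:

```c
#include <stdbool.h>

#define WAYS 4

struct set_state {
    bool lock[WAYS];          /* set/reset by the executing process             */
    bool repl[WAYS];          /* ordinary replacement bits (e.g. recently used) */
};

/* Pick a victim line: a line whose composite bit (lock OR replacement) is
 * set is never replaced.  Way 0 is kept non-lockable so a victim always
 * exists; if every composite bit ends up set, the replacement bits are
 * cleared to break the deadlock, then the selection is retried.              */
int choose_victim(struct set_state *s)
{
    for (int attempt = 0; attempt < 2; attempt++) {
        for (int w = 0; w < WAYS; w++) {
            bool composite = (w != 0 && s->lock[w]) || s->repl[w];
            if (!composite)
                return w;                  /* this line may be replaced        */
        }
        for (int w = 0; w < WAYS; w++)     /* would deadlock: clear repl bits  */
            s->repl[w] = false;
    }
    return 0;                              /* way 0 is always non-lockable     */
}
```

When every composite bit in the set is found set, the replacement bits are cleared and the scan is retried, so a victim (at worst the non-lockable line) is always found.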