Abstract:
An invention is disclosed for data storage with progressive RAID. A storage request receiver module (1702) receives a request to store data. A striping module (1704) calculates a stripe pattern for the data and each stripe includes N data segments. The striping module (1704) writes the N data segments to N storage devices (150). Each data segment is written to a separate storage device (150) within a set of storage devices (1604) assigned to the stripe. A parity-mirror module (1706) writes a set of N data segments to one or more parity-mirror storage devices (1602) within the set of storage devices. A parity progression module (1708) calculates a parity data segment on each parity-mirror device (1602) in response to a storage consolidation operation, and stores the parity data segments. The storage consolidation operation is conducted to recover storage space and/or data on a parity-mirror storage device (1602).
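A minimal Python sketch of the write and consolidation flow described in this abstract, assuming XOR parity, in-memory stand-ins for the storage devices, and illustrative names (ProgressiveRaidStripe, xor_segments) that are not taken from the disclosure:

```python
def xor_segments(segments):
    """XOR equal-length byte segments into one parity segment."""
    parity = bytearray(len(segments[0]))
    for seg in segments:
        for i, b in enumerate(seg):
            parity[i] ^= b
    return bytes(parity)


class ProgressiveRaidStripe:
    def __init__(self, n_data_devices):
        self.data_devices = [None] * n_data_devices  # one data segment per device
        self.parity_mirror = []                      # mirrored copy of the segment set
        self.parity_segment = None                   # computed only at consolidation

    def write(self, data, segment_size):
        # Striping: split the data into N segments, one per data storage device.
        segments = [data[i:i + segment_size] for i in range(0, len(data), segment_size)]
        for idx, seg in enumerate(segments):
            self.data_devices[idx] = seg
        # Parity-mirror: keep the whole segment set, deferring the parity calculation.
        self.parity_mirror = list(segments)

    def consolidate(self):
        # Storage consolidation: compute the parity segment from the mirrored
        # copies, then release the space the mirror occupied.
        self.parity_segment = xor_segments(self.parity_mirror)
        self.parity_mirror = []


stripe = ProgressiveRaidStripe(n_data_devices=4)
stripe.write(b"abcdefghijklmnop", segment_size=4)
stripe.consolidate()
```

Deferring the parity calculation lets the write complete as a simple mirror; the later consolidation trades the mirrored copies for a single parity segment.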
Abstract:
Techniques for use in CDMA-based products and services, including cache memory allocation replacement so as to maximize residency of a plurality of set ways following a tag-miss allocation. The method and system form a first-in, first-out (FIFO) replacement listing of victim ways for the cache memory, wherein the depth of the FIFO replacement listing approximately equals the number of ways in the cache set. A victim way is placed on the FIFO replacement listing only in the event that a tag miss results in a tag-miss allocation; the victim way is placed at the tail of the FIFO replacement listing, after any previously selected victim way. Use of a victim way on the FIFO replacement listing is prevented in the event of an incomplete prior allocation of the victim way by, for example, stalling a reuse request until the initial allocation of the victim way completes, or replaying the reuse request until that allocation completes.
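A minimal Python sketch of the FIFO victim-way listing, assuming a per-set structure and using an allocation_pending set as a stand-in for tracking incomplete allocations; returning None models stalling or replaying the reuse request:

```python
from collections import deque


class FifoVictimList:
    """FIFO listing of victim ways for one cache set; depth ~ number of ways."""

    def __init__(self, num_ways):
        self.fifo = deque(maxlen=num_ways)
        self.allocation_pending = set()  # ways whose initial allocation is in flight

    def on_tag_miss_allocation(self, victim_way):
        # A victim way enters the listing only when a tag miss actually results
        # in an allocation; it goes to the tail, behind earlier victims.
        self.fifo.append(victim_way)
        self.allocation_pending.add(victim_way)

    def on_allocation_complete(self, way):
        self.allocation_pending.discard(way)

    def next_victim(self):
        # Reuse of a way is prevented while its prior allocation is incomplete;
        # returning None here models stalling/replaying the reuse request.
        if not self.fifo or self.fifo[0] in self.allocation_pending:
            return None
        return self.fifo.popleft()
```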
Abstract:
A method, and corresponding software and system, is described for paging memory used for one or more sequentially-accessed data structures. The method includes providing a data structure (200) representing an order in which memory pages are to be reused; and maintaining the data structure according to a history of access to a memory page associated with one of the sequentially-accessed data structures. A position of the memory page in the order depends on a transition of sequential access off of the memory page.
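A minimal Python sketch of the reuse-order data structure, assuming that a page which sequential access has just transitioned off of becomes an early candidate for reuse; that ordering policy is an assumption, not stated by the abstract:

```python
from collections import deque


class SequentialPageReuseOrder:
    def __init__(self):
        self.reuse_order = deque()  # front = next page to reuse

    def on_transition_off(self, page_id):
        # Sequential access has just moved past this page; assume it will not
        # be revisited soon and make it an early candidate for reuse.
        if page_id in self.reuse_order:
            self.reuse_order.remove(page_id)
        self.reuse_order.appendleft(page_id)

    def page_to_reuse(self):
        # Reuse the page whose position in the order is oldest.
        return self.reuse_order.popleft() if self.reuse_order else None
```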
Abstract:
Microprocessor having a power-saving instruction cache way predictor and instruction replacement scheme. In one embodiment, the processor includes a multi-way set associative cache, a way predictor, a policy counter, and a cache refill circuit. The policy counter provides a signal to the way predictor that determines whether the way predictor operates in a first mode or a second mode. Following a cache miss, the cache refill circuit selects a way of the cache and compares a layer number associated with a dataram field of the way to a way-set-layer number. The cache refill circuit writes a block of data to the field if the layer number is not equal to the way-set-layer number. If the layer number is equal to the way-set-layer number, the cache refill circuit repeats the above steps for additional ways until the block of data is written to the cache.
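A minimal Python sketch of the refill comparison, assuming that writing a block stamps the way with the current way-set-layer number and that a new fill pass begins once every way carries that layer; both assumptions go beyond the abstract:

```python
class CacheRefillSketch:
    """After a miss, write the block into the first way whose layer number
    differs from the set's way-set-layer number."""

    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.layer = [0] * num_ways  # layer number kept with each way's dataram field
        self.data = [None] * num_ways
        self.way_set_layer = 1       # layer number for the current fill pass

    def refill(self, block):
        for way in range(self.num_ways):
            # Compare the way's layer number to the way-set-layer number and
            # write only if they differ; otherwise try the next way.
            if self.layer[way] != self.way_set_layer:
                self.data[way] = block
                self.layer[way] = self.way_set_layer
                return way
        # All ways already carry the current layer: start a new pass (assumption).
        self.way_set_layer += 1
        return self.refill(block)
```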
Abstract:
Disclosed is a technique for managing items in a memory store. A "free-space size threshold" is set for the memory store. An age parameter is also set. When the amount of free space in the store decreases below the threshold, space in the store is freed up by removing memory items. Memory items older than specified by the age parameter are also removed. A "chain" of memory stores can be implemented. When a memory item is removed from the first store, it is added to the second store and so on. The techniques of the present invention can be implemented in each store in the chain, or the stores can use different memory management techniques.
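A minimal Python sketch of one store in such a chain, assuming byte-counted item sizes and wall-clock ages; demoting an evicted item to next_store models moving it to the second store in the chain:

```python
import time


class ChainedStore:
    """Memory store with a free-space-size threshold and an age parameter;
    items removed here are added to the next store in the chain."""

    def __init__(self, capacity, free_space_threshold, max_age_seconds, next_store=None):
        self.capacity = capacity
        self.free_space_threshold = free_space_threshold
        self.max_age = max_age_seconds
        self.next_store = next_store
        self.items = {}  # key -> (value, size, insert_time)
        self.used = 0

    def free_space(self):
        return self.capacity - self.used

    def put(self, key, value, size):
        self.items[key] = (value, size, time.time())
        self.used += size
        self._trim()

    def _trim(self):
        now = time.time()
        # Remove items older than the age parameter, and keep removing (oldest
        # first) until free space rises back above the threshold.
        for key in sorted(self.items, key=lambda k: self.items[k][2]):
            value, size, inserted = self.items[key]
            too_old = (now - inserted) > self.max_age
            below_threshold = self.free_space() < self.free_space_threshold
            if not (too_old or below_threshold):
                break
            del self.items[key]
            self.used -= size
            if self.next_store is not None:
                # Demote the removed item to the next store in the chain.
                self.next_store.put(key, value, size)
```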
Abstract:
A memory controller comprising a single-port LRU RAM (26) holding replacement way information on replacement of a cache memory (a cache tag RAM (22) and a cache data RAM (23)), an LRU replacement way selecting section (27) for selecting a way to be replaced by an LRU algorithm depending on the replacement way information in the single-port LRU RAM (26), a random replacement way selecting section (105) for selecting a way to be replaced at random without using the replacement way information, a hit decision section (102) for making hit decisions for first and second requests issued simultaneously by a CPU (10) to access the cache memory, and an arbiter section (104) for, if the hit decisions for the first and second requests are both cache misses, allowing the LRU replacement way selecting section (27) to select a way in response to the first request and allowing the random replacement way selecting section (105) to select a way in response to the second request.
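A minimal Python sketch of the arbitration, assuming at least two ways per set; the collision avoidance in random_way is an added convenience, not part of the abstract:

```python
import random


class ReplacementArbiter:
    """If two simultaneous requests both miss, the first gets an LRU-chosen way
    (one access to the single-port LRU state) and the second a random way."""

    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.lru_order = list(range(num_ways))  # front = least recently used

    def lru_way(self):
        way = self.lru_order.pop(0)
        self.lru_order.append(way)  # selected way becomes most recently used
        return way

    def random_way(self, exclude):
        # Random selection needs no LRU-state access, so it can serve the
        # second miss in the same cycle; avoid colliding with the first victim.
        choices = [w for w in range(self.num_ways) if w != exclude]
        return random.choice(choices)

    def arbitrate(self, first_is_miss, second_is_miss):
        first = self.lru_way() if first_is_miss else None
        second = self.random_way(exclude=first) if second_is_miss else None
        return first, second
```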
Abstract:
Methods of caching data in a computer wherein a cache is given a number of caching parameters. In a method for caching data in a computer having an operating system with a file caching mechanism, the file caching mechanism is selectively disabled and a direct block cache is accessed to satisfy a request of the request stream. Cache memory can be expanded by allocating memory to a memory table created in a user mode portion of the computer and having a set of virtual memory addresses. Methods of caching data can include creating an associative map, and optimizing the order of writes to a disk with a lazy writer. Methods are further assisted by displaying cache performance criteria on a user interface and allowing user adjustment of caching parameters such as cache size, cache block size and lazy writer aggressiveness. A user may further be given the ability to enable or disable a cache for a given selected disk volume.
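A minimal Python sketch of a user-mode direct block cache with a lazy writer, assuming a hypothetical device object exposing read_block/write_block; flushing dirty blocks in block-number order stands in for the write-order optimization mentioned above:

```python
from collections import OrderedDict


class DirectBlockCache:
    def __init__(self, device, max_blocks):
        self.device = device          # hypothetical object with read_block / write_block
        self.max_blocks = max_blocks  # user-adjustable cache size (in blocks)
        self.blocks = OrderedDict()   # block_no -> bytes, kept in LRU order
        self.dirty = set()

    def read(self, block_no):
        if block_no not in self.blocks:
            self.blocks[block_no] = self.device.read_block(block_no)
            self._evict_if_needed()
        self.blocks.move_to_end(block_no)
        return self.blocks[block_no]

    def write(self, block_no, data):
        self.blocks[block_no] = data
        self.blocks.move_to_end(block_no)
        self.dirty.add(block_no)
        self._evict_if_needed()

    def lazy_write(self):
        # Lazy writer: flush dirty blocks in block-number order so the disk
        # sees mostly sequential writes; call periodically from a worker.
        for block_no in sorted(self.dirty):
            self.device.write_block(block_no, self.blocks[block_no])
        self.dirty.clear()

    def _evict_if_needed(self):
        while len(self.blocks) > self.max_blocks:
            old_no, old_data = self.blocks.popitem(last=False)
            if old_no in self.dirty:
                self.device.write_block(old_no, old_data)
                self.dirty.discard(old_no)
```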
Abstract:
For an object-oriented database system, an apparatus for virtual memory mapping and transaction management comprises at least one permanent storage and at least one database, at least one cache, and a processing unit including means, utilizing virtual addresses, to access data in the cache, means for mapping virtual to physical addresses, and means for retaining the cached data after a transaction. Data retained across transactions will often not need further translation, referred to as forward relocation. Making cached data usable across a sequence of transactions, often without requiring further translation, even though the working size of this data may be larger than a client computer's address space, is referred to as relocation optimization. The method uses a queue containing entities ordered by recency of use, and recycles the address space of least-recently-used bindings to preserve the validity of the bindings necessary for the proper function of the client application, with minimal overhead.
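A minimal Python sketch of the recency-ordered binding queue, assuming an object-id-to-address-range mapping; the names BindingQueue, touch, and bind are illustrative, not the apparatus's interface:

```python
from collections import OrderedDict


class BindingQueue:
    """Bindings ordered by recency of use; when the client address space is
    exhausted, the least recently used binding's address range is recycled."""

    def __init__(self, max_bindings):
        self.max_bindings = max_bindings
        self.bindings = OrderedDict()  # object_id -> virtual address range

    def touch(self, object_id):
        # Mark a binding as recently used when its cached data is accessed.
        self.bindings.move_to_end(object_id)

    def bind(self, object_id, address_range):
        if object_id in self.bindings:
            self.touch(object_id)
            return None
        recycled = None
        if len(self.bindings) >= self.max_bindings:
            # Recycle the address space of the least recently used binding,
            # keeping recently used bindings valid across transactions.
            _, recycled = self.bindings.popitem(last=False)
        self.bindings[object_id] = address_range
        return recycled  # address range now free for reuse, if any
```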
Abstract:
Memory management systems and methods that may be employed, for example, to provide efficient management of memory for network systems. The disclosed systems and methods may utilize a multi-layer queue management structure to manage buffer/cache memory in an integrated fashion. The disclosed systems and methods may be implemented as part of an information management system, such as a network processing system that is operable to process information communicated via a network environment, and that may include a network processor operable to process network-communicated information and a memory management system operable to reference the information based upon a connection status associated with the content.
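One possible reading of the multi-layer queue structure, sketched in Python: a queue per connection-status layer, with reclamation starting at the least valuable layer. The specific layers, their names, and the reclamation order are assumptions, not details given by the abstract:

```python
from collections import deque


class MultiLayerQueueManager:
    """Buffers are tracked on one queue per layer; a buffer's layer is chosen
    from the connection status associated with its content."""

    LAYERS = ("active", "idle", "closed")  # illustrative status-to-layer mapping

    def __init__(self):
        self.queues = {layer: deque() for layer in self.LAYERS}

    def reference(self, buffer_id, connection_status):
        # Move the buffer to the queue for its current connection status.
        for queue in self.queues.values():
            if buffer_id in queue:
                queue.remove(buffer_id)
        self.queues[connection_status].append(buffer_id)

    def reclaim(self):
        # Reclaim memory from the least valuable layer first: closed, then
        # idle, and only then active connections.
        for layer in reversed(self.LAYERS):
            if self.queues[layer]:
                return self.queues[layer].popleft()
        return None
```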
Abstract:
Each time a track is referenced, a value representing the last referenced age is entered for a track entry in a last referenced age table (LRAT). The last referenced age table is indexed by track. A second table, an age frequency table (AFT), counts all segments in use in each reference age. The AFT is indexed by the reference age of the tracks. When a track is referenced, the number of segments used for the track is added to a segment count associated with the last referenced age of the track. The segment count tallies the total number of segments in use for the reference age for all tracks referenced to that age. The number of segments used for the previous last referenced age of the track is subtracted from the segment count associated with the previous last referenced age in the AFT. When free space is needed, tracks are discarded from the LRAT by reference age, oldest first. The range of ages to be discarded in the LRAT is calculated in the AFT by counting the total number of segments used by each reference age until the total number of segments needed is realized. Counting is started at the AFT entry with the oldest reference age. The reference age of the last counted entry in the AFT is the discard age. The LRAT is scanned for reference ages between the oldest reference age and the discard age, and those reference ages are discarded.
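A minimal Python sketch of the LRAT/AFT bookkeeping, assuming a monotonically increasing per-reference age counter and an (age, segment count) pair stored per track; both are simplifications of the tables described above:

```python
class SegmentAgeTracker:
    """LRAT maps each track to its last referenced age and segment count;
    AFT maps each reference age to the total segments in use at that age."""

    def __init__(self):
        self.lrat = {}        # track -> (last referenced age, segments used)
        self.aft = {}         # reference age -> total segments in use
        self.current_age = 0

    def reference(self, track, segments_used):
        # Subtract the track's old segment count from its previous reference age...
        if track in self.lrat:
            old_age, old_segments = self.lrat[track]
            self.aft[old_age] -= old_segments
            if self.aft[old_age] <= 0:
                del self.aft[old_age]
        # ...then record the new last referenced age and add the new count to it.
        self.current_age += 1
        self.lrat[track] = (self.current_age, segments_used)
        self.aft[self.current_age] = self.aft.get(self.current_age, 0) + segments_used

    def discard_for(self, segments_needed):
        # Count AFT entries from the oldest reference age upward until the
        # running total covers the segments needed; that age is the discard age.
        total, discard_age = 0, None
        for age in sorted(self.aft):
            total += self.aft[age]
            discard_age = age
            if total >= segments_needed:
                break
        if discard_age is None:
            return []
        # Discard every track whose last referenced age is at or below the
        # discard age, oldest ages first in effect.
        victims = [t for t, (age, _) in self.lrat.items() if age <= discard_age]
        for t in victims:
            del self.lrat[t]
        for age in [a for a in self.aft if a <= discard_age]:
            del self.aft[age]
        return victims
```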