Abstract:
A method and computer program product for reclaiming space in a data storage memory of a data storage memory system, and a computer-implemented data storage memory system, are provided. The method includes: determining heat metrics of data stored in the data storage memory; determining relocation metrics related to relocation of the data within the data storage memory; determining utility metrics that relate the heat metrics to the relocation metrics for the data; and making data whose utility metric fails a utility metric threshold available for space reclamation. Thus, data that might otherwise be evicted or demoted, but that meets or exceeds the utility metric threshold, is exempted from space reclamation and is instead maintained in the data storage memory.
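The threshold test above can be sketched in a few lines. This is a minimal illustration, not the patent's method: the utility formula, the field names `heat` and `relocations`, and the threshold value are all assumptions chosen for the example.

```python
def utility(heat: float, relocations: int) -> float:
    """Hypothetical utility metric relating a heat metric to a relocation
    metric: hot data that has cost little to relocate scores highest."""
    return heat / (1 + relocations)

def reclaimable(extents, utility_threshold: float):
    """Return extents whose utility fails the threshold; the remainder are
    exempted from space reclamation and kept in the data storage memory."""
    return [e for e in extents
            if utility(e["heat"], e["relocations"]) < utility_threshold]

extents = [
    {"id": "A", "heat": 0.9, "relocations": 0},  # high utility: kept
    {"id": "B", "heat": 0.2, "relocations": 3},  # low utility: reclaimable
]
victims = reclaimable(extents, utility_threshold=0.5)
```

Only extent "B" fails the threshold here, so "A" survives even though an age-based policy might have demoted it.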
Abstract:
An apparatus, system, and method are disclosed for managing eviction of data. A cache write module 712 stores data on a non-volatile storage device 102 sequentially using a log-based storage structure 122 having a head region 128 and a tail region 124. A direct cache module 1016 caches data on the non-volatile storage device 102 using the log-based storage structure 122. The data is associated with storage operations between a host 114 and a backing store storage device 118. An eviction module 1014 evicts data of at least one region in succession from the log-based storage structure 122 starting with the tail region 124 and progressing toward the head region 128.
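The tail-to-head eviction order can be modeled with a double-ended queue standing in for the log-based storage structure. This is a sketch under assumptions: region granularity, the class name `LogCache`, and the method names are invented for illustration.

```python
from collections import deque

class LogCache:
    """Toy log-based structure: the right end is the head region (where
    sequential writes land) and the left end is the tail region."""

    def __init__(self):
        self.log = deque()

    def append(self, region):
        # Cache writes are stored sequentially at the head of the log.
        self.log.append(region)

    def evict(self, n_regions=1):
        """Evict whole regions in succession, starting with the tail
        region and progressing toward the head region."""
        evicted = []
        for _ in range(min(n_regions, len(self.log))):
            evicted.append(self.log.popleft())
        return evicted

cache = LogCache()
for r in ["r0", "r1", "r2"]:
    cache.append(r)
```

Evicting two regions removes "r0" then "r1", the two oldest, leaving the most recently written region in place.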
Abstract:
One or more circuits of a device may comprise a memory. A first portion of a first block of the memory may store program code and/or program data, a second portion of the first block may store an index associated with a second block of the memory, and a third portion of the first block may store an indication of a write status of the first portion. Each bit of the third portion of the first block may indicate whether an attempt to write data to a corresponding one or more words of the first portion of the first block has failed since the last erase of the corresponding one or more words of the first portion of the first block. Whether data to be written to a particular virtual address is written to the first block or the second block may depend on the write status of the blocks.
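The per-word write-status idea can be sketched as follows. The field layout, class name, and the `simulate_failure` flag are invented for illustration; a real device would detect write failures in hardware rather than via a parameter.

```python
class Block:
    """Toy memory block: data words plus a write-status bit per word
    (the abstract's third portion), cleared on erase."""

    def __init__(self, n_words):
        self.words = [None] * n_words
        self.failed = [False] * n_words  # set when a write attempt fails

    def erase(self):
        self.words = [None] * len(self.words)
        self.failed = [False] * len(self.failed)

def write(primary, spare, word, data, simulate_failure=False):
    """Write to the first block unless its status bit marks the word as
    failed, in which case redirect the write to the second (indexed) block."""
    if simulate_failure and not primary.failed[word]:
        primary.failed[word] = True      # record the failed attempt
    if primary.failed[word]:
        spare.words[word] = data         # fall back to the second block
        return "spare"
    primary.words[word] = data
    return "primary"
```

Once a word's status bit is set, every later write to that virtual address lands in the spare block until the primary block is erased.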
Abstract:
Techniques and methods are used to reduce allocations to a higher level cache of cache lines displaced from a lower level cache. The allocations of the displaced cache lines are prevented for displaced cache lines that are determined to be redundant in the next level cache, whereby castouts are reduced. To such ends, a line is selected to be displaced in a lower level cache. Information associated with the selected line is identified which indicates that the selected line is present in a higher level cache or the selected line is a write-through line. An allocation of the selected line in the higher level cache is prevented based on the identified information. Preventing an allocation of the selected line saves power that would be associated with the allocation.
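The decision described above reduces to a small predicate. This is a hedged sketch, not the patented logic: the flag names `present_in_l2` and `write_through` are assumptions standing in for whatever line-state information the hardware tracks.

```python
def should_allocate_in_l2(line):
    """Return False (prevent the castout allocation) when allocating the
    displaced line in the higher level cache would be redundant."""
    if line["present_in_l2"]:
        return False   # the higher level cache already holds a copy
    if line["write_through"]:
        return False   # memory is already up to date; no castout needed
    return True        # otherwise allocate the displaced line as usual
```

Skipping the allocation in either case avoids the power cost of writing a redundant copy into the higher level cache.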
Abstract:
A system and method for increasing cache size is provided. Generally, the system contains a storage device having storage blocks therein and a memory. A processor is also provided, which is configured by the memory to perform the steps of: categorizing storage blocks within the storage device as within a first category of storage blocks if the storage blocks are available to the system for storing data when needed; categorizing storage blocks within the storage device as within a second category of storage blocks if the storage blocks contain application data therein; and categorizing storage blocks within the storage device as within a third category of storage blocks if the storage blocks are storing cached data and are available for storing application data if no first-category storage blocks are available to the system.
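The three-category allocation policy can be sketched as below. The category labels and the function name are invented for the example; the point is that third-category (cache) blocks are reclaimed for application data only when no first-category (free) block exists.

```python
FREE, APP, CACHE = "free", "app", "cache"  # categories 1, 2, and 3

def allocate_for_app(blocks):
    """Prefer a first-category (free) block; if none exists, reclaim a
    third-category (cache) block for application data."""
    for b in blocks:
        if b["cat"] == FREE:
            b["cat"] = APP
            return b
    for b in blocks:
        if b["cat"] == CACHE:
            b["cat"] = APP   # cached data is sacrificed to the application
            return b
    return None              # every block already holds application data
```

Because cache blocks are demoted only on demand, the cache can grow to fill all otherwise-idle space, which is how the scheme increases effective cache size.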
Abstract:
Embodiments of the invention provide techniques for ensuring that the contents of a non-volatile memory device may be relied upon as accurately reflecting data stored on disk storage across a power transition such as a reboot. For example, some embodiments of the invention provide techniques for determining whether the cache contents and/or disk contents are modified during a power transition, causing cache contents to no longer accurately reflect data stored in disk storage. Further, some embodiments provide techniques for managing cache metadata during normal ("steady state") operations and across power transitions, ensuring that cache metadata may be efficiently accessed and reliably saved and restored across power transitions.
Abstract:
The present invention relates to a method for managing prefetched data in a computer storage device. The prefetched-data management method according to the present invention comprises: a first step of managing the entire cache in piece-cache units and partitioning the piece caches into an upstream and a downstream, wherein the upstream holds both prefetched block caches and cached block caches and the downstream holds only cached block caches; a second step of updating the number of piece caches the upstream may hold (Nu) using the derivative of the sum of the prefetch hit rate and the cache hit rate; and a third step of, when the number of piece caches in the upstream exceeds the updated number Nu, moving the upstream's LRU piece cache to the downstream according to a Least Recently Used (LRU) policy while removing that piece cache's prefetched block cache from the entire cache.
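The third step (demoting the LRU piece cache and dropping its prefetched blocks) can be sketched as follows. This is an illustration under assumptions: each piece cache is modeled as one cached block plus an optional prefetched block, insertion order stands in for recency, and the Nu-update rule from the second step is omitted.

```python
from collections import OrderedDict

class SplitCache:
    """Toy upstream/downstream cache split; Nu bounds the upstream."""

    def __init__(self, nu):
        self.nu = nu                     # max piece caches upstream
        self.upstream = OrderedDict()    # oldest entry first = LRU
        self.downstream = OrderedDict()  # holds cached blocks only

    def insert(self, pid, cached, prefetched):
        self.upstream[pid] = {"cached": cached, "prefetched": prefetched}
        self._rebalance()

    def _rebalance(self):
        # When the upstream exceeds Nu, move its LRU piece cache
        # downstream, discarding the prefetched block entirely.
        while len(self.upstream) > self.nu:
            pid, piece = self.upstream.popitem(last=False)
            self.downstream[pid] = {"cached": piece["cached"]}
```

With `nu=1`, inserting a second piece cache demotes the first: its cached block survives downstream, but its prefetched block is removed from the cache.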
Abstract:
Techniques for use in CDMA-based products and services include cache memory replacement allocation that maximizes residency of a plurality of set ways following a tag-miss allocation. The method forms a first-in, first-out (FIFO) replacement listing of victim ways for the cache memory, wherein the depth of the FIFO replacement listing approximately equals the number of ways in the cache set. The method and system place a victim way on the FIFO replacement listing only in the event that a tag-miss results in a tag-miss allocation; the victim way is placed at the tail of the FIFO replacement listing after any previously selected victim way. Use of a victim way on the FIFO replacement listing is prevented in the event of an incomplete prior allocation of the victim way by, for example, stalling a reuse request until the initial allocation of the victim way completes, or replaying the reuse request until that initial allocation completes.
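The FIFO victim-way listing can be sketched as below. The class and method names are assumptions, and returning `None` stands in for stalling or replaying the reuse request; real hardware would hold or retry the request rather than return a sentinel.

```python
from collections import deque

class VictimFifo:
    """Toy FIFO replacement listing of victim ways for one cache set."""

    def __init__(self, n_ways):
        # Depth of the listing approximately equals the number of ways.
        self.fifo = deque(maxlen=n_ways)
        self.pending = set()   # ways whose initial allocation is incomplete

    def on_tag_miss_allocation(self, way):
        # A victim way enters the listing only on a tag-miss allocation,
        # at the tail, after any previously selected victim way.
        self.fifo.append(way)
        self.pending.add(way)

    def allocation_complete(self, way):
        self.pending.discard(way)

    def next_victim(self):
        """Pop the head victim way; None models a stalled reuse request
        while the prior allocation of that way is still incomplete."""
        if not self.fifo:
            return None
        way = self.fifo[0]
        if way in self.pending:
            return None
        return self.fifo.popleft()
```

A way allocated on a tag miss cannot be reused as a victim until its own fill completes, which is the hazard the stall/replay mechanism guards against.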