Abstract:
A storage server includes an IO controller, a management controller and physical drives. The IO controller generates multiple metadata updates and writes a cache entry that includes the multiple metadata updates to a first cache in a memory of the management controller. The IO controller additionally writes a copy of the cache entry to a second cache in a memory of the IO controller and increments a commit pointer in the first and second caches to indicate that the metadata updates are committed.
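A rough Python sketch of this commit flow is shown below; the MetadataCache and StorageServer names are invented for illustration and are not the patent's interfaces. The entry is appended to both caches and a commit pointer is then advanced in each.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MetadataCache:
    entries: List[list] = field(default_factory=list)
    commit_ptr: int = 0          # index one past the last committed entry

@dataclass
class StorageServer:
    mgmt_cache: MetadataCache = field(default_factory=MetadataCache)   # first cache (management controller memory)
    io_cache: MetadataCache = field(default_factory=MetadataCache)     # second cache (IO controller memory)

    def write_metadata(self, updates: list) -> None:
        entry = list(updates)
        self.mgmt_cache.entries.append(entry)        # write the cache entry to the first cache
        self.io_cache.entries.append(list(entry))    # mirror a copy into the second cache
        # advancing the commit pointer in both caches marks the updates as committed
        self.mgmt_cache.commit_ptr += 1
        self.io_cache.commit_ptr += 1

server = StorageServer()
server.write_metadata(["map lba 0x10 -> drive 2", "bump generation to 7"])
assert server.mgmt_cache.commit_ptr == server.io_cache.commit_ptr == 1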
Abstract:
Provided are a computer program product, system, and method for populating a second cache with tracks from a first cache when transferring management of the tracks from a first node to a second node. Management of a first group of tracks in the storage managed by the first node is transferred to the second node, which manages access to a second group of tracks in the storage. After the management of the tracks is transferred, the second node manages access to the first and second groups of tracks and caches accessed tracks from the first and second groups in the second cache of the second node. The second cache of the second node is populated with the tracks in a first cache of the first node.
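One way to picture the handover is the following Python sketch, where the Node class, its cache dictionary, and the (group, index) track ids are all assumptions made for illustration rather than the described implementation.

class Node:
    def __init__(self, name, managed_groups):
        self.name = name
        self.managed_groups = set(managed_groups)
        self.cache = {}            # track id -> track data

def transfer_management(first, second, group):
    first.managed_groups.discard(group)
    second.managed_groups.add(group)          # the second node now manages both groups
    # populate the second node's cache with the tracks the first node had cached for this group
    for track_id, data in list(first.cache.items()):
        if track_id[0] == group:              # track ids are (group, index) tuples here
            second.cache[track_id] = data
            del first.cache[track_id]

node1 = Node("node1", {1}); node1.cache = {(1, 0): b"track0", (1, 1): b"track1"}
node2 = Node("node2", {2})
transfer_management(node1, node2, group=1)
assert node2.managed_groups == {1, 2} and (1, 0) in node2.cache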
Abstract:
A storage system maintains a cache and a non-volatile storage. Active tracks in the non-volatile storage are determined. The determined active tracks in the non-volatile storage are validated between the cache and the non-volatile storage during a warmstart recovery.
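The warmstart step might look roughly like the Python sketch below, in which the cache and the non-volatile storage (NVS) are modeled as plain dictionaries and the "active" flag is an invented field.

def warmstart_validate(cache: dict, nvs: dict) -> list:
    """Return track ids whose NVS copy disagrees with (or is missing from) the cache."""
    active = [tid for tid, entry in nvs.items() if entry.get("active")]   # determine active tracks in NVS
    mismatched = []
    for tid in active:
        if cache.get(tid) != nvs[tid]["data"]:      # validate the track between cache and NVS
            mismatched.append(tid)
    return mismatched

cache = {"t1": b"A", "t2": b"B"}
nvs = {"t1": {"active": True, "data": b"A"}, "t2": {"active": True, "data": b"X"}}
print(warmstart_validate(cache, nvs))   # ['t2'] would need recovery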
Abstract:
The invention provides an elastic or flexible SSD cache utilizing a hybrid RAID protocol combining RAID-0 protocol for read data and RAID-5 single parity protocol for write data in the same cache array. Read data may be stored in window sized allocations using RAID-0 protocol to avoid allocating an entire RAID stripe for read cache data. In the same SSD volume, dirty write data is stored in row allocations using RAID-5 protocol to provide single parity for the dirty write data. Read data is typically stored in a window from the physical device having the largest number of available windows. Write data is stored in a row including the next available window in each arm, which decouples the window structure of the rows from the stripe configuration of the physical memory devices.
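A simplified Python sketch of this allocation policy follows; the HybridSsdCache class, the arm numbering, and the free-window bookkeeping are illustrative assumptions rather than the patented layout. Read allocations take one window from the arm with the most free windows, while write allocations take the next free window on every arm so that one window per row can hold parity.

class HybridSsdCache:
    def __init__(self, arms, windows_per_arm):
        self.free = {a: list(range(windows_per_arm)) for a in range(arms)}

    def alloc_read_window(self):
        # RAID-0 style: one window from the physical device with the most available windows
        arm = max(self.free, key=lambda a: len(self.free[a]))
        return (arm, self.free[arm].pop(0))

    def alloc_write_row(self):
        # RAID-5 style: the next available window on each arm; the last one holds parity
        row = [(a, self.free[a].pop(0)) for a in sorted(self.free)]
        return {"data": row[:-1], "parity": row[-1]}

cache = HybridSsdCache(arms=4, windows_per_arm=8)
print(cache.alloc_read_window())   # e.g. (0, 0)
print(cache.alloc_write_row())     # window indices may differ per arm, decoupled from stripes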
Abstract:
Examples described herein include a computer system, implemented on a node cluster including at least a first node and a second node. The computer system monitors data access requests received by the first node. Specifically, the computer system monitors data access requests that correspond with operations to be performed on a data volume stored on the second node. The system determines that a number of the data access requests received by the first node satisfies a first threshold amount and, upon making the determination, selectively provisions a cache to store a copy of the data volume on the first node based, at least in part, on a system load of the first node.
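The provisioning decision could be sketched in Python along these lines, with the RemoteAccessMonitor class, the threshold value, and the load limit all chosen for illustration.

class RemoteAccessMonitor:
    def __init__(self, request_threshold=100, load_limit=0.75):
        self.request_threshold = request_threshold
        self.load_limit = load_limit
        self.counts = {}                 # volume id -> requests seen by the first node
        self.cached_volumes = set()

    def record_request(self, volume_id, system_load):
        self.counts[volume_id] = self.counts.get(volume_id, 0) + 1
        if (self.counts[volume_id] >= self.request_threshold      # first threshold satisfied
                and system_load < self.load_limit                 # provision only if load permits
                and volume_id not in self.cached_volumes):
            self.cached_volumes.add(volume_id)                    # provision a local cache copy
            return True
        return False

mon = RemoteAccessMonitor(request_threshold=3)
for _ in range(3):
    provisioned = mon.record_request("vol-7", system_load=0.4)
print(provisioned)   # True once the third request arrives under acceptable load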
Abstract:
Modified tracks for write requests to a sequential access storage medium in a sequential access storage device are cached in a non-volatile storage, which is a faster access device than the sequential access storage medium. A request queue includes destage requests to destage the modified tracks in the non-volatile storage device to the sequential access storage medium and read requests to access read requested tracks from the sequential access storage medium. A comparison is made of a current position of a read/write mechanism with respect to physical locations on the sequential access storage medium of the tracks subject to the destage requests indicated in the request queue. A determination is made of one of the destage requests to process based on the comparison. The modified track for the determined destage request is written from the non-volatile storage device to the sequential access storage medium.
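A minimal Python sketch of the selection step is shown below, using an invented request-queue format with "type" and "location" fields; it picks the queued destage whose physical location is closest to the current head position.

def pick_destage(request_queue, head_position):
    destages = [r for r in request_queue if r["type"] == "destage"]
    if not destages:
        return None
    # compare the head position with the physical locations of the queued destage requests
    return min(destages, key=lambda r: abs(r["location"] - head_position))

queue = [
    {"type": "destage", "track": "t9",  "location": 1200},
    {"type": "read",    "track": "t3",  "location": 300},
    {"type": "destage", "track": "t4",  "location": 450},
]
chosen = pick_destage(queue, head_position=500)
print(chosen["track"])   # 't4' is nearest to the current head position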
Abstract:
An apparatus for elastic caching of redundant cache data. The apparatus may have a plurality of buffers and a circuit. The circuit may be configured to (i) receive a write request from a host to store write data in a storage volume, (ii) allocate a number of extents in the buffers based upon a redundant organization associated with the write request and (iii) store the write data in the number of extents, where (a) each of the number of extents is located in a different one of the buffers and (b) the number of extents are dynamically linked together in response to the write request.
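The extent allocation might be sketched as follows in Python; the ElasticCache class, its buffer lists, and the least-loaded placement rule are assumptions standing in for the claimed circuit.

class ElasticCache:
    def __init__(self, num_buffers):
        self.buffers = [[] for _ in range(num_buffers)]   # plurality of buffers

    def write(self, data, redundancy_extents):
        if redundancy_extents > len(self.buffers):
            raise ValueError("more extents requested than buffers available")
        # place each extent in a different buffer, preferring the least-loaded buffers
        targets = sorted(range(len(self.buffers)),
                         key=lambda b: len(self.buffers[b]))[:redundancy_extents]
        extents = []
        for b in targets:
            self.buffers[b].append(data)                  # store the write data
            extents.append((b, len(self.buffers[b]) - 1)) # extent handle in that buffer
        return extents                                    # linked set of extents for this write request

cache = ElasticCache(num_buffers=4)
print(cache.write(b"block", redundancy_extents=2))   # e.g. [(0, 0), (1, 0)]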
Abstract:
Techniques are described for processing multi-page write operations to maintain write level consistency. A multi-page write spanning multiple cache pages is directed to a target device and received on a first data storage system where writes to the target device are synchronously replicated to a second data storage system. On the first data storage system, each of the multiple cache pages may be synchronously replicated to the second data storage system. A lock on each of the cache pages is not released until an acknowledgement is received regarding successful replication of the cache page. On the second data storage system, requests to replicate the multiple cache pages containing write data of the multi-page write are received and processed using locks of corresponding cache pages on the second data storage system. Such techniques also handle concurrent reads and/or writes. Deadlock detection and resolution processing may be performed for concurrent writes.
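A loose Python sketch of the locking rule on the first system is given below; the replicate_page stand-in models synchronous replication and its acknowledgement, none of the names come from the described systems, and deadlock detection for concurrent writes is not modeled.

import threading

class CachePage:
    def __init__(self, page_id):
        self.page_id = page_id
        self.lock = threading.Lock()
        self.data = b""

def replicate_page(page):
    # stand-in for synchronous replication to the second data storage system;
    # returning True models receipt of the acknowledgement
    return True

def multi_page_write(pages, chunks):
    for page, chunk in zip(pages, chunks):
        page.lock.acquire()                # lock the cache page before writing it
        try:
            page.data = chunk
            acked = replicate_page(page)   # synchronously replicate this cache page
            if not acked:
                raise RuntimeError(f"replication of page {page.page_id} not acknowledged")
        finally:
            page.lock.release()            # released only after the acknowledgement path completes

pages = [CachePage(0), CachePage(1)]
multi_page_write(pages, [b"first half", b"second half"])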
Abstract:
When computers and the virtual machines running on them both attempt to allocate a cache for data of a secondary storage device in their respective primary storage devices, identical data is prevented from being stored independently on multiple computers or virtual machines. An integrated cache management function in the computer arbitrates which computer or virtual machine should cache the data of the secondary storage device. When a computer or virtual machine performs input/output on data of the secondary storage device, the computer queries the integrated cache management function, which retains the cache on only a single computer and instructs the other computers to delete their copies. This prevents identical data from being cached in duplicate at multiple locations in the primary storage devices and enables efficient use of primary storage capacity.
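The arbitration could look roughly like the Python sketch below, where IntegratedCacheManager, Computer, and the block ids are invented for illustration.

class IntegratedCacheManager:
    def __init__(self):
        self.owner = {}            # block id -> name of the computer allowed to cache it

    def request_cache(self, block_id, requester, all_computers):
        holder = self.owner.setdefault(block_id, requester)
        if holder != requester:
            return False                       # another computer already caches this block
        # instruct every other computer to delete any stale copy it may hold
        for computer in all_computers:
            if computer.name != requester:
                computer.local_cache.pop(block_id, None)
        return True

class Computer:
    def __init__(self, name):
        self.name = name
        self.local_cache = {}

manager = IntegratedCacheManager()
nodes = [Computer("hostA"), Computer("hostB")]
if manager.request_cache("blk-42", "hostA", nodes):
    nodes[0].local_cache["blk-42"] = b"data"            # only hostA keeps the cached copy
print(manager.request_cache("blk-42", "hostB", nodes))  # False: hostB must not duplicate it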
Abstract:
A cache or other cluster is configuration-aware such that initialization and changes to the underlying structure of the cluster can be dynamically updated for use by a client. A client may use a client driver as an intermediary that is responsible for managing the communication with the cluster. For example, a client driver may resolve an alias from a static configuration endpoint to a storage node. The client driver may request an initial configuration from the storage node and then update the configuration from one or more storage nodes that store the current configuration of the cluster.
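A possible Python sketch of the client-driver flow is given below, with the alias, addresses, and configuration fields invented for illustration; the resolver and fetch callables stand in for whatever lookup and transport the real driver would use.

class ClientDriver:
    def __init__(self, config_endpoint, resolver, fetch_config):
        self.config_endpoint = config_endpoint    # static alias, e.g. "cache.cluster.local"
        self.resolver = resolver                  # alias -> storage node address
        self.fetch_config = fetch_config          # node address -> configuration dict
        self.config = None

    def initialize(self):
        node = self.resolver(self.config_endpoint)   # resolve the alias to one storage node
        self.config = self.fetch_config(node)        # request the initial configuration
        return self.config

    def refresh(self):
        # ask the nodes listed in the current configuration for a newer version
        for node in self.config["nodes"]:
            latest = self.fetch_config(node)
            if latest["version"] > self.config["version"]:
                self.config = latest
                break
        return self.config

driver = ClientDriver(
    "cache.cluster.local",
    resolver=lambda alias: "10.0.0.5:11211",
    fetch_config=lambda node: {"version": 1, "nodes": ["10.0.0.5:11211", "10.0.0.6:11211"]},
)
print(driver.initialize()["nodes"])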