Abstract:
In one embodiment, a node coupled to a plurality of storage devices executes a storage input/output (I/O) stack having a plurality of layers including a persistence layer. A portion of non-volatile random access memory (NVRAM) is configured as one or more logs. The persistence layer cooperates with the NVRAM to employ the log to record write requests received from a host and to acknowledge successful receipt of the write requests to the host. The log has a set of entries, each entry including (i) write data of a write request and (ii) a previous offset referencing a previous entry of the log. After a power loss, the acknowledged write requests are recovered by replay of the log in reverse sequential order using the previous offset in each entry to traverse the log.
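The following is a minimal illustrative sketch in Python (all names and structures are hypothetical, with plain objects standing in for NVRAM) of a log whose entries carry a previous-offset link and of replaying that log in reverse sequential order after a failure:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LogEntry:
    """One log entry: the write data plus the offset of the previous entry."""
    write_data: bytes
    prev_offset: Optional[int]   # None marks the oldest entry in the log

class NVLog:
    """In-memory stand-in for the NVRAM-backed log."""
    def __init__(self):
        self.entries = {}          # offset -> LogEntry
        self.tail_offset = None    # offset of the most recently appended entry
        self.next_offset = 0

    def append(self, write_data: bytes) -> int:
        """Record a write request; the caller acknowledges the host after this returns."""
        offset = self.next_offset
        self.entries[offset] = LogEntry(write_data, self.tail_offset)
        self.tail_offset = offset
        self.next_offset += max(len(write_data), 1)
        return offset

def replay_in_reverse(log: NVLog) -> List[bytes]:
    """Recover acknowledged writes after a power loss by following prev_offset links."""
    recovered = []
    offset = log.tail_offset
    while offset is not None:
        entry = log.entries[offset]
        recovered.append(entry.write_data)
        offset = entry.prev_offset      # step to the previous entry of the log
    return recovered

log = NVLog()
for payload in (b"A", b"B", b"C"):
    log.append(payload)
assert replay_in_reverse(log) == [b"C", b"B", b"A"]
```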
Abstract:
The techniques introduced here provide for efficient management of storage resources in a modern, dynamic data center through the use of virtual storage appliances. Virtual storage appliances perform storage operations and execute in or as a virtual machine on a hypervisor. A storage management system monitors a storage system to determine whether the storage system is satisfying a service level objective for an application. The storage management system then manages (e.g., instantiates, shuts down, or reconfigures) a virtual storage appliance on a physical server. The virtual storage appliance uses resources of the physical server to meet the storage-related needs of the application that the storage system cannot provide. This automatic and dynamic management of virtual storage appliances by the storage management system allows storage systems to quickly react to changing storage needs of applications without requiring expensive excess storage capacity.
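A simplified sketch, not drawn from the embodiment itself, of the monitor-and-manage loop described above; the metric names, SLO fields, and hypervisor calls are hypothetical stand-ins:

```python
from dataclasses import dataclass

@dataclass
class ServiceLevelObjective:
    max_latency_ms: float
    min_iops: int

@dataclass
class StorageMetrics:
    latency_ms: float
    iops: int

class Hypervisor:
    """Stub: instantiate_vsa/shut_down stand in for real VM lifecycle calls."""
    def instantiate_vsa(self):
        print("VSA instantiated on physical server")
        return object()
    def shut_down(self, vsa):
        print("VSA shut down")

class StorageManagementSystem:
    def __init__(self, hypervisor: Hypervisor):
        self.hypervisor = hypervisor
        self.vsa = None

    def evaluate(self, slo: ServiceLevelObjective, metrics: StorageMetrics):
        slo_met = (metrics.latency_ms <= slo.max_latency_ms
                   and metrics.iops >= slo.min_iops)
        if not slo_met and self.vsa is None:
            # The storage system falls short of the SLO: spin up a VSA on the server.
            self.vsa = self.hypervisor.instantiate_vsa()
        elif slo_met and self.vsa is not None:
            # The SLO is met again: reclaim the server resources the VSA was using.
            self.hypervisor.shut_down(self.vsa)
            self.vsa = None

sms = StorageManagementSystem(Hypervisor())
slo = ServiceLevelObjective(max_latency_ms=5.0, min_iops=10_000)
sms.evaluate(slo, StorageMetrics(latency_ms=9.0, iops=6_000))   # SLO missed -> instantiate
sms.evaluate(slo, StorageMetrics(latency_ms=2.0, iops=15_000))  # SLO met -> shut down
```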
Abstract:
Examples described herein include a system for storing data. The data storage system retrieves a first set of metadata associated with data stored on a first cache memory, and stores the first set of metadata on a primary storage device. The primary storage device is a backing store for the data stored on the first cache memory. The storage system selectively copies data from the primary storage device to a second cache memory based, at least in part, on the first set of metadata stored on the primary storage device. For some aspects, the storage system may copy the data from the primary storage device to the second cache memory upon determining that the first cache memory is in a failover state.
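A minimal sketch of this idea, using hypothetical dictionaries for the caches and backing store and a made-up "hot" flag as the persisted metadata:

```python
def flush_cache_metadata(cache_metadata: dict, primary_store: dict) -> None:
    """Persist the first cache's metadata (e.g., which blocks are hot) on the backing store."""
    primary_store["cache_metadata"] = dict(cache_metadata)

def warm_second_cache(primary_store: dict, second_cache: dict, first_cache_failed: bool) -> None:
    """Selectively copy data from the backing store into the second cache, guided by
    the persisted metadata, when the first cache is in a failover state."""
    if not first_cache_failed:
        return
    metadata = primary_store.get("cache_metadata", {})
    for block_id, info in metadata.items():
        if info.get("hot"):    # copy only blocks the metadata marks as worth caching
            second_cache[block_id] = primary_store["blocks"][block_id]

primary = {"blocks": {1: b"aa", 2: b"bb", 3: b"cc"}}
flush_cache_metadata({1: {"hot": True}, 2: {"hot": False}}, primary)
cache2 = {}
warm_second_cache(primary, cache2, first_cache_failed=True)
assert cache2 == {1: b"aa"}
```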
Abstract:
Various systems and methods are described for configuring a data storage system. In one embodiment, a plurality of actual capacities of a plurality of storage devices of the data storage system are identified and divided into a plurality of capacity slices. The plurality of capacity slices are combined into a plurality of chunks of capacity slices, each having a combination of characteristics of the underlying physical storage devices. The chunks of capacity slices are then mapped to a plurality of logical storage devices. A group of the plurality of logical storage devices is then organized into a redundant array of logical storage devices.
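An illustrative sketch of the slice-chunk-logical-device pipeline; the slice size, class names, and grouping policy are assumptions made for the example, not details of the embodiment:

```python
from dataclasses import dataclass
from typing import List

SLICE_GB = 100   # hypothetical slice granularity

@dataclass
class PhysicalDevice:
    name: str
    capacity_gb: int
    media: str               # e.g., "ssd" or "hdd"

@dataclass
class CapacitySlice:
    device: PhysicalDevice

@dataclass
class Chunk:
    slices: List[CapacitySlice]
    @property
    def characteristics(self):
        # A chunk inherits the characteristics of its underlying physical devices.
        return {s.device.media for s in self.slices}

@dataclass
class LogicalDevice:
    chunk: Chunk

def build_logical_raid_group(devices: List[PhysicalDevice],
                             slices_per_chunk: int) -> List[LogicalDevice]:
    # 1. Divide each device's actual capacity into fixed-size capacity slices.
    slices = [CapacitySlice(d) for d in devices
              for _ in range(d.capacity_gb // SLICE_GB)]
    # 2. Combine the slices into chunks of capacity slices.
    chunks = [Chunk(slices[i:i + slices_per_chunk])
              for i in range(0, len(slices) - slices_per_chunk + 1, slices_per_chunk)]
    # 3. Map each chunk to a logical storage device; the returned list is the group
    #    that would be organized into a redundant array.
    return [LogicalDevice(c) for c in chunks]

group = build_logical_raid_group(
    [PhysicalDevice("d0", 400, "ssd"), PhysicalDevice("d1", 400, "hdd")],
    slices_per_chunk=4)
print(len(group), [ld.chunk.characteristics for ld in group])
```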
Abstract:
One or more techniques and/or systems are provided for managing one or more worker threads. For example, a utility list queue may be populated with a set of work item entries for execution. A set of worker threads may be initialized to execute work item entries within the utility list queue. In an example, a worker thread may be instructed to operate in a decentralized manner, such as without guidance from a timer manager thread. The worker thread may be instructed to execute work item entries that are not assigned to other worker threads and that are expired (e.g., ready for execution). The worker thread may transition into a sleep state if the utility list queue does not comprise at least one work item entry that is unassigned and expired.
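A rough sketch, assuming hypothetical names and a fixed sleep interval, of workers that claim expired, unassigned entries from a shared queue without direction from a timer manager thread:

```python
import heapq
import itertools
import threading
import time

class UtilityListQueue:
    """Work item entries keyed by expiry time; workers claim entries themselves,
    without guidance from a timer manager thread."""
    def __init__(self):
        self._lock = threading.Lock()
        self._heap = []                      # (expiry_time, tie_breaker, work_fn)
        self._counter = itertools.count()

    def add(self, delay_s, work_fn):
        with self._lock:
            heapq.heappush(self._heap,
                           (time.monotonic() + delay_s, next(self._counter), work_fn))

    def claim_expired(self):
        """Return an expired, unassigned work item entry, or None if nothing is ready."""
        with self._lock:
            if self._heap and self._heap[0][0] <= time.monotonic():
                return heapq.heappop(self._heap)[2]
            return None

def worker(queue: UtilityListQueue, stop: threading.Event):
    while not stop.is_set():
        work_fn = queue.claim_expired()
        if work_fn is not None:
            work_fn()              # execute the expired entry this worker claimed
        else:
            time.sleep(0.01)       # nothing unassigned and expired: brief sleep state

queue, stop = UtilityListQueue(), threading.Event()
threads = [threading.Thread(target=worker, args=(queue, stop)) for _ in range(2)]
for t in threads:
    t.start()
queue.add(0.0, lambda: print("work item executed"))
time.sleep(0.1)
stop.set()
for t in threads:
    t.join()
```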
Abstract:
An indication of an event is received at a storage controller. The indication of the event corresponds to a first severity. It is determined that the event is associated with a first stream of commands. It is determined whether the indication of the event is the first indication of the event received by the storage controller. If the indication of the event is the first indication of the event received by the storage controller, a maximum allowed count of in-flight commands is set to be less than a current count of in-flight commands. If the indication of the event is not the first indication of the event received by the storage controller, it is determined that the first severity is greater than a second severity corresponding to a previously received indication. If the first severity is greater than the second severity, the maximum allowed count of in-flight commands is decreased.
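A simplified sketch of this throttling logic; the specific decrements and floor value are arbitrary choices for illustration, not taken from the embodiment:

```python
class StreamThrottle:
    """Caps in-flight commands for a stream of commands as event indications arrive:
    the first indication caps below the current in-flight count; later indications
    tighten the cap only when their severity exceeds the previously seen severity."""
    def __init__(self, current_in_flight: int):
        self.in_flight = current_in_flight
        self.max_allowed = None          # no cap until a first indication arrives
        self.last_severity = None

    def on_event(self, severity: int):
        if self.last_severity is None:
            # First indication of the event: set the cap below the current count.
            self.max_allowed = max(self.in_flight - 1, 1)
            self.last_severity = severity
        elif severity > self.last_severity:
            # Higher severity than the previously received indication: decrease the cap.
            self.max_allowed = max(self.max_allowed - 1, 1)
            self.last_severity = severity

t = StreamThrottle(current_in_flight=8)
t.on_event(severity=2)    # first indication  -> max_allowed becomes 7
t.on_event(severity=1)    # lower severity    -> cap unchanged
t.on_event(severity=3)    # higher severity   -> max_allowed becomes 6
print(t.max_allowed)
```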
Abstract:
An indication of an event occurrence is received. The indication of the event occurrence is associated with a severity. A tag associated with the indication of the event occurrence is determined. It is determined whether the tag is the same as a preceding tag. In response to a determination that the tag is not the same as the preceding tag, a component is notified of the event occurrence, the tag is stored for later use, and an indication of the severity associated with the indication of the event occurrence is stored.
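A brief sketch of the tag-comparison flow, with hypothetical class and tag names used only for illustration:

```python
class EventFilter:
    """Suppresses duplicate event notifications by comparing tags; only a new tag
    causes the component to be notified and the tag/severity to be stored."""
    def __init__(self, component):
        self.component = component
        self.preceding_tag = None
        self.stored_severity = None

    def on_indication(self, tag: str, severity: int):
        if tag == self.preceding_tag:
            return                              # same tag as before: do not re-notify
        self.component.notify(tag, severity)    # notify the component of the occurrence
        self.preceding_tag = tag                # store the tag for later comparison
        self.stored_severity = severity         # store the associated severity

class Component:
    def notify(self, tag, severity):
        print(f"event {tag} (severity {severity})")

f = EventFilter(Component())
f.on_indication("link-down:port2", 3)   # new tag -> notified
f.on_indication("link-down:port2", 3)   # same tag -> suppressed
f.on_indication("link-down:port5", 2)   # different tag -> notified
```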
Abstract:
In one embodiment, a node of a cluster executing a storage input/output (I/O) stack having a volume layer stores a multi-level dense tree metadata structure. Each level of the dense tree metadata structure includes volume metadata entries for storing volume metadata. One or more non-volatile logs (NVLogs) are updated. The one or more NVLogs include a volume layer log configured to record changes to the volume metadata, wherein volume metadata entries inserted into a top level of the dense tree metadata structure are recorded in the volume layer log. The node writes volume metadata entries from the volume layer log to one or more storage devices to be stored as extents.
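A minimal sketch of the record-then-insert-then-flush sequence; the dictionaries and list below are simple stand-ins (not the embodiment's dense tree, NVLog, or extent format):

```python
class VolumeLayer:
    """Records each volume metadata entry in a volume layer log as it is inserted into
    the top level (level 0) of a multi-level dense tree, then flushes the logged
    entries to storage as an extent."""
    def __init__(self, levels=3):
        self.dense_tree = [dict() for _ in range(levels)]  # level 0 is the top level
        self.volume_layer_log = []                         # stand-in for the NVLog

    def insert(self, key, volume_metadata):
        self.volume_layer_log.append((key, volume_metadata))  # record the change first
        self.dense_tree[0][key] = volume_metadata             # then insert at the top level

    def flush_to_extents(self, storage):
        """Write the logged entries out as an extent and clear the log."""
        extent = list(self.volume_layer_log)
        storage.append(extent)
        self.volume_layer_log.clear()

vl = VolumeLayer()
vl.insert("lba:0", {"extent_key": 0xAB})
vl.insert("lba:8", {"extent_key": 0xCD})
devices = []
vl.flush_to_extents(devices)
print(devices)
```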
Abstract:
Example embodiments provide various techniques for locating cryptographic keys stored in a cache. The cryptographic keys are temporarily stored in the cache until retrieved for use in a cryptographic operation. A cryptographic key may be located through reference to its cryptographic key identifier. In an example, a particular cryptographic key may be needed for a cryptographic operation. The cache is first searched to locate this cryptographic key. To locate the cryptographic key, the cryptographic key identifier that is associated with this cryptographic key is provided. In turn, the cryptographic key identifier may be used as an address into the cache. The address identifies a location of the cryptographic key within the cache. The cryptographic key may then be retrieved from the cache at the identified address and then used in the cryptographic operation.
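A rough sketch of using a key identifier as a cache address; the fixed slot count and the hash-to-slot mapping are assumptions made for this example:

```python
import hashlib

class KeyCache:
    """Fixed-size cache where the cryptographic key identifier is hashed into a slot
    address; the key is stored and later located at that address."""
    def __init__(self, slots=256):
        self.slots = [None] * slots

    def _address(self, key_id: str) -> int:
        digest = hashlib.sha256(key_id.encode()).digest()
        return int.from_bytes(digest[:4], "big") % len(self.slots)

    def put(self, key_id: str, key_material: bytes):
        self.slots[self._address(key_id)] = (key_id, key_material)

    def get(self, key_id: str):
        """Locate the key by its identifier; returns None on a miss (or slot collision)."""
        slot = self.slots[self._address(key_id)]
        if slot is not None and slot[0] == key_id:
            return slot[1]
        return None

cache = KeyCache()
cache.put("key-42", b"\x01\x02\x03")
assert cache.get("key-42") == b"\x01\x02\x03"
```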
Abstract:
A storage device failure in a computer storage system can be analyzed by the storage system by examining relevant information about the storage device and its environment. Information about the storage device is collected in real-time and stored; this is an ongoing process such that some information is continuously available. The information can include information relating to the storage device, such as input/output related information, and information relating to a storage shelf where the storage device is located, such as a status of adjacent storage devices on the shelf. All of the relevant information is analyzed to determine a reason for the storage device failure. Optionally, additional information may be collected and analyzed by the storage system to help determine the reason for the storage device failure. The analysis and supporting information can be stored in a log and/or presented to a storage system administrator for review.
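A toy sketch of such an analysis step; the statistics, thresholds, and inferred reasons below are invented for illustration only:

```python
def analyze_drive_failure(device_stats: dict, shelf_status: dict) -> dict:
    """Combine continuously collected device I/O stats with shelf-level context
    (including the status of adjacent drives) to infer a likely reason for a failure."""
    reasons = []
    if device_stats.get("media_errors", 0) > 50:
        reasons.append("media degradation on the failed drive")
    if device_stats.get("io_timeouts", 0) > 10 and shelf_status.get("adjacent_failures", 0) >= 2:
        reasons.append("shelf-level issue (multiple adjacent drives affected)")
    if shelf_status.get("temperature_c", 0) > 55:
        reasons.append("shelf overheating")
    record = {"reason": reasons or ["undetermined; collect additional information"],
              "evidence": {"device": device_stats, "shelf": shelf_status}}
    return record   # can be appended to a log and shown to the administrator

print(analyze_drive_failure({"media_errors": 120, "io_timeouts": 3},
                            {"adjacent_failures": 0, "temperature_c": 40}))
```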