Abstract:
A computing system architecture that facilitates constructing a virtual disk that is customized for an application is described herein. An exemplary computing system having such an architecture includes a first plurality of homogeneous storage servers, each storage server in the first plurality of storage servers comprising respective data storage devices of a first type. The exemplary computing system also includes a second plurality of homogeneous storage servers, each storage server in the second plurality of storage servers comprising respective data storage devices of a second type. A virtual disk that is customized for an application is constructed by mapping a linear (virtual) address space to portions of storage devices across the first plurality of storage servers and the second plurality of storage servers. The storage servers are accessible over a full bisection bandwidth network.
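As a rough illustration of the mapping idea above, the sketch below lays a linear virtual address space over extents drawn from two server pools; the Extent/VirtualDisk names and server labels are illustrative assumptions, not terms from the abstract.

```python
# Minimal sketch: mapping a linear virtual address space onto extents drawn
# from two homogeneous server pools (e.g. flash-backed and HDD-backed).
# All names here are illustrative, not from the described system.
from dataclasses import dataclass
from typing import List

@dataclass
class Extent:
    server: str      # storage server hosting this slice
    device: int      # index of the data storage device on that server
    offset: int      # byte offset on the device
    length: int      # bytes contributed to the virtual disk

@dataclass
class VirtualDisk:
    extents: List[Extent]  # ordered: their concatenation forms the linear address space

    def resolve(self, vaddr: int):
        """Translate a virtual disk address to (server, device, device offset)."""
        base = 0
        for e in self.extents:
            if base <= vaddr < base + e.length:
                return e.server, e.device, e.offset + (vaddr - base)
            base += e.length
        raise ValueError("address beyond end of virtual disk")

# Example: a disk whose hot region lives on flash servers and the rest on HDD servers.
disk = VirtualDisk(extents=[
    Extent("flash-srv-01", device=0, offset=0, length=1 << 30),
    Extent("hdd-srv-07", device=2, offset=4 << 30, length=8 << 30),
])
print(disk.resolve(5 << 20))   # -> ('flash-srv-01', 0, 5242880)
```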
Abstract:
Technology is described for profile-based lifecycle management of data storage servers. The technology can receive a profile that indicates a condition and an action corresponding to the condition, monitor events emitted by devices of the data storage system, determine based on the monitored events that a device of the storage system matches the indicated condition, and perform the action corresponding to the indicated condition, wherein the action includes managing data stored by the data storage system.
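A minimal sketch of the profile/condition/action flow, assuming a hypothetical profile format in which the condition is a predicate over device events; none of the names come from the described technology.

```python
# Illustrative sketch of receiving a profile, monitoring device events, and
# performing the profile's action when its condition matches. The event fields
# and profile schema are assumptions.
from dataclasses import dataclass
from typing import Any, Callable, Dict, Iterable

@dataclass
class Profile:
    condition: Callable[[Dict[str, Any]], bool]  # predicate over a device event
    action: Callable[[Dict[str, Any]], None]     # lifecycle action to perform

def monitor(events: Iterable[Dict[str, Any]], profile: Profile) -> None:
    """Apply the profile to a stream of events emitted by storage devices."""
    for event in events:
        if profile.condition(event):
            profile.action(event)

# Example profile: migrate data off any device reporting a high media error count.
profile = Profile(
    condition=lambda e: e.get("media_errors", 0) > 100,
    action=lambda e: print(f"migrating data off device {e['device_id']}"),
)
monitor([{"device_id": "d17", "media_errors": 250}], profile)
```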
Abstract:
A method of relating the user logical block address (LBA) of a page of user data to the physical block address (PBA) where the data is stored in a RAIDed architecture reduces the size of the mapping tables by constraining the locations to which data of a plurality of LBAs may be written. Chunks of data from a plurality of LBAs may be stored in a common page of memory; the common memory page is described by a virtual block address (VBA) referencing the PBA, and each of the LBAs uses the same VBA to read the data.
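The two-level table the abstract implies might look roughly like the sketch below, where L2V and V2P are hypothetical names for the LBA-to-VBA and VBA-to-PBA tables and the chunk-slot layout is an assumption.

```python
# Sketch of the two-level mapping: chunks from several LBAs share one page, the
# page is named by a VBA, and the VBA resolves to a PBA. Only one V2P entry is
# kept per shared page rather than one per LBA, which is what shrinks the tables.
L2V = {}   # LBA -> (VBA, chunk slot within the shared page)
V2P = {}   # VBA -> PBA

def write_chunks(vba, pba, lbas):
    """Pack chunks from several LBAs into one page addressed by a single VBA."""
    V2P[vba] = pba
    for slot, lba in enumerate(lbas):
        L2V[lba] = (vba, slot)

def read(lba):
    vba, slot = L2V[lba]
    return V2P[vba], slot   # physical page plus chunk offset within it

write_chunks(vba=7, pba=0x4A00, lbas=[100, 101, 102, 103])
print(read(101))            # -> (18944, 1): all four LBAs share VBA 7
```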
Abstract:
Virtual storage arrays consolidate data storage from branch locations at data centers. The virtual storage array appears to storage clients as local data storage; however, the virtual storage array data is actually stored at a data center. To overcome the bandwidth and latency limitations of wide area networks between branch locations and the data center, systems and methods predict, prefetch, and cache at the branch location storage blocks that are likely to be requested in the future by storage clients. When this prediction is successful, storage block requests are fulfilled from branch locations' storage block caches. Predictions may leverage an understanding of the semantics and structure of the high-level data structures associated with the storage blocks. Prefetching agents on storage clients monitor storage requests to determine the associations between requested storage blocks and the corresponding high-level data structures as well as other attributes useful for prediction.
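A toy sketch of the branch-side cache with prediction-driven prefetching; associating blocks with a high-level object (here a file path) stands in for the semantic knowledge gathered by the prefetching agents, and the class and method names are invented for illustration.

```python
# Toy branch-location block cache: misses go over the WAN to the data center,
# and a hit on one block of a high-level object prefetches that object's other
# known blocks. The association data is assumed to come from a prefetching agent.
from collections import defaultdict

class BranchBlockCache:
    def __init__(self, fetch_from_data_center):
        self.cache = {}                      # block id -> block data
        self.blocks_of = defaultdict(list)   # high-level object -> its block ids
        self.fetch = fetch_from_data_center  # WAN fetch (slow path)

    def observe(self, obj, block_id):
        """Prefetching agent reports that block_id belongs to obj."""
        self.blocks_of[obj].append(block_id)

    def read(self, obj, block_id):
        if block_id not in self.cache:
            self.cache[block_id] = self.fetch(block_id)
            # Predict: the rest of obj's blocks are likely to be read soon.
            for b in self.blocks_of[obj]:
                if b not in self.cache:
                    self.cache[b] = self.fetch(b)
        return self.cache[block_id]

cache = BranchBlockCache(fetch_from_data_center=lambda b: f"data-{b}")
cache.observe("/vol/docs/report.docx", 12)
cache.observe("/vol/docs/report.docx", 13)
print(cache.read("/vol/docs/report.docx", 12))   # fetches 12, prefetches 13
```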
Abstract:
A high efficiency portable archive implements a storage system running on a virtualization layer to archive point-in-time versions of a raw data set and the storage system itself as a virtual system on archive media. The high efficiency portable archive can be implemented in a variety of computer architectures. The virtualization layer presents to the storage system a normalized representation of a set of hardware based on components of the computer architecture, shielding the storage system from the actual hardware components of the computer architecture. The storage system and point-in-time versions of the raw data set can be restored to any hardware subsystem that supports the virtual system.
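One way to picture bundling the virtualized storage system together with point-in-time versions of the raw data set onto archive media is sketched below; the file layout and manifest format are assumptions, not the archive's actual structure.

```python
# Minimal illustration of archiving the storage system (as a virtual system)
# alongside point-in-time versions of the raw data set. Paths and the manifest
# layout are placeholders for illustration only.
import json
import pathlib
import shutil

def archive(storage_vm_image, snapshots, archive_dir):
    """Copy the virtualized storage system and its snapshots to archive media."""
    dest = pathlib.Path(archive_dir)
    dest.mkdir(parents=True, exist_ok=True)
    shutil.copy(storage_vm_image, dest / "storage_system.vmimg")
    manifest = {"storage_system": "storage_system.vmimg", "snapshots": []}
    for snap in snapshots:                       # point-in-time versions
        name = pathlib.Path(snap).name
        shutil.copy(snap, dest / name)
        manifest["snapshots"].append(name)
    (dest / "manifest.json").write_text(json.dumps(manifest, indent=2))
```

Because the storage system is carried as a virtual system, restoring such an archive only requires a host that can run the virtualization layer, not the original hardware.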
Abstract:
The present invention allows the load generated by a single VOL to be distributed to multiple processor units by dividing the VOL into a plurality of smaller fractions called sub-VOLs and distributing their ownership to multiple processor units. The division of a VOL is performed by dividing the control information of the VOL among the plurality of sub-VOLs and (A) assigning VOL ownership to a processor unit for processing the tasks that relate to the complete VOL (e.g. a VOL RESERVE command) and (B) assigning ownership of each sub-VOL to different processor units for processing tasks that are specific to that sub-VOL (e.g. Read/Write commands). Thus the load on a single sub-VOL owner processor unit becomes only a fraction of the total load generated by the VOL. The present invention helps in achieving a relatively even distribution of load among processor units.
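The ownership split can be pictured with the rough sketch below, which routes volume-wide commands to the VOL owner and address-scoped commands to a sub-VOL owner; the processor-unit labels and the modulo assignment are illustrative assumptions.

```python
# Sketch of the ownership split: one processor unit owns the VOL for
# volume-wide commands, while per-sub-VOL owners handle address-scoped
# commands such as reads and writes.
class Volume:
    def __init__(self, vol_owner, sub_vol_owners, sub_vol_size):
        self.vol_owner = vol_owner              # handles e.g. VOL RESERVE
        self.sub_vol_owners = sub_vol_owners    # one processor unit per sub-VOL
        self.sub_vol_size = sub_vol_size        # LBAs per sub-VOL

    def route(self, command, lba=None):
        if command in ("RESERVE", "RELEASE"):   # tasks that span the whole VOL
            return self.vol_owner
        sub_vol = lba // self.sub_vol_size      # tasks specific to one sub-VOL
        return self.sub_vol_owners[sub_vol % len(self.sub_vol_owners)]

vol = Volume(vol_owner="PU0", sub_vol_owners=["PU1", "PU2", "PU3"], sub_vol_size=1 << 20)
print(vol.route("RESERVE"))            # -> PU0
print(vol.route("READ", lba=5 << 20))  # -> PU3, the owner of sub-VOL 5
```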
Abstract:
A storage module may be configured to service I/O requests according to different persistence levels. The persistence level of an I/O request may relate to the storage resource(s) used to service the I/O request, the configuration of the storage resource(s), the storage mode of the resources, and so on. In some embodiments, a persistence level may relate to a cache mode of an I/O request. I/O requests pertaining to temporary or disposable data may be serviced using an ephemeral cache mode. An ephemeral cache mode may comprise storing I/O request data in cache storage without writing the data through (or back) to primary storage. Ephemeral cache data may be transferred between hosts in response to virtual machine migration.
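A compact sketch of servicing writes at different persistence levels, including an ephemeral mode that keeps data in cache without writing it through (or back) to primary storage; the level names and the CachingStore class are illustrative assumptions.

```python
# Per-request persistence levels over a simple cache-plus-primary model.
WRITE_THROUGH, WRITE_BACK, EPHEMERAL = "write-through", "write-back", "ephemeral"

class CachingStore:
    def __init__(self, primary):
        self.cache = {}
        self.dirty = set()
        self.primary = primary                    # backing (primary) storage dict

    def write(self, key, data, persistence=WRITE_THROUGH):
        self.cache[key] = data
        if persistence == WRITE_THROUGH:
            self.primary[key] = data              # durable immediately
        elif persistence == WRITE_BACK:
            self.dirty.add(key)                   # flushed to primary later
        # EPHEMERAL: cache only; disposable data never reaches primary storage

    def flush(self):
        for key in self.dirty:
            self.primary[key] = self.cache[key]
        self.dirty.clear()

primary = {}
store = CachingStore(primary)
store.write("scratch", b"tmp", persistence=EPHEMERAL)
store.write("record", b"keep", persistence=WRITE_THROUGH)
print("scratch" in primary, "record" in primary)   # -> False True
```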
Abstract:
An apparatus for managing resource reclamation in data storage systems comprises: a volume deletion metadata recorder for recording metadata for one or more deleted volumes; and a policy engine, responsive to a predetermined policy rule, that applies the policy rule to the metadata and initiates policy-controlled data storage space reclamation for the one or more deleted volumes. A volume reclaimer is responsive to the policy engine for reclaiming data storage space from the one or more deleted volumes, and a resource allocator allocates the data storage space.
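The recorder/policy-engine/reclaimer pipeline might be sketched as follows, with a grace-period rule standing in for the predetermined policy; all names and the specific rule are assumptions.

```python
# Deletion metadata is recorded, a policy rule decides when reclamation runs,
# and reclaimed space is returned to the allocator's free pool.
import time

class ReclamationManager:
    def __init__(self, grace_seconds, free_pool=0):
        self.deleted = {}                 # volume id -> (size, deletion time)
        self.grace_seconds = grace_seconds
        self.free_pool = free_pool        # capacity available to the allocator

    def record_deletion(self, vol_id, size):
        """Volume deletion metadata recorder."""
        self.deleted[vol_id] = (size, time.time())

    def apply_policy(self, now=None):
        """Policy engine: reclaim volumes whose grace period has expired."""
        now = now if now is not None else time.time()
        for vol_id, (size, deleted_at) in list(self.deleted.items()):
            if now - deleted_at >= self.grace_seconds:   # policy rule satisfied
                self.free_pool += size                   # space handed to the allocator
                del self.deleted[vol_id]
        return self.free_pool
```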
Abstract:
A system and method for migrating domains from one physical data processing system to another are provided. With the system and method, domains may be assigned direct access to physical I/O devices, but in the case of migration, the I/O devices may be converted to virtual I/O devices without service interruption. At this point, the domain may be migrated without limitation. Upon completion of the migration process, the domain may be converted back to using direct physical access, if available in the new data processing system to which the domain is migrated. Alternatively, the virtualized access to the I/O devices may continue to be used until the domain is migrated back to the original data processing system. Once migration back to the original data processing system is completed, the access may be converted back to direct access with the original physical I/O devices.
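A self-contained toy model of the migration flow: I/O attachments flip to virtual before migration and back to physical where the target offers matching hardware; every class and field name here is a placeholder rather than an actual hypervisor API.

```python
# Toy model: virtualize direct-attached I/O before migration, then restore
# direct access only for devices the target system also provides.
from dataclasses import dataclass, field

@dataclass
class IODevice:
    name: str
    mode: str = "physical"     # "physical" (direct access) or "virtual"

@dataclass
class Domain:
    name: str
    devices: list = field(default_factory=list)

def migrate(domain, target_physical_devices):
    # Step 1: virtualize direct-attached I/O so migration is unconstrained.
    for dev in domain.devices:
        dev.mode = "virtual"
    # Step 2: (domain state transfer to the target system happens here.)
    # Step 3: restore direct access where the target offers the same device.
    for dev in domain.devices:
        if dev.name in target_physical_devices:
            dev.mode = "physical"
    return domain

d = migrate(Domain("db01", [IODevice("nic0"), IODevice("hba1")]), {"nic0"})
print([(dev.name, dev.mode) for dev in d.devices])  # nic0 physical, hba1 virtual
```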