Abstract:
A data storage disk cartridge library includes a media drive having a clean internal environment, a disk cartridge bay positioned adjacent to the media drive, and a disk cartridge positioned in the disk cartridge bay and having clean compartments for housing clean disk trays supporting clean magnetic recording disk media. The media drive has a disk tray extractor including a seal plate positioned at times flush with a surrounding shroud, and a set of pins for extending through the seal plate and a disk tray and for moving to a tray locking position. The seal plate covers a disk tray faceplate to physically isolate the faceplate from the clean portion of the disk tray, the corresponding clean compartment, and the clean environment of the media drive. The shroud is configured to cover surfaces of the disk cartridge adjacent to the faceplate.
Abstract:
A data storage disk cartridge library system includes a rack having an array of bays, at least some of which house disk media cartridges and/or media drives, and a pair of horizontal and vertical guide rails bordering each bay. A media transport robot includes fixed-position drive wheels at each corner for driving the robot along the guide rails, and pivoting guide wheels corresponding to each drive wheel for guiding the drive wheel horizontally along a horizontal guide rail and vertically along a vertical guide rail. With each guide wheel coupled with a horizontal guide rail, the robot can travel horizontally on the rack, and with each guide wheel coupled with a vertical guide rail, the robot can travel vertically on the rack. Electrical power can be supplied to the robot via the guide rails, and gear portions of the wheels mechanically interface with a mechanical portion of the guide rails.
Abstract:
Data is received from a sensing device of a plurality of sensing devices in communication with a device for storage in at least one memory of the device. A first cache memory or a second cache memory of the device is selected for caching the received data based at least in part on the sensing device sending the data. According to another aspect, data is received from a sensing device for storage in at least one memory of a device. It is determined whether to cache the received data based on at least one of the sensing device sending the data and information related to the received data. A cache memory is selected from among a plurality of cache memories of the device for caching the received data based at least in part on the sensing device sending the data.
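A minimal sketch of this selection logic follows, assuming a simple Python model; the CacheSelector class, its mapping of device IDs to caches, and the size threshold in should_cache are illustrative assumptions rather than terms from the abstract.

class CacheSelector:
    """Selects between a first and a second cache memory based on the sender."""

    def __init__(self, first_cache, second_cache, device_to_cache):
        # device_to_cache maps a sensing-device ID to "first" or "second".
        self.caches = {"first": first_cache, "second": second_cache}
        self.device_to_cache = device_to_cache

    def should_cache(self, device_id, data):
        # Decide whether to cache at all, based on the sending device and
        # information related to the received data (here, a size threshold).
        return device_id in self.device_to_cache and len(data) <= 4096

    def select_cache(self, device_id, data):
        # Select the first or second cache memory based at least in part on
        # which sensing device sent the data.
        if not self.should_cache(device_id, data):
            return None
        return self.caches[self.device_to_cache[device_id]]

selector = CacheSelector({}, {}, {"camera-1": "first", "thermostat-2": "second"})
cache = selector.select_cache("camera-1", b"frame data")
if cache is not None:
    cache["camera-1"] = b"frame data"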
Abstract:
A disk drive is disclosed that varies its caching policy for caching data in non-volatile solid-state memory as the memory degrades. As the non-volatile memory degrades, the caching policy can be varied such that the non-volatile memory is used more as a read cache and less as a write cache. Performance improvements and slower degradation of the non-volatile memory can thereby be attained.
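A sketch of such a wear-dependent policy is shown below; the wear metric (fraction of rated program/erase cycles consumed) and the 0.8 threshold are assumptions for illustration only.

def caching_policy(wear_fraction):
    """Return (allow_write_caching, allow_read_caching) for the solid-state cache.

    As the non-volatile memory degrades (wear_fraction approaching 1.0), writes
    are steered away from it while it continues to serve as a read cache.
    """
    allow_read_caching = True                  # reads do not consume P/E cycles
    allow_write_caching = wear_fraction < 0.8  # stop write caching near end of life
    return allow_write_caching, allow_read_caching

for wear in (0.1, 0.5, 0.9):
    print(wear, caching_policy(wear))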
Abstract:
Embodiments of a solid-state storage system provided herein include a data recovery mechanism to recover data upon detection of a read error (e.g., an uncorrectable ECC error) in a storage element such as a page. In various embodiments, the system is configured to determine optimal reference voltage value(s) by evaluating the reference voltage value(s) of page(s) that are related to the page where the failure occurred. The related page(s) may include a page that is paired with the initial page where the failure occurred (e.g., the paired pages reside in a common memory cell), a neighboring page that is physically near the initial page, and/or a paired page of the neighboring page. In another embodiment, the system is configured to perform a time-limited search function to attempt to determine optimal reference voltage values through an iterative process that adjusts voltage values in a progression to determine a set of values that can retrieve the data.
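The sketch below illustrates the two recovery phases described above; read_page_with_vref, related_page_vrefs, and the time budget are hypothetical stand-ins for controller internals, not names from the abstract.

import time

def recover_page(page, read_page_with_vref, related_page_vrefs, nominal_vref,
                 time_limit_s=0.05):
    """Recover a page that failed ECC by trying reference voltages from related
    pages first, then falling back to a time-limited iterative sweep."""
    # Phase 1: reuse reference voltages taken from related pages (the paired
    # page, a physically neighboring page, or the neighbor's paired page).
    for vref in related_page_vrefs:
        data, ok = read_page_with_vref(page, vref)
        if ok:                         # ECC decode succeeded
            return data, vref

    # Phase 2: time-limited search, stepping the reference voltage outward
    # from the nominal value until the data decodes or the budget expires.
    deadline = time.monotonic() + time_limit_s
    step = 1
    while time.monotonic() < deadline:
        for vref in (nominal_vref + step, nominal_vref - step):
            data, ok = read_page_with_vref(page, vref)
            if ok:
                return data, vref
        step += 1
    return None, None                  # recovery failed within the time budget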
Abstract:
An individual latency indicator is determined for each Data Storage Device (DSD) or memory portion of a DSD storing one or more erasure coded shards generated from an erasure coding on initial data. Each individual latency indicator is associated with a latency in retrieving an erasure coded shard stored in a respective DSD or memory portion. At least one collective latency indicator is determined using determined individual latency indicators, with the at least one collective latency indicator being associated with a latency in retrieving multiple erasure coded shards. The at least one collective latency indicator is compared to a latency limit, and a subset of erasure coded shards is selected to retrieve based on the comparison of the at least one collective latency indicator to the latency limit.
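A simplified sketch of the selection step follows; modeling the collective latency of a parallel read as the maximum of the individual latencies, and the sample numbers, are assumptions for illustration.

from itertools import combinations

def select_shards(individual_latency, shards_needed, latency_limit):
    """individual_latency maps a shard's DSD (or memory portion) to an estimated
    retrieval latency in ms. Returns a subset of shards whose collective latency
    fits the limit, or the lowest-latency subset found if none fits."""
    best_subset, best_latency = None, float("inf")
    for subset in combinations(individual_latency, shards_needed):
        # Collective latency indicator: the slowest shard dominates a parallel read.
        collective = max(individual_latency[s] for s in subset)
        if collective <= latency_limit:
            return set(subset), collective
        if collective < best_latency:
            best_subset, best_latency = subset, collective
    return set(best_subset), best_latency   # best effort if the limit cannot be met

latencies = {"dsd0": 4.0, "dsd1": 12.0, "dsd2": 5.5, "dsd3": 30.0, "dsd4": 6.0}
print(select_shards(latencies, shards_needed=3, latency_limit=8.0))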
Abstract:
A Data Storage Device (DSD) is in communication with a plurality of sensing devices. Data is received for storage in the DSD from a sensing device of the plurality of sensing devices. The received data is associated with at least one storage hint assigned to the sensing device. A media region of the DSD is selected from a plurality of media regions for storing the received data based on the at least one storage hint and at least one characteristic of the media region.
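A minimal sketch of this matching is shown below; the hint names, the two example regions, and their characteristics are assumptions rather than terminology from the abstract.

MEDIA_REGIONS = [
    {"name": "smr_zone", "sequential_friendly": True},
    {"name": "cmr_zone", "sequential_friendly": False},
]

STORAGE_HINTS = {
    # Storage hints assigned per sensing device.
    "dashcam-1":   {"access": "sequential"},
    "door-sensor": {"access": "random"},
}

def select_media_region(device_id):
    """Pick the media region whose characteristics match the hint assigned to
    the sensing device that sent the data."""
    hint = STORAGE_HINTS.get(device_id, {"access": "random"})
    want_sequential = hint["access"] == "sequential"
    for region in MEDIA_REGIONS:
        if region["sequential_friendly"] == want_sequential:
            return region["name"]
    return MEDIA_REGIONS[0]["name"]   # fallback if no region matches

print(select_media_region("dashcam-1"))    # -> smr_zone
print(select_media_region("door-sensor"))  # -> cmr_zone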
Abstract:
Embodiments of compression and formatting of data for data storage systems are disclosed. In some embodiments, a data storage system can compress fixed-sized data before storing it on the media and format the resulting variable-sized compressed data for storage on the media, which typically has fixed-size storage granularity. One or more modules compress the incoming host data and create an output stream of fixed-sized storage units that contain compressed data. The storage units are stored on the media. Capacity, reliability, and performance are thereby increased.
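The sketch below illustrates the packing step; the 4 KiB storage-unit size, the 2-byte length prefix, and the use of zlib as the compressor are assumptions for illustration only.

import zlib

UNIT_SIZE = 4096   # fixed storage granularity of the media (assumed)

def pack_into_storage_units(host_blocks):
    """Compress fixed-sized host blocks and pack the variable-sized results,
    each prefixed with its length, into fixed-sized storage units."""
    units, current = [], bytearray()
    for block in host_blocks:
        compressed = zlib.compress(block)
        record = len(compressed).to_bytes(2, "big") + compressed
        assert len(record) <= UNIT_SIZE   # records spanning units are out of scope here
        if len(current) + len(record) > UNIT_SIZE:
            units.append(bytes(current.ljust(UNIT_SIZE, b"\x00")))  # pad and flush
            current = bytearray()
        current.extend(record)
    if current:
        units.append(bytes(current.ljust(UNIT_SIZE, b"\x00")))
    return units

blocks = [bytes([i]) * 4096 for i in range(8)]   # eight fixed-sized host blocks
print(len(pack_into_storage_units(blocks)), "storage unit(s) written")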
Abstract:
Systems and methods for compression, formatting, and migration of data for data storage systems are disclosed. In some embodiments, data repacking can be used in any situation where embedded metadata needs to be accessed, such as during data migration, and where the underlying data is encrypted. In some embodiments, performance is increased because encrypted data is repacked without first performing decryption. In addition, data may also be compressed, and repacking can be performed without performing decompression. Advantageously, there is no need to retrieve or wait for the availability of the encryption key (or keys), or to expend resources decrypting (and decompressing) data before repacking it and re-encrypting the repacked data. Available capacity for storing user data, reliability, and performance of the data storage system can be increased.
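A conceptual sketch of key-free repacking follows; the record layout (a clear-text header carrying the logical address and length, followed by an opaque encrypted and compressed payload) and the is_live callback are assumptions made only to illustrate the idea.

import struct

HEADER = struct.Struct(">QI")   # (logical address, payload length), stored in the clear

def repack(records, is_live):
    """Repack records into a new region, dropping stale data.

    Only the clear-text metadata header is parsed; each encrypted (and
    compressed) payload is copied verbatim, so no encryption key and no
    decompression are needed."""
    out = bytearray()
    for header, payload in records:
        lba, _length = HEADER.unpack(header)
        if is_live(lba):
            out += header + payload     # ciphertext moved untouched
    return bytes(out)

# Four dummy records; only those with even logical addresses are still live.
records = [(HEADER.pack(lba, 16), b"\xaa" * 16) for lba in range(4)]
print(len(repack(records, is_live=lambda lba: lba % 2 == 0)), "bytes repacked")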