Abstract:
Examples include techniques for managing high priority (HP) and low priority (LP) write transaction requests by a storage device. An embodiment includes receiving, at a storage controller for a storage device, a write transaction request from a requestor to write data to one or more memory devices in the storage device. When the write transaction request is for a high priority (HP) write, the write data is coalesced into a transaction buffer in a memory of the storage device, an acknowledgment for the write transaction request is sent to the requestor, and the write data is written into the one or more memory devices. When the write transaction request is for a low priority (LP) write, the write data is written into the one or more memory devices, and an acknowledgment for the write transaction request is then sent to the requestor.
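The following is a minimal sketch (not the patented implementation) of the acknowledgment ordering described above: HP writes are coalesced and acknowledged before the media write, while LP writes are acknowledged only after the media write completes. The class and method names are illustrative assumptions.

```python
# Sketch of HP/LP write acknowledgment ordering; names are illustrative only.

class StorageControllerSketch:
    def __init__(self):
        self.transaction_buffer = []   # coalesces HP write data before the media write
        self.media = {}                # stands in for the memory devices

    def handle_write(self, requestor, lba, data, high_priority):
        if high_priority:
            # HP path: coalesce, acknowledge early, then commit to media.
            self.transaction_buffer.append((lba, data))
            requestor.acknowledge(lba)
            self._flush_buffer()
        else:
            # LP path: commit to media first, acknowledge only afterwards.
            self.media[lba] = data
            requestor.acknowledge(lba)

    def _flush_buffer(self):
        while self.transaction_buffer:
            lba, data = self.transaction_buffer.pop(0)
            self.media[lba] = data


class Requestor:
    def acknowledge(self, lba):
        print(f"ack for LBA {lba}")
```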
Abstract:
A hardware acceleration block is configured to process, via a dedicated pair of registers, a plurality of commands of each of a plurality of threads received from a compute complex. The hardware acceleration block receives, from a thread of the plurality of threads, successive commands that are separated by at least an amount of time adequate to process a command from the thread.
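A hedged sketch of the per-thread register-pair idea follows; the register names (cmd_reg, resp_reg) and the error-on-early-submission behavior are assumptions made for illustration, not details from the abstract.

```python
# Sketch: one dedicated register pair per thread; a thread must space its
# commands far enough apart that the previous command has been consumed.

class RegisterPair:
    def __init__(self):
        self.cmd_reg = None    # thread writes its next command here
        self.resp_reg = None   # acceleration block posts the result here


class AccelerationBlockSketch:
    def __init__(self, num_threads):
        # one dedicated register pair per thread from the compute complex
        self.pairs = [RegisterPair() for _ in range(num_threads)]

    def submit(self, thread_id, command):
        pair = self.pairs[thread_id]
        if pair.cmd_reg is not None:
            # previous command not yet consumed: the thread submitted too soon
            raise RuntimeError("command spacing too small for this thread")
        pair.cmd_reg = command

    def service(self):
        # process at most one outstanding command per thread per pass
        for pair in self.pairs:
            if pair.cmd_reg is not None:
                pair.resp_reg = f"done:{pair.cmd_reg}"
                pair.cmd_reg = None
```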
Abstract:
Provided are an apparatus, system, and method for offloading collision check operations in a memory storage device to a collision check unit. A collision check unit includes a collision table including logical addresses for pending Input/Output (I/O) requests. An I/O request is received directed to a target logical address addressing a block of data in the non-volatile memory. The target logical address is sent to the collision check unit. Resources to transfer data with respect to the transfer buffer for the I/O request are allocated in parallel while the collision check unit is determining whether the collision table includes the target logical address. The collision check unit determines whether the collision table includes the target logical address and returns an indication of whether the collision table includes the target logical address, where inclusion indicates that current data for the target logical address is already in the transfer buffer.
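The sketch below illustrates the parallelism the abstract describes: the transfer-buffer resource allocation is started before the collision-table lookup result is known. The thread-pool mechanism and the set-based collision table are assumptions for illustration only.

```python
# Illustrative sketch: collision lookup and resource allocation in parallel.

from concurrent.futures import ThreadPoolExecutor


class CollisionCheckUnitSketch:
    def __init__(self):
        self.collision_table = set()   # logical addresses of pending I/O requests

    def check(self, lba):
        # True means data for this LBA is already in the transfer buffer
        hit = lba in self.collision_table
        self.collision_table.add(lba)
        return hit


def handle_io_request(ccu, lba, allocate_resources):
    with ThreadPoolExecutor(max_workers=2) as pool:
        # lookup and transfer-buffer resource allocation proceed in parallel
        lookup = pool.submit(ccu.check, lba)
        alloc = pool.submit(allocate_resources, lba)
        return lookup.result(), alloc.result()
```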
Abstract:
Read Quality of Service in a solid state drive is improved by allowing a host system communicatively coupled to the solid state drive to control garbage collection in the solid state drive. Through the use of controlled garbage collection, the host system can control when to start and stop garbage collection in the solid state drive and the number of NAND dies engaged in garbage-collection operations.
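A minimal sketch of host-controlled garbage collection follows, assuming a vendor-specific command interface; the method names and die-count bookkeeping are hypothetical, not an existing NVMe feature set.

```python
# Sketch: the host decides when GC runs and how many NAND dies it may engage.

class ControlledGarbageCollectionSketch:
    def __init__(self, total_dies):
        self.total_dies = total_dies
        self.gc_running = False
        self.dies_engaged = 0

    def start_gc(self, num_dies):
        # host chooses when GC starts and how many NAND dies it may use
        self.dies_engaged = min(num_dies, self.total_dies)
        self.gc_running = True

    def stop_gc(self):
        # host stops GC, e.g. ahead of a latency-sensitive read burst
        self.gc_running = False
        self.dies_engaged = 0
```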
Abstract:
A machine readable storage medium containing program code that when processed by a processor causes a method to be performed is described. The method includes executing a wear leveling routine by servicing cold data from a first queue in a non-volatile storage device to write the cold data. The method also includes executing a garbage collection routine by servicing valid data from a second queue in the non-volatile storage device to write the valid data. The method also includes servicing host write data from a third queue in the non-volatile storage device to write the host write data, wherein the first queue remains fixed and is serviced at a constant rate so that a runtime size of the third queue is not substantially affected by the wear leveling routine.
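The sketch below shows the three-queue scheduling idea, with the cold-data queue serviced at a fixed rate independent of host load. The specific rates and the round-robin cycle structure are assumptions for illustration.

```python
# Sketch of the three-queue write scheduler; rates are illustrative placeholders.

from collections import deque


class WriteSchedulerSketch:
    def __init__(self, wear_level_rate=1, gc_rate=2, host_rate=4):
        self.cold_queue = deque()   # wear leveling: cold data to relocate
        self.valid_queue = deque()  # garbage collection: valid data to rewrite
        self.host_queue = deque()   # host write data
        self.rates = (wear_level_rate, gc_rate, host_rate)

    def service_cycle(self, write):
        # the cold-data queue is serviced at a constant rate regardless of host
        # load, so it does not inflate the host queue's runtime size
        for queue, rate in zip(
            (self.cold_queue, self.valid_queue, self.host_queue), self.rates
        ):
            for _ in range(rate):
                if queue:
                    write(queue.popleft())
```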
Abstract:
A controller of a solid state drive initiates a repacking of data stored in a non-volatile memory of the solid state drive, wherein refreshing of the data stored in the non-volatile memory of the solid state drive is performed during the repacking of the data stored in the non-volatile memory of the solid state drive. Logical blocks are placed physically contiguously in an increasing order in pre-erased locations of the non-volatile memory of the solid state drive while the data stored in the non-volatile memory of the solid state drive is being repacked.
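A short illustrative sketch of the repacking step follows: logical blocks are rewritten in increasing logical order into pre-erased physical locations, which also refreshes the data as a side effect. The function signature and mapping structure are assumptions, not details from the abstract.

```python
# Sketch: repack logical blocks contiguously, in increasing order, into
# pre-erased locations; rewriting the data also refreshes it.

def repack_sketch(logical_to_physical, read_block, write_block, pre_erased):
    """logical_to_physical: dict mapping logical block -> physical location."""
    new_map = {}
    for lba in sorted(logical_to_physical):          # increasing logical order
        data = read_block(logical_to_physical[lba])  # read (and refresh) the data
        dest = pre_erased.pop(0)                     # next contiguous pre-erased spot
        write_block(dest, data)
        new_map[lba] = dest
    return new_map
```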
Abstract:
Provided are a method and system for allocating read requests in a solid state drive coupled to a host. An arbiter in the solid state drive determines which of a plurality of channels in the solid state drive is a lightly loaded channel. Resources for processing one or more read requests intended for the determined lightly loaded channel are allocated, wherein the one or more read requests have been received from the host. The one or more read requests are placed in the determined lightly loaded channel for the processing. In certain embodiments, the lightly loaded channel is the most lightly loaded channel of the plurality of channels.
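A sketch of the arbiter idea follows, assuming per-channel outstanding-request count as the load metric; that metric and the data structures are assumptions made for illustration.

```python
# Sketch: arbiter routes host reads to the most lightly loaded channel.

class ReadArbiterSketch:
    def __init__(self, num_channels):
        self.channels = [[] for _ in range(num_channels)]

    def lightly_loaded_channel(self):
        # here: the most lightly loaded channel, judged by outstanding requests
        return min(range(len(self.channels)), key=lambda c: len(self.channels[c]))

    def submit_reads(self, read_requests):
        channel = self.lightly_loaded_channel()
        # resources for the reads would be allocated here before placement
        self.channels[channel].extend(read_requests)
        return channel
```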
Abstract:
A storage system includes a NAND storage media and a nonvolatile storage media as a write buffer for the NAND storage media. The write buffer is partitioned, where the partitions are to buffer write data based on a classification of a received write request. Write requests are placed in the write buffer partition with other write requests of the same classification. The partitions have a size at least equal to the size of an erase unit of the NAND storage media. The write buffer flushes a partition once it has an amount of write data equal to the size of the erase unit.
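The following is a minimal sketch of classification-based buffering with a flush at erase-unit granularity; the classification labels, byte accounting, and flush callback are placeholders rather than details from the abstract.

```python
# Sketch: writes buffered per classification; a partition flushes to NAND once
# it holds an erase unit's worth of write data.

class PartitionedWriteBufferSketch:
    def __init__(self, erase_unit_bytes, flush_to_nand):
        self.erase_unit_bytes = erase_unit_bytes
        self.flush_to_nand = flush_to_nand
        self.partitions = {}   # classification -> list of buffered writes

    def write(self, classification, data):
        part = self.partitions.setdefault(classification, [])
        part.append(data)
        # flush once the partition holds an erase unit's worth of write data
        if sum(len(d) for d in part) >= self.erase_unit_bytes:
            self.flush_to_nand(classification, part)
            self.partitions[classification] = []
```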
Abstract:
In one embodiment, sequential write stream management is employed to improve the sequential nature of write data placed in a storage such as a solid state drive, notwithstanding intermingling of write commands from various sequential and nonsequential streams from multiple processor nodes in a system. In one embodiment, write data from an identified sequential write stream is placed in a storage area assigned to that particular identified sequential write stream. In another aspect, detected sequential write streams are identified as a function of write velocity of the detected stream. Other aspects are described herein.
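A sketch of routing writes by detected stream appears below; the velocity threshold and the simple stream-detection heuristic are illustrative assumptions, not the claimed method.

```python
# Sketch: write data from an identified sequential stream is placed in a storage
# area assigned to that stream; streams are identified by write velocity.

class StreamPlacerSketch:
    def __init__(self, velocity_threshold):
        self.velocity_threshold = velocity_threshold
        self.stream_areas = {}   # stream id -> assigned storage area

    def place(self, stream_id, write_velocity, lba, data):
        if write_velocity >= self.velocity_threshold:
            # identified sequential stream: keep its data in its own area
            area = self.stream_areas.setdefault(stream_id, [])
        else:
            # nonsequential traffic goes to a shared area
            area = self.stream_areas.setdefault("shared", [])
        area.append((lba, data))
```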