Abstract:
An invention is provided for protecting the data integrity of a cached storage device in an alternate operating system (OS) environment. The invention includes replacing a globally unique identifier partition table (GPT) for a cached disk with a modified globally unique identifier partition table (MGPT). The MGPT renders cached partitions on the cached disk inaccessible when an OS uses the MGPT to access the cached partitions, while un-cached partitions on the cached disk remain accessible. In normal operation, the data on the cached disk is accessed using information based on the GPT, which can be stored on a caching disk, generally via caching software. In response to receiving a request to disable caching, the MGPT on the cached disk is replaced with the GPT, thus rendering all data on the formerly cached disk accessible in an alternate OS environment where the appropriate caching software is not present.
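The GPT/MGPT swap described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the `PartitionEntry` class, `make_mgpt` and `disable_caching` names, and the use of a zeroed type GUID to make a partition unrecognizable are all assumptions for illustration.

```python
# Hypothetical sketch of the GPT -> MGPT replacement. A cached partition's
# type GUID is replaced with an unrecognized value so an alternate OS
# (lacking the caching software) cannot mount it; un-cached partitions
# keep their original entries and stay accessible.

UNKNOWN_TYPE_GUID = "00000000-0000-0000-0000-000000000000"  # assumed sentinel

class PartitionEntry:
    def __init__(self, name, type_guid, cached):
        self.name = name
        self.type_guid = type_guid
        self.cached = cached

def make_mgpt(gpt):
    """Build the modified table: hide cached partitions, keep the rest."""
    mgpt = []
    for entry in gpt:
        if entry.cached:
            mgpt.append(PartitionEntry(entry.name, UNKNOWN_TYPE_GUID, True))
        else:
            mgpt.append(PartitionEntry(entry.name, entry.type_guid, False))
    return mgpt

def disable_caching(disk):
    """On a request to disable caching, restore the original GPT so all
    data on the formerly cached disk is accessible in any OS environment."""
    disk["on_disk_table"] = disk["saved_gpt"]
```

In this sketch the original GPT is retained (per the abstract, it can be kept on the caching disk by the caching software) so the swap is reversible when caching is disabled.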
Abstract:
An electronic system includes: a management server providing a management mechanism with an address structure having a unified address space; a communication block, coupled to the management server, configured to implement a communication transaction based on the management mechanism with the address structure having the unified address space; and a server, coupled to the communication block, providing the communication transaction with a storage device based on the management mechanism with the address structure having the unified address space.
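The unified address space underlying the management mechanism above might be modeled as a single flat range into which each storage device is mapped, so every component addresses data the same way. The class and method names below are illustrative assumptions, not the patent's interfaces.

```python
# Illustrative model of a unified address space: each registered device
# receives a contiguous slice of one flat range, and any component can
# translate a unified address back to (device, local offset).

class UnifiedAddressSpace:
    def __init__(self):
        self.regions = []   # list of (start, end, device_name)
        self.next_free = 0

    def register(self, device_name, size):
        """Assign the device the next contiguous slice; return its base."""
        start = self.next_free
        self.regions.append((start, start + size, device_name))
        self.next_free += size
        return start

    def resolve(self, address):
        """Translate a unified address into (device, local offset)."""
        for start, end, name in self.regions:
            if start <= address < end:
                return name, address - start
        raise ValueError("address outside unified address space")
```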
Abstract:
An invention is provided for dynamic cache allocation in a solid state drive environment. The invention includes partitioning a cache memory into a reserved partition and a caching partition, wherein the reserved partition begins at the beginning of the cache memory and the caching partition begins after the end of the reserved partition. Data is cached starting at the beginning of the caching partition. Then, when the caching partition is fully utilized, data is cached in the reserved partition. After receiving an indication of a power state change, such as when entering a sleep power state, marking data is written to the reserved partition. The marking data is examined after resuming the normal power state to determine whether a deep sleep power state was entered. When returning from a deep sleep power state, the beginning address of valid cache data within the reserved partition is determined after resuming the normal power state.
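The allocation and sleep-marking scheme above can be sketched with a toy block-array model. Everything here is an assumption for illustration: the partition sizes, reserving block 0 for the marker, and the rule that valid reserved-partition cache data begins just past the marker after a deep sleep.

```python
# Toy model: a cache memory split into a reserved partition (front) and a
# caching partition (rest). New data fills the caching partition first and
# spills into the reserved partition only when the caching partition is full.

RESERVED_SIZE = 4            # blocks in the reserved partition (assumed)
CACHE_SIZE = 16              # total cache blocks (assumed)
DEEP_SLEEP_MARK = "DEEP"

cache = [None] * CACHE_SIZE
next_slot = RESERVED_SIZE    # caching partition begins after the reserved one
reserved_slot = 1            # block 0 is kept free for the sleep marker

def cache_block(data):
    global next_slot, reserved_slot
    if next_slot < CACHE_SIZE:              # caching partition not yet full
        cache[next_slot] = data
        next_slot += 1
    elif reserved_slot < RESERVED_SIZE:     # spill into the reserved partition
        cache[reserved_slot] = data
        reserved_slot += 1

def enter_sleep(deep):
    """On a power state change, write marking data to the reserved partition."""
    cache[0] = DEEP_SLEEP_MARK if deep else "LIGHT"

def resume():
    """Examine the marking data; after a deep sleep, report where valid
    cache data in the reserved partition begins (here: just past the mark)."""
    return 1 if cache[0] == DEEP_SLEEP_MARK else None
```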
Abstract:
An invention is provided for filtering cached input/output (I/O) data. The invention includes receiving a current I/O transfer. Embodiments of the present invention evaluate whether to filter ongoing data streams once the data stream reaches a particular size threshold. When the current I/O transfer is part of an ongoing sequential data stream and the total data transferred as part of that stream is greater than the predetermined threshold, the transfer rate for the ongoing sequential data stream is calculated and a determination is made as to whether the transfer rate is greater than a throughput associated with a target storage device. The current I/O transfer is cached when the transfer rate is greater than the throughput associated with the target storage device, or is not cached when the transfer rate is not greater than the throughput associated with the target storage device.
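The filtering decision above reduces to a small predicate. This is a sketch under assumptions: the 1 MiB size threshold, the `should_cache` name, and the choice to cache streams below the threshold unconditionally (implied by evaluating filtering only once the threshold is reached) are all illustrative.

```python
# Rate-based cache filter: once a sequential stream passes the size
# threshold, cache its transfers only if the stream's rate exceeds the
# target device's throughput (i.e., the device alone cannot keep up).

SIZE_THRESHOLD = 1 << 20  # assumed 1 MiB threshold

def should_cache(stream_total_bytes, stream_rate, device_throughput):
    if stream_total_bytes <= SIZE_THRESHOLD:
        return True                         # below threshold: no filtering
    return stream_rate > device_throughput  # filter slow sequential streams
```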
Abstract:
Provided is a method of writing data of a storage system. The method includes causing a host to issue a first writing command; causing the host, when a queue depth of the first writing command is a first value, to store the first writing command in an entry which is assigned in advance and is included in a cache; causing the host to generate a writing completion signal for the first writing command; and causing the host to issue a second writing command.
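The write path above might look like the following toy model: when the first writing command arrives at the designated queue depth, the host stores it in a pre-assigned cache entry and generates the completion signal immediately, then remains free to issue the next command. The `FAST_QUEUE_DEPTH` value and class names are assumptions for illustration.

```python
# Sketch of the queue-depth-gated write path. A command at the "first
# value" queue depth is parked in an entry assigned in advance, and the
# host signals write completion without waiting on the storage device.

FAST_QUEUE_DEPTH = 1  # assumed "first value" of the queue depth

class Host:
    def __init__(self):
        self.cache_entry = None   # entry assigned in advance in the cache
        self.completed = []       # completion signals generated so far

    def issue_write(self, command, queue_depth):
        if queue_depth == FAST_QUEUE_DEPTH and self.cache_entry is None:
            self.cache_entry = command       # store in the assigned entry
        self.completed.append(command)       # completion signal (early for
                                             # the cached path, simplified
                                             # for the normal path here)
```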
Abstract:
According to one embodiment, filtering cached input/output (I/O) data includes receiving a current I/O transfer that is part of an ongoing data stream, and evaluating whether to filter ongoing data streams once the data stream reaches a particular size threshold. The transfer rate for the ongoing data stream may be calculated and a determination is made as to whether the transfer rate is greater than a throughput associated with a target storage device. The current I/O transfer is cached if the transfer rate is greater than the throughput associated with the target storage device, or is not cached if the transfer rate is not greater than the throughput associated with the target storage device. The current I/O transfer may also be cached if the transfer rate is less than or equal to the throughput associated with the target storage device and the I/O transfer is a write I/O transfer.
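This embodiment differs from the earlier filter in its write path: a slow stream's transfers may still be cached when they are writes. A sketch of that variant, with the threshold and names again assumed for illustration:

```python
# Rate-based filter with a write-path exception: slow sequential streams
# bypass the cache for reads, but write I/O may still be cached.

SIZE_THRESHOLD = 1 << 20  # assumed size threshold

def should_cache_io(total_bytes, rate, device_throughput, is_write):
    if total_bytes <= SIZE_THRESHOLD:
        return True                   # below threshold: no filtering
    if rate > device_throughput:
        return True                   # device can't keep up: cache
    return is_write                   # slow stream: cache only writes
```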
Abstract:
An invention is provided for handling target disk access requests during disk defragmentation in a solid state drive caching environment. The invention includes detecting a request to access a target storage device. In response, data associated with the request is written to the target storage device without writing the data to the caching device, with the proviso that the request is a write request. In addition, the invention includes reading data associated with the request and marking the data associated with the request stored in the caching device for discard, with the proviso that the request is a read request and the data associated with the request is stored on the caching device. Data marked for discard is discarded from the caching device when time permits, for example, upon completion of disk defragmentation.
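The defragmentation-time policy above can be sketched as a small cache wrapper: writes bypass the cache and go straight to the target device, cache hits on reads are served but marked for discard, and marked entries are dropped later when time permits. The `Cache` class and the dict-based device model are illustrative assumptions.

```python
# Sketch of target-disk access handling during defragmentation. The target
# storage device is modeled as a dict keyed by logical block address (LBA).

class Cache:
    def __init__(self):
        self.entries = {}     # lba -> cached data
        self.discard = set()  # entries marked for discard

    def handle_during_defrag(self, op, lba, data, target):
        if op == "write":
            target[lba] = data            # write through, bypassing the cache
        elif op == "read":
            if lba in self.entries:
                value = self.entries[lba]
                self.discard.add(lba)     # served from cache: mark for discard
                return value
            return target.get(lba)

    def flush_discards(self):
        """Discard marked data when time permits, e.g. after defrag completes."""
        for lba in self.discard:
            self.entries.pop(lba, None)
        self.discard.clear()
```

Deferring the actual discard keeps the defragmentation path fast while still preventing stale cached copies of relocated blocks from being served afterward.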
Abstract:
A computing system includes: a host processor configured to: determine a compression possibility based on a data type; compress data based on the compression possibility; determine a caching possibility based on the data; execute a batch write request including multiple instances of a write request based on the caching possibility, a store capacity meeting or exceeding a store threshold, or a combination thereof; and a nonvolatile memory, coupled to the host processor, configured to store the data based on the batch write request.
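The host-processor flow above might be modeled as follows. The compressible type set, the store threshold, and the use of `zlib` for compression are all assumptions for illustration, not the patent's specifics.

```python
# Sketch of the batch-write flow: data is compressed when its type allows,
# cacheable writes are deferred and issued as one batch once the pending
# store meets the threshold, and non-cacheable writes go straight through.

import zlib

COMPRESSIBLE_TYPES = {"text", "log"}  # assumed compression possibility by type
STORE_THRESHOLD = 3                   # assumed store threshold (pending writes)

class HostProcessor:
    def __init__(self, nvm):
        self.nvm = nvm       # nonvolatile memory, modeled as a list of blobs
        self.pending = []    # writes deferred for the next batch request

    def write(self, data: bytes, data_type: str, cacheable: bool):
        if data_type in COMPRESSIBLE_TYPES:   # compression possibility
            data = zlib.compress(data)
        if cacheable:                         # caching possibility
            self.pending.append(data)
            if len(self.pending) >= STORE_THRESHOLD:
                self.flush()                  # batch write request
        else:
            self.nvm.append(data)             # write through immediately

    def flush(self):
        """Issue one batch write containing multiple pending write requests."""
        self.nvm.extend(self.pending)
        self.pending.clear()
```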