Host hinting for smart disk allocation to improve sequential access performance

    Publication No.: US10929032B1

    Publication Date: 2021-02-23

    Application No.: US15383191

    Filing Date: 2016-12-19

    Abstract: In a computer network in which a data storage array maintains data for at least one host computer, the host computer provides sequential access hints to the storage array. A monitoring program monitors a host application running on the host computer to detect generation of data that is likely to be sequentially accessed by the host application along with associated data. When the host application writes such data to a thinly provisioned logical production volume the monitoring program prompts a multipath IO driver to generate the sequential access hint. In response to the hint the storage array allocates a plurality of sequential storage spaces on a hard disk drive for the data and the associated data. The allocated storage locations on the hard disk drive are written in a spatial sequence that matches the spatial sequence in which the storage locations on the production volume are written.
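A minimal sketch of the hint-driven allocation idea in the abstract: when the array receives a sequential access hint, it reserves consecutive disk blocks so the on-disk write order matches the volume's logical write order. All names and the free-list model are illustrative assumptions, not taken from the patent.

```python
class DiskAllocator:
    """Toy block allocator for one hard disk drive (illustrative only)."""

    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))  # free block numbers, in disk order

    def allocate(self, count, sequential_hint=False):
        if sequential_hint:
            # On a hint, find `count` consecutive free blocks so that the
            # spatial sequence on disk matches the sequence in which the
            # production volume's addresses are written.
            for i in range(len(self.free) - count + 1):
                run = self.free[i:i + count]
                if run[-1] - run[0] == count - 1:
                    del self.free[i:i + count]
                    return run
        # Without a hint (or no contiguous run): first-fit, possibly scattered.
        picked, self.free = self.free[:count], self.free[count:]
        return picked
```

With the hint, a later sequential read sweeps the disk head in one direction instead of seeking between scattered blocks.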

    Metadata paging mechanism tuned for variable write-endurance flash

    Publication No.: US10318180B1

    Publication Date: 2019-06-11

    Application No.: US15384445

    Filing Date: 2016-12-20

    Abstract: A storage array uses both high endurance SSDs and low endurance SSDs for metadata paging. Wear cost values are calculated for each page of metadata in cache. The wear cost values are used to select pages for swapping out of the cache to the SSDs. The wear cost values may be calculated as a function of a first term that is indicative of whether the respective page of metadata will be written to high endurance or low endurance SSDs; a second term that is indicative of the likelihood that data associated with the respective page of metadata will be changed by a write; and a third term that is indicative of the age of the respective page of metadata in the cache since its most recent use. The terms may be estimated and independently weighted. The portion of cache allocated for the metadata may be increased to avoid exceeding DWPD targets.
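The three-term weighted cost can be sketched as follows. The field names, weights, and the use of inverse age as the recency term are assumptions for illustration; the patent only states that the terms are estimated and independently weighted.

```python
def wear_cost(page, w_endurance=1.0, w_write=1.0, w_recency=1.0):
    """Illustrative wear cost for one cached metadata page."""
    # Term 1: swapping out to low-endurance flash costs wear; high-endurance is cheap.
    endurance = 0.0 if page["target_high_endurance"] else 1.0
    # Term 2: likelihood the data this page describes will change due to a write
    # (a soon-to-change page would quickly be paged back in and rewritten).
    write_prob = page["write_probability"]
    # Term 3: recency of use; modeling the stated age term as inverse age is an
    # assumption -- recently used pages score high and are kept in cache.
    recency = 1.0 / (1.0 + page["age_since_last_use"])
    return w_endurance * endurance + w_write * write_prob + w_recency * recency

def select_victim(pages):
    """Swap out the cached metadata page with the lowest wear cost."""
    return min(pages, key=wear_cost)
```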

    Techniques for providing I/O hints using I/O flags

    Publication No.: US11409666B2

    Publication Date: 2022-08-09

    Application No.: US16692145

    Filing Date: 2019-11-22

    Abstract: Techniques for processing I/O operations may include: issuing, by a process of an application on a host, an I/O operation; determining, by a driver on the host, that the I/O operation is a read operation directed to a logical device used as a log to log writes performed by the application, wherein the read operation reads first data stored at one or more logical addresses of the logical device; storing, by the driver, an I/O flag in the I/O operation, wherein the I/O flag has a first flag value denoting an expected read frequency associated with the read operation; sending the I/O operation from the host to the data storage system; and performing first processing of the I/O operation on the data storage system, wherein said first processing includes using the first flag value in connection with caching the first data in a cache of the data storage system.
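The host-side tagging and array-side caching decision might look like the sketch below. The flag value, device-role table, and "skip caching rarely-read log data" policy are all assumptions; the patent specifies only that the flag encodes an expected read frequency used in connection with caching.

```python
from dataclasses import dataclass

READ_FREQ_LOW = 0x01  # illustrative flag value, not an actual wire format

# Devices used by the application as write logs (hypothetical configuration).
LOG_DEVICES = {"log_dev"}

@dataclass
class IOOperation:
    op: str        # "read" or "write"
    device: str    # logical device name
    lba: int
    flags: int = 0

def tag_io(io):
    """Host driver: reads of a log device are expected to be infrequent."""
    if io.op == "read" and io.device in LOG_DEVICES:
        io.flags |= READ_FREQ_LOW
    return io

def should_cache(io):
    """Storage array: avoid polluting the cache with rarely-read data."""
    return not (io.flags & READ_FREQ_LOW)
```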

    Local cached data coherency in host devices using remote direct memory access

    Publication No.: US11366756B2

    Publication Date: 2022-06-21

    Application No.: US16846485

    Filing Date: 2020-04-13

    Abstract: A first host device establishes connectivity to a logical storage device of a storage system. The first host device obtains from the storage system host connectivity information identifying at least a second host device that has also established connectivity to the logical storage device, caches one or more extents of the logical storage device in a memory of the first host device, and maintains local cache metadata in the first host device regarding the one or more extents of the logical storage device cached in the memory of the first host device. In conjunction with processing of a write operation of the first host device involving at least one of the one or more cached extents of the logical storage device, the first host device invalidates corresponding entries in the local cache metadata of the first host device and in local cache metadata maintained in the second host device.
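A toy model of the invalidation flow: on a write, the writing host invalidates its own cache metadata entry and the corresponding entry on each peer host. The patent performs the remote update via remote direct memory access; here a direct method call stands in for the RDMA write, and all class and field names are illustrative.

```python
class HostCache:
    """Per-host extent cache with local cache metadata (illustrative model)."""

    def __init__(self, name):
        self.name = name
        self.metadata = {}  # extent id -> valid flag
        self.peers = []     # other hosts connected to the same logical device

    def cache_extent(self, extent):
        # Cache the extent locally and record it as valid.
        self.metadata[extent] = True

    def write(self, extent):
        # A write makes every cached copy of this extent stale: invalidate the
        # local entry, then each peer's entry (RDMA in the patent, a call here).
        self._invalidate(extent)
        for peer in self.peers:
            peer._invalidate(extent)

    def _invalidate(self, extent):
        if extent in self.metadata:
            self.metadata[extent] = False
```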

    MITIGATING IO PROCESSING PERFORMANCE IMPACTS IN AUTOMATED SEAMLESS MIGRATION

    Publication No.: US20210357129A1

    Publication Date: 2021-11-18

    Application No.: US15931849

    Filing Date: 2020-05-14

    Abstract: An apparatus comprises a host device configured to communicate over a network with source and target storage systems. The host device, in conjunction with migration of a logical storage device from the source storage system to the target storage system, is further configured to obtain from the target storage system watermark information characterizing progress of the migration of the logical storage device from the source storage system to the target storage system, and to determine whether a given input-output operation is to be sent to the source storage system or the target storage system based at least in part on the watermark information obtained from the target storage system. The watermark information illustratively identifies a particular logical address in the logical storage device up to and including which the corresponding data has already been copied from the source storage system to the target storage system in conjunction with the migration.
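The routing rule reduces to one comparison against the watermark. A minimal sketch, with hypothetical names; the real decision may weigh additional factors beyond the watermark.

```python
def route_io(io_lba, watermark_lba):
    """Route an I/O during seamless migration.

    Addresses up to and including the watermark have already been copied to
    the target system, so I/Os for them can be served there; everything above
    the watermark still lives only on the source system.
    """
    return "target" if io_lba <= watermark_lba else "source"
```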

    Efficient cache management
    Invention Grant

    Publication No.: US11169927B2

    Publication Date: 2021-11-09

    Application No.: US16692386

    Filing Date: 2019-11-22

    Abstract: A distributed cache is managed. In some embodiments, only a subset of a plurality of processing nodes may be designated as cache managers that manage the cache access history of a logical area, including having an exclusive right to control the eviction of data from cache objects of the logical area. In such embodiments, all of the processing nodes may collect cache access information, and communicate the cache access information to the cache managers. Some of the processing nodes that are not cache managers may collect cache access information from a plurality of the other non-cache managers. Each such processing node may combine this communicated cache access information with the cache access information of the processing node itself, sort the combined information per cache manager, and send the resulting sorted cache access information to the respective cache managers. The processing nodes may be arranged in a cache management hierarchy.
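The combine-and-sort step performed by non-manager nodes can be sketched as bucketing access records per cache manager before forwarding. The record shape and the modulo mapping from extent to manager are assumptions for illustration.

```python
from collections import defaultdict

def combine_and_sort(own_records, forwarded_records, manager_of):
    """Merge this node's cache access records with records forwarded by other
    non-manager nodes, sorted (bucketed) per cache manager, so each manager
    receives one consolidated batch for the logical area it owns."""
    per_manager = defaultdict(list)
    for rec in own_records + forwarded_records:
        per_manager[manager_of(rec["extent"])].append(rec)
    return dict(per_manager)
```

Nodes arranged this way form a hierarchy: leaves send records upward, intermediate nodes aggregate, and only the designated cache managers make eviction decisions.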

    DATA COMPRESSION FOR DIRECTLY CONNECTED HOST

    Publication No.: US20210216215A1

    Publication Date: 2021-07-15

    Application No.: US16742955

    Filing Date: 2020-01-15

    Abstract: Data compression is performed on a storage system for which one or more host systems have direct access to data on the storage system. The storage system may compress the data for one or more logical storage units (LSUs) having data stored thereon, and may update compression metadata associated with the LSUs and/or the data portions thereof to reflect that the data is compressed. In response to a read request for a data portion received from a host application executing on the host system, compression metadata for the data portion may be accessed. If it is determined from the compression metadata that the data portion is compressed, the data compression metadata for the data portion may be further analyzed to determine how to decompress the data portion. The data portion may be retrieved and decompressed, and the decompressed data may be returned to the requesting application.
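The metadata-driven read path might look like this sketch, using zlib's DEFLATE as a stand-in codec. The metadata layout and algorithm names are assumptions; the patent says only that the compression metadata indicates whether and how to decompress.

```python
import zlib

def read_portion(store, compression_meta, key):
    """Read path for a directly connected host: check the compression metadata
    for the data portion first, and decompress only if it says the portion is
    compressed (and with which algorithm)."""
    data = store[key]
    info = compression_meta.get(key, {})
    if info.get("compressed"):
        if info.get("algo") == "deflate":  # metadata tells us how to decompress
            return zlib.decompress(data)
        raise ValueError(f"unknown compression algorithm: {info.get('algo')}")
    return data  # stored uncompressed; return as-is
```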

    EFFICIENT CACHE MANAGEMENT
    Invention Application

    Publication No.: US20210157740A1

    Publication Date: 2021-05-27

    Application No.: US16692386

    Filing Date: 2019-11-22

    Abstract: A distributed cache is managed. In some embodiments, only a subset of a plurality of processing nodes may be designated as cache managers that manage the cache access history of a logical area, including having an exclusive right to control the eviction of data from cache objects of the logical area. In such embodiments, all of the processing nodes may collect cache access information, and communicate the cache access information to the cache managers. Some of the processing nodes that are not cache managers may collect cache access information from a plurality of the other non-cache managers. Each such processing node may combine this communicated cache access information with the cache access information of the processing node itself, sort the combined information per cache manager, and send the resulting sorted cache access information to the respective cache managers. The processing nodes may be arranged in a cache management hierarchy.

    MANAGING WRITE ACCESS TO DATA STORAGE DEVICES FOR SPONTANEOUS DE-STAGING OF CACHE

    Publication No.: US20210034533A1

    Publication Date: 2021-02-04

    Application No.: US16530065

    Filing Date: 2019-08-02

    Abstract: Writes to one or more physical storage devices may be blocked after a certain storage consumption threshold (WBT) for each physical storage device. A WBT for certain designated physical storage devices may be applied in addition to, or as an alternative to, determining and applying a user-defined background task mode threshold (UBTT) for certain designated physical storage devices. In some embodiments, the WBT and UBTT for a physical storage device designated for spontaneous de-staging may be a same threshold value. Write blocking management may include, for each designated physical storage device, blocking any writes to the designated physical storage device after a WBT for the designated physical storage device has been reached, and restoring (e.g., unblocking) writes to the designated physical storage device after storage consumption on the physical storage device has been reduced to a storage consumption threshold (WRT) lower than the WBT.
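The WBT/WRT pair forms a hysteresis loop per designated physical storage device: block writes once consumption reaches the WBT, and restore them only after consumption drops back to the lower WRT. The class below is an illustrative model; units and the unblocking policy details are assumptions.

```python
class WriteBlocker:
    """Per-device write blocking with a write-block threshold (WBT) and a
    lower write-restore threshold (WRT) -- an illustrative hysteresis model."""

    def __init__(self, capacity, wbt, wrt):
        assert wrt < wbt <= capacity
        self.capacity, self.wbt, self.wrt = capacity, wbt, wrt
        self.used = 0
        self.blocked = False

    def write(self, size):
        """Attempt a write; returns False while the device is write-blocked."""
        if self.blocked:
            return False
        self.used += size
        if self.used >= self.wbt:
            self.blocked = True  # reached the WBT: block further writes
        return True

    def free(self, size):
        """De-staging/cleanup reduces consumption; restore writes at the WRT."""
        self.used = max(0, self.used - size)
        if self.blocked and self.used <= self.wrt:
            self.blocked = False
```

Keeping the WRT below the WBT prevents the device from oscillating between blocked and unblocked at the threshold boundary.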
