Method and apparatus for selective compression of data during initial synchronization of mirrored storage resources

    Publication No.: US11347409B1

    Publication Date: 2022-05-31

    Application No.: US17146805

    Application Date: 2021-01-12

    Abstract: A primary storage system appends a red-hot data indicator to each track of data transmitted on a remote data facility during an initial synchronization state. The red-hot data indicator indicates, on a track-by-track basis, whether the data associated with that track should be stored as compressed or uncompressed data by the backup storage system. The red-hot data indicator may be obtained from the primary storage system's extent-based red-hot data map. If the red-hot data indicator indicates that the track should remain uncompressed, or if the track is locally identified as red-hot data, the backup storage system stores the track as uncompressed data. If the red-hot data indicator indicates that the track should be compressed, the backup storage system compresses the track and stores the track as compressed data. After the initial synchronization process has completed, red-hot data indicators are no longer appended to tracks by the primary storage system.
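
    The selective-compression decision described in this abstract can be pictured with a short sketch. The Python fragment below is illustrative only; the names (Track, store_track, the zlib-based compressor, local_red_hot_ids) are assumptions, not details from the patent. It shows a backup system storing a replicated track uncompressed when either the primary's appended indicator or the backup's own red-hot map flags the track, and compressed otherwise.

        # Illustrative sketch only; Track, store_track and the zlib choice are
        # assumptions, not details taken from the patent.
        import zlib
        from dataclasses import dataclass

        @dataclass
        class Track:
            track_id: int
            data: bytes
            red_hot: bool  # indicator appended by the primary during initial sync

        def store_track(track: Track, local_red_hot_ids: set, store: dict) -> None:
            # Keep red-hot (frequently accessed) tracks uncompressed so later reads
            # avoid decompression overhead; compress everything else.
            if track.red_hot or track.track_id in local_red_hot_ids:
                store[track.track_id] = ("raw", track.data)
            else:
                store[track.track_id] = ("zlib", zlib.compress(track.data))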

    Method and Apparatus for Increasing the Accuracy of Predicting Future IO Operations on a Storage System

    Publication No.: US20220067549A1

    Publication Date: 2022-03-03

    Application No.: US17010945

    Application Date: 2020-09-03

    Abstract: A method of increasing the accuracy of predicting future IO operations on a storage system includes creating a snapshot of a production volume, linking the snapshot to a thin device, mounting the thin device in a cloud tethering subsystem, and tagging the thin device to identify the thin device as being used by the cloud tethering subsystem. When data read operations are issued by the cloud tethering subsystem on the tagged thin device, the data read operations are executed by a front-end adapter of the storage system to forward data associated with the data read operations to a cloud repository. The cache manager, however, does not use information about data read operations on tagged thin devices in connection with predicting future IO operations on the cache, so that movement of snapshots to the cloud repository does not skew the algorithms being used by the cache manager to perform cache management.
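
    As a rough illustration of the read-filtering idea, the sketch below uses hypothetical class and method names (CacheManager, tag_device, record_read), not the claimed design. It shows a cache manager that still serves reads on tagged thin devices but omits them from the access history that feeds its prediction of future IO.

        # Hypothetical sketch: reads on thin devices tagged for the cloud
        # tethering subsystem are excluded from the prediction history.
        class CacheManager:
            def __init__(self):
                self.access_history = []     # feeds prefetch / cache-prediction logic
                self.tagged_devices = set()  # thin devices used by cloud tethering

            def tag_device(self, device_id: str) -> None:
                self.tagged_devices.add(device_id)

            def record_read(self, device_id: str, lba: int) -> None:
                # Snapshot movement to the cloud repository should not skew the
                # prediction algorithms, so tagged-device reads are not recorded.
                if device_id in self.tagged_devices:
                    return
                self.access_history.append((device_id, lba))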

    Maintaining multiple cache areas
    Invention Grant

    Publication No.: US10789168B2

    Publication Date: 2020-09-29

    Application No.: US15964264

    Application Date: 2018-04-27

    Abstract: Maintaining multiple cache areas in a storage device having multiple processors includes loading data from a specific portion of non-volatile storage into a local cache area in response to a specific processor of a first subset of the processors performing a read operation to the specific portion of non-volatile storage, where the local cache area is accessible to the first subset of the processors and is inaccessible to a second subset of the processors that is different from the first subset of the processors, and includes loading data from the specific portion of non-volatile storage into a global cache area in response to one of the processors performing a write operation to the specific portion of non-volatile storage, where the global cache area is accessible to the first subset of the processors and to the second subset of the processors. Different processors may be placed on different directors.
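
    The read/write routing can be summarized with a minimal sketch. The structure below is an assumption introduced for illustration (per-subset dictionaries standing in for local cache areas, one shared dictionary for the global area) and is not the storage device's actual cache layout.

        # Minimal sketch: reads populate a per-subset local area, writes populate
        # the shared global area. All names are illustrative assumptions.
        class CacheAreas:
            def __init__(self, num_subsets: int):
                self.local = [dict() for _ in range(num_subsets)]  # one area per processor subset
                self.global_area = dict()                          # visible to every subset

            def on_read(self, subset: int, extent: int, data: bytes) -> None:
                # Data read by a processor in `subset` is cached only in that
                # subset's local area; other subsets cannot see it.
                self.local[subset][extent] = data

            def on_write(self, extent: int, data: bytes) -> None:
                # Written data must be visible to all subsets, so it goes global.
                self.global_area[extent] = data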

    Techniques performed in connection with an insufficient resource level when processing write data

    Publication No.: US10776290B1

    Publication Date: 2020-09-15

    Application No.: US16568576

    Application Date: 2019-09-12

    Abstract: Techniques for processing I/O operations include: determining whether a current amount of unused physical storage is greater than a threshold; and responsive to determining the current amount of unused physical storage is greater than the threshold, performing normal write processing, and otherwise performing alternative write processing. The alternative write processing includes: initializing a counter; determining whether a physical storage allocation is needed or potentially needed for a write I/O operation; and responsive to determining that no physical storage allocation is needed for the write I/O operation, performing the normal write processing. Responsive to determining that a physical storage allocation is needed or potentially needed for the write I/O operation, determining a first amount of one or more credits needed to service the write I/O operation; and responsive to determining the counter does not include at least the first amount of one or more credits, failing the write I/O operation.
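
    The gist of the credit-gated write path can be sketched as follows. The threshold check, the credit accounting, and the helper credits_needed are assumptions made for illustration; they are not the claimed technique's exact terms.

        # Hedged sketch of the alternative write path: only writes that need new
        # physical storage consume credits, and they fail when credits run out.
        def process_write(write, free_capacity: int, threshold: int, credits: int):
            if free_capacity > threshold:
                return "normal", credits            # normal write processing
            needed = credits_needed(write)          # hypothetical helper
            if needed == 0:
                return "normal", credits            # no allocation needed
            if credits < needed:
                return "fail", credits              # fail the write I/O operation
            return "normal", credits - needed       # consume credits and proceed

        def credits_needed(write) -> int:
            # Assumption: one credit per new backend allocation the write may require.
            return getattr(write, "new_allocations", 0)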

    PLACEMENT OF LOCAL CACHE AREAS
    Invention Application

    Publication No.: US20190332534A1

    Publication Date: 2019-10-31

    Application No.: US15964290

    Application Date: 2018-04-27

    Abstract: Maintaining multiple cache areas in a storage device having multiple processors includes loading data into a first local cache in response to a first processor of a first subset of the processors performing a read operation to a specific portion of non-volatile storage, where the first local cache is accessible to the first subset of the processors and is inaccessible to other processors, loading data into a second local cache in response to a second processor of a second subset of the processors performing a read operation to the specific portion of non-volatile storage, where the second local cache is accessible to the second subset of the processors and is inaccessible to other processors, and loading data into a global cache in response to one of the processors performing a write operation to the specific portion of non-volatile storage, where the global cache is accessible to all the processors.
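
    To complement the cache-area layout sketched earlier, the fragment below illustrates a plausible read path over such areas: a processor first checks its own subset's local cache, then the global cache, and only then reads from non-volatile storage, populating its local area on the miss. The function and parameter names are assumptions, not the claimed implementation.

        # Illustrative read path over hypothetical local and global cache areas.
        def read(extent: int, subset: int, local_caches: list, global_cache: dict,
                 backend_read) -> bytes:
            if extent in local_caches[subset]:
                return local_caches[subset][extent]   # hit in this subset's local area
            if extent in global_cache:
                return global_cache[extent]           # hit in the shared global area
            data = backend_read(extent)               # miss: read non-volatile storage
            local_caches[subset][extent] = data       # reads populate the local area
            return data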

    MAINTAINING MULTIPLE CACHE AREAS
    Invention Application

    Publication No.: US20190332533A1

    Publication Date: 2019-10-31

    Application No.: US15964264

    Application Date: 2018-04-27

    Abstract: Maintaining multiple cache areas in a storage device having multiple processors includes loading data from a specific portion of non-volatile storage into a local cache area in response to a specific processor of a first subset of the processors performing a read operation to the specific portion of non-volatile storage, where the local cache area is accessible to the first subset of the processors and is inaccessible to a second subset of the processors that is different from the first subset of the processors, and includes loading data from the specific portion of non-volatile storage into a global cache area in response to one of the processors performing a write operation to the specific portion of non-volatile storage, where the global cache area is accessible to the first subset of the processors and to the second subset of the processors. Different processors may be placed on different directors.
