Method and apparatus for increasing the accuracy of predicting future IO operations on a storage system

    Publication No.: US11315028B2

    Publication Date: 2022-04-26

    Application No.: US17010945

    Filing Date: 2020-09-03

    Abstract: A method of increasing the accuracy of predicting future IO operations on a storage system includes creating a snapshot of a production volume, linking the snapshot to a thin device, mounting the thin device in a cloud tethering subsystem, and tagging the thin device to identify it as being used by the cloud tethering subsystem. When the cloud tethering subsystem issues data read operations on the tagged thin device, the read operations are executed by a front-end adapter of the storage system to forward the associated data to a cloud repository. The cache manager, however, does not use information about read operations on tagged thin devices when predicting future IO operations on the cache, so that movement of snapshots to the cloud repository does not skew the algorithms the cache manager uses to perform cache management.
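
    Below is a minimal Python sketch of the read-filtering behavior described in the abstract; the class, method, and device names are hypothetical illustrations, not the patent's implementation.

        class CacheManager:
            """Tracks read history for IO prediction, skipping tagged thin devices."""

            def __init__(self):
                self.read_history = []       # reads used to predict future IO on the cache
                self.tagged_devices = set()  # thin devices owned by the cloud tethering subsystem

            def tag_device(self, device_id):
                # Mark the thin device so its reads never train the predictor.
                self.tagged_devices.add(device_id)

            def on_read(self, device_id, track):
                data = self._fetch(device_id, track)  # the IO itself is always serviced
                if device_id not in self.tagged_devices:
                    # Only reads on untagged devices feed the prediction algorithm,
                    # so snapshot movement to the cloud cannot skew cache management.
                    self.read_history.append((device_id, track))
                return data

            def _fetch(self, device_id, track):
                return (device_id, track)  # stand-in for a real cache/disk read

        cm = CacheManager()
        cm.tag_device("snapshot-thin-dev")
        cm.on_read("snapshot-thin-dev", 7)   # serviced, but not recorded
        cm.on_read("production-vol", 7)      # serviced and recorded
        assert cm.read_history == [("production-vol", 7)]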

    Reducing overhead of managing cache areas

    Publication No.: US10579529B2

    Publication Date: 2020-03-03

    Application No.: US15964315

    Filing Date: 2018-04-27

    Abstract: Maintaining multiple cache areas in a storage device having multiple processors includes loading data from a specific portion of non-volatile storage into a local cache slot in response to a specific processor of a first subset of the processors performing a read operation to the specific portion of non-volatile storage, where the local cache slot is accessible to the first subset of the processors and is inaccessible to a second subset of the processors that is different than the first subset, and includes converting the local cache slot into a global cache slot in response to one of the processors performing a write operation to the specific portion of non-volatile storage, where the global cache slot is accessible to both the first and the second subsets of the processors. Different ones of the processors may be placed on different directors.
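
    The local-to-global slot conversion can be pictured with a short Python sketch; the data structures and the GLOBAL sentinel are assumptions made for illustration, not the patent's implementation.

        GLOBAL = "all-processors"  # sentinel: slot accessible to every subset

        def load_from_storage(address):
            return f"data@{address}"  # stand-in for reading non-volatile storage

        class CacheSlots:
            def __init__(self):
                self.slots = {}  # address -> [owner_subset_or_GLOBAL, data]

            def read(self, address, subset):
                if address not in self.slots:
                    # A read loads a local slot visible only to the reading subset.
                    self.slots[address] = [subset, load_from_storage(address)]
                owner, data = self.slots[address]
                assert owner in (subset, GLOBAL), "slot is local to another subset"
                return data

            def write(self, address, data):
                # A write converts (or creates) the slot as global:
                # accessible to the first and second subsets alike.
                self.slots[address] = [GLOBAL, data]

        cache = CacheSlots()
        cache.read(5, subset="first")   # local slot, visible only to "first"
        cache.write(5, "updated")       # converted to a global slot on write
        cache.read(5, subset="second")  # now accessible to the second subset too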

    Destaging multiple cache slots in a single back-end track in a RAID subsystem

    Publication No.: US11526447B1

    Publication Date: 2022-12-13

    Application No.: US17363167

    Filing Date: 2021-06-30

    Abstract: A data service layer running on a storage director node generates a request to destage host data from a plurality of cache slots into a single back-end track. The destage request includes pointers to the addresses of the cache slots and indicates the order in which the host application data in the cache slots is to be included in the back-end track. A back-end redundant array of independent drives (RAID) subsystem running on a drive adapter responds to the request by calculating parity information using the host application data in the cache slots. The back-end RAID subsystem assembles the single back-end track comprising the host application data from the cache slots of the request and destages the single back-end track to a non-volatile drive in a single back-end input-output (IO) operation.
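
    A brief Python sketch of the ordered destage request, with XOR used as a stand-in for the parity calculation of an XOR-based RAID scheme; all names are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class DestageRequest:
            # Pointers (addresses) to cache slots, in the order their host
            # data must appear inside the single back-end track.
            slot_addresses: list

        def xor_parity(chunks):
            # Parity across equal-length chunks, as in XOR-based RAID schemes.
            parity = bytearray(len(chunks[0]))
            for chunk in chunks:
                for i, byte in enumerate(chunk):
                    parity[i] ^= byte
            return bytes(parity)

        def destage(request, cache, drive):
            chunks = [cache[addr] for addr in request.slot_addresses]
            parity = xor_parity(chunks)
            back_end_track = b"".join(chunks)        # assemble one back-end track
            drive.append((back_end_track, parity))   # single back-end IO operation

        cache = {0: b"AAAA", 1: b"BBBB", 2: b"CCCC"}
        drive = []
        destage(DestageRequest(slot_addresses=[2, 0, 1]), cache, drive)
        assert drive[0][0] == b"CCCCAAAABBBB"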

    Synchronous Destage of Write Data from Shared Global Memory to Back-end Storage Resources

    Publication No.: US20220229589A1

    Publication Date: 2022-07-21

    Application No.: US17151794

    Filing Date: 2021-01-19

    Abstract: A synchronous destage process is used to move data from shared global memory to back-end storage resources. The synchronous destage process is implemented using a client-server model between a data service layer (client) and the back-end disk array of a storage system (server). The data service layer initiates a synchronous destage operation by requesting that the back-end disk array move data from one or more slots of global memory to back-end storage resources. The back-end disk array services the request and notifies the data service layer of the status of the destage operation, e.g., a destage success or a destage failure. If the destage operation succeeds, the data service layer updates metadata to identify the location of the data on back-end storage resources. If the destage operation fails, the data service layer re-initiates the destage process by issuing a subsequent destage request to the back-end disk array.
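
    The client-server retry loop can be sketched in Python as follows; the class names, status strings, and the fail_next test hook are assumptions, not the patent's interfaces.

        class BackEndDiskArray:
            """Server side: moves slots and reports status synchronously."""

            def __init__(self):
                self.storage = {}
                self.fail_next = False  # test hook to simulate a destage failure

            def destage(self, slots):
                if self.fail_next:
                    self.fail_next = False
                    return "failure", None
                location = {slot: f"disk:{slot}" for slot in slots}
                self.storage.update(location)
                return "success", location

        def synchronous_destage(back_end, metadata, slots, max_attempts=3):
            # Client side (data service layer): request, then act on the status.
            for _ in range(max_attempts):
                status, location = back_end.destage(slots)
                if status == "success":
                    metadata.update(location)  # record where the data now lives
                    return True
                # Failure: re-initiate by issuing a subsequent destage request.
            return False

        array = BackEndDiskArray()
        array.fail_next = True
        metadata = {}
        assert synchronous_destage(array, metadata, ["slot-1", "slot-2"])
        assert metadata["slot-1"] == "disk:slot-1"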

    Group-based RAID-1 implementation in multi-RAID configured storage array

    Publication No.: US11372562B1

    Publication Date: 2022-06-28

    Application No.: US17225170

    Filing Date: 2021-04-08

    Abstract: A storage system that supports multiple RAID levels presents storage objects with front-end tracks corresponding to back-end tracks on non-volatile drives, and accesses the drives using a single type of back-end allocation unit that is larger than a back-end track. When the number of members of a protection group of a RAID level does not align with the back-end allocation unit, multiple back-end tracks are grouped and accessed using a single IO. The number of back-end tracks in a group is selected to align with the back-end allocation unit size. If the front-end tracks are of variable size, then front-end tracks may be destaged into a smaller number of grouped back-end tracks in a single IO.
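
    The group-size selection reduces to simple alignment arithmetic, sketched below in Python with hypothetical track and allocation-unit sizes.

        def tracks_per_group(allocation_unit_bytes, back_end_track_bytes):
            # Pick the group size so grouped tracks align exactly with the
            # single back-end allocation unit and can be accessed in one IO.
            assert allocation_unit_bytes % back_end_track_bytes == 0
            return allocation_unit_bytes // back_end_track_bytes

        # Hypothetical sizes: a 1 MiB allocation unit holding 128 KiB
        # back-end tracks gives groups of 8 tracks per single IO.
        assert tracks_per_group(1024 * 1024, 128 * 1024) == 8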

    Automated recovery from RAID double failure

    Publication No.: US11321178B1

    Publication Date: 2022-05-03

    Application No.: US17361401

    Filing Date: 2021-06-29

    Abstract: Occurrence of a RAID double failure in a slice of a RAID protection group (a failed slice) renders data stored in the back-end tracks of the failed slice vulnerable to loss. When a RAID double failure is detected, a new slice is added to the RAID protection group. Front-end tracks that map to the good back-end tracks of the failed slice are moved from the back-end tracks of the failed slice to the back-end tracks of the newly added slice. Any front-end tracks that map to the bad back-end tracks of the failed slice are made write-pending and written to corresponding back-end tracks of the newly added slice. Front-end tracks that map to the bad back-end tracks may be made write-pending in connection with a host write operation, by reading the front-end tracks from a local backup, or by reading them from a remote backup location.
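
    A compact Python sketch of the recovery flow, modeling slices as plain dictionaries; the track names and backup mapping are hypothetical.

        def recover_double_failure(failed_slice, bad_tracks, backup):
            # Returns the replacement slice's contents plus the set of
            # front-end tracks marked write-pending for later destage.
            new_slice = {}
            write_pending = {}
            for fe_track, data in failed_slice.items():
                if fe_track not in bad_tracks:
                    new_slice[fe_track] = data  # good tracks move directly
                else:
                    # Bad tracks cannot be read from the failed slice; restore
                    # the data from a local or remote backup and mark it
                    # write-pending so it is written to the new slice.
                    write_pending[fe_track] = backup[fe_track]
            return new_slice, write_pending

        failed = {"t0": b"ok", "t1": None, "t2": b"ok"}
        good, pending = recover_double_failure(
            failed, bad_tracks={"t1"}, backup={"t1": b"restored"})
        assert good == {"t0": b"ok", "t2": b"ok"}
        assert pending == {"t1": b"restored"}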

    Multi-BCRC RAID protection for CKD

    Publication No.: US11256447B1

    Publication Date: 2022-02-22

    Application No.: US17065558

    Filing Date: 2020-10-08

    Abstract: A storage array that presents to hosts a logical storage device accessed using front-end tracks, and that accesses tangible managed drives using back-end tracks, locates multiple front-end tracks in individual back-end tracks. Error-correcting codes are used to identify the different front-end tracks in a back-end track when the back-end track is copied from the managed drives into storage array memory. CKD front-end tracks can be split into multiple partial CKD front-end tracks that are located at contiguous address space in different back-end tracks. The front-end tracks that are located in a particular back-end track may be selected to reduce or minimize unused space. The front-end tracks in a back-end track may be logically stored on different production volumes.
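
    A Python sketch of per-front-end-track block codes, using zlib.crc32 as a stand-in for the patent's BCRC codes; the packing format shown is an assumption for illustration.

        import zlib

        def pack_back_end_track(front_end_tracks):
            # Store each front-end track with its own block CRC so the tracks
            # can be told apart and validated when the back-end track is read.
            return [(zlib.crc32(data), data) for data in front_end_tracks]

        def unpack_back_end_track(back_end_track):
            tracks = []
            for crc, data in back_end_track:
                if zlib.crc32(data) != crc:
                    raise ValueError("per-track CRC mismatch on read-back")
                tracks.append(data)
            return tracks

        packed = pack_back_end_track([b"CKD-part-1", b"CKD-part-2", b"fixed-block"])
        assert unpack_back_end_track(packed) == [b"CKD-part-1", b"CKD-part-2", b"fixed-block"]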

    Slice Memory Control

    Publication No.: US20210334026A1

    Publication Date: 2021-10-28

    Application No.: US16859183

    Filing Date: 2020-04-27

    Abstract: Embodiments of the present disclosure relate to managing communications between slices on a storage device engine. Shared slice memory of the storage device engine is provisioned for use by each of its slices. The shared slice memory is a portion of the total storage device engine memory, and each slice's access to the shared portion is controlled.
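
    A minimal Python sketch of provisioning and access control for the shared slice memory portion; the fraction-based split and the grant API are assumptions for illustration.

        class EngineMemory:
            def __init__(self, total_bytes, shared_fraction):
                # A portion of total engine memory is provisioned as shared
                # slice memory; the rest stays private to each slice.
                self.shared_bytes = int(total_bytes * shared_fraction)
                self.grants = {}  # slice_id -> bytes granted in the shared portion

            def grant(self, slice_id, nbytes):
                # Controlled access: a slice only gets shared memory explicitly
                # granted to it, and grants never exceed the shared portion.
                if sum(self.grants.values()) + nbytes > self.shared_bytes:
                    raise MemoryError("shared slice memory exhausted")
                self.grants[slice_id] = self.grants.get(slice_id, 0) + nbytes

        mem = EngineMemory(total_bytes=16 * 2**30, shared_fraction=0.25)
        mem.grant("slice-A", 2 * 2**30)
        mem.grant("slice-B", 2 * 2**30)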

    Placement of local cache areas

    Publication No.: US10795814B2

    Publication Date: 2020-10-06

    Application No.: US15964290

    Filing Date: 2018-04-27

    Abstract: Maintaining multiple cache areas in a storage device having multiple processors includes loading data into a first local cache in response to a first processor of a first subset of the processors performing a read operation to a specific portion of non-volatile storage, where the first local cache is accessible to the first subset of the processors and is inaccessible to other processors, loading data into a second local cache in response to a second processor of a second subset of the processors performing a read operation to the specific portion of non-volatile storage, where the second local cache is accessible to the second subset of the processors and is inaccessible to other processors, and loading data into a global cache in response to one of the processors performing a write operation to the specific portion of non-volatile storage, where the global cache is accessible to all of the processors.
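
    The placement policy can be sketched in Python as one local cache area per processor subset plus a global cache for writes; all names here are illustrative assumptions.

        def load(address):
            return f"data@{address}"  # stand-in for non-volatile storage

        class CachePlacement:
            def __init__(self, subsets):
                self.local = {name: {} for name in subsets}  # one local area per subset
                self.global_cache = {}                       # visible to all processors

            def read(self, subset, address):
                # A read by any processor in a subset lands in that subset's
                # local cache, which other subsets cannot access.
                area = self.local[subset]
                if address not in area:
                    area[address] = load(address)
                return area[address]

            def write(self, address, data):
                # A write loads the data into the global cache instead.
                self.global_cache[address] = data

        caches = CachePlacement(subsets=["first", "second"])
        caches.read("first", 42)    # cached in the first subset's local area only
        caches.read("second", 42)   # separately cached for the second subset
        caches.write(42, "new")     # now globally visible to all processors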
