-
Publication No.: US11315028B2
Publication Date: 2022-04-26
Application No.: US17010945
Filing Date: 2020-09-03
Applicant: EMC IP HOLDING COMPANY LLC
Inventor: Deepak Vokaliga , Rong Yu
Abstract: A method of increasing the accuracy of predicting future IO operations on a storage system includes creating a snapshot of a production volume, linking the snapshot to a thin device, mounting the thin device in a cloud tethering subsystem, and tagging the thin device to identify the thin device as being used by the cloud tethering subsystem. When data read operations are issued by the cloud tethering subsystem on the tagged thin device, the data read operations are executed by a front-end adapter of the storage system to forward data associated with the data read operations to a cloud repository. The cache manager, however, does not use information about data read operations on tagged thin devices in connection with predicting future IO operations on the cache, so that movement of snapshots to the cloud repository does not skew the algorithms being used by the cache manager to perform cache management.
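The following is a minimal Python sketch, not taken from the patent, of the cache-management rule this abstract describes: reads issued against thin devices tagged for the cloud tethering subsystem are excluded from the access history used to predict future IO. All class, method, and device names are hypothetical.

```python
# Hypothetical sketch: a cache manager that ignores reads against thin devices
# tagged for the cloud tethering subsystem when predicting future IO.

class CacheManager:
    def __init__(self):
        self.tagged_devices = set()   # thin devices used by cloud tethering
        self.access_history = []      # (device, track) reads used for prediction

    def tag_for_cloud_tethering(self, device_id: str) -> None:
        """Mark a thin device as belonging to the cloud tethering subsystem."""
        self.tagged_devices.add(device_id)

    def record_read(self, device_id: str, track: int) -> None:
        """Record a read for prediction, unless it is a tagged cloud-tethering read."""
        if device_id in self.tagged_devices:
            return  # snapshot-shipping traffic must not skew cache prediction
        self.access_history.append((device_id, track))

    def predict_next_tracks(self, device_id: str, window: int = 8) -> list[int]:
        """Naive prediction: prefetch the tracks following the most recent reads."""
        recent = [t for d, t in self.access_history[-window:] if d == device_id]
        return [t + 1 for t in recent]


if __name__ == "__main__":
    cm = CacheManager()
    cm.tag_for_cloud_tethering("thin-dev-42")
    cm.record_read("prod-vol-1", 100)   # counted toward prediction
    cm.record_read("thin-dev-42", 500)  # ignored: cloud tethering read
    print(cm.predict_next_tracks("prod-vol-1"))  # [101]
```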
-
Publication No.: US10579529B2
Publication Date: 2020-03-03
Application No.: US15964315
Filing Date: 2018-04-27
Applicant: EMC IP Holding Company LLC
Inventor: Venkata Khambam , Jeffrey R. Nelson , Brian Asselin , Rong Yu
IPC: G06F12/00 , G06F12/084 , G06F12/0842 , G06F12/0815
Abstract: Maintaining multiple cache areas in a storage device having multiple processors includes loading data from a specific portion of non-volatile storage into a local cache slot in response to a specific processor of a first subset of the processors performing a read operation to the specific portion of non-volatile storage, where the local cache slot is accessible to the first subset of the processors and is inaccessible to a second subset of the processors that is different than the first subset of the processors, and includes converting the local cache slot into a global cache slot in response to one of the processors performing a write operation to the specific portion of non-volatile storage, wherein the global cache slot is accessible to the first subset of the processors and to the second subset of the processors. Different ones of the processors may be placed on different directors.
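A hedged illustration of the local-to-global slot conversion described above, with invented names and a deliberately simplified model: a read miss loads data into a slot visible only to one processor subset, and a write converts the slot to a global one visible to every subset.

```python
# Illustrative sketch (not the patented implementation) of per-subset local
# cache slots that are converted to global slots on a write.
from dataclasses import dataclass, field

@dataclass
class CacheSlot:
    extent: int                                # portion of non-volatile storage cached here
    data: bytes
    scope: str = "local"                       # "local" or "global"
    owners: set = field(default_factory=set)   # processor subsets allowed to access it

class MultiAreaCache:
    def __init__(self, backing_store: dict[int, bytes]):
        self.backing = backing_store
        self.slots: dict[int, CacheSlot] = {}

    def read(self, extent: int, subset: str) -> bytes:
        slot = self.slots.get(extent)
        if slot is None:
            # Read miss: load into a local slot visible only to the reading subset.
            slot = CacheSlot(extent, self.backing[extent], "local", {subset})
            self.slots[extent] = slot
        if slot.scope == "local" and subset not in slot.owners:
            # Another subset cannot see this local slot; here it simply reads
            # storage (a fuller model would give it its own local cache area).
            return self.backing[extent]
        return slot.data

    def write(self, extent: int, data: bytes, subset: str) -> None:
        # Any write converts the slot to global so every subset sees the new data.
        self.backing[extent] = data
        self.slots[extent] = CacheSlot(extent, data, "global", {"all subsets"})
```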
-
Publication No.: US09830266B1
Publication Date: 2017-11-28
Application No.: US14156678
Filing Date: 2014-01-16
Applicant: EMC IP Holding Company LLC
Inventor: Rong Yu , Orit Levin-Michael , John W. Lefferts , Pei-Ching Hwang , Peng Yin , Yechiel Yochai , Dan Aharoni , Qun Fan , Stephen Richard Ives
IPC: G06F12/00 , G06F12/0862
CPC classification number: G06F12/0862 , G06F2212/6024
Abstract: Described are techniques for processing a data operation in a data storage system. A front-end component of the data storage system receives the data operation. In response to receiving the data operation, the front-end component performs first processing. The first processing includes determining whether the data operation is a read operation requesting to read a data portion which results in a cache miss; and if said determining determines that the data operation is a read operation resulting in a cache miss, performing read miss processing. Read miss processing includes sequential stream recognition processing performed by the front-end component to determine whether the data portion is included in a sequential stream.
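A minimal sketch, assuming a hypothetical front-end adapter, of read-miss processing with sequential stream recognition: if a missed track extends a run of consecutive reads on the same device, the following tracks are prefetched into cache.

```python
# Hypothetical front-end read-miss handling with sequential stream recognition.
from collections import defaultdict

class FrontEndAdapter:
    def __init__(self, cache, backing_store, stream_threshold: int = 3):
        self.cache = cache                             # (device, track) -> data
        self.backing = backing_store                   # (device, track) -> data
        self.last_track = defaultdict(lambda: None)    # per-device last read track
        self.run_length = defaultdict(int)             # per-device consecutive-run length
        self.threshold = stream_threshold

    def read(self, device: str, track: int) -> bytes:
        if (device, track) in self.cache:
            return self.cache[(device, track)]         # cache hit
        return self._read_miss(device, track)

    def _read_miss(self, device: str, track: int) -> bytes:
        # Sequential stream recognition: does this miss extend the previous run?
        if self.last_track[device] is not None and track == self.last_track[device] + 1:
            self.run_length[device] += 1
        else:
            self.run_length[device] = 1
        self.last_track[device] = track

        data = self.backing[(device, track)]
        self.cache[(device, track)] = data
        if self.run_length[device] >= self.threshold:
            # Part of a sequential stream: prefetch the following tracks.
            for t in range(track + 1, track + 4):
                if (device, t) in self.backing:
                    self.cache[(device, t)] = self.backing[(device, t)]
        return data
```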
-
Publication No.: US11526447B1
Publication Date: 2022-12-13
Application No.: US17363167
Filing Date: 2021-06-30
Applicant: EMC IP HOLDING COMPANY LLC
Inventor: Peng Wu , Rong Yu , Jiahui Wang , Lixin Pang
IPC: G06F12/0844 , G06F3/06
Abstract: A data service layer running on a storage director node generates a request to destage host data from a plurality of cache slots in a single back-end track. The destage request includes pointers to addresses of the cache slots and indicates an order in which the host application data in the cache slots is to be included in the back-end track. A back-end redundant array of independent drives (RAID) subsystem running on a drive adapter is responsive to the request to calculate parity information using the host application data in the cache slots. The back-end RAID subsystem assembles the single back-end track comprising the host application data from the plurality of cache slots of the request, and destages the single back-end track to a non-volatile drive in a single back-end input-output (IO) operation.
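An illustrative sketch (not the patented implementation) of the client/server split this abstract describes: the data service layer builds a destage request naming cache slots in order, and a back-end RAID subsystem assembles one back-end track, computes parity (plain XOR here), and writes it out in a single operation. All names are hypothetical.

```python
# Hedged sketch: destage request with ordered cache-slot pointers, serviced by
# a back-end RAID subsystem that assembles a single back-end track.
from dataclasses import dataclass
from functools import reduce

@dataclass
class DestageRequest:
    slot_addresses: list[int]   # pointers to the cache slots, in destage order

class BackEndRaidSubsystem:
    def __init__(self, cache_memory: dict[int, bytes], drive: list[bytes]):
        self.cache = cache_memory
        self.drive = drive

    def destage(self, request: DestageRequest) -> int:
        # Gather slot contents in the order dictated by the request.
        pieces = [self.cache[addr] for addr in request.slot_addresses]
        backend_track = b"".join(pieces)
        # Parity over equal-length slot payloads (simplified single-parity RAID).
        parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*pieces))
        # Single back-end IO: append the assembled track (plus parity) to the drive model.
        self.drive.append(backend_track + parity)
        return len(self.drive) - 1   # "address" where the track landed
```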
-
Publication No.: US20220229589A1
Publication Date: 2022-07-21
Application No.: US17151794
Filing Date: 2021-01-19
Applicant: EMC IP HOLDING COMPANY LLC
Inventor: Lixin Pang , Rong Yu , Peng Wu , Shao Hu , Mohammed Asher VT
IPC: G06F3/06
Abstract: A synchronous destage process is used to move data from shared global memory to back-end storage resources. The synchronous destage process is implemented using a client-server model between a data service layer (client) and back-end disk array of a storage system (server). The data service layer initiates a synchronous destage operation by requesting that the back-end disk array move data from one or more slots of global memory to back-end storage resources. The back-end disk array services the request and notifies the data service layer of the status of the destage operation, e.g. a destage success or destage failure. If the destage operation is a success, the data service layer updates metadata to identify the location of the data on back-end storage resources. If the destage operation is not successful, the data service layer re-initiates the destage process by issuing a subsequent destage request to the back-end disk array.
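A small sketch of the synchronous destage handshake, with invented class names: the data service layer (client) requests a destage from the back-end disk array (server), updates its metadata on success, and reissues the request on failure.

```python
# Hypothetical client/server model for synchronous destage with retry.
import random

class BackEndDiskArray:
    """Server side: moves global-memory slots to back-end storage."""
    def destage(self, slots: list[int]) -> bool:
        # Stand-in for the real data movement; fails 20% of the time here
        # purely to exercise the retry path.
        return random.random() > 0.2

class DataServiceLayer:
    """Client side: initiates the synchronous destage and tracks metadata."""
    def __init__(self, backend: BackEndDiskArray):
        self.backend = backend
        self.metadata: dict[int, str] = {}   # slot -> back-end location

    def destage_slots(self, slots: list[int], max_attempts: int = 5) -> bool:
        for _ in range(max_attempts):
            if self.backend.destage(slots):
                for slot in slots:
                    self.metadata[slot] = f"backend-track-for-slot-{slot}"
                return True      # success: metadata now points at the back-end copy
            # Failure: re-initiate by issuing a subsequent destage request.
        return False

if __name__ == "__main__":
    dsl = DataServiceLayer(BackEndDiskArray())
    print(dsl.destage_slots([3, 7, 11]))
```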
-
Publication No.: US11372562B1
Publication Date: 2022-06-28
Application No.: US17225170
Filing Date: 2021-04-08
Applicant: EMC IP HOLDING COMPANY LLC
Inventor: Peng Wu , Rong Yu , Jiahui Wang , Lixin Pang
IPC: G06F3/06
Abstract: A storage system that supports multiple RAID levels presents storage objects with front-end tracks corresponding to back-end tracks on non-volatile drives and accesses the drives using a single type of back-end allocation unit that is larger than a back-end track. When the number of members of a protection group of a RAID level does not align with the back-end allocation unit, multiple back-end tracks are grouped and accessed using a single IO. The number of back-end tracks in a group is selected to align with the back-end allocation unit size. If the front-end tracks are variable size, then front-end tracks may be destaged into a smaller number of grouped back-end tracks in a single IO.
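Illustrative arithmetic only, under assumed sizes: one way to pick how many back-end tracks to group per IO so that the grouped span aligns with the back-end allocation unit on each data member of the protection group. The lcm-based rule below is an assumption used to make the alignment idea concrete, not the patent's formula.

```python
# Hypothetical alignment calculation for grouping back-end tracks per IO.
from math import lcm

def tracks_per_group(track_size_kb: int, data_members: int, alloc_unit_kb: int) -> int:
    """Smallest track count whose per-member span aligns with the allocation unit."""
    # Bytes written per data member must be a multiple of the allocation unit,
    # so the per-member span is lcm(track_size, alloc_unit) and the group covers
    # that span on every data member of the protection group.
    per_member_kb = lcm(track_size_kb, alloc_unit_kb)
    return (per_member_kb // track_size_kb) * data_members

if __name__ == "__main__":
    # e.g. 128 KB back-end tracks, a 3+1 RAID-5 group (3 data members),
    # and a 1 MB back-end allocation unit.
    print(tracks_per_group(track_size_kb=128, data_members=3, alloc_unit_kb=1024))  # 24
```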
-
Publication No.: US11321178B1
Publication Date: 2022-05-03
Application No.: US17361401
Filing Date: 2021-06-29
Applicant: EMC IP HOLDING COMPANY LLC
Inventor: Rong Yu , Peng Wu , Shao Hu , Lixin Pang
Abstract: Occurrence of a RAID double failure in a slice of a RAID protection group (failed slice) renders data stored in the back-end tracks of the failed slice vulnerable to loss. When a RAID double failure is detected, a new slice is added to the RAID protection group. Front-end tracks that map to the good back-end tracks of the failed slice are moved from the back-end tracks of the failed slice to the back-end tracks of the newly added slice. Any front-end tracks that mapped to the bad back-end tracks of the failed slice are made to be write pending and written to corresponding back-end tracks of the newly added slice. Front-end tracks that map to the bad back-end tracks may be made to be write-pending in connection with a host write operation, by reading the front-end tracks from a local backup, or from a remote backup location.
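A hedged sketch of the recovery flow in this abstract, with invented data structures: when a slice suffers a double failure, good back-end tracks are moved to a newly added slice, and front-end tracks backed by bad back-end tracks are made write-pending and rewritten from a backup copy (or a subsequent host write).

```python
# Hypothetical recovery of a RAID protection-group slice after a double failure.

def recover_failed_slice(failed_slice: dict, new_slice: dict,
                         fe_to_be: dict[str, int], backup: dict[str, bytes],
                         write_pending: set[str]) -> None:
    """failed_slice/new_slice map back-end track number -> data (None = bad track)."""
    for fe_track, be_track in fe_to_be.items():
        data = failed_slice.get(be_track)
        if data is not None:
            # Good back-end track: simply move it to the newly added slice.
            new_slice[be_track] = data
        else:
            # Bad back-end track: mark the front-end track write-pending and
            # restore its contents from a backup (or a subsequent host write).
            write_pending.add(fe_track)
            new_slice[be_track] = backup[fe_track]

if __name__ == "__main__":
    failed = {0: b"A", 1: None, 2: b"C"}          # track 1 lost to the double failure
    new, wp = {}, set()
    recover_failed_slice(failed, new, {"fe0": 0, "fe1": 1, "fe2": 2},
                         backup={"fe1": b"B"}, write_pending=wp)
    print(new, wp)   # {0: b'A', 1: b'B', 2: b'C'} {'fe1'}
```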
-
Publication No.: US11256447B1
Publication Date: 2022-02-22
Application No.: US17065558
Filing Date: 2020-10-08
Applicant: EMC IP HOLDING COMPANY LLC
Inventor: Lixin Pang , Jiahui Wang , Peng Wu , Rong Yu
IPC: G06F11/10 , G06F11/20 , G06F3/06 , H04L67/1097
Abstract: A storage array that presents to hosts a logical storage device accessed using front-end tracks, and that accesses tangible managed drives using back-end tracks, locates multiple front-end tracks in individual back-end tracks. Error-correcting codes are used to identify different front-end tracks in a back-end track when the back-end track is copied from the managed drives into storage array memory. CKD front-end tracks can be split into multiple partial CKD front-end tracks that are located at contiguous address space in different back-end tracks. The front-end tracks that are located in a particular back-end track may be selected to reduce or minimize unused space. The front-end tracks in a back-end track may be logically stored on different production volumes.
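A sketch of the space-packing idea only, using a simple first-fit-decreasing heuristic with hypothetical track names and sizes; the error-correcting-code tagging and CKD splitting described in the abstract are not modeled.

```python
# Hypothetical packing of variable-size front-end tracks into fixed-size
# back-end tracks to reduce unused space (first-fit-decreasing heuristic).

def pack_front_end_tracks(fe_tracks: dict[str, int], be_track_size: int) -> list[list[str]]:
    """fe_tracks maps a front-end track id to its size; returns back-end track contents."""
    bins: list[tuple[int, list[str]]] = []   # (free space, [front-end track ids])
    for name, size in sorted(fe_tracks.items(), key=lambda kv: kv[1], reverse=True):
        for i, (free, members) in enumerate(bins):
            if size <= free:
                bins[i] = (free - size, members + [name])
                break
        else:
            bins.append((be_track_size - size, [name]))
    return [members for _, members in bins]

if __name__ == "__main__":
    # Variable-size front-end tracks packed into 128 KB back-end tracks;
    # tracks from different production volumes may share a back-end track.
    print(pack_front_end_tracks({"vol1:t0": 56, "vol2:t9": 72, "vol1:t1": 40,
                                 "vol3:t3": 88}, be_track_size=128))
```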
-
Publication No.: US20210334026A1
Publication Date: 2021-10-28
Application No.: US16859183
Filing Date: 2020-04-27
Applicant: EMC IP Holding Company LLC
Inventor: Rong Yu , Jingtong Liu , Peng Wu
IPC: G06F3/06
Abstract: Embodiments of the present disclosure relate to managing communications between slices on a storage device engine. Shared slice memory of a storage device engine is provisioned for use by each slice of the storage device engine. The shared slice memory is a portion of total storage device engine memory. Each slice's access to the shared memory portion is controlled.
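A minimal sketch, with invented names, of provisioning a shared portion of a storage engine's memory for its slices and controlling each slice's access to it; the shared region is modeled here as a set of message mailboxes.

```python
# Hypothetical shared slice memory with per-slice access control.

class EngineSharedMemory:
    def __init__(self, total_engine_memory_mb: int, shared_fraction: float, slices: list[str]):
        # Carve the shared portion out of the engine's total memory.
        self.shared_mb = int(total_engine_memory_mb * shared_fraction)
        self.allowed_slices = set(slices)
        self.mailboxes: dict[str, list[bytes]] = {s: [] for s in slices}

    def send(self, src_slice: str, dst_slice: str, message: bytes) -> None:
        """Slice-to-slice communication through the shared region, access-controlled."""
        if src_slice not in self.allowed_slices or dst_slice not in self.allowed_slices:
            raise PermissionError("slice is not provisioned for shared memory")
        self.mailboxes[dst_slice].append(message)

    def receive(self, dst_slice: str) -> list[bytes]:
        if dst_slice not in self.allowed_slices:
            raise PermissionError("slice is not provisioned for shared memory")
        messages, self.mailboxes[dst_slice] = self.mailboxes[dst_slice], []
        return messages

if __name__ == "__main__":
    shm = EngineSharedMemory(512_000, 0.05, ["slice-A", "slice-B"])
    shm.send("slice-A", "slice-B", b"hello")
    print(shm.receive("slice-B"))   # [b'hello']
```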
-
Publication No.: US10795814B2
Publication Date: 2020-10-06
Application No.: US15964290
Filing Date: 2018-04-27
Applicant: EMC IP Holding Company LLC
Inventor: Jeffrey R. Nelson , Michael J. Scharland , Rong Yu
IPC: G06F12/08 , G06F12/0811 , G06F12/0808
Abstract: Maintaining multiple cache areas in a storage device having multiple processors includes loading data into a first local cache in response to a first processor of a first subset of the processors performing a read operation to a specific portion of non-volatile storage, where the first local cache is accessible to the first subset of the processors and is inaccessible to other processors, loading data into a second local cache in response to a second processor of a second subset of the processors performing a read operation to the specific portion of non-volatile storage, where the second local cache is accessible to the second subset of the processors and is inaccessible to other processors, and loading data into a global cache in response to one of the processors performing a write operation to the specific portion of non-volatile storage, where the global cache is accessible to all the processors.
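A companion sketch to the one following the earlier cache-areas abstract (US10579529B2), again with invented names: two subset-local caches plus one global cache, where a read populates only the reader's local cache and a write populates the global cache visible to all processors.

```python
# Hypothetical read/write paths across two subset-local caches and a global cache.

class TwoLevelCaches:
    def __init__(self, backing_store: dict[int, bytes]):
        self.backing = backing_store
        self.local = {"subset1": {}, "subset2": {}}   # each inaccessible to the other subset
        self.global_cache: dict[int, bytes] = {}      # accessible to all processors

    def read(self, subset: str, extent: int) -> bytes:
        if extent in self.global_cache:               # writes land here
            return self.global_cache[extent]
        if extent in self.local[subset]:              # this subset's own local cache
            return self.local[subset][extent]
        data = self.backing[extent]                   # read miss: load locally only
        self.local[subset][extent] = data
        return data

    def write(self, subset: str, extent: int, data: bytes) -> None:
        self.backing[extent] = data
        self.global_cache[extent] = data              # all subsets see the new data
        self.local[subset].pop(extent, None)          # drop any stale local copy

if __name__ == "__main__":
    c = TwoLevelCaches({7: b"old"})
    print(c.read("subset1", 7))   # loads into subset1's local cache
    c.write("subset2", 7, b"new")
    print(c.read("subset1", 7))   # b'new' from the global cache
```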
-