-
Publication No.: US11327664B1
Publication Date: 2022-05-10
Application No.: US15668797
Application Date: 2017-08-04
Applicant: EMC IP HOLDING COMPANY LLC
Inventor: Jaeyoo Jung, Ramesh Doddaiah, Venkata Khambam, Earl Medeiros, Richard Trabing
IPC: G06F3/06
Abstract: A portion of the shared global memory of a storage array is allocated for write-only blocks. Writes to a same-block of a production device may be accumulated in the allocated portion of memory. Temporal sequencing may be associated with each accumulated version of the same-block. When idle processing resources become available, the oldest group of same-blocks may be consolidated based on the temporal sequencing. The consolidated block may then be destaged to cache slots or managed drives. A group of same-blocks may also be consolidated in response to a read command.
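A minimal Python sketch of the accumulate-then-consolidate flow the abstract describes; the data structures, the block size, and the in-memory "region" are illustrative assumptions, not the patented implementation.

```python
import itertools
from collections import defaultdict

# Monotonic counter standing in for the temporal sequencing of writes.
_seq = itertools.count()

# Allocated write-only region, modeled as: block number -> list of
# (sequence number, (offset, data)) write versions accumulated for that block.
write_only_region = defaultdict(list)


def accumulate_write(block_no, offset, data):
    """Record a write to a same-block together with a temporal sequence number."""
    write_only_region[block_no].append((next(_seq), (offset, data)))


def consolidate(block_no, block_size=8):
    """Merge the accumulated versions of a block in temporal order."""
    block = bytearray(block_size)
    for _, (offset, data) in sorted(write_only_region.pop(block_no, [])):
        block[offset:offset + len(data)] = data
    return bytes(block)  # ready to be destaged, or returned for a read


accumulate_write(7, 0, b"AB")
accumulate_write(7, 1, b"CD")   # the newer write overlaps the older one
print(consolidate(7))           # b'ACD\x00\x00\x00\x00\x00'
```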
-
Publication No.: US11138123B2
Publication Date: 2021-10-05
Application No.: US16589225
Application Date: 2019-10-01
Applicant: EMC IP Holding Company LLC
Inventor: John Krasner, Ramesh Doddaiah
IPC: G06F3/06, G06F12/08, G06F12/0871
Abstract: Embodiments of the present disclosure relate to an apparatus comprising a memory and at least one processor. The at least one processor is configured to: analyze input/output (I/O) operations received by a storage system; dynamically predict anticipated I/O operations of the storage system based on the analysis; and dynamically control a size of a local cache of the storage system based on the anticipated I/O operations.
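The predict-and-resize idea can be pictured with a short sketch. The window size, cache bounds, and the naive working-set predictor below are assumptions for illustration only, not the analysis the claimed processor performs.

```python
from collections import deque

# Sketch: predict the near-term read working set from a sliding window of
# recent I/Os and size the local cache proportionally.
recent_reads = deque(maxlen=1000)   # addresses of recently read blocks

MIN_CACHE_BLOCKS = 256
MAX_CACHE_BLOCKS = 65536


def record_io(op, address):
    if op == "read":
        recent_reads.append(address)


def predicted_cache_size():
    """Use the distinct addresses in the window as a naive predictor of the
    working set the anticipated I/O mix will touch."""
    working_set = len(set(recent_reads))
    return max(MIN_CACHE_BLOCKS, min(MAX_CACHE_BLOCKS, 2 * working_set))


for addr in [10, 11, 10, 12, 13, 10]:
    record_io("read", addr)
print(predicted_cache_size())   # 256 (the floor dominates this tiny example)
```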
-
Publication No.: US20210149770A1
Publication Date: 2021-05-20
Application No.: US16689133
Application Date: 2019-11-20
Applicant: EMC IP HOLDING COMPANY LLC
Inventor: Ramesh Doddaiah
Abstract: An aperiodic snapshot recommendation engine running in a storage system aperiodically generates hints that a new snapshot should be created. The hints are sent to host servers to prompt snapshot generation commands to be sent to the storage system. The hints may be generated based on current storage system workload conditions using a model of a snapshot scheduler running on a host server for which the storage system maintains data. The model may be created using a machine learning technique. For example, machine learning may be used to model the host's snapshot scheduler in terms of storage system workload conditions existing when the snapshot scheduler commands generation of new snapshots during a training phase.
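The following Python is only an illustrative stand-in for the learned model of the host's snapshot scheduler: it remembers the workload conditions observed when the scheduler commanded snapshots during training and hints when current conditions are at least as demanding. The feature names and the thresholding rule are assumptions, not the patent's machine-learning technique.

```python
def train(snapshot_time_features):
    """snapshot_time_features: list of dicts of workload metrics sampled
    whenever the host's scheduler generated a snapshot during training."""
    keys = snapshot_time_features[0].keys()
    # Remember the lightest workload that still triggered a snapshot.
    return {k: min(f[k] for f in snapshot_time_features) for k in keys}


def should_hint(model, current_workload):
    """Aperiodically hint when current conditions match the learned profile."""
    return all(current_workload[k] >= v for k, v in model.items())


model = train([
    {"write_iops": 4000, "changed_tracks": 900},
    {"write_iops": 5200, "changed_tracks": 1200},
])
print(should_hint(model, {"write_iops": 4800, "changed_tracks": 1500}))  # True
```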
-
Publication No.: US20210034463A1
Publication Date: 2021-02-04
Application No.: US16530682
Application Date: 2019-08-02
Applicant: EMC IP Holding Company LLC
Inventor: Ramesh Doddaiah, Bernard A. Mulligan, III
Abstract: An apparatus comprises a storage system comprising at least one processing device and a plurality of storage devices. The at least one processing device is configured to obtain a given input-output operation from a host device and to determine that the given input-output operation comprises an indicator having a particular value. The particular value indicates that the given input-output operation is a repeat of a prior input-output operation. The at least one processing device is further configured to rebuild at least one resource of the storage system that is designated for servicing the given input-output operation based at least in part on the determination that the given input-output operation comprises the indicator having the particular value.
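A small sketch of the repeat-indicator check described above; the flag value, the `Resource` class, and the keying by LBA are hypothetical names used only to make the flow concrete.

```python
from dataclasses import dataclass

REPEAT_FLAG = 0x1   # hypothetical value of the "this is a repeat" indicator


@dataclass
class IORequest:
    lba: int
    flags: int = 0


class Resource:
    """Stand-in for a storage-system resource designated to service an I/O."""
    def __init__(self, lba):
        self.lba = lba


resources = {}


def service(io: IORequest):
    res = resources.setdefault(io.lba, Resource(io.lba))
    if io.flags & REPEAT_FLAG:
        # The indicator marks this I/O as a repeat of a prior one, so rebuild
        # the resource designated for servicing it before retrying.
        resources[io.lba] = res = Resource(io.lba)
    return f"serviced LBA {io.lba} (rebuilt={bool(io.flags & REPEAT_FLAG)})"


print(service(IORequest(lba=42)))
print(service(IORequest(lba=42, flags=REPEAT_FLAG)))
```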
-
Publication No.: US10853139B2
Publication Date: 2020-12-01
Application No.: US16165523
Application Date: 2018-10-19
Applicant: EMC IP HOLDING COMPANY LLC
Inventor: Sweetesh Singh, Ramesh Doddaiah
Abstract: Allocation of storage array hardware resources between host-visible and host-hidden services is managed to ensure that sufficient hardware resources are allocated to host-visible services. Information obtained from monitoring real-world operation of the storage array is used to generate a model of the storage array. The generated model represents temporal dependencies between storage array hardware, host-visible services, and host-hidden services. Because the model includes information gathered over time and represents temporal dependencies, future occurrence of repeating variations of storage-related service usage and requirements can be predicted. The model may be used to generate hardware recommendations and dynamically re-allocate existing hardware resources to more reliably satisfy a predetermined level of measured performance.
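One way to picture the temporal model is a per-hour usage profile that is replayed to predict the next repeat of the pattern and to allocate host-visible services first. The hour-of-day granularity, the averaging, and the CPU-only budget below are assumptions made for a compact sketch, not the patented model.

```python
from collections import defaultdict

# Observed CPU use per hour of day for host-visible and host-hidden services.
history = defaultdict(lambda: {"visible": [], "hidden": []})


def observe(hour, visible_cpu, hidden_cpu):
    history[hour]["visible"].append(visible_cpu)
    history[hour]["hidden"].append(hidden_cpu)


def plan(hour, total_cpu):
    """Predict next occurrence of this hour's workload and allocate
    host-visible services before host-hidden ones."""
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    visible = min(total_cpu, avg(history[hour]["visible"]))
    hidden = min(total_cpu - visible, avg(history[hour]["hidden"]))
    return {"visible": visible, "hidden": hidden,
            "spare": total_cpu - visible - hidden}


observe(9, visible_cpu=30, hidden_cpu=20)
observe(9, visible_cpu=50, hidden_cpu=20)
print(plan(9, total_cpu=64))   # host-visible gets its predicted 40 cores first
```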
-
Publication No.: US10802722B2
Publication Date: 2020-10-13
Application No.: US16529844
Application Date: 2019-08-02
Applicant: EMC IP Holding Company LLC
Inventor: Jaeyoo Jung, Ramesh Doddaiah, Owen Martin, Arieh Don
IPC: G06F3/06
Abstract: Techniques for processing I/O operations may include: detecting, at a host, a sequence of I/O operations to be sent from the host to a data storage system, wherein each of the I/O operations of the sequence specifies a target address included in a first logical address subrange of a first logical device; sending, from the host, the sequence of I/O operations to a same target port of the data storage system, wherein each of the I/O operations of the sequence includes an indicator denoting whether resources used by the same target port in connection with processing said each I/O operation are to be released subsequent to completing processing of said each I/O operation; receiving the sequence of I/O operations at the same target port of the data storage system; and processing the sequence of I/O operations.
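A host-side sketch of the sequence handling described above, assuming one simple policy: only the final I/O of the detected sequence carries the "resources may be released" indicator. The port name, the tuple layout, and that policy are illustrative assumptions.

```python
def send_sequence(ios, subrange, port):
    """ios: list of (lba, payload); subrange: (start_lba, end_lba) of the
    first logical device; all matching I/Os go to the same target port."""
    seq = [io for io in ios if subrange[0] <= io[0] < subrange[1]]
    frames = []
    for i, (lba, payload) in enumerate(seq):
        release = (i == len(seq) - 1)   # hold port resources until the end
        frames.append({"port": port, "lba": lba, "payload": payload,
                       "release_resources": release})
    return frames


for frame in send_sequence([(100, b"a"), (101, b"b"), (102, b"c")],
                           subrange=(100, 200), port="FA-1D:4"):
    print(frame)
```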
-
Publication No.: US20220326865A1
Publication Date: 2022-10-13
Application No.: US17227627
Application Date: 2021-04-12
Applicant: EMC IP Holding Company LLC
Inventor: Ramesh Doddaiah, Malak Alshawabkeh
IPC: G06F3/06
Abstract: Aspects of the present disclosure relate to data deduplication (dedupe). In embodiments, an input/output (IO) operation stream is received by a storage array. In addition, a received IO sequence in the IO stream that matches a previously received IO sequence is identified. Further, a data deduplication (dedupe) technique is performed based on a selected data dedupe policy. The data dedupe policy can be selected based on a comparison of the quality of service (QoS) related to the received IO sequence and the QoS related to the previously received IO sequence.
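A sketch of one possible QoS comparison; the service-level names, the ranking, and the inline-versus-deferred policy split are invented for illustration and are not taken from the patent.

```python
# Hypothetical service levels and dedupe policies.
QOS_RANK = {"bronze": 0, "silver": 1, "gold": 2}


def select_dedupe_policy(new_qos, prior_qos):
    """Pick a dedupe policy by comparing the QoS of the newly received IO
    sequence with the QoS of the previously received matching sequence."""
    if QOS_RANK[new_qos] >= QOS_RANK[prior_qos]:
        # The matching data lives under an equal-or-lower service level:
        # dedupe inline and reference the existing copy.
        return "inline_dedupe"
    # The prior copy has stricter QoS; defer dedupe to background processing
    # so the new sequence's latency target is not put at risk.
    return "deferred_dedupe"


print(select_dedupe_policy("gold", "silver"))    # inline_dedupe
print(select_dedupe_policy("bronze", "gold"))    # deferred_dedupe
```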
-
Publication No.: US11409667B1
Publication Date: 2022-08-09
Application No.: US17231073
Application Date: 2021-04-15
Applicant: EMC IP HOLDING COMPANY LLC
Inventor: Venkata Ippatapu, Ramesh Doddaiah
IPC: G06F12/123, G06F12/0864, G06F12/0817, G06F3/06, G06F12/0871
Abstract: A deduplication engine maintains a hash table containing hash values of tracks of data stored on managed drives of a storage system. The deduplication engine keeps track of how frequently the tracks are accessed by the deduplication engine using an exponential moving average for each track. Target tracks which are frequently accessed by the deduplication engine are cached in local memory, so that required byte-by-byte comparisons between the target track and write data may be performed locally rather than requiring the target track to be read from managed drives. The deduplication engine implements a Least Recently Used (LRU) cache data structure in local memory to manage locally cached tracks of data. If a track is to be removed from local memory, a final validation of the target track is implemented on the version stored in managed resources before evicting the track from the LRU cache.
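A compact sketch of just the EMA-tracking, LRU-caching, and pre-eviction validation parts of the flow; hashing is shown inline rather than as a persistent hash table, and the smoothing factor, cache capacity, and "hot" threshold are invented values.

```python
import hashlib
from collections import OrderedDict

ALPHA = 0.3          # smoothing factor for the exponential moving average
CACHE_CAPACITY = 2   # tiny LRU capacity for the example

access_ema = {}      # track id -> EMA of dedupe-engine accesses
lru = OrderedDict()  # track id -> locally cached track data
backend = {}         # stand-in for tracks on managed drives


def note_access(track_id):
    access_ema[track_id] = ALPHA + (1 - ALPHA) * access_ema.get(track_id, 0.0)


def dedupe_write(track_id, data):
    """Return True if `data` duplicates the stored target track."""
    note_access(track_id)
    target = lru.get(track_id)
    if target is None:                   # miss: read from managed drives
        target = backend[track_id]
        if access_ema[track_id] > 0.5:   # frequently accessed: cache locally
            lru[track_id] = target
            if len(lru) > CACHE_CAPACITY:
                evicted_id, cached = lru.popitem(last=False)
                # Final validation against the backend copy before eviction.
                assert cached == backend[evicted_id]
    else:
        lru.move_to_end(track_id)        # refresh LRU position
    # Hash comparison followed by the byte-by-byte confirmation.
    return (hashlib.sha256(data).digest() == hashlib.sha256(target).digest()
            and data == target)


backend[1] = b"track-one"
print(dedupe_write(1, b"track-one"))     # True
```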
-
Publication No.: US20220129184A1
Publication Date: 2022-04-28
Application No.: US17079702
Application Date: 2020-10-26
Applicant: EMC IP Holding Company LLC
Inventor: Ramesh Doddaiah
IPC: G06F3/06
Abstract: Aspects of the present disclosure relate to data deduplication (dedup) techniques for storage arrays. At least one input/output (IO) operation in an IO workload received by a storage array can be identified. Each of the IOs can relate to a data track of the storage array. A probability of the at least one IO being similar to a previously stored IO can be determined. A dedup operation can be performed on the at least one IO based on the probability. The probability can be less than one hundred percent (100%).
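One way to ground the probability idea is a chunk-hash match rate: estimate how likely the incoming track duplicates stored data and only spend the full dedup work when that estimate is high enough. The chunking, the MD5 fingerprints, and the 60% threshold below are assumptions for the sketch, not the patent's method.

```python
import hashlib

seen_chunk_hashes = set()
DEDUPE_THRESHOLD = 0.6      # attempt dedup when estimated probability >= 60%


def chunk_hashes(track, chunk=4):
    return [hashlib.md5(track[i:i + chunk]).digest()
            for i in range(0, len(track), chunk)]


def maybe_dedupe(track):
    hashes = chunk_hashes(track)
    matches = sum(h in seen_chunk_hashes for h in hashes)
    probability = matches / len(hashes)      # can act on values below 100%
    seen_chunk_hashes.update(hashes)
    if probability >= DEDUPE_THRESHOLD:
        return f"dedup attempted (p={probability:.0%})"
    return f"stored as-is (p={probability:.0%})"


print(maybe_dedupe(b"ABCDEFGHIJKL"))   # first sight: stored as-is (p=0%)
print(maybe_dedupe(b"ABCDEFGHIJXY"))   # mostly repeated: dedup attempted (p=67%)
```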
-
Publication No.: US20210349756A1
Publication Date: 2021-11-11
Application No.: US17380164
Application Date: 2021-07-20
Applicant: EMC IP HOLDING COMPANY LLC
Inventor: Ramesh Doddaiah
Abstract: A scheduler for a storage node uses multi-dimensional weighted resource cost matrices to schedule processing of IOs. A separate matrix is created for each computing node of the storage node via machine learning or regression analysis. Each matrix includes distinct dimensions for each emulation of the computing node for which the matrix is created. Each dimension includes modeled costs in terms of the amounts of resources of various types required to process an IO of each IO type. An IO received from a host by a computing node is not scheduled for processing by that computing node unless enough resources are available at each emulation of that computing node. If an emulation lacks sufficient resources, the IO is forwarded to a different computing node that has enough resources at each of its emulations. A weighted resource cost for processing the IO, calculated using the weights or regression coefficients from the model, is used to determine scheduling priority.
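A sketch of what a per-computing-node cost matrix might look like, with one dimension per emulation, modeled per-IO-type resource costs, and learned weights for a priority score. The emulation names, costs, and weights are invented for the example and do not come from the patent.

```python
COST_MATRIX = {   # emulation -> IO type -> resource type -> modeled cost
    "front_end": {"read": {"cpu": 2, "memory": 1}, "write": {"cpu": 3, "memory": 2}},
    "back_end":  {"read": {"cpu": 1, "memory": 1}, "write": {"cpu": 4, "memory": 3}},
}
WEIGHTS = {"cpu": 0.7, "memory": 0.3}   # e.g. regression coefficients


def can_schedule(io_type, free):
    """free: emulation -> resource -> units currently available; the IO is
    only scheduled locally if every emulation can cover its modeled cost."""
    return all(free[emu][res] >= cost
               for emu, by_type in COST_MATRIX.items()
               for res, cost in by_type[io_type].items())


def weighted_cost(io_type):
    """Weighted resource cost used as the scheduling priority."""
    return sum(WEIGHTS[res] * cost
               for by_type in COST_MATRIX.values()
               for res, cost in by_type[io_type].items())


free = {"front_end": {"cpu": 10, "memory": 10}, "back_end": {"cpu": 10, "memory": 10}}
if can_schedule("write", free):
    print("schedule locally, priority cost =", weighted_cost("write"))
else:
    print("forward to a peer computing node with sufficient resources")
```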