TECHNIQUES FOR INCREASED I/O PERFORMANCE
    Invention Application

    Publication (Announcement) No.: US20190354286A1

    Publication (Announcement) Date: 2019-11-21

    Application No.: US16529844

    Application Date: 2019-08-02

    Abstract: Techniques for processing I/O operations may include: detecting, at a host, a sequence of I/O operations to be sent from the host to a data storage system, wherein each of the I/O operations of the sequence specifies a target address included in a first logical address subrange of a first logical device; sending, from the host, the sequence of I/O operations to a same target port of the data storage system, wherein each of the I/O operations of the sequence includes an indicator denoting whether resources used by the same target port in connection with processing said each I/O operation are to be released subsequent to completing processing of said each I/O operation; receiving the sequence of I/O operations at the same target port of the data storage system; and processing the sequence of I/O operations.
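
    The abstract describes host-side behavior rather than an implementation. Below is a minimal Python sketch, under stated assumptions, of the core idea: collect the I/Os whose target addresses fall in one logical address subrange, send them all to a single target port, and set the resource-release indicator only on the final I/O of the sequence. All names (`IOOperation`, `TargetPort`, `route_sequence`) are hypothetical illustrations, not the patented method.

```python
from dataclasses import dataclass

@dataclass
class IOOperation:
    device: str            # logical device identifier
    target_address: int    # logical address within the device
    release_resources: bool = False  # release port resources after this I/O?

class TargetPort:
    """Hypothetical stand-in for a data storage system target port."""
    def send(self, io: "IOOperation") -> None:
        print(f"port <- {io.device}@{io.target_address} "
              f"release={io.release_resources}")

def route_sequence(ios, subrange, port):
    """Send the I/Os targeting one logical address subrange to a single
    target port; only the last I/O carries the release indicator, so the
    port can keep its resources allocated across the whole sequence."""
    lo, hi = subrange
    sequence = [io for io in ios if lo <= io.target_address < hi]
    for i, io in enumerate(sequence):
        io.release_resources = (i == len(sequence) - 1)
        port.send(io)

route_sequence([IOOperation("dev0", a) for a in (100, 104, 108)],
               subrange=(0, 1024), port=TargetPort())
```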

    MISALIGNED IO SEQUENCE DATA DEDUPLICATION (DEDUP)

    Publication (Announcement) No.: US20220236900A1

    Publication (Announcement) Date: 2022-07-28

    Application No.: US17160526

    Application Date: 2021-01-28

    Abstract: Aspects of the present disclosure relate to data deduplication (dedup) techniques for storage arrays. In embodiments, a sequence of input/output (IO) operations in an IO stream received by a storage array from one or more host devices is identified. Additionally, a determination can be made as to whether a set of previously received IO operations matches the identified IO sequence, based on a time series relationship between the identified IO sequence and the previously received IO operations. Further, one or more data deduplication (dedup) techniques can be performed on the matching IO sequence.
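
    The abstract leaves the matching algorithm unspecified. A minimal sketch follows, assuming the "time series relationship" amounts to a constant offset shift between the new sequence and a previously seen one (the misalignment), with payload equality checked by hashing; `matches_with_shift` and the tuple layout are hypothetical.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def matches_with_shift(new_seq, old_seq):
    """Return the constant offset shift if new_seq repeats old_seq's
    payloads at misaligned offsets, else None.

    Each sequence is a time-ordered list of (offset, data) IO writes.
    """
    if len(new_seq) != len(old_seq):
        return None
    shift = new_seq[0][0] - old_seq[0][0]
    for (new_off, new_data), (old_off, old_data) in zip(new_seq, old_seq):
        if new_off - old_off != shift or digest(new_data) != digest(old_data):
            return None
    return shift
```

    When a shift is found, the array could store references into the previously written data at the shifted offsets instead of writing the payload again.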

    Cache Memory Management
    Invention Application

    Publication (Announcement) No.: US20210173782A1

    Publication (Announcement) Date: 2021-06-10

    Application No.: US16709013

    Application Date: 2019-12-10

    Abstract: Embodiments of the present disclosure relate to cache memory management. Based on anticipated input/output (I/O) workloads, the sizes of one or more mirrored and un-mirrored caches of global memory, and of their respective cache slot pools, are dynamically balanced. Each of the mirrored and un-mirrored caches can be segmented into one or more cache pools, each having slots of a distinct size. Each cache pool can be assigned an amount of cache slots of its distinct size based on the anticipated I/O workloads. Cache pools can further be assigned distinctly sized cache slots based on a customer's expected service levels (SLs), as well as on predicted I/O request sizes and the predicted frequencies of different I/O request sizes in the anticipated I/O workloads.
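
    As an illustration of the balancing step, here is a sketch that splits a fixed cache budget across pools of distinct slot sizes in proportion to the predicted frequency of each I/O request size, weighted by a service-level factor. The function name, the proportional policy, and both input dictionaries are assumptions, not the disclosed algorithm.

```python
def balance_cache_pools(budget_bytes, size_freq, sl_weight):
    """Assign slot counts to cache pools of distinct slot sizes.

    budget_bytes: total bytes for this (mirrored or un-mirrored) cache
    size_freq:    {slot_size_bytes: predicted fraction of I/Os of that size}
    sl_weight:    {slot_size_bytes: service-level weight for that pool}
    Returns {slot_size_bytes: number of slots}.
    """
    weighted = {size: freq * sl_weight.get(size, 1.0)
                for size, freq in size_freq.items()}
    total = sum(weighted.values())
    return {size: int(budget_bytes * (w / total) // size)
            for size, w in weighted.items()}

# e.g. 64 KB I/Os predicted twice as often as 8 KB, equal service levels:
print(balance_cache_pools(1 << 30, {8192: 1 / 3, 65536: 2 / 3}, {}))
```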

    INTELLIGENTLY MANAGING DATA FACILITY CACHES
    Invention Application

    Publication (Announcement) No.: US20200320002A1

    Publication (Announcement) Date: 2020-10-08

    Application No.: US16375545

    Application Date: 2019-04-04

    Abstract: Architectures and techniques are described that can address challenges associated with efficiently managing a cache of a data facility. In that regard, for each block (or other file system structure) of a storage array spanning multiple storage devices, relationships can be established with the other blocks of the array. The blocks can then be represented as multidimensional vectors, and an aggregation of the vectors can be represented as a weight matrix whose values reflect the corresponding relationships between any two given blocks. In response to any given IO transaction, a corresponding vector can be selected that is representative of a block referenced by the IO transaction, and one or more target blocks having a high relationship value to that block can be identified and used in connection with a cache update procedure.
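
    A minimal sketch of the bookkeeping the abstract describes: a weight matrix whose entries score how strongly two blocks are related (here, simple symmetric co-access counts), plus a lookup that returns the highest-weight neighbors of the block an IO touches, as candidates for the cache update. The co-access heuristic and all names are assumptions.

```python
import numpy as np

def update_weights(W, accessed, recent_window):
    """Strengthen the relationship between the accessed block and the
    blocks seen in the recent access window (symmetric counts)."""
    for other in recent_window:
        if other != accessed:
            W[accessed, other] += 1
            W[other, accessed] += 1

def related_blocks(W, block, k=3):
    """Blocks with the highest relationship value to `block`; candidates
    to pull into (or keep in) cache alongside it."""
    row = W[block].copy()
    row[block] = -np.inf          # never recommend the block itself
    return np.argsort(row)[::-1][:k]

W = np.zeros((8, 8))
update_weights(W, accessed=2, recent_window=[1, 5, 5])
print(related_blocks(W, 2))       # blocks 5 and 1 lead the ranking
```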

    Data deduplication (dedup) management

    Publication (Announcement) No.: US11698744B2

    Publication (Announcement) Date: 2023-07-11

    Application No.: US17079702

    Application Date: 2020-10-26

    Inventor: Ramesh Doddaiah

    CPC classification number: G06F3/0652 G06F3/061 G06F3/067

    Abstract: Aspects of the present disclosure relate to data deduplication (dedup) techniques for storage arrays. At least one input/output (IO) operation in an IO workload received by a storage array can be identified. Each of the IOs can relate to a data track of the storage array. A probability of the at least one IO being similar to a previously stored IO can be determined. A data deduplication (dedup) operation can be performed on the at least one IO based on the probability. The probability can be less than one hundred percent (100%).
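
    The abstract does not say how the similarity probability is computed. One plausible sketch, offered purely as an assumption: fingerprint each track with a few sampled chunk hashes and treat the fraction of matching samples as the probability estimate, deduplicating once it clears a threshold below 100%. All function names and the sampling scheme are hypothetical.

```python
import hashlib

def sample_hashes(data: bytes, n: int = 4, chunk: int = 512):
    """Hash n evenly spaced chunks of a track as a cheap fingerprint."""
    step = max(len(data) // n, 1)
    return [hashlib.sha256(data[i:i + chunk]).hexdigest()
            for i in range(0, len(data), step)][:n]

def similarity_probability(new_track: bytes, stored_track: bytes) -> float:
    """Estimated probability that two equal-sized tracks hold the same
    data: the fraction of sampled chunks whose hashes agree."""
    a, b = sample_hashes(new_track), sample_hashes(stored_track)
    return sum(x == y for x, y in zip(a, b)) / max(len(a), 1)

def should_dedup(new_track, stored_track, threshold=0.75):
    # Dedup proceeds on a probability below 100%, per the abstract.
    return similarity_probability(new_track, stored_track) >= threshold
```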

    DATA DEDUPLICATION LATENCY REDUCTION

    Publication (Announcement) No.: US20230027284A1

    Publication (Announcement) Date: 2023-01-26

    Application No.: US17382447

    Application Date: 2021-07-22

    Abstract: Aspects of the present disclosure relate to reducing the latency of data deduplication. In embodiments, an input/output (IO) workload received by a storage array is monitored. Further, at least one IO write operation in the IO workload is identified. A space-efficient probabilistic data structure is used to determine if a director board is associated with the IO write. Additionally, the IO write operation is processed based on the determination.
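
    The "space-efficient probabilistic data structure" is most plausibly a Bloom filter, though the abstract does not name one, so treat this as an assumption. In the sketch, each director board keeps a filter of data fingerprints it has seen, so an incoming IO write can be steered without an exact lookup; false positives are possible, false negatives are not.

```python
import hashlib

class BloomFilter:
    """Space-efficient probabilistic set membership."""
    def __init__(self, size_bits: int = 1 << 16, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.num_hashes):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# One filter per director board: if the write's fingerprint may already be
# there, route the IO to that board for deduplication.
board_filter = BloomFilter()
board_filter.add(b"fingerprint-of-write")
print(b"fingerprint-of-write" in board_filter)  # True
print(b"never-seen" in board_filter)            # almost certainly False
```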

    Weighted resource cost matrix scheduler

    Publication (Announcement) No.: US11513849B2

    Publication (Announcement) Date: 2022-11-29

    Application No.: US17380164

    Application Date: 2021-07-20

    Inventor: Ramesh Doddaiah

    Abstract: A scheduler for a storage node uses multi-dimensional weighted resource cost matrices to schedule processing of IOs. A separate matrix is created for each computing node of the storage node via machine learning or regression analysis. Each matrix includes distinct dimensions for each emulation of the computing node for which the matrix is created. Each dimension includes modeled costs, in terms of amounts of resources of various types, required to process an IO of each IO type. An IO received from a host by a computing node is not scheduled for processing by that computing node unless sufficient resources are available at each emulation of that computing node. If any emulation lacks sufficient resources, the IO is forwarded to a different computing node that has sufficient resources at each of its emulations. A weighted resource cost for processing the IO, calculated using the weights or regression coefficients from the model, is used to determine scheduling priority.
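
    A sketch of the admission and priority logic, assuming the per-node matrix is a nested mapping from IO type to per-emulation, per-resource costs; the data layout and function names are illustrative, and the weights stand in for the learned regression coefficients.

```python
def can_schedule(io_type, cost_matrix, available):
    """cost_matrix: {io_type: {emulation: {resource: cost}}} for one node.
    available:    {emulation: {resource: free amount}} on the same node.
    True only if every emulation can cover its modeled cost."""
    for emulation, needs in cost_matrix[io_type].items():
        for resource, amount in needs.items():
            if available.get(emulation, {}).get(resource, 0) < amount:
                return False
    return True

def weighted_cost(io_type, cost_matrix, weights):
    """Collapse the multi-dimensional cost into one scheduling-priority
    scalar using the model's learned weights."""
    return sum(weights.get(resource, 1.0) * amount
               for needs in cost_matrix[io_type].values()
               for resource, amount in needs.items())
```

    An IO failing `can_schedule` on its receiving node would then be forwarded to a peer computing node whose matrix and current availability satisfy it.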

    Adjusting host quality of service metrics based on storage system performance

    Publication (Announcement) No.: US11494283B2

    Publication (Announcement) Date: 2022-11-08

    Application No.: US16865458

    Application Date: 2020-05-04

    Inventor: Ramesh Doddaiah

    Abstract: A storage system has a QOS recommendation engine that monitors storage system operational parameters and generates recommended changes to host QOS metrics (throughput, bandwidth, and response time requirements) based on differences between the host QOS metrics and storage system operational parameters. The recommended host QOS metrics may be automatically implemented to adjust the host QOS metrics. By reducing host QOS metrics during times when the storage system is experiencing high workload volumes, workload can be throttled at the host computer rather than requiring the storage system to expend processing resources queueing the workload prior to processing. This can increase the overall throughput of the storage system. When the workload on the storage system is reduced, updated recommended host QOS metrics are provided so the host QOS metrics can increase. Historical analysis is also used to generate recommended host QOS metrics.
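
    A sketch of the feedback loop, assuming utilization in [0, 1] as the trigger and a fixed step for nudging the host's throughput and bandwidth limits. The thresholds, step size, and metric set are assumptions; the abstract also mentions response time and historical analysis, omitted here for brevity.

```python
def recommend_qos(host_qos, system_load, high=0.8, low=0.4, step=0.1):
    """Recommend new host QOS limits from current storage-system load.

    host_qos:    {"throughput": ..., "bandwidth": ...} current host limits
    system_load: storage-system utilization in [0, 1]
    """
    rec = dict(host_qos)
    if system_load > high:      # busy array: throttle at the host instead
        factor = 1 - step
    elif system_load < low:     # load subsided: let host limits recover
        factor = 1 + step
    else:
        return rec              # within band: keep current metrics
    rec["throughput"] = host_qos["throughput"] * factor
    rec["bandwidth"] = host_qos["bandwidth"] * factor
    return rec

print(recommend_qos({"throughput": 10000, "bandwidth": 800.0},
                    system_load=0.9))
```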
