Balanced, Opportunistic Multicore I/O Scheduling From Non-SMP Applications

    Publication Number: US20180113738A1

    Publication Date: 2018-04-26

    Application Number: US15497744

    Filing Date: 2017-04-26

    Applicant: NETAPP, INC.

    CPC classification number: G06F9/5027

    Abstract: A system for dynamically configuring and scheduling input/output (I/O) workloads among processing cores is disclosed. Resources for an application that are related to each other and/or not multicore safe are grouped together into work nodes. When these work nodes need to be executed, they are added to a global queue that is accessible by all of the processing cores. Any processing core that becomes available can pull and process the next available work node through to completion, so that all of the work associated with that work node is completed by the same core, without requiring additional protections for resources that are not multicore safe. Indexes track both the next work node in the global queue to be processed and the next location in the global queue where new work nodes are added for subsequent processing.
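    A minimal plain-C sketch of the queue-and-index scheme described in this abstract is given below. The identifiers (work_node, global_queue, submit_idx, dispatch_idx), the fixed ring size, and the single-threaded demo are illustrative assumptions rather than terms from the patent; a real multicore scheduler would update the two indexes atomically.

        /* Hypothetical sketch of a global work-node queue with two indexes.
         * Names, the ring size, and the single-threaded demo are assumptions;
         * a real multicore scheduler would use atomic index updates. */
        #include <stdio.h>

        #define QUEUE_SLOTS 64

        typedef struct {
            int app_id;             /* resources grouped for one non-SMP application */
            void (*run)(int);       /* work executed to completion by a single core  */
        } work_node;

        typedef struct {
            work_node nodes[QUEUE_SLOTS];
            unsigned submit_idx;    /* next location for new work nodes to be added */
            unsigned dispatch_idx;  /* next work node available to any free core    */
        } global_queue;

        /* Application side: group related or non-multicore-safe work into a node
         * and add it to the global queue at the submit index. */
        static void queue_add(global_queue *q, work_node node)
        {
            q->nodes[q->submit_idx % QUEUE_SLOTS] = node;
            q->submit_idx++;        /* atomic increment in a real implementation */
        }

        /* Core side: any processing core that becomes available pulls the node at
         * the dispatch index and runs it through to completion, so resources that
         * are not multicore safe never span cores. */
        static int queue_pull_and_run(global_queue *q)
        {
            if (q->dispatch_idx == q->submit_idx)
                return 0;                             /* nothing queued */
            work_node n = q->nodes[q->dispatch_idx % QUEUE_SLOTS];
            q->dispatch_idx++;      /* atomic increment in a real implementation */
            n.run(n.app_id);
            return 1;
        }

        static void demo_work(int app_id) { printf("ran work node for app %d\n", app_id); }

        int main(void)
        {
            global_queue q = { .submit_idx = 0, .dispatch_idx = 0 };
            queue_add(&q, (work_node){ .app_id = 1, .run = demo_work });
            queue_add(&q, (work_node){ .app_id = 2, .run = demo_work });
            while (queue_pull_and_run(&q))
                ;                   /* an idle core would loop like this */
            return 0;
        }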

    Balanced, opportunistic multicore I/O scheduling from non-SMP applications

    Publication Number: US10826848B2

    Publication Date: 2020-11-03

    Application Number: US15497744

    Filing Date: 2017-04-26

    Applicant: NETAPP, INC.

    Abstract: A system for dynamically configuring and scheduling input/output (I/O) workloads among processing cores is disclosed. Resources for an application that are related to each other and/or not multicore safe are grouped together into work nodes. When these work nodes need to be executed, they are added to a global queue that is accessible by all of the processing cores. Any processing core that becomes available can pull and process the next available work node through to completion, so that all of the work associated with that work node is completed by the same core, without requiring additional protections for resources that are not multicore safe. Indexes track both the next work node in the global queue to be processed and the next location in the global queue where new work nodes are added for subsequent processing.

    Managing input/output operations for shingled magnetic recording in a storage system

    Publication Number: US10521345B2

    Publication Date: 2019-12-31

    Application Number: US16113719

    Filing Date: 2018-08-27

    Applicant: NetApp, Inc.

    Abstract: A system and method for improving the management of data input and output (I/O) operations for Shingled Magnetic Recording (SMR) devices in a network storage system is disclosed. The storage system includes a storage controller that receives a series of write requests for data blocks to be written to non-sequential addresses within a pool of SMR devices. The storage controller writes the data blocks from the series of write requests to a corresponding sequence of data clusters allocated within a first data cache of the storage controller for a thinly provisioned volume of the pool of SMR devices. Upon determining that the current utilization of the first data cache's data storage capacity exceeds a threshold, the sequence of data clusters including the data blocks is transferred from the first data cache to sequential physical addresses within the SMR devices.
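    A rough plain-C sketch of the write path described in this abstract is given below. The cluster size, flush threshold, and identifiers (write_cache, cached_cluster, smr_write_ptr) are illustrative assumptions; the point is only that randomly addressed writes are buffered as a sequence of data clusters and destaged to strictly sequential SMR physical addresses once cache utilization crosses a threshold.

        /* Hypothetical sketch of threshold-driven destaging to an SMR pool.
         * Sizes, threshold, and names are assumptions, not terms from the patent. */
        #include <stdio.h>
        #include <string.h>

        #define CLUSTER_BYTES   16
        #define CACHE_CLUSTERS  8
        #define FLUSH_THRESHOLD 6   /* flush once this many clusters are in use */

        typedef struct {
            unsigned long long virt_addr;            /* non-sequential address from host */
            char data[CLUSTER_BYTES];
        } cached_cluster;

        typedef struct {
            cached_cluster clusters[CACHE_CLUSTERS]; /* allocated in arrival order */
            int used;
        } write_cache;

        static unsigned long long smr_write_ptr = 0; /* next sequential physical address */

        /* Buffer a randomly addressed write into the next cluster of the data cache. */
        static void cache_write(write_cache *c, unsigned long long virt_addr, const char *buf)
        {
            cached_cluster *cl = &c->clusters[c->used++];
            cl->virt_addr = virt_addr;
            memcpy(cl->data, buf, CLUSTER_BYTES);
        }

        /* Destage the cached cluster sequence to sequential SMR physical addresses. */
        static void cache_flush(write_cache *c)
        {
            for (int i = 0; i < c->used; i++) {
                printf("virt 0x%llx -> SMR phys 0x%llx\n",
                       c->clusters[i].virt_addr, smr_write_ptr);
                smr_write_ptr += CLUSTER_BYTES;      /* strictly sequential on the device */
            }
            c->used = 0;
        }

        int main(void)
        {
            write_cache cache = { .used = 0 };
            unsigned long long random_addrs[] = { 0x9000, 0x100, 0x4c00, 0x40, 0x7700, 0x2300, 0x500 };
            char payload[CLUSTER_BYTES] = "host data";

            for (int i = 0; i < 7; i++) {
                cache_write(&cache, random_addrs[i], payload);
                if (cache.used >= FLUSH_THRESHOLD)   /* utilization exceeds threshold */
                    cache_flush(&cache);
            }
            cache_flush(&cache);                     /* drain the remainder */
            return 0;
        }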

    Cache flushing and interrupted write handling in storage systems

    Publication Number: US10235288B2

    Publication Date: 2019-03-19

    Application Number: US14874157

    Filing Date: 2015-10-02

    Applicant: NetApp, Inc.

    Abstract: Systems and techniques for cache management are disclosed that provide improved cache performance by prioritizing particular storage stripes for cache flush operations. The systems and techniques may also leverage features of the storage devices to provide atomicity without the overhead of inter-controller mirroring. In some embodiments, the systems and techniques include a storage controller that stores data in a cache. The data is associated with one or more sectors of a storage stripe that is defined over a plurality of storage devices. The storage controller identifies a locality of dirty sectors among the one or more sectors, classifies the storage stripe into a category based on the locality, provides a category ordering of the category relative to at least one other category, and flushes the storage stripe from the cache to the plurality of storage devices according to the category ordering.
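    A small plain-C sketch of locality-based flush ordering is given below. The three categories, the locality rule (number of contiguous dirty runs), and all identifiers are illustrative assumptions; the abstract does not define the exact categories or ordering, only that stripes are classified by the locality of their dirty sectors and flushed according to a category ordering.

        /* Hypothetical sketch: classify stripes by dirty-sector locality and
         * flush by category order. Categories and the rule are assumptions. */
        #include <stdio.h>

        #define SECTORS_PER_STRIPE 8
        #define NUM_STRIPES 4

        typedef enum { FULL_STRIPE = 0, CONTIGUOUS_PARTIAL = 1, SCATTERED = 2 } stripe_category;

        typedef struct {
            int id;
            int dirty[SECTORS_PER_STRIPE];  /* 1 = dirty sector cached, 0 = clean */
        } stripe;

        /* Classify a stripe from the locality of its dirty sectors. */
        static stripe_category classify(const stripe *s)
        {
            int dirty_count = 0, runs = 0, in_run = 0;
            for (int i = 0; i < SECTORS_PER_STRIPE; i++) {
                if (s->dirty[i]) {
                    dirty_count++;
                    if (!in_run) { runs++; in_run = 1; }
                } else {
                    in_run = 0;
                }
            }
            if (dirty_count == SECTORS_PER_STRIPE) return FULL_STRIPE;
            if (runs <= 1)                         return CONTIGUOUS_PARTIAL;
            return SCATTERED;
        }

        int main(void)
        {
            stripe stripes[NUM_STRIPES] = {
                { 0, {1,1,1,1,1,1,1,1} },   /* fully dirty   */
                { 1, {0,0,1,1,1,0,0,0} },   /* one dirty run */
                { 2, {1,0,1,0,0,1,0,1} },   /* scattered     */
                { 3, {0,1,1,0,0,0,1,1} },   /* two runs      */
            };

            /* Category ordering: flush full stripes first, scattered stripes last. */
            for (stripe_category cat = FULL_STRIPE; cat <= SCATTERED; cat++)
                for (int i = 0; i < NUM_STRIPES; i++)
                    if (classify(&stripes[i]) == cat)
                        printf("flush stripe %d (category %d)\n", stripes[i].id, cat);
            return 0;
        }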

    Cache Flushing and Interrupted Write Handling in Storage Systems

    Publication Number: US20170097886A1

    Publication Date: 2017-04-06

    Application Number: US14874157

    Filing Date: 2015-10-02

    Applicant: NetApp, Inc.

    Abstract: Systems and techniques for cache management are disclosed that provide improved cache performance by prioritizing particular storage stripes for cache flush operations. The systems and techniques may also leverage features of the storage devices to provide atomicity without the overhead of inter-controller mirroring. In some embodiments, the systems and techniques include a storage controller that stores data in a cache. The data is associated with one or more sectors of a storage stripe that is defined over a plurality of storage devices. The storage controller identifies a locality of dirty sectors among the one or more sectors, classifies the storage stripe into a category based on the locality, provides a category ordering of the category relative to at least one other category, and flushes the storage stripe from the cache to the plurality of storage devices according to the category ordering.
