-
Publication number: US20180365145A1
Publication date: 2018-12-20
Application number: US16113719
Application date: 2018-08-27
Applicant: NetApp, Inc.
Inventor: Mahmoud K. Jibbe , Keith Holt , Scott Terrill
IPC: G06F12/0802 , G06F12/10 , G11B20/10 , G11B5/012 , G06F3/06
CPC classification number: G06F12/0802 , G06F3/067 , G06F12/10 , G06F2212/1016 , G06F2212/202 , G06F2212/604 , G11B5/012 , G11B20/10527 , G11B2020/10657 , G11B2020/10675
Abstract: A system and method for improving the management of data input and output (I/O) operations for Shingled Magnetic Recording (SMR) devices in a network storage system is disclosed. The storage system includes a storage controller that receives a series of write requests for data blocks to be written to non-sequential addresses within a pool of SMR devices. The storage controller writes the data blocks from the series of write requests to a corresponding sequence of data clusters allocated within a first data cache of the storage controller for a thinly provisioned volume of the pool of SMR devices. Upon determining that a current utilization of the first data cache's data storage capacity exceeds a threshold, the sequence of data clusters including the data blocks from the first data cache is transferred to sequential physical addresses within the SMR devices.
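The write path described above lends itself to a short illustration. The Python sketch below is a minimal, hypothetical rendering of that flow, assuming invented names (SmrWriteCache, SmrPool, CacheCluster) and an illustrative 80% utilization threshold; it shows only the idea of accumulating scattered writes as a sequence of cache clusters and destaging them to sequential physical addresses once cache utilization crosses a threshold, not the patented implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CacheCluster:
    """A cache-resident cluster holding one data block bound for the SMR pool."""
    logical_addr: int          # the non-sequential address requested by the host
    data: bytes


class SmrPool:
    """Stand-in for the pool of SMR devices; writes land at sequential addresses."""
    def __init__(self) -> None:
        self.next_physical_addr = 0
        self.mapping: Dict[int, int] = {}   # logical address -> physical address

    def write_sequential(self, clusters: List[CacheCluster]) -> None:
        for cluster in clusters:
            self.mapping[cluster.logical_addr] = self.next_physical_addr
            self.next_physical_addr += len(cluster.data)


@dataclass
class SmrWriteCache:
    """First data cache for the thinly provisioned volume."""
    capacity_bytes: int
    flush_threshold: float = 0.8            # assumed threshold, e.g. 80% utilization
    clusters: List[CacheCluster] = field(default_factory=list)
    used_bytes: int = 0

    def write(self, logical_addr: int, data: bytes, pool: SmrPool) -> None:
        # Every incoming write becomes the next cluster in the sequence,
        # no matter how scattered its logical address is.
        self.clusters.append(CacheCluster(logical_addr, data))
        self.used_bytes += len(data)
        if self.used_bytes / self.capacity_bytes > self.flush_threshold:
            self.flush(pool)

    def flush(self, pool: SmrPool) -> None:
        # Destage the accumulated sequence to sequential physical addresses.
        pool.write_sequential(self.clusters)
        self.clusters.clear()
        self.used_bytes = 0
```

Appending every write as the next cluster is what lets the eventual flush be purely sequential, which is the access pattern shingled zones require.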
-
Publication number: US10664412B2
Publication date: 2020-05-26
Application number: US15796413
Application date: 2017-10-27
Applicant: NETAPP, INC.
Inventor: Mahmoud K. Jibbe , Dean Lang , Scott Terrill , Matthew Buller , Jeffery Fowler
IPC: G06F12/12 , G06F12/128 , G06F12/0871
Abstract: Systems and methods that select a cache flushing algorithm are provided. A stripe that spans multiple storage devices and includes a plurality of segments is provided. The stripe also includes dirty data stored in a picket-fence pattern in at least a subset of segments in the plurality of segments. A memory cache that stores data separately from the plurality of storage devices and a metadata cache that stores metadata associated with the dirty data are also provided. A cache flushing algorithm is selected using the metadata. The selected cache flushing algorithm flushes data from the memory cache to the stripe.
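The selection step can be sketched in a few lines. In the sketch below, the strategy names, the transition-counting heuristic, and the threshold are all assumptions made for illustration; the abstract does not say which algorithms are available or how the metadata is weighed.

```python
from typing import List


def select_flush_algorithm(dirty_segment_bitmap: List[bool]) -> str:
    """Pick a flush strategy from per-stripe metadata (a dirty-segment bitmap)."""
    dirty = sum(dirty_segment_bitmap)
    total = len(dirty_segment_bitmap)
    # Count dirty/clean transitions; many transitions indicate the
    # "picket-fence" pattern mentioned in the abstract.
    transitions = sum(
        1 for a, b in zip(dirty_segment_bitmap, dirty_segment_bitmap[1:]) if a != b
    )
    if dirty == total:
        return "full-stripe-write"      # everything is dirty: rewrite the stripe
    if transitions > total // 2:
        return "read-modify-write"      # picket fence: patch segments individually
    return "partial-stripe-write"       # a few contiguous dirty runs


# Alternating dirty/clean segments (a picket fence) select read-modify-write.
print(select_flush_algorithm([True, False, True, False, True, False]))
```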
-
Publication number: US20180113738A1
Publication date: 2018-04-26
Application number: US15497744
Application date: 2017-04-26
Applicant: NETAPP, INC.
Inventor: Charles E. Nichols , Scott Terrill , Don Humlicek , Arindam Banerjee , Yulu Diao , Anthony D. Gitchell
CPC classification number: G06F9/5027
Abstract: A system for dynamically configuring and scheduling input/output (I/O) workloads among processing cores is disclosed. Resources for an application that are related to each other and/or not multicore safe are grouped together into work nodes. When these work nodes need to be executed, they are added to a global queue that is accessible by all of the processing cores. Any processing core that becomes available can pull and process the next available work node through to completion, so that all of the work associated with that work node software object is completed by the same core, without requiring additional protections for resources that are not multicore safe. Indexes track the location of both the next work node in the global queue for processing and the next location in the global queue for new work nodes to be added for subsequent processing.
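A rough sketch of the global-queue mechanism follows. The ring-buffer layout, the lock, and every name in it are illustrative assumptions rather than the claimed design; the point is simply that two indexes track where the next work node is pulled from and where new work nodes are added, and that any free core runs a pulled node to completion.

```python
import threading
from typing import Callable, List, Optional

QUEUE_SIZE = 1024   # assumed fixed size for the illustration


class GlobalWorkQueue:
    def __init__(self) -> None:
        self.slots: List[Optional[Callable[[], None]]] = [None] * QUEUE_SIZE
        self.next_to_process = 0      # index of the next work node a core pulls
        self.next_to_add = 0          # index where new work nodes are enqueued
        self.lock = threading.Lock()  # stand-in for whatever protects the queue itself

    def add(self, work_node: Callable[[], None]) -> None:
        with self.lock:
            self.slots[self.next_to_add % QUEUE_SIZE] = work_node
            self.next_to_add += 1

    def pull(self) -> Optional[Callable[[], None]]:
        with self.lock:
            if self.next_to_process == self.next_to_add:
                return None           # nothing pending
            node = self.slots[self.next_to_process % QUEUE_SIZE]
            self.next_to_process += 1
            return node


def core_loop(queue: GlobalWorkQueue) -> None:
    # Any available core runs this loop: pull a work node and run it to
    # completion, so all resources grouped in that node stay on one core.
    while (node := queue.pull()) is not None:
        node()


# Example: two work nodes queued, then drained by one "core".
q = GlobalWorkQueue()
q.add(lambda: print("work node A"))
q.add(lambda: print("work node B"))
core_loop(q)
```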
-
Publication number: US10826848B2
Publication date: 2020-11-03
Application number: US15497744
Application date: 2017-04-26
Applicant: NETAPP, INC.
Inventor: Charles E. Nichols , Scott Terrill , Don Humlicek , Arindam Banerjee , Yulu Diao , Anthony D. Gitchell
IPC: G06F9/46 , H04L12/861 , G06F9/50 , G06F12/0842 , G06F9/48
Abstract: A system for dynamically configuring and scheduling input/output (I/O) workloads among processing cores is disclosed. Resources for an application that are related to each other and/or not multicore safe are grouped together into work nodes. When these work nodes need to be executed, they are added to a global queue that is accessible by all of the processing cores. Any processing core that becomes available can pull and process the next available work node through to completion, so that all of the work associated with that work node software object is completed by the same core, without requiring additional protections for resources that are not multicore safe. Indexes track the location of both the next work node in the global queue for processing and the next location in the global queue for new work nodes to be added for subsequent processing.
-
Publication number: US20170315913A1
Publication date: 2017-11-02
Application number: US15459866
Application date: 2017-03-15
Applicant: NetApp, Inc.
Inventor: Mahmoud K. Jibbe , Keith Holt , Scott Terrill
IPC: G06F12/0802 , G06F12/10 , G11B5/012 , G11B20/12
CPC classification number: G06F12/0802 , G06F3/067 , G06F12/10 , G06F2212/1016 , G06F2212/202 , G06F2212/604 , G11B5/012 , G11B20/10527 , G11B2020/10657 , G11B2020/10675
Abstract: A system and method for improving the management of data input and output (I/O) operations for Shingled Magnetic Recording (SMR) devices in a network storage system is disclosed. The storage system includes a storage controller that receives a series of write requests for data blocks to be written to non-sequential addresses within a pool of SMR devices. The storage controller writes the data blocks from the series of write requests to a corresponding sequence of data clusters allocated within a first data cache of the storage controller for a thinly provisioned volume of the pool of SMR devices. Upon determining that a current utilization of the first data cache's data storage capacity exceeds a threshold, the sequence of data clusters including the data blocks from the first data cache is transferred to sequential physical addresses within the SMR devices.
-
Publication number: US10521345B2
Publication date: 2019-12-31
Application number: US16113719
Application date: 2018-08-27
Applicant: NetApp, Inc.
Inventor: Mahmoud K Jibbe , Keith Holt , Scott Terrill
IPC: G06F3/06 , G06F12/0802 , G11B5/012 , G06F12/10 , G11B20/10
Abstract: A system and method for improving the management of data input and output (I/O) operations for Shingled Magnetic Recording (SMR) devices in a network storage system is disclosed. The storage system includes a storage controller that receives a series of write requests for data blocks to be written to non-sequential addresses within a pool of SMR devices. The storage controller writes the data blocks from the series of write requests to a corresponding sequence of data clusters allocated within a first data cache of the storage controller for a thinly provisioned volume of the pool of SMR devices. Upon determining that a current utilization of the first data cache's data storage capacity exceeds a threshold, the sequence of data clusters including the data blocks from the first data cache is transferred to sequential physical addresses within the SMR devices.
-
Publication number: US10073774B2
Publication date: 2018-09-11
Application number: US15459866
Application date: 2017-03-15
Applicant: NetApp, Inc.
Inventor: Mahmoud K. Jibbe , Keith Holt , Scott Terrill
CPC classification number: G06F12/0802 , G06F3/067 , G06F12/10 , G06F2212/1016 , G06F2212/202 , G06F2212/604 , G11B5/012 , G11B20/10527 , G11B2020/10657 , G11B2020/10675
Abstract: A system and method for improving the management of data input and output (I/O) operations for Shingled Magnetic Recording (SMR) devices in a network storage system is disclosed. The storage system includes a storage controller that receives a series of write requests for data blocks to be written to non-sequential addresses within a pool of SMR devices. The storage controller writes the data blocks from the series of write requests to a corresponding sequence of data clusters allocated within a first data cache of the storage controller for a thinly provisioned volume of the pool of SMR devices. Upon determining that a current utilization of the first data cache's data storage capacity exceeds a threshold, the sequence of data clusters including the data blocks from the first data cache is transferred to sequential physical addresses within the SMR devices.
-
Publication number: US20190129863A1
Publication date: 2019-05-02
Application number: US15796413
Application date: 2017-10-27
Applicant: NETAPP, INC.
Inventor: Mahmoud K. Jibbe , Dean Lang , Scott Terrill , Matthew Buller , Jeffery Fowler
IPC: G06F12/128 , G06F12/0871
Abstract: Systems and methods that select a cache flushing algorithm are provided. A stripe that spans multiple storage devices and includes a plurality of segments is provided. The stripe also includes dirty data stored in a picket-fence pattern in at least a subset of segments in the plurality of segments. A memory cache that stores data separately from the plurality of storage devices and a metadata cache that stores metadata associated with the dirty data are also provided. A cache flushing algorithm is selected using the metadata. The selected cache flushing algorithm flushes data from the memory cache to the stripe.
-
Publication number: US10235288B2
Publication date: 2019-03-19
Application number: US14874157
Application date: 2015-10-02
Applicant: NetApp, Inc.
Inventor: Arindam Banerjee , Donald R Humlicek , Scott Terrill
IPC: G06F12/00 , G06F12/0804 , G06F12/12 , G06F12/0868 , G06F12/0891 , G06F12/126
Abstract: Systems and techniques for cache management are disclosed that provide improved cache performance by prioritizing particular storage stripes for cache flush operations. The systems and techniques may also leverage features of the storage devices to provide atomicity without the overhead of inter-controller mirroring. In some embodiments, the systems and techniques include a storage controller that stores data in a cache. The data is associated with one or more sectors of a storage stripe that is defined over a plurality of storage devices. The storage controller identifies a locality of dirty sectors of the one or more sectors, classifies the storage stripe into a category based on the locality, provides a category ordering of the category relative to at least one other category, and flushes the storage stripe from the cache to the plurality of storage devices according to the category ordering.
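The locality-based prioritization can be pictured with a small sketch. The category names ("full", "contiguous", "scattered"), the classification rules, and the flush ordering below are assumptions chosen only to illustrate the idea of classifying stripes by dirty-sector locality and flushing them in category order.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Stripe:
    stripe_id: int
    dirty_sectors: List[int]       # indexes of dirty sectors within the stripe
    sectors_per_stripe: int = 128


def classify(stripe: Stripe) -> str:
    """Classify a stripe by the locality of its dirty sectors."""
    if len(stripe.dirty_sectors) == stripe.sectors_per_stripe:
        return "full"                        # the whole stripe is dirty
    span = max(stripe.dirty_sectors) - min(stripe.dirty_sectors) + 1
    if span == len(stripe.dirty_sectors):
        return "contiguous"                  # one dense run of dirty sectors
    return "scattered"                       # dirty sectors with gaps between them


# Category ordering: categories that are cheaper to flush are destaged first.
CATEGORY_ORDER: Dict[str, int] = {"full": 0, "contiguous": 1, "scattered": 2}


def flush_order(stripes: List[Stripe]) -> List[Stripe]:
    """Return stripes in the order they should be flushed to the devices."""
    return sorted(stripes, key=lambda s: CATEGORY_ORDER[classify(s)])


# Example: a scattered stripe, a contiguous one, and a fully dirty one.
stripes = [
    Stripe(1, [0, 40, 90]),
    Stripe(2, list(range(10, 20))),
    Stripe(3, list(range(128))),
]
print([s.stripe_id for s in flush_order(stripes)])   # -> [3, 2, 1]
```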
-
Publication number: US20170097886A1
Publication date: 2017-04-06
Application number: US14874157
Application date: 2015-10-02
Applicant: NetApp, Inc.
Inventor: Arindam Banerjee , Don Humlicek , Scott Terrill
CPC classification number: G06F12/0804 , G06F12/0868 , G06F12/0891 , G06F12/12 , G06F12/126 , G06F2212/604
Abstract: Systems and techniques for cache management are disclosed that provide improved cache performance by prioritizing particular storage stripes for cache flush operations. The systems and techniques may also leverage features of the storage devices to provide atomicity without the overhead of inter-controller mirroring. In some embodiments, the systems and techniques include a storage controller that stores data in a cache. The data is associated with one or more sectors of a storage stripe that is defined over a plurality of storage devices. The storage controller identifies a locality of dirty sectors of the one or more sectors, classifies the storage stripe into a category based on the locality, provides a category ordering of the category relative to at least one other category, and flushes the storage stripe from the cache to the plurality of storage devices according to the category ordering.