Cache affinity and processor utilization technique

    Publication Number: US10162686B2

    Publication Date: 2018-12-25

    Application Number: US15806852

    Application Date: 2017-11-08

    Applicant: NetApp, Inc.

    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
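
    The abstract describes two cooperating ideas: keeping non-blocking message handlers on cores that share a last level cache, and reserving a set of logical processors for the messaging-kernel scheduler. The sketch below illustrates only the intra-/inter-LLC preference; the class names, load counters, and pick_core helper are assumptions for illustration, not the patented implementation.

```python
# Minimal sketch (not the patented implementation) of intra-/inter-LLC load
# balancing over logical processors reserved for non-blocking services.
# Names and data shapes (Core, LLCDomain, pick_core) are assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Core:
    cpu_id: int
    load: int = 0          # queued non-blocking message handlers

@dataclass
class LLCDomain:
    cores: List[Core] = field(default_factory=list)

def pick_core(domains: List[LLCDomain], origin_cpu: int) -> Core:
    """Prefer an idle core sharing the origin's LLC (intra-LLC);
    otherwise fall back to the least-loaded core overall (inter-LLC)."""
    home = next(d for d in domains if any(c.cpu_id == origin_cpu for c in d.cores))
    local = min(home.cores, key=lambda c: c.load)
    if local.load == 0:                      # cache-warm and idle: keep it local
        return local
    every_core = [c for d in domains for c in d.cores]
    return min(every_core, key=lambda c: c.load)

# Example: 2 LLCs x 2 cores; the MK scheduler owns these, the OS keeps the rest.
domains = [LLCDomain([Core(0), Core(1)]), LLCDomain([Core(2), Core(3)])]
domains[0].cores[0].load = 3
domains[0].cores[1].load = 2
print(pick_core(domains, origin_cpu=0).cpu_id)   # 2 -> spills to the other LLC
```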

    Online backup to an object service using bulk export

    Publication Number: US10127117B2

    Publication Date: 2018-11-13

    Application Number: US15820586

    Application Date: 2017-11-22

    Applicant: NetApp, Inc.

    Abstract: A system and method for improving storage system performance by maintaining data integrity during bulk export to a cloud system is provided. A backup host reads a selected volume from the storage system via an I/O channel. The storage system remains online during bulk export and tracks I/O to the selected volume in a tracking log. The backup host compresses, encrypts, and calculates a checksum for each data block of the volume before writing a corresponding data object to export devices and sending a checksum data object to the cloud system. The devices are shipped to the cloud system, which imports the data objects and calculates a checksum for each. The storage system compares the imported checksums with the checksums in the checksum data object, and adds data blocks to the tracking log when errors are detected. An incremental backup is performed based on the contents of the tracking log.
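
    As a rough illustration of the per-block checksum flow the abstract describes (compress, checksum, export, verify after import, log mismatches for incremental backup), here is a minimal sketch. The function names and the use of zlib/sha256 are assumptions; encryption and export-device handling are omitted.

```python
# Minimal sketch (assumed flow, not NetApp's implementation) of per-block
# checksumming during bulk export and checksum verification after import.

import hashlib, json, zlib

def export_volume(blocks):
    """Return (data_objects, checksum_object) for a list of raw data blocks."""
    data_objects, checksums = [], {}
    for idx, block in enumerate(blocks):
        payload = zlib.compress(block)            # compression (encryption omitted)
        checksums[idx] = hashlib.sha256(payload).hexdigest()
        data_objects.append(payload)
    return data_objects, json.dumps(checksums)

def verify_import(imported_objects, checksum_object, tracking_log):
    """Compare checksums computed on the import side; log blocks that differ."""
    expected = json.loads(checksum_object)
    for idx, payload in enumerate(imported_objects):
        if hashlib.sha256(payload).hexdigest() != expected[str(idx)]:
            tracking_log.add(idx)                 # re-sent by the incremental backup

blocks = [b"block-0" * 512, b"block-1" * 512]
objs, sums = export_volume(blocks)
log = set()
objs[1] = b"corrupted in transit"
verify_import(objs, sums, log)
print(log)   # {1}
```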

    SYSTEMS AND METHODS FOR EXECUTING PROCESSOR EXECUTABLE APPLICATIONS

    Publication Number: US20180324216A1

    Publication Date: 2018-11-08

    Application Number: US15588402

    Application Date: 2017-05-05

    Applicant: NETAPP, INC.

    CPC classification number: H04L63/20 G06F21/6218 H04L63/08

    Abstract: Methods and systems for executing an application by a computing device are provided. One method includes generating an operating policy for a processor executable application based on a licensing term; associating an identifier for storing the operating policy in a data structure external to the application; providing the operating policy to the application using an application programming interface (API) for controlling execution of the application; and executing the application using the operating policy.
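
    A hypothetical sketch of the described pattern, where an operating policy derived from a licensing term is stored outside the application under an identifier and retrieved through a small API that gates execution, might look like the following. PolicyStore, get_policy, and run_app are illustrative names, not the claimed API.

```python
# Hypothetical sketch: an operating policy generated from a licensing term,
# kept in a data structure external to the application, and provided to the
# application through an API that controls its execution.

LICENSE_TERMS = {"trial": {"max_nodes": 2}, "enterprise": {"max_nodes": 64}}

class PolicyStore:
    """Data structure external to the application, keyed by an identifier."""
    def __init__(self):
        self._policies = {}

    def register(self, app_id: str, licensing_term: str):
        self._policies[app_id] = LICENSE_TERMS[licensing_term]

    def get_policy(self, app_id: str) -> dict:          # the API the app calls
        return self._policies[app_id]

def run_app(app_id: str, requested_nodes: int, store: PolicyStore):
    policy = store.get_policy(app_id)
    if requested_nodes > policy["max_nodes"]:
        raise PermissionError("operating policy forbids this configuration")
    print(f"{app_id} running on {requested_nodes} node(s)")

store = PolicyStore()
store.register("app-123", "trial")
run_app("app-123", 2, store)       # allowed under the trial policy
# run_app("app-123", 8, store)     # would raise PermissionError
```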

    EFFICIENT DISTRIBUTED SCHEDULER FOR A DATA PARTITIONED SYSTEM

    Publication Number: US20180314551A1

    Publication Date: 2018-11-01

    Application Number: US15583932

    Application Date: 2017-05-01

    Applicant: NETAPP, INC.

    Abstract: Presented herein are methods, non-transitory computer readable media, and devices for optimizing thread assignment to schedulers, avoiding starvation of individual data partitions, and maximizing parallelism in the presence of hierarchical data partitioning. These include: partitioning, by a network storage server, a scheduler servicing a data partitioned system into a plurality of autonomous schedulers; determining what fraction of thread resources in the data partitioned system at least one of the plurality of autonomous schedulers is to receive; and determining, with minimal synchronization, when it is time to allow the at least one of the plurality of autonomous schedulers servicing a coarse hierarchy to run.
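
    The following sketch illustrates, under assumed behavior, how one scheduler might be split into autonomous per-partition schedulers, each receiving a fraction of the thread pool proportional to its pending work, with a coarse-hierarchy scheduler allowed to run only when spare threads exist. The proportional-share rule and the names are illustrative, not taken from the claims.

```python
# Rough sketch (assumed behaviour) of splitting one scheduler into autonomous
# per-partition schedulers and giving each a fraction of the thread pool in
# proportion to its pending work, so that no partition starves.

from dataclasses import dataclass

@dataclass
class AutonomousScheduler:
    name: str
    pending: int        # queued work items for this partition
    coarse: bool = False

def thread_shares(schedulers, total_threads):
    """Proportional share of threads; every non-empty partition gets >= 1."""
    total = sum(s.pending for s in schedulers) or 1
    shares = {}
    for s in schedulers:
        share = round(total_threads * s.pending / total)
        shares[s.name] = max(1, share) if s.pending else 0
    return shares

def coarse_may_run(schedulers, idle_threads):
    """Let a scheduler servicing the coarse hierarchy run only on idle threads."""
    return idle_threads > 0 and any(s.coarse and s.pending for s in schedulers)

scheds = [AutonomousScheduler("aggr0", 30),
          AutonomousScheduler("aggr1", 10),
          AutonomousScheduler("cluster", 5, coarse=True)]
print(thread_shares(scheds, total_threads=16))
print(coarse_may_run(scheds, idle_threads=2))
```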

    Resource allocation in networked storage systems

    Publication Number: US10078473B2

    Publication Date: 2018-09-18

    Application Number: US15057952

    Application Date: 2016-03-01

    Applicant: NETAPP, INC.

    Abstract: Methods and systems for a storage environment are provided. A policy for an input/output (I/O) stream having a plurality of I/O requests for accessing storage at a storage device of the storage sub-system is translated into flow attributes so that the I/O stream can be assigned to one of a plurality of queues maintained for placing I/O requests based on varying priorities defined by set policies. When an I/O request for the associated policy is received by the storage sub-system, the storage sub-system determines a flow attribute associated with the I/O request and the policy; selects a queue for staging the I/O request, such that the selected queue is of either higher priority than what is indicated by the flow attribute or at least of the same priority as indicated by the flow attribute; and allocates storage sub-system resources for processing the received I/O request.
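
    A minimal sketch of the queue-selection rule described (stage the request in a queue of the same or higher priority than the flow attribute indicates) is shown below. The mapping from a latency target to a flow attribute and the queue-depth limit are assumptions made for illustration.

```python
# Minimal sketch (assumed mapping, not the patented one) of translating an
# I/O policy into a flow attribute and staging the request in a queue whose
# priority is the same as, or higher than, the attribute indicates.

from collections import deque

PRIORITIES = ["high", "medium", "low"]            # index 0 = highest priority
QUEUES = {p: deque() for p in PRIORITIES}

def flow_attribute(policy: dict) -> str:
    """Translate a policy (here, a latency target) into a flow attribute."""
    latency = policy.get("latency_ms", 100)
    return "high" if latency <= 5 else "medium" if latency <= 20 else "low"

def stage_request(io_request: dict, policy: dict, queue_depth_limit: int = 8):
    attr = flow_attribute(policy)
    # Walk from the indicated priority toward higher ones; never downgrade.
    for prio in reversed(PRIORITIES[:PRIORITIES.index(attr) + 1]):
        if len(QUEUES[prio]) < queue_depth_limit or prio == "high":
            QUEUES[prio].append(io_request)
            return prio

print(stage_request({"lba": 0, "len": 8}, {"latency_ms": 15}))   # 'medium'
```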

    Techniques for estimating ability of nodes to support high availability functionality in a storage cluster system

    Publication Number: US10031822B2

    Publication Date: 2018-07-24

    Application Number: US15141357

    Application Date: 2016-04-28

    Applicant: NETAPP, INC.

    Abstract: Various embodiments are generally directed to techniques for determining whether one node of a high-availability (HA) group is able to take over for another. An apparatus includes a model derivation component to derive, from a first model of a first node of a storage cluster system and a second model of a second node of the storage cluster system, a model correlating node usage level to node data propagation latency and to node resource utilization, where the first model is based on a first usage level of the first node under a first usage type and the second model is based on a second usage level of the second node under a second usage type; and an analysis component to determine whether the first node is able to take over for the second node based on applying to the derived model a total usage level derived from the first and second usage levels.
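
    As an illustration of applying a combined usage level to a derived per-node model, the sketch below uses simple linear models, which is purely an assumption; the actual modeling is not specified in the abstract. It asks whether the surviving node could absorb both nodes' usage within utilization and latency caps.

```python
# Illustrative sketch (simple linear models, an assumption) of combining two
# per-node usage models and asking whether node 1 could take over for node 2.

from dataclasses import dataclass

@dataclass
class NodeModel:
    usage_level: float             # observed load, e.g. IOPS
    util_per_unit: float           # fraction of node resources per unit of load
    latency_per_unit: float        # ms of data-propagation latency per unit

def can_take_over(surviving: NodeModel, failed: NodeModel,
                  util_cap: float = 0.9, latency_cap_ms: float = 5.0) -> bool:
    """Apply the combined usage level to the surviving node's derived model."""
    total_usage = surviving.usage_level + failed.usage_level
    projected_util = total_usage * surviving.util_per_unit
    projected_latency = total_usage * surviving.latency_per_unit
    return projected_util <= util_cap and projected_latency <= latency_cap_ms

node1 = NodeModel(usage_level=40_000, util_per_unit=1.0e-5, latency_per_unit=5e-5)
node2 = NodeModel(usage_level=30_000, util_per_unit=1.2e-5, latency_per_unit=6e-5)
print(can_take_over(node1, node2))   # True: 0.7 utilization, 3.5 ms projected
```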
