CACHE AFFINITY AND PROCESSOR UTILIZATION TECHNIQUE

    Publication No.: US20180067784A1

    Publication Date: 2018-03-08

    Application No.: US15806852

    Filing Date: 2017-11-08

    Applicant: NetApp, Inc.

    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
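
    A minimal Go sketch of the intra-/inter-LLC load balancing described above. The model is assumed, not taken from the patent: logicalProcessor, llcGroup, the per-processor load counter, and the saturation threshold are hypothetical stand-ins. A non-blocking service is placed on the least-loaded logical processor sharing the caller's last level cache, and spills to another LLC only when the home group is saturated.

        // Hypothetical model of intra-/inter-LLC load balancing for
        // non-blocking services; a sketch, not the patented implementation.
        package main

        import "fmt"

        // logicalProcessor models one hardware thread; load counts queued services.
        type logicalProcessor struct {
            id   int
            load int
        }

        // llcGroup models the logical processors sharing one last level cache (LLC).
        type llcGroup struct {
            procs []*logicalProcessor
        }

        // leastLoaded returns the least-loaded logical processor in this LLC group.
        func (g *llcGroup) leastLoaded() *logicalProcessor {
            best := g.procs[0]
            for _, p := range g.procs[1:] {
                if p.load < best.load {
                    best = p
                }
            }
            return best
        }

        // schedule places one service: first within the home LLC group
        // (intra-LLC balancing, preserving cache affinity), falling back to
        // the least-loaded processor anywhere (inter-LLC balancing) once the
        // home group is saturated past the (arbitrary) threshold.
        func schedule(home *llcGroup, all []*llcGroup, threshold int) *logicalProcessor {
            if p := home.leastLoaded(); p.load < threshold {
                p.load++ // intra-LLC: stay near warm cache lines
                return p
            }
            best := home.leastLoaded()
            for _, g := range all {
                if p := g.leastLoaded(); p.load < best.load {
                    best = p // inter-LLC: migrate work to a cooler LLC
                }
            }
            best.load++
            return best
        }

        func main() {
            // Two LLCs with two logical processors each; threshold 4 is arbitrary.
            llc0 := &llcGroup{procs: []*logicalProcessor{{id: 0}, {id: 1}}}
            llc1 := &llcGroup{procs: []*logicalProcessor{{id: 2}, {id: 3}}}
            all := []*llcGroup{llc0, llc1}
            for i := 0; i < 12; i++ {
                fmt.Printf("service %2d -> logical processor %d\n", i, schedule(llc0, all, 4).id)
            }
        }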

    METHOD FOR LOW OVERHEAD, SPACE TRACKING, HIGH PERFORMANCE SNAPSHOTS AND CLONES BY TRANSFER OF EXTENT OWNERSHIP

    Publication No.: US20170315878A1

    Publication Date: 2017-11-02

    Application No.: US15143370

    Filing Date: 2016-04-29

    Applicant: NetApp, Inc.

    Abstract: A technique efficiently manages a snapshot and/or clone by a volume layer of a storage input/output (I/O) stack executing on one or more nodes of a cluster. According to the technique, an ownership attribute is included in the metadata entries of a dense tree data structure for extents, which eliminates otherwise-needed reference count operations for the snapshots and reduces reference count operations for the clones. Illustratively, a copy of a parent dense tree level created by a copy-on-write (COW) operation is referred to as a “derived level”, whereas the existing level of the parent dense tree is referred to as a “source level”. The source level may be persistently linked to the derived level by keeping “level identifying key information” in a respective dense tree source level header. Moreover, two different types of dense tree derivations are defined: a derive relationship and a reverse-derive relationship.
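
    A minimal Go sketch of the ownership-transfer idea above. The types (extentEntry, level, the owner flag, the derived link) are assumed for illustration; the abstract does not publish the dense tree layout. On copy-on-write, the copy becomes the derived level, the source level records the derive relationship in its header, and extent ownership moves instead of reference counts being updated.

        // Hypothetical model of extent ownership transfer during a dense tree
        // level copy-on-write; a sketch, not the patented on-disk format.
        package main

        import "fmt"

        // extentEntry maps an offset range of a LUN to an extent key. The owner
        // flag marks which level owns (and must eventually free) the extent,
        // standing in for per-snapshot reference count updates.
        type extentEntry struct {
            offset    int
            length    int
            extentKey string
            owner     bool
        }

        // level models one dense tree level; headerKey stands in for the
        // "level identifying key information" kept in the source level header,
        // and derived records the derive relationship to the copied level.
        type level struct {
            headerKey string
            derived   string
            entries   []extentEntry
        }

        // copyOnWrite snapshots a source level: the copy is the "derived
        // level", the original stays the "source level", and ownership of the
        // shared extents transfers to the derived level (the direction is an
        // assumption; the abstract leaves it to the implementation).
        func copyOnWrite(source *level, derivedKey string) *level {
            derived := &level{
                headerKey: derivedKey,
                entries:   make([]extentEntry, len(source.entries)),
            }
            copy(derived.entries, source.entries)
            for i := range source.entries {
                source.entries[i].owner = false // source relinquishes ownership
                derived.entries[i].owner = true // derived level takes it over
            }
            source.derived = derivedKey // persist the derive relationship
            return derived
        }

        func main() {
            src := &level{
                headerKey: "level0-parent",
                entries: []extentEntry{
                    {offset: 0, length: 4096, extentKey: "key-A", owner: true},
                    {offset: 4096, length: 4096, extentKey: "key-B", owner: true},
                },
            }
            snap := copyOnWrite(src, "level0-derived")
            fmt.Printf("source %s derives %s\n", src.headerKey, src.derived)
            fmt.Printf("key-A owner: source=%v derived=%v\n",
                src.entries[0].owner, snap.entries[0].owner)
        }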

    TECHNIQUE FOR PACING AND BALANCING PROCESSING OF INTERNAL AND EXTERNAL I/O REQUESTS IN A STORAGE SYSTEM

    Publication No.: US20170315740A1

    Publication Date: 2017-11-02

    Application No.: US15143324

    Filing Date: 2016-04-29

    Applicant: NetApp, Inc.

    Abstract: A technique paces and balances a flow of messages related to processing of input/output (I/O) requests between subsystems, such as layers of a storage input/output (I/O) stack, of one or more nodes of a cluster. The I/O requests may be directed to externally-generated user data, e.g., write requests generated by a host coupled to the cluster, and internally-generated metadata, e.g., write and delete requests generated by a volume layer of the storage I/O stack. The user data (and metadata) may be organized as an arbitrary number of variable-length extents of one or more host-visible logical units (LUNs) served by the nodes. The metadata may include mappings from host-visible logical block address ranges (i.e., offset ranges) of a LUN to extent keys, which reference locations of the extents stored on storage devices, such as solid state drives (SSDs), of a storage array coupled to the nodes. The I/O requests are received at a pacer of the volume layer configured to control delivery of the requests to an extent store layer of the storage I/O stack in a policy-dictated manner to enable processing and sequential storage of the user data and metadata on the SSDs of the storage array.
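
    A minimal Go sketch of a pacer in the spirit of the abstract. The fixed user-to-metadata delivery ratio is an assumed stand-in for the "policy-dictated manner", and the queue and message types are hypothetical. External user writes and internal metadata messages are interleaved so neither flow starves the other on its way to the extent store layer.

        // Hypothetical pacer balancing external user I/O against internal
        // metadata messages; the ratio policy is an assumption.
        package main

        import "fmt"

        // message is one unit of work bound for the extent store layer.
        type message struct {
            kind    string // "user" (external host write) or "meta" (internal write/delete)
            payload string
        }

        // pacer controls delivery of volume layer messages to the extent store
        // layer: up to ratio user messages are released per metadata message.
        type pacer struct {
            userQ []message
            metaQ []message
            ratio int // user messages allowed per metadata message
            burst int // user messages released since the last metadata message
        }

        // next pops the next message under the pacing policy; ok is false once
        // both queues have drained.
        func (p *pacer) next() (m message, ok bool) {
            switch {
            case p.burst < p.ratio && len(p.userQ) > 0:
                m, p.userQ = p.userQ[0], p.userQ[1:]
                p.burst++
                return m, true
            case len(p.metaQ) > 0:
                m, p.metaQ = p.metaQ[0], p.metaQ[1:]
                p.burst = 0 // metadata delivered; user flow earns a new burst
                return m, true
            case len(p.userQ) > 0: // no metadata pending; let user traffic run
                m, p.userQ = p.userQ[0], p.userQ[1:]
                return m, true
            }
            return message{}, false
        }

        func main() {
            p := &pacer{ratio: 2}
            for i := 0; i < 5; i++ {
                p.userQ = append(p.userQ, message{"user", fmt.Sprintf("write-%d", i)})
            }
            for i := 0; i < 3; i++ {
                p.metaQ = append(p.metaQ, message{"meta", fmt.Sprintf("merge-%d", i)})
            }
            for m, ok := p.next(); ok; m, ok = p.next() {
                fmt.Printf("%-4s %s\n", m.kind, m.payload)
            }
        }

    With ratio 2, the run interleaves two external writes per internal metadata message until a queue drains, one concrete way a policy could pace both flows for sequential storage on the SSDs.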

    CACHE AFFINITY AND PROCESSOR UTILIZATION TECHNIQUE (Granted)

    Publication No.: US20160246655A1

    Publication Date: 2016-08-25

    Application No.: US15051947

    Filing Date: 2016-02-24

    Applicant: NetApp, Inc.

    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.

    Cache affinity and processor utilization technique

    Publication No.: US10162686B2

    Publication Date: 2018-12-25

    Application No.: US15806852

    Filing Date: 2017-11-08

    Applicant: NetApp, Inc.

    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.

    Cache affinity and processor utilization technique

    Publication No.: US09842008B2

    Publication Date: 2017-12-12

    Application No.: US15051947

    Filing Date: 2016-02-24

    Applicant: NetApp, Inc.

    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
