Perturb key technique
    Granted patent

    Publication number: US10216966B2

    Publication date: 2019-02-26

    Application number: US15052332

    Filing date: 2016-02-24

    Applicant: NetApp, Inc.

    IPC class: G06F21/78

    Abstract: A technique perturbs an extent key to compute a candidate extent key in the event of a collision with metadata (i.e., two extents having different data that yield identical hash values) stored in a memory of a node in a cluster. The perturbing technique may be used to compute a candidate extent key that is not previously stored in an extent store instance. The candidate extent key may be computed from a hash value of an extent using a perturbing algorithm, i.e., a hash collision computation, which illustratively adds a perturb value to the hash value. The perturb value is illustratively sufficient to ensure that the candidate extent key resolves to the same hash bucket and node (extent store instance) as the original extent key. In essence, the technique ensures that the original extent key is perturbed in a deterministic manner to generate the candidate extent key, so that the original and candidate extent keys "decode" to the same hash bucket and extent store instance.
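The deterministic, bucket-preserving perturbation described above can be sketched as follows. This is a minimal illustration: the key width, the choice of low bits for bucket selection, and the perturb step are all assumptions, not taken from the patent.

```python
# Minimal sketch of deterministic key perturbation. Assumes the hash bucket
# (and thus the extent store instance) is selected by the low BUCKET_BITS of
# the key; all constants here are illustrative.

BUCKET_BITS = 16                    # low bits select the hash bucket
KEY_BITS = 48                       # illustrative extent key width
KEY_MASK = (1 << KEY_BITS) - 1
PERTURB = 1 << BUCKET_BITS          # smallest step that leaves bucket bits intact

def bucket_of(key: int) -> int:
    """Bucket (and node) selection from the key's low bits."""
    return key & ((1 << BUCKET_BITS) - 1)

def perturb_key(key: int, attempt: int = 1) -> int:
    """Derive the attempt-th candidate key deterministically. Adding a
    multiple of 2**BUCKET_BITS (mod 2**KEY_BITS) never changes the low
    bucket bits, so the candidate resolves to the same bucket and node."""
    return (key + attempt * PERTURB) & KEY_MASK
```

On a collision, a caller would retry with attempt = 1, 2, ... until it finds a candidate key not already present in the extent store instance.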

    Granular sync/semi-sync architecture

    Publication number: US10135922B2

    Publication date: 2018-11-20

    Application number: US15844705

    Filing date: 2017-12-18

    Applicant: NetApp Inc.

    IPC class: G06F17/30 H04L29/08 G06F3/06

    Abstract: Data consistency and availability can be provided at the granularity of logical storage objects in storage solutions that use storage virtualization in clustered storage environments. To ensure consistency of data across different storage elements, synchronization is performed across the different storage elements. Changes to data are synchronized across storage elements in different clusters by propagating the changes from a primary logical storage object to a secondary logical storage object. To satisfy the strictest recovery point objectives (RPOs) while maintaining performance, change requests are intercepted prior to being sent to a filesystem that hosts the primary logical storage object and propagated to a different managing storage element associated with the secondary logical storage object.
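The interception-and-propagation flow can be modeled with a toy splitter. All class and method names here are hypothetical; a real implementation would queue the semi-sync propagation asynchronously rather than perform it inline.

```python
# Toy model of sync vs. semi-sync write splitting. ToyStore stands in for a
# logical storage object; names and structure are illustrative only.

class ToyStore:
    """In-memory stand-in for a primary or secondary logical storage object."""
    def __init__(self):
        self.blocks = {}

    def write(self, offset, data):
        self.blocks[offset] = data

class Splitter:
    """Intercepts change requests before they reach the filesystem hosting
    the primary object and propagates them to the secondary's manager."""
    def __init__(self, primary, secondary, mode="sync"):
        self.primary, self.secondary, self.mode = primary, secondary, mode

    def write(self, offset, data):
        if self.mode == "sync":
            # strict RPO: replicate to the secondary before acknowledging
            self.secondary.write(offset, data)
        self.primary.write(offset, data)
        if self.mode != "sync":
            # semi-sync: acknowledge after the primary; propagation would
            # normally be queued and performed asynchronously
            self.secondary.write(offset, data)
```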

    CACHE AFFINITY AND PROCESSOR UTILIZATION TECHNIQUE

    Publication number: US20180067784A1

    Publication date: 2018-03-08

    Application number: US15806852

    Filing date: 2017-11-08

    Applicant: NetApp, Inc.

    IPC class: G06F9/50 G06F12/084

    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
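The split between logical processors reserved for the MK scheduler and those left to the OS kernel scheduler might be modeled like this. The LLC topology and the 3:1 share are invented for illustration, not stated in the abstract.

```python
# Illustrative partitioning of logical processors per LLC group. The
# topology and the mk_share ratio are assumptions.

def partition_cpus(llc_groups, mk_share=0.75):
    """llc_groups: one list of logical CPU ids per last-level cache.
    Reserves mk_share of each group for non-blocking MK services (keeping
    them balanced both intra- and inter-LLC) and leaves the rest for
    blocking services scheduled by the OS kernel."""
    mk_cpus, blocking_cpus = [], []
    for group in llc_groups:
        n = max(1, int(len(group) * mk_share))
        mk_cpus.extend(group[:n])
        blocking_cpus.extend(group[n:])
    return mk_cpus, blocking_cpus
```

For example, `partition_cpus([[0, 1, 2, 3], [4, 5, 6, 7]])` reserves three logical processors per LLC for MK services and leaves one per LLC for blocking work, so both LLC domains carry an equal share of each kind.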

    FILE SYSTEM DRIVEN RAID REBUILD TECHNIQUE
    Patent application (pending, published)

    Publication number: US20160274973A1

    Publication date: 2016-09-22

    Application number: US15166600

    Filing date: 2016-05-27

    Applicant: NetApp, Inc.

    Abstract: Embodiments described herein are directed to a file system driven RAID rebuild technique. A layered file system may organize storage of data as segments spanning one or more sets of storage devices, such as solid state drives (SSDs), of a storage array, wherein each set of SSDs may form a RAID group configured to provide data redundancy for a segment. The file system may then drive (i.e., initiate) rebuild of a RAID configuration of the SSDs on a segment-by-segment basis in response to cleaning of the segment (i.e., segment cleaning). Each segment may include one or more RAID stripes that provide a level of data redundancy (e.g., single parity RAID 5 or double parity RAID 6) as well as RAID organization (i.e., distribution of data and parity) for the segment. Notably, the level of data redundancy and RAID organization may differ among the segments of the array.
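A toy model of the segment-by-segment flow: as cleaning visits each segment, only the stripes whose chunks lived on the failed SSD are reconstructed, honoring that segment's own redundancy level. The segment/stripe layout below is invented for illustration.

```python
# Toy model: segment cleaning drives RAID rebuild per segment rather than
# per whole device. Data layout is illustrative only.

def rebuild_on_clean(segments, failed_ssd):
    """As each segment is cleaned, reconstruct only its stripes that have a
    chunk on the failed SSD; each segment may use a different RAID level."""
    rebuilt = []
    for seg in segments:
        for stripe in seg["stripes"]:
            if failed_ssd in stripe["ssds"]:
                # parity-based reconstruction would happen here, using the
                # segment's own RAID organization (e.g., RAID 5 or RAID 6)
                rebuilt.append((seg["id"], stripe["id"], seg["redundancy"]))
    return rebuilt
```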

    Set-associative hash table organization for efficient storage and retrieval of data in a storage system
    Granted patent (in force)

    Publication number: US09256549B2

    Publication date: 2016-02-09

    Application number: US14158608

    Filing date: 2014-01-17

    Applicant: NetApp, Inc.

    IPC class: G06F17/30 G06F12/10 G06F3/06

    Abstract: In one embodiment, an extent key reconstruction technique is provided for use with a set of hash tables embodying metadata. The metadata includes an extent key associated with a storage location on storage devices for write data of one or more write requests organized into an extent. Each hash table has a plurality of entries, and each entry includes a plurality of slots. A first field of the extent key is recreated implicitly from an entry in a first address space portion of a hash table. The second and third fields of the extent key are stored in the slot. A fourth field of the extent key is recreated implicitly from the hash table of the set of hash tables.
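The idea that some key fields live in the slot while others are implied by where the slot sits can be sketched as a round-trip. The 8/16/8/16-bit layout below is an illustrative assumption, not the patent's actual encoding.

```python
# Sketch of implicit extent key reconstruction. Field widths and layout
# are illustrative assumptions.

TABLE_BITS, ENTRY_BITS, HI_BITS, LO_BITS = 8, 16, 8, 16

def split_key(key):
    """Decompose a key into (table, entry, hi, lo): table and entry are
    implied by which hash table and entry the slot occupies, so only hi
    and lo need to be stored in the slot itself."""
    lo = key & ((1 << LO_BITS) - 1)
    hi = (key >> LO_BITS) & ((1 << HI_BITS) - 1)
    entry = (key >> (LO_BITS + HI_BITS)) & ((1 << ENTRY_BITS) - 1)
    table = key >> (LO_BITS + HI_BITS + ENTRY_BITS)
    return table, entry, hi, lo

def rebuild_key(table, entry, hi, lo):
    """Recreate the full key from the slot's implicit address (table,
    entry) plus the explicitly stored fields (hi, lo)."""
    return (((((table << ENTRY_BITS) | entry) << HI_BITS) | hi) << LO_BITS) | lo
```

Storing only `hi` and `lo` per slot, while recovering `table` and `entry` from the slot's position, is what lets the organization keep the in-memory metadata footprint small.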

    NVRAM loss handling
    Granted patent

    Publication number: US10789134B2

    Publication date: 2020-09-29

    Application number: US15130280

    Filing date: 2016-04-15

    Applicant: NetApp, Inc.

    IPC class: G06F11/14

    Abstract: A technique restores a file system of a storage input/output (I/O) stack to a deterministic point-in-time state in the event of failure (loss) of non-volatile random access memory (NVRAM) of a node. The technique enables restoration of the file system to a safepoint stored on storage devices, such as solid state drives (SSDs), of the node with minimum data and metadata loss. The safepoint is a point-in-time during execution of I/O requests (e.g., write operations) at which data and related metadata of the write operations prior to the point-in-time are safely persisted on SSD such that the metadata relating to an image of the file system on SSD (on-disk) is consistent and complete. Upon reboot after NVRAM loss, the technique identifies (i) the most recent safepoint, as well as (ii) the inflight writes that were persistently stored on disk after the most recent safepoint. The data and metadata of those inflight writes are then deleted to return the on-disk file system to its state at the most recent safepoint.
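The rollback-to-safepoint step can be modeled minimally. The per-record sequence numbers below are a hypothetical stand-in for whatever metadata the technique actually uses to identify post-safepoint inflight writes.

```python
# Minimal model of safepoint-based restore after NVRAM loss. Records are
# tagged with illustrative monotonic sequence numbers.

def restore_to_safepoint(on_disk_records, safepoint_seq):
    """Keep records persisted at or before the most recent safepoint and
    delete inflight writes that reached disk after it, returning the
    on-disk file system to a consistent point-in-time state."""
    kept = [r for r in on_disk_records if r["seq"] <= safepoint_seq]
    dropped = [r for r in on_disk_records if r["seq"] > safepoint_seq]
    return kept, dropped
```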

    Cache affinity and processor utilization technique

    Publication number: US10162686B2

    Publication date: 2018-12-25

    Application number: US15806852

    Filing date: 2017-11-08

    Applicant: NetApp, Inc.

    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.

    Cache affinity and processor utilization technique

    Publication number: US09842008B2

    Publication date: 2017-12-12

    Application number: US15051947

    Filing date: 2016-02-24

    Applicant: NetApp, Inc.

    IPC class: G06F9/46 G06F9/50 G06F12/084

    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.