Uniquely naming storage devices in a global storage environment

    Publication Number: US10282137B2

    Publication Date: 2019-05-07

    Application Number: US14832410

    Application Date: 2015-08-21

    Applicant: NETAPP, INC.

    Abstract: The present invention uniquely names storage devices in a global storage environment with hierarchical storage domains. In particular, according to one or more embodiments of the present invention, a storage device (e.g., a disk) is connected at a particular location within the global storage environment. That particular location is associated with a path through each of the one or more hierarchical storage domains in which the storage device is located. Accordingly, the storage device is assigned a name that is the path of the hierarchical storage domains in which it is located.
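
    A minimal sketch of the naming scheme the abstract describes, assuming a simple slash-separated path convention; the domain names, separator, and helper function below are illustrative assumptions, not the patent's actual implementation:

        # Build a device name from the hierarchy of storage domains that contain
        # the point where the device is connected. The "/" separator and the
        # example domain names are assumptions for illustration only.
        def device_name(domain_path, device_id):
            """Join the hierarchical domain path with the local device id."""
            return "/" + "/".join(domain_path + [device_id])

        # A disk attached under site "dc1", cluster "clusterA", shelf "shelf3":
        print(device_name(["dc1", "clusterA", "shelf3"], "disk17"))
        # -> /dc1/clusterA/shelf3/disk17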

    Methods and systems for offloading RAID parity reconstruction

    Publication Number: US09940196B2

    Publication Date: 2018-04-10

    Application Number: US15135265

    Application Date: 2016-04-21

    Applicant: NETAPP, INC.

    CPC classification number: G06F11/1088

    Abstract: Methods and systems for a storage environment are provided. For example, one method includes receiving, at an offload engine, a request from a storage server to reconstruct data lost due to a failed storage device of a parity group having a plurality of storage devices; retrieving, by the offload engine, data and parity from the parity group storage devices that are operational; determining, by the offload engine, an XOR of the retrieved data and parity; presenting, by the offload engine, the XOR of the data and parity to the storage server with context information associated with the retrieved data; and reconstructing, by the storage server, the lost data using the XOR of the data and parity and the context information provided by the offload engine.
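
    A toy illustration of the XOR step the abstract relies on: in a single-parity RAID group, the missing block equals the XOR of the surviving data blocks and the parity block. The block contents and helper below are assumptions, not NetApp's offload-engine interface:

        def xor_blocks(blocks):
            """XOR a list of equal-length byte blocks together."""
            result = bytearray(len(blocks[0]))
            for block in blocks:
                for i, b in enumerate(block):
                    result[i] ^= b
            return bytes(result)

        # A three-block data set with one parity block; pretend d2 was lost.
        d0, d1, d2 = bytes([0x0F] * 4), bytes([0xF0] * 4), bytes([0xAA] * 4)
        parity = xor_blocks([d0, d1, d2])

        # The offload engine returns the XOR of the surviving members; the
        # storage server treats it as the reconstructed missing block.
        assert xor_blocks([d0, d1, parity]) == d2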

    THIRD VOTE CONSENSUS IN A CLUSTER USING SHARED STORAGE DEVICES

    Publication Number: US20180074924A1

    Publication Date: 2018-03-15

    Application Number: US15813941

    Application Date: 2017-11-15

    Applicant: NetApp, Inc.

    CPC classification number: G06F11/2033 G06F11/1425 G06F11/2046 G06F2201/805

    Abstract: A third vote consensus technique enables a first node, i.e., a surviving node, of a two-node cluster to establish a quorum and continue to operate in response to failure of a second node of the cluster. Each node maintains configuration information organized as a cluster database (CDB) which may be changed according to a consensus-based protocol. Changes to the CDB are logged on a third copy file system (TCFS) stored on a local copy of TCFS (L-TCFS). A shared copy of the TCFS (i.e., S-TCFS) may be stored on shared storage devices of one or more storage arrays coupled to the nodes. The local copy of the TCFS (i.e., L-TCFS) represents a quorum vote for each node of the cluster, while the S-TCFS represents an additional “tie-breaker” vote of a consensus-based protocol. The additional vote may be obtained from the shared storage devices by the surviving node as a third vote to establish the quorum and enable the surviving node to cast two of three votes (i.e., a majority of votes) needed to continue operation of the cluster. That is, the majority of votes allows the surviving node to update the CDB with the configuration information changes so as to continue proper operation of the cluster.
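
    A simplified sketch of the vote counting the abstract describes for a two-node cluster with a shared-storage tie-breaker; the function and argument names are illustrative only, not the patented protocol's interfaces:

        def has_quorum(local_vote, partner_alive, shared_vote_acquired):
            """A node may continue only if it can count a majority (2 of 3) of votes."""
            votes = 0
            if local_vote:
                votes += 1   # this node's own L-TCFS vote
            if partner_alive:
                votes += 1   # the partner's vote counts while it is reachable
            if shared_vote_acquired:
                votes += 1   # tie-breaker vote claimed from the shared S-TCFS
            return votes >= 2

        # Partner failed, but the survivor claims the shared-storage vote: quorum.
        assert has_quorum(True, partner_alive=False, shared_vote_acquired=True)
        # Partner failed and the shared vote was not obtained: no quorum.
        assert not has_quorum(True, partner_alive=False, shared_vote_acquired=False)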

    Third vote consensus in a cluster using shared storage devices

    Publication Number: US09836366B2

    Publication Date: 2017-12-05

    Application Number: US14924318

    Application Date: 2015-10-27

    Applicant: NetApp, Inc.

    CPC classification number: G06F11/2033 G06F11/1425 G06F11/2046 G06F2201/805

    Abstract: A third vote consensus technique enables a first node, i.e., a surviving node, of a two-node cluster to establish a quorum and continue to operate in response to failure of a second node of the cluster. Each node maintains configuration information organized as a cluster database (CDB) which may be changed according to a consensus-based protocol. Changes to the CDB are logged on a third copy file system (TCFS) stored on a local copy of TCFS (L-TCFS). A shared copy of the TCFS (i.e., S-TCFS) may be stored on shared storage devices of one or more storage arrays coupled to the nodes. The local copy of the TCFS (i.e., L-TCFS) represents a quorum vote for each node of the cluster, while the S-TCFS represents an additional “tie-breaker” vote of a consensus-based protocol. The additional vote may be obtained from the shared storage devices by the surviving node as a third vote to establish the quorum and enable the surviving node to cast two of three votes (i.e., a majority of votes) needed to continue operation of the cluster. That is, the majority of votes allows the surviving node to update the CDB with the configuration information changes so as to continue proper operation of the cluster.

    METHODS AND SYSTEMS FOR OFFLOADING RAID PARITY RECONSTRUCTION

    Publication Number: US20170308435A1

    Publication Date: 2017-10-26

    Application Number: US15135265

    Application Date: 2016-04-21

    Applicant: NETAPP, INC.

    CPC classification number: G06F11/1088

    Abstract: Methods and systems for a storage environment are provided. For example, one method includes receiving, at an offload engine, a request from a storage server to reconstruct data lost due to a failed storage device of a parity group having a plurality of storage devices; retrieving, by the offload engine, data and parity from the parity group storage devices that are operational; determining, by the offload engine, an XOR of the retrieved data and parity; presenting, by the offload engine, the XOR of the data and parity to the storage server with context information associated with the retrieved data; and reconstructing, by the storage server, the lost data using the XOR of the data and parity and the context information provided by the offload engine.

    THIRD VOTE CONSENSUS IN A CLUSTER USING SHARED STORAGE DEVICES

    Publication Number: US20170116095A1

    Publication Date: 2017-04-27

    Application Number: US14924318

    Application Date: 2015-10-27

    Applicant: NetApp, Inc.

    CPC classification number: G06F11/2033 G06F11/1425 G06F11/2046 G06F2201/805

    Abstract: A third vote consensus technique enables a first node, i.e., a surviving node, of a two-node cluster to establish a quorum and continue to operate in response to failure of a second node of the cluster. Each node maintains configuration information organized as a cluster database (CDB) which may be changed according to a consensus-based protocol. Changes to the CDB are logged on a third copy file system (TCFS) stored on a local copy of TCFS (L-TCFS). A shared copy of the TCFS (i.e., S-TCFS) may be stored on shared storage devices of one or more storage arrays coupled to the nodes. The local copy of the TCFS (i.e., L-TCFS) represents a quorum vote for each node of the cluster, while the S-TCFS represents an additional “tie-breaker” vote of a consensus-based protocol. The additional vote may be obtained from the shared storage devices by the surviving node as a third vote to establish the quorum and enable the surviving node to cast two of three votes (i.e., a majority of votes) needed to continue operation of the cluster. That is, the majority of votes allows the surviving node to update the CDB with the configuration information changes so as to continue proper operation of the cluster.

    Method and system for transparently replacing nodes of a clustered storage system

    Publication Number: US09378258B2

    Publication Date: 2016-06-28

    Application Number: US13961086

    Application Date: 2013-08-07

    Applicant: NETAPP, INC.

    Abstract: Method and system for replacing a first node and a second node of a clustered storage system with a third node and a fourth node are provided. The method includes migrating all storage objects managed by the first node to the second node; replacing the first node with the third node and migrating all the storage objects managed by the first node and the second node to the third node; and replacing the second node with the fourth node and then migrating the storage objects previously managed by the second node, but currently managed by the third node, to the fourth node. The nodes may also be replaced by operationally connecting the third node and the fourth node to storage managed by the first node and the second node, and joining the third node and the fourth node to the same cluster as the first node and the second node.
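
    A sketch of the three migration steps in the order the abstract gives them, using a plain dictionary to track which node currently manages each storage object; the node and volume names are invented for illustration:

        # Map each storage object to the node that currently manages it.
        original = {"vol1": "node1", "vol2": "node1", "vol3": "node2"}
        current = dict(original)

        def migrate(current, objects, dst):
            """Reassign the given storage objects to the destination node."""
            for obj in objects:
                current[obj] = dst

        # Step 1: move everything node1 manages onto node2, then retire node1.
        migrate(current, [o for o, n in current.items() if n == "node1"], "node2")
        # Step 2: node3 joins in node1's place and takes over all objects.
        migrate(current, list(current), "node3")
        # Step 3: node4 joins in node2's place; only the objects node2
        # originally managed move from node3 to node4.
        migrate(current, [o for o in current if original[o] == "node2"], "node4")

        assert current == {"vol1": "node3", "vol2": "node3", "vol3": "node4"}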

    METHOD AND APPARATUS FOR DECOMPOSING I/O TASKS IN A RAID SYSTEM

    Publication Number: US20140173198A1

    Publication Date: 2014-06-19

    Application Number: US14137084

    Application Date: 2013-12-20

    Applicant: NetApp, Inc.

    Abstract: A data access request to a file system is decomposed into a plurality of lower-level I/O tasks. A logical combination of physical storage components is represented as a hierarchical set of objects. A parent I/O task is generated from a first object in response to the data access request. A child I/O task is generated from a second object to implement a portion of the parent I/O task. The parent I/O task is suspended until the child I/O task completes. The child I/O task is executed in response to an occurrence of an event that a resource required by the child I/O task is available. The parent I/O task is resumed upon an event indicating completion of the child I/O task. Scheduling of any child I/O task is not conditional on execution of the parent I/O task, and a state diagram regulates the child I/O tasks.
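
    A minimal sketch of a parent I/O task that suspends until each child I/O task completes, using a Python generator as a stand-in for the suspend/resume state machine; the stripe-read children and this API are illustrative assumptions, not the patented RAID task hierarchy:

        def child_io(stripe):
            """Child task: perform one lower-level I/O (stubbed out)."""
            return f"read stripe {stripe}"

        def parent_io(stripes):
            """Parent task: yield (suspend) once per child, resume on completion."""
            results = []
            for stripe in stripes:
                results.append((yield stripe))   # suspend until the child completes
            return results

        task = parent_io([0, 1, 2])
        stripe = next(task)                      # start the parent, get the first child request
        try:
            while True:
                stripe = task.send(child_io(stripe))   # run the child, resume the parent
        except StopIteration as done:
            print(done.value)   # ['read stripe 0', 'read stripe 1', 'read stripe 2']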

    Third vote consensus in a cluster using shared storage devices

    Publication Number: US10664366B2

    Publication Date: 2020-05-26

    Application Number: US15813941

    Application Date: 2017-11-15

    Applicant: NetApp, Inc.

    Abstract: A third vote consensus technique enables a first node, i.e., a surviving node, of a two-node cluster to establish a quorum and continue to operate in response to failure of a second node of the cluster. Each node maintains configuration information organized as a cluster database (CDB) which may be changed according to a consensus-based protocol. Changes to the CDB are logged on a third copy file system (TCFS) stored on a local copy of TCFS (L-TCFS). A shared copy of the TCFS (i.e., S-TCFS) may be stored on shared storage devices of one or more storage arrays coupled to the nodes. The local copy of the TCFS (i.e., L-TCFS) represents a quorum vote for each node of the cluster, while the S-TCFS represents an additional “tie-breaker” vote of a consensus-based protocol. The additional vote may be obtained from the shared storage devices by the surviving node as a third vote to establish the quorum and enable the surviving node to cast two of three votes (i.e., a majority of votes) needed to continue operation of the cluster. That is, the majority of votes allows the surviving node to update the CDB with the configuration information changes so as to continue proper operation of the cluster.

    HIGH AVAILABILITY FAILOVER MANAGER

    Publication Number: US20170351589A1

    Publication Date: 2017-12-07

    Application Number: US15687062

    Application Date: 2017-08-25

    Applicant: NetApp, Inc.

    Abstract: A high availability (HA) failover manager maintains data availability of one or more input/output (I/O) resources in a cluster by ensuring that each I/O resource is available (e.g., mounted) on a hosting node of the cluster and that each I/O resource may be available on one or more partner nodes of the cluster if a node (i.e., a local node) were to fail. The HA failover manager (HA manager) processes inputs from various sources of the cluster to determine whether failover is enabled for a local node and each partner node in an HA group, and for triggering failover of the I/O resources to the partner node as necessary. For each I/O resource, the HA manager may track state information including (i) a state of the I/O resource (e.g., mounted or un-mounted); (ii) the partner node(s) ability to service the I/O resource; and (iii) whether a non-volatile log recording I/O requests is synchronized to the partner node(s). The HA manager interacts with various layers of a storage I/O stack to mount and un-mount the I/O resources on one or more nodes of the cluster through the use of well-defined interfaces, e.g., application programming interfaces.
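
    A sketch of the per-resource state the abstract says the HA manager tracks, together with the failover-eligibility check that state implies; the class, field, and function names are illustrative, not NetApp's interfaces:

        from dataclasses import dataclass

        @dataclass
        class ResourceState:
            mounted: bool            # (i) resource mounted on the hosting node
            partner_can_serve: bool  # (ii) partner node able to service the resource
            nvlog_synced: bool       # (iii) non-volatile I/O log synchronized to the partner

        def failover_enabled(state: ResourceState) -> bool:
            """Failover makes sense only if the partner can take over safely."""
            return state.partner_can_serve and state.nvlog_synced

        vol = ResourceState(mounted=True, partner_can_serve=True, nvlog_synced=True)
        if failover_enabled(vol):
            # The HA manager would un-mount here and mount the resource on the partner.
            vol.mounted = False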
