Abstract:
One or more techniques and/or devices are provided for storage virtual machine relocation (e.g., ownership change) between storage clusters. For example, operational statistics of a first storage cluster and a second storage cluster may be evaluated to identify a set of load balancing metrics. Ownership of one or more storage aggregates and/or one or more storage virtual machines may be changed (e.g., permanently changed for load balancing purposes or temporarily changed for disaster recovery purposes) between the first storage cluster and the second storage cluster utilizing zero-copy ownership change operations based upon the set of load balancing metrics. For example, if the first storage cluster is experiencing a heavier load of client I/O operations and the second storage cluster has available resources, ownership of a storage aggregate and a storage virtual machine may be switched from the first storage cluster to the second storage cluster for load balancing.
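For illustration, the following Python sketch shows one way such a metric-driven, zero-copy ownership change might be decided; the ClusterStats fields, thresholds, and change_ownership stub are assumptions for this example, not details specified by the abstract.

from dataclasses import dataclass

@dataclass
class ClusterStats:
    """Operational statistics sampled from one storage cluster (hypothetical fields)."""
    client_iops: int          # current client I/O load
    free_capacity_pct: float  # available resources, as a percentage

def should_relocate(src: ClusterStats, dst: ClusterStats,
                    iops_ratio: float = 1.5,
                    min_free_pct: float = 30.0) -> bool:
    """Derive a load balancing decision from the two clusters' statistics."""
    overloaded = src.client_iops > dst.client_iops * iops_ratio
    has_headroom = dst.free_capacity_pct >= min_free_pct
    return overloaded and has_headroom

def change_ownership(aggregate_id: str, svm_id: str,
                     src_cluster: str, dst_cluster: str) -> None:
    """Zero-copy ownership change: only ownership metadata for the storage
    aggregate and storage virtual machine is rewritten; no user data moves."""
    print(f"aggregate {aggregate_id} + SVM {svm_id}: "
          f"{src_cluster} -> {dst_cluster}")

if should_relocate(ClusterStats(90_000, 55.0), ClusterStats(40_000, 60.0)):
    change_ownership("aggr1", "svm_finance", "cluster-a", "cluster-b")

The zero-copy aspect is reflected in change_ownership rewriting only ownership metadata rather than copying any stored data between clusters.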
Abstract:
A primary write request that is to modify a primary portion of primary data stored in a primary storage node is received. The primary write request is to be replicated to create a current secondary write request, which is to modify a current secondary portion of secondary data stored in a secondary storage node. A current data range of the current secondary portion is determined. A determination is made of whether a previous secondary write request is in the process of modifying a previous data range that at least partially overlaps with the current data range. Execution of the primary write request is suspended until the previous secondary write request has completed updating the secondary storage node.
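A minimal Python sketch of the overlap check described above, assuming half-open byte ranges and an in-flight set tracked on the primary side; all names here are hypothetical.

from typing import NamedTuple

class Range(NamedTuple):
    start: int  # first byte offset modified
    end: int    # one past the last byte offset modified

def overlaps(a: Range, b: Range) -> bool:
    """True if two half-open byte ranges share at least one byte."""
    return a.start < b.end and b.start < a.end

in_flight: set[Range] = set()  # ranges previous secondary writes are still updating

def admit_primary_write(current: Range) -> bool:
    """Suspend (return False) while any previous secondary write is still
    modifying an overlapping range; otherwise admit and track the write."""
    if any(overlaps(current, prev) for prev in in_flight):
        return False  # caller suspends until the previous write completes
    in_flight.add(current)
    return True

def complete_secondary_write(r: Range) -> None:
    """Called once the secondary storage node acknowledges the update."""
    in_flight.discard(r)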
Abstract:
Data consistency and availability can be provided at the granularity of logical storage objects in storage solutions that use storage virtualization in clustered storage environments. To ensure consistency of data across different storage elements, synchronization is performed across the different storage elements. Changes to data are synchronized across storage elements in different clusters by propagating the changes from a primary logical storage object to a secondary logical storage object. To satisfy the strictest recovery point objectives (RPOs) while maintaining performance, change requests are intercepted prior to being sent to a filesystem that hosts the primary logical storage object and are propagated to a different managing storage element associated with the secondary logical storage object.
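As a hedged illustration, the sketch below intercepts a change request before it reaches the filesystem hosting the primary logical storage object and propagates it to the secondary in parallel with the local apply; the propagate/apply stubs and the thread-pool arrangement are assumptions for this example, not the patented implementation.

from concurrent.futures import ThreadPoolExecutor, wait

_pool = ThreadPoolExecutor(max_workers=4)

def propagate_to_secondary(request: dict) -> None:
    """Ship the change to the managing storage element associated with the
    secondary logical storage object (stubbed out here)."""
    print("replicated:", request)

def apply_to_primary_filesystem(request: dict) -> None:
    """Hand the change to the filesystem hosting the primary object."""
    print("applied:", request)

def handle_change_request(request: dict) -> None:
    # Intercept *before* the filesystem sees the request, so replication
    # and the local apply can proceed in parallel; overlapping the two
    # steps is what preserves performance under a strict RPO.
    future = _pool.submit(propagate_to_secondary, request)
    apply_to_primary_filesystem(request)
    wait([future])  # acknowledge the client only after both complete

handle_change_request({"object": "lun0", "offset": 4096, "len": 512})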
Abstract:
During a storage redundancy giveback from a first node to a second node, following a storage redundancy takeover from the second node by the first node, the giveback is initialized in part by the first node receiving a node identification indicator from the second node. The node identification indicator is included in a node advertisement message sent by the second node during a giveback wait phase of the storage redundancy giveback. The node identification indicator includes an intra-cluster node connectivity identifier that the first node uses to determine whether the second node is an intra-cluster takeover partner. In response to determining that the second node is an intra-cluster takeover partner, the first node completes the giveback of storage resources to the second node.
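A minimal sketch of the giveback-wait check, assuming the advertisement message carries a cluster-scoped connectivity identifier; the field names and the LOCAL_CLUSTER_ID constant are hypothetical.

from dataclasses import dataclass

LOCAL_CLUSTER_ID = "cluster-a"  # identity known to the node performing the giveback

@dataclass
class NodeAdvertisement:
    node_id: str
    intra_cluster_connectivity_id: str  # the node identification indicator

def is_intra_cluster_takeover_partner(adv: NodeAdvertisement) -> bool:
    """The first node compares the connectivity identifier against its own
    cluster identity to decide whether the advertiser is a takeover partner."""
    return adv.intra_cluster_connectivity_id == LOCAL_CLUSTER_ID

def giveback_wait_phase(adv: NodeAdvertisement) -> None:
    if is_intra_cluster_takeover_partner(adv):
        print(f"completing giveback of storage resources to {adv.node_id}")
    else:
        print(f"{adv.node_id} is not an intra-cluster partner; holding giveback")

giveback_wait_phase(NodeAdvertisement("node-2", "cluster-a"))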
Abstract:
A storage object is migrated between nodes by a source node automatically verifying that another node is configured to service the storage object and changing ownership of the storage object based on that verification. A cluster manager for the clustered storage system receives a migration request and provides it to the source node, which owns the storage object. The source verifies that the destination is configured according to a predetermined configuration for servicing the storage object. Based on the verification, the source offlines the storage object and updates its ownership information, after which the destination onlines the storage object. The cluster manager then provides the updated ownership information to all nodes in the cluster, so an access request intended for the storage object may be received by any node and forwarded to the destination using the updated ownership information, effecting a transparent migration.
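The sequence can be sketched in Python as follows; the Node and Cluster classes and the REQUIRED_CONFIG set stand in for the predetermined configuration and are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    config: set = field(default_factory=set)  # capabilities the node supports

@dataclass
class Cluster:
    owner_of: dict = field(default_factory=dict)  # storage object -> owning node

REQUIRED_CONFIG = {"protocol:nfs", "tier:ssd"}  # hypothetical requirements

def migrate(cluster: Cluster, obj: str, source: Node, dest: Node) -> bool:
    # 1. Source verifies the destination matches the predetermined
    #    configuration for servicing the storage object.
    if not REQUIRED_CONFIG <= dest.config:
        return False
    # 2. Source offlines the object and updates ownership information.
    print(f"offlining {obj} on {source.name}")
    cluster.owner_of[obj] = dest
    # 3. Destination onlines the object; the cluster manager pushes the
    #    updated ownership to every node so any node can forward requests.
    print(f"onlining {obj} on {dest.name}")
    return True

c = Cluster()
src, dst = Node("n1", set(REQUIRED_CONFIG)), Node("n2", set(REQUIRED_CONFIG))
c.owner_of["vol1"] = src
print("migrated:", migrate(c, "vol1", src, dst))

Because every node learns the new ownership, a request arriving at any node can be forwarded to the destination, which is what makes the migration transparent to clients.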