Abstract:
Examples perform live migration of objects such as VMs from a source host to a destination host. The disclosure exposes the contents of the storage disk at the destination host, compares the storage disk of the destination host to that of the source host, and, during migration, migrates only data which is not already stored at the destination host. The source and destination VMs have concurrent access to storage disks during migration. After migration, the destination VM executes with exclusive access to the storage disks.
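A minimal sketch of the delta-only transfer described above, assuming a toy model in which a disk is a list of fixed-size blocks; `block_digests`, `migrate_missing_blocks`, and `BLOCK_SIZE` are invented names for illustration, not the disclosure's interfaces.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size


def block_digests(disk: list[bytes]) -> list[str]:
    """Return a content digest for every block of a disk."""
    return [hashlib.sha256(block).hexdigest() for block in disk]


def migrate_missing_blocks(source_disk: list[bytes], dest_disk: list[bytes]) -> int:
    """Copy only the blocks whose contents differ at the destination.

    The destination exposes digests of its disk contents; the source compares
    them with its own and transfers just the mismatching blocks.
    """
    dest_digests = block_digests(dest_disk)    # destination exposes its contents
    src_digests = block_digests(source_disk)
    transferred = 0
    for i, (src, dst) in enumerate(zip(src_digests, dest_digests)):
        if src != dst:                         # block not already stored at destination
            dest_disk[i] = source_disk[i]
            transferred += 1
    return transferred


if __name__ == "__main__":
    src = [b"A" * BLOCK_SIZE, b"B" * BLOCK_SIZE, b"C" * BLOCK_SIZE]
    dst = [b"A" * BLOCK_SIZE, b"X" * BLOCK_SIZE, b"C" * BLOCK_SIZE]
    print("blocks transferred:", migrate_missing_blocks(src, dst))  # -> 1
```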
Abstract:
Exemplary methods, apparatuses, and systems include a recovery manager receiving selection of a storage profile to be protected. The storage profile is an abstraction of a set of one or more logical storage devices that are treated as a single entity based upon common storage capabilities. In response to the selection of the storage profile to be protected, a set of virtual datacenter entities associated with the storage profile is added to a disaster recovery plan to automate a failover of the set of virtual datacenter entities from a protection site to a recovery site. The set of one or more virtual datacenter entities includes one or more virtual machines, one or more logical storage devices, or a combination of virtual machines and logical storage devices. The set of virtual datacenter entities is expandable and interchangeable with other virtual datacenter entities.
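A hedged sketch of how selecting a storage profile might pull its associated virtual datacenter entities into a disaster recovery plan; `StorageProfile`, `RecoveryPlan`, and `protect_profile` are illustrative names, not the recovery manager's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class StorageProfile:
    """Abstraction over logical storage devices sharing common capabilities."""
    name: str
    logical_devices: set


@dataclass
class RecoveryPlan:
    """Automates failover of protected entities from a protection site to a recovery site."""
    protected_entities: set = field(default_factory=set)

    def protect_profile(self, profile: StorageProfile, vm_inventory: dict) -> None:
        # Add the profile's logical storage devices...
        self.protected_entities |= profile.logical_devices
        # ...and every VM that resides on one of those devices.
        self.protected_entities |= {
            vm for vm, device in vm_inventory.items() if device in profile.logical_devices
        }


if __name__ == "__main__":
    gold = StorageProfile("gold-replicated", {"lun-01", "lun-02"})
    inventory = {"vm-app": "lun-01", "vm-db": "lun-02", "vm-test": "lun-09"}
    plan = RecoveryPlan()
    plan.protect_profile(gold, inventory)
    print(sorted(plan.protected_entities))  # vm-test is excluded
```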
Abstract:
The disclosure describes performing live migration of objects such as virtual machines (VMs) from a source host to a destination host. The disclosure changes the storage environment, directly or through a vendor provider, to active/passive synchronous or near synchronous and, during migration, migrates only data which has not already been replicated at the destination host. The source and destination VMs have concurrent access to storage disks during migration. After migration, the destination VM executes with exclusive access to the storage disks, and the system is returned to the previous storage environment of active/passive asynchronous.
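One way the mode-switch-then-migrate sequence could be orchestrated, sketched under the assumption of a toy `StorageArray` object standing in for the storage environment or vendor provider; none of these names come from the disclosure.

```python
class StorageArray:
    """Toy stand-in for a vendor provider that controls the replication mode."""

    def __init__(self) -> None:
        self.mode = "active/passive asynchronous"
        self.replicated_blocks: set = set()

    def set_mode(self, mode: str) -> None:
        print(f"replication mode -> {mode}")
        self.mode = mode

    def replicate(self, block_id: int) -> None:
        self.replicated_blocks.add(block_id)


def live_migrate(array: StorageArray, source_blocks: set) -> None:
    previous_mode = array.mode
    # 1. Tighten replication for the duration of the migration.
    array.set_mode("active/passive synchronous")
    # 2. Migrate only data not already replicated at the destination.
    for block in source_blocks - array.replicated_blocks:
        array.replicate(block)
    # 3. Destination VM takes exclusive access; source is retired (not modeled here).
    # 4. Return to the previous storage environment.
    array.set_mode(previous_mode)


if __name__ == "__main__":
    array = StorageArray()
    array.replicated_blocks = {1, 2, 3}
    live_migrate(array, source_blocks={1, 2, 3, 4, 5})
    print("replicated:", sorted(array.replicated_blocks))
```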
Abstract:
Examples perform live migration of VMs from a source host to a destination host using destructive consistency-breaking operations. The disclosure makes a record of a consistency group of VMs on storage at a source host as a fail-back in the event of failure. The source VMs are live migrated to the destination host, disregarding consistency during live migration, and potentially violating the recovery point objective. After live migration of all of the source VMs, consistency is automatically restored at the destination host and the live migration is declared a success.
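A rough sketch of the record-then-migrate-then-restore flow, using plain Python dictionaries to stand in for VMs and their consistency group; the function and field names are assumptions made for illustration.

```python
import copy


def migrate_consistency_group(source_vms: dict) -> dict:
    """Sketch of migrating a consistency group while tolerating temporary inconsistency.

    A point-in-time record of the group is kept at the source as a fail-back,
    each VM is then migrated independently (consistency, and possibly the
    recovery point objective, are disregarded mid-flight), and the group is
    re-formed at the destination once every VM has arrived.
    """
    # 1. Record the consistency group at the source as a fail-back image.
    fail_back_record = copy.deepcopy(source_vms)

    destination_vms = {}
    try:
        # 2. Live migrate each VM on its own, without group-wide ordering guarantees.
        for name, state in source_vms.items():
            destination_vms[name] = copy.deepcopy(state)
    except Exception:
        # Failure mid-migration: fall back to the recorded source-side group.
        return fail_back_record

    # 3. All VMs migrated: restore consistency at the destination and declare success.
    for state in destination_vms.values():
        state["consistency_group"] = "restored"
    return destination_vms


if __name__ == "__main__":
    group = {"vm-a": {"disk": "a.vmdk"}, "vm-b": {"disk": "b.vmdk"}}
    print(migrate_consistency_group(group))
```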
Abstract:
A recovery manager discovers replication properties of datastores stored in a storage array, and assigns custom tags to the datastores indicating the discovered replication properties. A user may create storage profiles with rules that use any combination of these custom tags to describe replication properties. The recovery manager protects a storage profile using a policy-based protection mechanism. Whenever a new replicated datastore is provisioned, the datastore is dynamically tagged with the replication properties of its underlying storage and will belong to one or more storage profiles. The recovery manager monitors storage profiles for new datastores and protects the newly provisioned datastore dynamically, including any or all of the VMs stored in the datastore.
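A simplified sketch of tag-based, policy-driven protection as described above, assuming datastores and profiles are plain dictionaries and sets of tag strings; `discover_tags`, `matches_profile`, and `protect_new_datastores` are invented for illustration.

```python
def discover_tags(datastore: dict) -> set:
    """Turn discovered replication properties into custom tags."""
    return {f"{key}={value}" for key, value in datastore["replication"].items()}


def matches_profile(tags: set, profile_rules: set) -> bool:
    """A storage profile is a rule set over custom tags; all rules must match."""
    return profile_rules <= tags


def protect_new_datastores(datastores: list, profiles: dict) -> dict:
    """Tag each newly provisioned datastore and attach it (and its VMs) to matching profiles."""
    protected = {name: [] for name in profiles}
    for ds in datastores:
        tags = discover_tags(ds)                    # dynamic tagging
        for name, rules in profiles.items():
            if matches_profile(tags, rules):
                protected[name].append(ds["name"])  # the datastore becomes protected...
                protected[name].extend(ds["vms"])   # ...along with the VMs stored in it
    return protected


if __name__ == "__main__":
    datastores = [
        {"name": "ds-01", "replication": {"rpo": "5m", "target": "site-b"}, "vms": ["vm-1", "vm-2"]},
        {"name": "ds-02", "replication": {"rpo": "24h", "target": "site-b"}, "vms": ["vm-3"]},
    ]
    profiles = {"critical": {"rpo=5m", "target=site-b"}}
    print(protect_new_datastores(datastores, profiles))
```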
Abstract:
A recovery system and method for performing site recovery utilizes recovery-specific metadata and files of protected clients at a primary site to recreate the protected clients at a secondary site. The recovery-specific metadata is collected from at least one component at the primary site, and stored with the files of protected clients at the primary site. The recovery-specific metadata and the files of the protected clients are replicated to the secondary site so that the protected clients can be recreated at the secondary site using the replicated information.
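A minimal sketch of storing recovery-specific metadata alongside a protected client's files and recreating the client from the replicated copy; the bundle format and function names here are assumptions, not the recovery system's actual format.

```python
import json


def bundle_for_replication(client_files: dict, metadata: dict) -> dict:
    """Store recovery-specific metadata alongside the protected client's files."""
    return {"metadata": json.dumps(metadata), "files": dict(client_files)}


def replicate(bundle: dict) -> dict:
    """Stand-in for replication of the bundle to the secondary site."""
    return {"metadata": bundle["metadata"], "files": dict(bundle["files"])}


def recreate_client(replica: dict) -> dict:
    """Recreate the protected client at the secondary site from the replicated data."""
    metadata = json.loads(replica["metadata"])
    return {
        "name": metadata["name"],
        "cpu": metadata["cpu"],
        "memory_mb": metadata["memory_mb"],
        "files": replica["files"],
    }


if __name__ == "__main__":
    metadata = {"name": "web-01", "cpu": 2, "memory_mb": 4096}   # collected at the primary site
    files = {"web-01.vmdk": b"...disk contents..."}
    replica = replicate(bundle_for_replication(files, metadata))
    print(recreate_client(replica)["name"])
```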
Abstract:
Solutions for managing archived storage include receiving, at a first node, a snapshot comprising object data (e.g., a virtual machine disk snapshot) from a second node (e.g., a software defined data center), and storing the snapshot in a tiered structure that includes a data tier and a metadata tier. Snapshots may be used for fail-over operations and/or backups, to support disaster recovery. The data tier comprises a log-structured file system (LFS), and the metadata tier comprises a content addressable storage (CAS) identifying addresses within the LFS. The metadata tier also comprises a logical layer indicating content in the CAS. Segment cleaning of the data tier is performed using a segment usage table (SUT). Some examples include performing a fail-over operation from the second node to a third node using at least the stored snapshot for workload recovery. In some examples, the CAS comprises a log-structured merge-tree (LSM-tree).
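A toy sketch of the tiered structure described above: an append-only list stands in for the LFS data tier, a hash-to-address map for the CAS, a (snapshot, offset) map for the logical layer, and a per-segment counter for the SUT. The class name and constants are illustrative, and the real CAS may be an LSM-tree rather than an in-memory map.

```python
import hashlib
from collections import defaultdict


class TieredSnapshotStore:
    """Toy tiered structure: LFS data tier, CAS metadata tier, logical layer, and SUT."""

    SEGMENT_CHUNKS = 4  # chunks per log segment (illustrative)

    def __init__(self) -> None:
        self.log = []                  # data tier: append-only log-structured file system
        self.cas = {}                  # metadata tier: content hash -> address in the LFS
        self.logical = {}              # logical layer: (snapshot, offset) -> content hash
        self.sut = defaultdict(int)    # segment usage table: segment -> live chunk count

    def ingest_snapshot(self, snapshot_id: str, chunks: list) -> None:
        """Receive a snapshot (e.g. a VM disk snapshot) from another node."""
        for offset, chunk in enumerate(chunks):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.cas:                 # deduplicate via the CAS
                address = len(self.log)
                self.log.append(chunk)                 # append to the LFS
                self.cas[digest] = address
                self.sut[address // self.SEGMENT_CHUNKS] += 1
            self.logical[(snapshot_id, offset)] = digest

    def read(self, snapshot_id: str, offset: int) -> bytes:
        """Resolve logical layer -> CAS -> LFS, e.g. during a fail-over recovery."""
        return self.log[self.cas[self.logical[(snapshot_id, offset)]]]


if __name__ == "__main__":
    store = TieredSnapshotStore()
    store.ingest_snapshot("vm-disk@t1", [b"aaaa", b"bbbb"])
    store.ingest_snapshot("vm-disk@t2", [b"aaaa", b"cccc"])   # shared chunk is stored only once
    print(store.read("vm-disk@t2", 0), dict(store.sut))
```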
Abstract:
A distributed system and method for error handling testing of a target component in the distributed system uses a proxy gateway in the target component that can intercept communications to and from remote components of the distributed system. When a proxy mode of the proxy gateway in the target component is enabled, at least one of the communications at the proxy gateway is modified to introduce an error. When the proxy mode of the proxy gateway in the target component is disabled, the communications to and from the remote components of the distributed system are transmitted via the proxy gateway without modification.
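A small sketch of a proxy gateway whose proxy mode toggles error injection on intercepted messages; `ProxyGateway`, `enable_proxy_mode`, and the message format are assumed names for illustration, not the system's actual interfaces.

```python
from typing import Callable, Optional


class ProxyGateway:
    """Sits between a target component and remote components.

    When proxy mode is enabled, intercepted messages can be modified to
    introduce errors; when it is disabled, messages pass through unmodified.
    """

    def __init__(self, forward: Callable[[dict], dict]) -> None:
        self.forward = forward             # real transport to the remote component
        self.proxy_mode = False
        self.fault: Optional[Callable[[dict], dict]] = None

    def enable_proxy_mode(self, fault: Callable[[dict], dict]) -> None:
        self.proxy_mode, self.fault = True, fault

    def disable_proxy_mode(self) -> None:
        self.proxy_mode, self.fault = False, None

    def send(self, message: dict) -> dict:
        if self.proxy_mode and self.fault is not None:
            message = self.fault(message)  # introduce an error for testing
        return self.forward(message)


if __name__ == "__main__":
    remote = lambda msg: {"status": 500 if msg.get("corrupt") else 200}
    gateway = ProxyGateway(forward=remote)

    print(gateway.send({"op": "read"}))                        # proxy mode off: passes through
    gateway.enable_proxy_mode(lambda m: {**m, "corrupt": True})
    print(gateway.send({"op": "read"}))                        # proxy mode on: error injected
```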
Abstract:
Examples perform live migration of VMs from a source host to a destination host. The disclosure changes the storage environment, directly or through a vendor provider, to active/active synchronous and, during migration, migrates only data which is not already stored at the destination host. The source and destination VMs have concurrent access to storage disks during migration. After migration, the destination VM executes, with exclusive access to the storage disks, and the system is returned to the previous storage environment (e.g., active/active asynchronous).
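A sketch of the access handoff implied above: both VMs may touch the disks while the migration runs, and the destination VM holds exclusive access afterwards. The `SharedDisk` class and VM labels are invented for illustration, and the replication-mode change is only noted in comments.

```python
class SharedDisk:
    """Models which VMs may access a storage disk while a live migration runs."""

    def __init__(self) -> None:
        self.writers = {"source-vm"}          # steady state: source VM owns the disk

    def begin_migration(self) -> None:
        # Storage is switched to active/active synchronous (not modeled here),
        # so source and destination VMs access the disk concurrently.
        self.writers.add("destination-vm")

    def complete_migration(self) -> None:
        # Destination VM takes exclusive access; storage is returned to its
        # previous environment (e.g., active/active asynchronous).
        self.writers = {"destination-vm"}

    def write(self, vm: str, data: bytes) -> None:
        if vm not in self.writers:
            raise PermissionError(f"{vm} has no access to the disk")
        # ... perform the write ...


if __name__ == "__main__":
    disk = SharedDisk()
    disk.begin_migration()
    disk.write("source-vm", b"dirty block")            # both ends may write mid-migration
    disk.write("destination-vm", b"migrated block")
    disk.complete_migration()
    disk.write("destination-vm", b"post-migration write")
    try:
        disk.write("source-vm", b"stale write")
    except PermissionError as err:
        print(err)                                     # source VM lost access after migration
```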
Abstract:
Exemplary methods, apparatuses, and systems include determining that at least a portion of a protected site has become unavailable. A first logical storage device within underlying storage of a recovery site is determined to be a stretched storage device stretched across the protected and recovery sites. A failover workflow is initiated in response to the unavailability of the protected site, wherein the failover workflow includes transmitting an instruction to the underlying storage to isolate the first logical storage device from a corresponding logical storage device within the protected site.
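A hedged sketch of the failover workflow's isolation step, assuming devices are plain dictionaries and `storage_api` is a caller-supplied callable standing in for the instruction sent to the underlying storage; none of these names are from the disclosure.

```python
def run_failover(recovery_devices: list, storage_api) -> list:
    """Failover workflow triggered once (part of) the protected site is unavailable.

    Each logical storage device at the recovery site that is determined to be
    stretched across both sites is isolated from its counterpart at the
    protected site before recovery proceeds.
    """
    isolated = []
    for device in recovery_devices:
        if device.get("stretched"):                    # stretched across protected and recovery sites
            storage_api(device["id"], device["peer"])  # instruct storage to isolate the device
            isolated.append(device["id"])
    return isolated


if __name__ == "__main__":
    def fake_isolate(device_id: str, peer_id: str) -> None:
        print(f"isolating {device_id} from {peer_id}")

    devices = [
        {"id": "lun-recovery-1", "peer": "lun-protected-1", "stretched": True},
        {"id": "lun-recovery-2", "peer": None, "stretched": False},
    ]
    print(run_failover(devices, fake_isolate))
```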