Abstract:
User data of different snapshots for the same virtual disk are stored in the same storage object. Similarly, metadata of different snapshots for the same virtual disk are stored in the same storage object, and log data of different snapshots for the same virtual disk are stored in the same storage object. As a result, the number of different storage objects that are managed for snapshots does not increase proportionally with the number of snapshots taken. In addition, any one of a multitude of persistent storage back-ends can be selected as the storage back-end for the storage objects according to user preference, system requirements, snapshot policy, or any other criteria. Another advantage is that the storage location of data to be read can be obtained with a single read of the metadata storage object, instead of by traversing the metadata files of multiple snapshots.
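A minimal sketch of the shared-metadata idea, using an in-memory dict in place of the on-disk metadata storage object; the names MetadataObject, record_write, and locate are illustrative, not taken from the patent:

    class MetadataObject:
        """One metadata store per virtual disk; its keys cover every snapshot."""

        def __init__(self):
            # (snapshot_id, logical_block) -> offset in the shared data object
            self._map = {}

        def record_write(self, snapshot_id, logical_block, data_offset):
            self._map[(snapshot_id, logical_block)] = data_offset

        def locate(self, snapshot_id, logical_block):
            """Resolve a read inside this single object: walk snapshot ids
            backward here instead of opening one metadata file per snapshot."""
            for sid in range(snapshot_id, -1, -1):
                off = self._map.get((sid, logical_block))
                if off is not None:
                    return off
            return None  # block never written; read from the base disk

    meta = MetadataObject()
    meta.record_write(snapshot_id=0, logical_block=7, data_offset=4096)
    meta.record_write(snapshot_id=2, logical_block=7, data_offset=8192)
    assert meta.locate(2, 7) == 8192   # latest snapshot sees the latest write
    assert meta.locate(1, 7) == 4096   # an older snapshot sees the older data

The point of the single object is visible in locate: however many snapshots exist, one lookup structure answers the question that would otherwise require opening each snapshot's metadata file in turn.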
Abstract:
A method for restoring a data volume using incremental snapshots of the data volume includes creating a first series of incremental snapshots according to a first predefined interval. The method further includes creating a second series of incremental snapshots according to a second predefined interval that is an integer multiple of the first predefined interval. The method also includes receiving a request to restore the data volume to a point in time. The method further includes restoring the data volume to the point in time using none or some of the snapshots in the first series that were created at or prior to the point in time, and all of the snapshots in the second series that were created at or prior to the point in time.
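A hedged sketch of the snapshot selection for such a restore, assuming snapshots are labeled by creation time and that each incremental snapshot covers the changes since the previous snapshot in its own series; snapshots_for_restore is an illustrative name, not from the patent:

    def snapshots_for_restore(point_in_time, fine_interval, coarse_interval):
        # The second (coarse) interval is an integer multiple of the first.
        assert coarse_interval % fine_interval == 0
        # All coarse-series snapshots taken at or before the point in time.
        coarse = list(range(coarse_interval, point_in_time + 1, coarse_interval))
        last_coarse = coarse[-1] if coarse else 0
        # Fine-series snapshots are needed only for the gap after the last
        # coarse snapshot, hence "none or some" of the first series.
        fine = [t for t in range(fine_interval, point_in_time + 1, fine_interval)
                if t > last_coarse]
        return coarse, fine

    coarse, fine = snapshots_for_restore(point_in_time=11,
                                         fine_interval=1, coarse_interval=4)
    print(coarse)  # [4, 8]      -> all coarse snapshots up to t=11
    print(fine)    # [9, 10, 11] -> only the fine snapshots after t=8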
Abstract:
A network-based method for managing locks in a shared file system (SFS) for a group of hosts is disclosed that requires no configuration to identify the server that manages locks for the SFS. Each host in the group checks a predetermined storage location to determine whether a host ID is written there. If no host ID is written in the predetermined location, the first host to notice this condition writes its own host ID there, identifying itself as the lock-managing server. If a host ID is already written in the predetermined location, the host keeps that host ID, which identifies the lock-managing server, in local memory. When the host needs to perform IO operations on a file of the SFS, it communicates with the lock-managing server over the network, using that server's host ID, to obtain a lock on the file.
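A single-process sketch of this configuration-free election, with a threading.Lock standing in for storage-level atomicity (a real SFS would need an atomic primitive, such as compare-and-write, on the predetermined location); all class and method names are illustrative:

    import threading

    class PredeterminedLocation:
        def __init__(self):
            self._lock = threading.Lock()  # stands in for storage atomicity
            self._server_id = None

        def claim_or_read(self, host_id):
            """If the slot is empty, write host_id and become the lock server;
            either way, return the ID of the current lock server."""
            with self._lock:
                if self._server_id is None:
                    self._server_id = host_id
                return self._server_id

    class Host:
        def __init__(self, host_id, location):
            self.host_id = host_id
            # Cache the lock server's ID in local memory for later requests.
            self.lock_server_id = location.claim_or_read(host_id)

        def acquire_file_lock(self, path):
            # A real host would message the lock server over the network;
            # here we only show whom it would contact.
            return f"host {self.host_id} asks {self.lock_server_id} to lock {path}"

    loc = PredeterminedLocation()
    hosts = [Host(h, loc) for h in ("A", "B", "C")]
    print(hosts[2].acquire_file_lock("/vmfs/vol1/vm.vmdk"))
    # -> host C asks A to lock /vmfs/vol1/vm.vmdk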
Abstract:
Multiple servers sharing a distributed file system are used to perform copies of regions of a source file in parallel from a source storage unit to corresponding temporary files at a destination storage unit. These temporary files are then merged or combined into a single file at the destination storage unit in a way that preserves the inode structure and attributes of the source file. A substantial speedup is obtained by copying regions of the file in parallel.
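A single-machine sketch of the region-split and merge flow, using threads where the patent uses multiple servers sharing a distributed file system, and a plain concatenation where the patent preserves the source file's inode structure and attributes; function and file names are illustrative:

    import os, tempfile
    from pathlib import Path
    from concurrent.futures import ThreadPoolExecutor

    def copy_region(src, region_path, offset, length):
        # Copy one byte range of the source into a temporary file.
        with open(src, "rb") as f, open(region_path, "wb") as out:
            f.seek(offset)
            out.write(f.read(length))

    def parallel_copy(src, dst, workers=4):
        size = os.path.getsize(src)
        chunk = (size + workers - 1) // workers
        regions = [(i * chunk, min(chunk, size - i * chunk))
                   for i in range(workers) if i * chunk < size]
        tmp_paths = [f"{dst}.part{i}" for i in range(len(regions))]
        with ThreadPoolExecutor(max_workers=workers) as ex:
            futures = [ex.submit(copy_region, src, path, off, ln)
                       for path, (off, ln) in zip(tmp_paths, regions)]
            for fut in futures:
                fut.result()  # propagate any copy errors
        # Merge the temporary files in offset order into one destination file.
        with open(dst, "wb") as out:
            for path in tmp_paths:
                with open(path, "rb") as part:
                    out.write(part.read())
                os.remove(path)

    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "src.bin")
        Path(src).write_bytes(os.urandom(1 << 20))  # 1 MiB test file
        dst = os.path.join(d, "dst.bin")
        parallel_copy(src, dst)
        assert Path(src).read_bytes() == Path(dst).read_bytes()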
Abstract:
A method of acquiring a lock by a node on a shared resource in a system of a plurality of interconnected nodes is disclosed. Each node that competes for a lock on the shared resource maintains a list of the locks it currently owns. Lock metadata is maintained on shared storage that is accessible to all nodes that may compete for locks on shared resources. A heartbeat region corresponding to each node is also maintained on the shared storage so that nodes can register their liveness. A lock state is maintained in the lock metadata in the shared storage and may indicate that the lock is held exclusively, that the lock is free, or that the lock is in managed mode. If the lock is held in managed mode, ownership of the lock can be transferred to another node without the use of a mutual exclusion primitive such as a SCSI reservation.
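A hedged sketch of the three lock states and the managed-mode handoff, modeling the shared-storage lock metadata as a plain object and omitting the heartbeat region and on-disk layout; all names are illustrative:

    from enum import Enum

    class LockState(Enum):
        FREE = "free"
        EXCLUSIVE = "exclusive"  # changing the owner needs on-disk exclusion
        MANAGED = "managed"      # the owner can hand the lock off directly

    class LockMetadata:
        def __init__(self):
            self.state = LockState.FREE
            self.owner = None

        def acquire_exclusive(self, node):
            # Taking a FREE lock would, on real storage, be guarded by a
            # mutual-exclusion primitive such as a SCSI reservation (elided).
            if self.state is LockState.FREE:
                self.state, self.owner = LockState.EXCLUSIVE, node
                return True
            return False

        def demote_to_managed(self, node):
            if self.owner == node and self.state is LockState.EXCLUSIVE:
                self.state = LockState.MANAGED

        def transfer_managed(self, from_node, to_node):
            """In MANAGED mode the current owner reassigns the lock to a
            requester directly; no storage-level reservation is needed."""
            if self.state is LockState.MANAGED and self.owner == from_node:
                self.owner = to_node
                return True
            return False

    lock = LockMetadata()
    lock.acquire_exclusive("node1")
    lock.demote_to_managed("node1")
    assert lock.transfer_managed("node1", "node2")
    print(lock.owner)  # node2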