Abstract:
A method, non-transitory computer readable medium, and archive node computing device that receives an indication of each of a plurality of archived files required to service a job from one of a plurality of compute node computing devices of an analytics tier. An optimized schedule for retrieving the archived files from one or more archive storage devices of an archive tier is generated. The optimized schedule is provided to the one of the plurality of compute node computing devices. Requests for the archived files are received from the one of the plurality of compute node computing devices and at least one other of the plurality of compute node computing devices, wherein the requests are sent according to the optimized schedule.
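For illustration only (not part of the abstract): a minimal sketch, in which build_retrieval_schedule, requested_files, and file_locations are assumed names, of how a retrieval schedule might batch the requested archived files by the archive storage device that holds them so each device is read sequentially.

from collections import defaultdict

def build_retrieval_schedule(requested_files, file_locations):
    # requested_files: file ids the compute nodes reported as required for the job.
    # file_locations: dict mapping each file id to the archive device holding it.
    batches = defaultdict(list)
    for f in requested_files:
        batches[file_locations[f]].append(f)
    # Largest batches first, so the biggest sequential reads start early.
    return sorted(batches.items(), key=lambda kv: len(kv[1]), reverse=True)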
Abstract:
Methods and apparatuses for efficiently migrating deduplicated data are provided. In one example, a data management system includes a data storage volume, a memory including machine executable instructions, and a computer processor. The data storage volume includes data objects and free storage space. The computer processor executes the instructions to perform deduplication of the data objects and determine migration efficiency metrics for groups of the data objects. Determining the migration efficiency metrics includes determining, for each group, a relationship between the free storage space that will result if the group is migrated from the volume and the resources required to migrate the group from the volume.
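For illustration only (unique_bytes, logical_bytes, and pick_migration_candidates are assumed names, not taken from the abstract): a minimal sketch of a migration efficiency metric relating the free space a group would produce to the work needed to migrate it.

def migration_efficiency(group):
    # group.unique_bytes: free space that would result if the group is migrated.
    # group.logical_bytes: data that must be read and rewritten to migrate it.
    if group.logical_bytes == 0:
        return 0.0
    return group.unique_bytes / group.logical_bytes

def pick_migration_candidates(groups, count):
    # Prefer groups that free the most space per unit of migration work.
    return sorted(groups, key=migration_efficiency, reverse=True)[:count]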
Abstract:
A method and system for deduplicating data for a data storage system using similarity determinations are described. A tape library is arranged in a hierarchy of tape groups and tape plexes. Tape groups are an admin-visible entity and comprise multiple tape plexes (at least as many as the number of replicas in the tape group). Tape plexes in turn comprise multiple tape cartridges. Data files and objects received within a time period are initially staged in a disk cache, where they are logically segregated into cliques based on their expected deduplication ratios. These cliques are then evaluated for the amount of duplication they have with data existing in tape plexes. Based on the number of replicas being written, the top few tape plexes are selected from within the tape group. The cliques are deduplicated with data on the selected tape plexes, compressed, and written to tape.
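For illustration only (select_plexes and the per-clique and per-plex fingerprint sets are assumptions, not taken from the abstract): a minimal sketch of the plex-selection step, choosing the plexes whose existing data overlaps most with a staged clique.

def select_plexes(clique_fingerprints, plex_fingerprints, num_replicas):
    # clique_fingerprints: set of chunk fingerprints in the staged clique.
    # plex_fingerprints: dict mapping plex id -> set of fingerprints already on it.
    overlap = {plex: len(clique_fingerprints & fps)
               for plex, fps in plex_fingerprints.items()}
    # The plexes sharing the most data yield the best deduplication.
    ranked = sorted(overlap, key=overlap.get, reverse=True)
    return ranked[:num_replicas]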
Abstract:
A method, device, and non-transitory computer readable medium that manage read access include organizing a plurality of requests for objects on one or more storage media, such as tapes or spin-down disks, based on at least a deadline for each of the plurality of requests. One of one or more replicas for each of the objects on the one or more storage media is selected based on one or more factors. An initial schedule for read access is generated based at least on the deadline for each of the plurality of requests, the selected one of the replicas for each of the objects, and availability of one or more drives. The initial schedule for read access on the one or more drives for each of the plurality of requests for the objects is provided.
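For illustration only (initial_read_schedule and its parameters are assumed names): a minimal sketch ordering requests by deadline, picking a preferred replica, and assigning reads to available drives.

def initial_read_schedule(requests, replicas, free_drives):
    # requests: list of (deadline, object_id) pairs.
    # replicas: dict mapping object_id -> candidate media holding a replica,
    #           already ordered by whatever replica-selection factors apply.
    # free_drives: drive ids currently available for mounting media.
    schedule = []
    drives = list(free_drives)
    for deadline, obj in sorted(requests):         # earliest deadline first
        medium = replicas[obj][0]                  # pick the preferred replica
        drive = drives.pop(0) if drives else None  # None: waits for a free drive
        schedule.append((deadline, obj, medium, drive))
    return schedule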
Abstract:
A rebuild node of a storage system can assess risk of the storage system not being able to provide a data object. The rebuild node(s) uses information about data object fragments to determine health of a data object, which relates to the risk assessment. The rebuild node obtains object fragment information from nodes throughout the storage system. With the object fragment information, the rebuild node(s) can assess object risk based, at least in part, on the object fragments indicated as existing by the nodes. To assess object risk, the rebuild node(s) treats absent object fragments (i.e., those for which an indication was not received) as lost. When too many object fragments are lost, an object cannot be rebuilt. The erasure coding technique dictates the threshold number of fragments for rebuilding an object. The risk assessment per object influences rebuild of the objects.
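For illustration only (assess_object_risk, rebuild_order, and k_required are assumed names): a minimal sketch, assuming a k-of-n erasure code, of scoring object risk from the fragments nodes reported as existing.

def assess_object_risk(reported_fragments, k_required):
    # reported_fragments: fragment indices nodes reported as existing;
    # fragments with no report are treated as lost.
    # k_required: minimum fragments the erasure code needs to rebuild the object.
    surviving = len(reported_fragments)
    if surviving < k_required:
        return float("inf")                        # cannot be rebuilt at all
    # Risk rises as the margin above the rebuild threshold shrinks.
    return 1.0 / (surviving - k_required + 1)

def rebuild_order(fragments_by_object, k_required):
    # Highest-risk objects first; a real system would handle unrecoverable
    # objects (infinite risk) separately rather than queue them for rebuild.
    return sorted(fragments_by_object,
                  key=lambda o: assess_object_risk(fragments_by_object[o], k_required),
                  reverse=True)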
Abstract:
Methods and systems for a distributed database cluster storing a plurality of replicas of a database are provided. One method includes locating by a processor, a timestamp of a last stored record in a backup copy of the database from a plurality of logical partitions for a point in time restore operation; identifying by the processor, an operation log for each logical partition with the last stored record, the operation log providing transaction details associated with the database; splitting by the processor, the operation log for each logical partition by ignoring transactions that occurred prior to the timestamp of the last stored record; and using by the processor, the split operation log for restoring the database to the point in time.
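For illustration only (split_oplog, point_in_time_restore, restore_base, and db.apply are assumed names, not an actual API): a minimal sketch of splitting each partition's operation log at the backup's last-record timestamp and replaying the remainder up to the restore point.

def split_oplog(oplog_entries, last_record_ts):
    # oplog_entries: (timestamp, operation) pairs for one logical partition.
    # Ignore transactions at or before the backup's last stored record.
    return [(ts, op) for ts, op in oplog_entries if ts > last_record_ts]

def point_in_time_restore(restore_base, partition_oplogs, last_record_ts, target_ts):
    # restore_base: callable that restores the backup copy and returns a db handle.
    # last_record_ts: dict mapping partition -> timestamp of its last stored record.
    db = restore_base()
    for partition, entries in partition_oplogs.items():
        for ts, op in split_oplog(entries, last_record_ts[partition]):
            if ts <= target_ts:                    # replay up to the restore point
                db.apply(partition, op)
    return db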