Abstract:
A computer-implemented method for validating ownership of deduplicated data may include (1) identifying a request from a remote client to store a data object in a data store that already includes an instance of the data object, (2) in response to the request, verifying that the remote client possesses the data object by (i) issuing a randomized challenge to the remote client, the randomized challenge including a random value which, when combined with at least a portion of the data object, produces an authentication token demonstrating possession of the data object, and, in response to the randomized challenge, (ii) receiving the authentication token from the remote client; and, in response to receiving the authentication token from the remote client, (3) storing the data object in the data store on behalf of the remote client. Various other methods and systems are also disclosed.
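A minimal sketch of such a challenge-response exchange, assuming the authentication token is an HMAC of the object's bytes keyed by the server's random nonce; the abstract does not fix a particular construction, and all function names here are illustrative:

```python
import hmac, hashlib, os

def issue_challenge() -> bytes:
    """Server side: generate a random nonce to send to the client."""
    return os.urandom(32)

def compute_token(nonce: bytes, data: bytes) -> bytes:
    """Client side: combine the nonce with the data it claims to possess."""
    return hmac.new(nonce, data, hashlib.sha256).digest()

def verify_possession(nonce: bytes, token: bytes, stored_data: bytes) -> bool:
    """Server side: recompute the token over the already-stored instance."""
    expected = hmac.new(nonce, stored_data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

# Example round trip
nonce = issue_challenge()
client_token = compute_token(nonce, b"contents of the data object")
assert verify_possession(nonce, client_token, b"contents of the data object")
```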
Abstract:
A system and method for backing up files to a single-instance storage system are disclosed. The files may be split into segments, and the file data may be stored in the single-instance storage system as individual segments. The single-instance storage system uses the concept of a file region which covers multiple segments of the file. If a region of a file is unchanged from one backup to the next, the system may use a region object to refer to the unchanged region. This avoids the need to update the reference information for each of the segments within the region, thus increasing the efficiency of backing up the new version of the file.
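A rough illustration of how a region object might stand in for a run of unchanged segments, assuming simple in-memory structures; SegmentRef, RegionRef, and the changed_segments mapping are illustrative, not the format described above:

```python
from dataclasses import dataclass
from typing import Dict, List, Union

@dataclass
class SegmentRef:
    fingerprint: str          # identifies one deduplicated segment

@dataclass
class RegionRef:
    region_id: str            # identifies a previously stored run of segments
    segment_count: int

# A backed-up file is described by a mix of region and segment references.
FileMap = List[Union[RegionRef, SegmentRef]]

def build_file_map(old_regions: List[RegionRef],
                   changed_segments: Dict[str, List[str]]) -> FileMap:
    """Reuse region objects for unchanged regions; list individual
    segments only where the file actually changed."""
    file_map: FileMap = []
    for region in old_regions:
        if region.region_id in changed_segments:
            file_map.extend(SegmentRef(fp) for fp in changed_segments[region.region_id])
        else:
            file_map.append(region)   # one reference instead of many
    return file_map
```

The point of the region reference is that an unchanged region contributes a single entry to the new file map, so none of its underlying segments need their reference information updated.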
Abstract:
Containers that store data objects that were written to those containers during a particular backup are accessed. Then, a subset of the containers is identified; the containers in the subset have fewer than a threshold number of data objects associated with the particular backup. Data objects that are in containers in that subset and that are associated with the backup are copied to one or more other containers. Those other containers are subsequently used to restore data objects associated with the backup.
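A short sketch of that consolidation step under assumed data structures (a dict of container contents and a list acting as the target container); the threshold test and copy loop mirror the description above:

```python
def consolidate_backup(containers, backup_id, threshold, new_container):
    """Copy a backup's objects out of sparsely populated containers.

    `containers` maps container IDs to lists of (object_id, backup_id)
    pairs written to that container; these structures are illustrative.
    """
    for cid, objects in containers.items():
        in_backup = [oid for oid, bid in objects if bid == backup_id]
        if 0 < len(in_backup) < threshold:
            # Sparse container: copy this backup's objects elsewhere so a
            # later restore touches fewer containers.
            for oid in in_backup:
                new_container.append((oid, backup_id))
    return new_container
```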
Abstract:
A system and method for efficiently reducing the latency of accessing an index for a data segment stored on a server. A file server both removes duplicate data and prevents duplicate data from being stored in shared data storage. The file server is coupled to an index storage subsystem holding fingerprint and pointer value pairs corresponding to data segments stored in the shared data storage. The pairs are stored in a predetermined order. The file server utilizes an ordered binary search tree to identify a particular block, of multiple blocks within the index storage subsystem, corresponding to a received memory access request. The index storage subsystem determines whether an entry corresponding to the memory access request is located within the identified block. Based on at least this determination, the file server processes the memory access request accordingly. In one embodiment, the index storage subsystem is a solid-state disk (SSD).
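A simplified sketch of routing a fingerprint lookup to a single index block; a sorted list plus binary search (Python's bisect) stands in for the ordered binary search tree, and read_block is an assumed callback that fetches one block from the index storage subsystem:

```python
import bisect

class BlockIndexRouter:
    """Route a fingerprint lookup to the one on-SSD index block that could
    contain it. `block_starts` holds the smallest fingerprint stored in
    each block, kept in sorted order (a stand-in for the ordered binary
    search tree described in the abstract)."""

    def __init__(self, block_starts):
        self.block_starts = sorted(block_starts)

    def block_for(self, fingerprint) -> int:
        # Index of the last block whose starting fingerprint <= fingerprint.
        pos = bisect.bisect_right(self.block_starts, fingerprint) - 1
        return max(pos, 0)

def lookup(router, read_block, fingerprint):
    """Read only the identified block from the index device and search it."""
    block = read_block(router.block_for(fingerprint))   # dict of fp -> pointer
    return block.get(fingerprint)                       # None => not indexed
```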
Abstract:
A system and method for efficiently reducing a number of duplicate blocks of stored data. A file server both removes duplicate data and prevents duplicate data from being stored in the shared storage. A sampling rate may be used to determine which fingerprints, or hash values, are stored in an index. The sampling rate may be modified in response to changes in characteristics of the system, such as a change in the shared storage size, a change in the utilization of the shared storage, a change in the size of the storage unit, and reaching a threshold corresponding to utilization of the index. Also, a small cache may be maintained for holding fingerprint and pointer value pairs prefetched from the shared storage. Each prefetched pair may be associated with data corresponding to a previous hit in the index. The association may be related to spatial locality, temporal locality, or otherwise.
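A toy sketch of the sampled index and prefetch cache, assuming a modulus test as the sampling predicate and a caller-supplied prefetch_neighbors helper that returns pairs stored near a hit; both are assumptions rather than the mechanism claimed above:

```python
def is_sampled(fingerprint: int, sampling_rate: int) -> bool:
    """Keep roughly 1 in `sampling_rate` fingerprints in the index
    (a simple modulus test stands in for whatever predicate is used)."""
    return fingerprint % sampling_rate == 0

class DedupIndex:
    def __init__(self, sampling_rate: int):
        self.sampling_rate = sampling_rate
        self.index = {}       # sampled fingerprint -> container pointer
        self.cache = {}       # small cache of prefetched fp -> pointer pairs

    def insert(self, fingerprint: int, pointer):
        if is_sampled(fingerprint, self.sampling_rate):
            self.index[fingerprint] = pointer

    def lookup(self, fingerprint: int, prefetch_neighbors):
        if fingerprint in self.cache:
            return self.cache[fingerprint]
        pointer = self.index.get(fingerprint)
        if pointer is not None:
            # Prefetch pairs stored near the hit (spatial locality) so that
            # nearby fingerprints can be resolved without touching the index.
            self.cache.update(prefetch_neighbors(pointer))
        return pointer
```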
Abstract:
A computer-implemented method for neutralizing file-format-specific exploits contained within electronic communications may include (1) identifying an electronic communication, (2) identifying at least one file contained within the electronic communication, and then (3) neutralizing any file-format-specific exploits contained within the file. In one example, neutralizing any file-format-specific exploits contained within the file may include applying at least one file-format-conversion operation to the file. Additionally or alternatively, neutralizing any file-format-specific exploits contained within the file may include constructing a sterile version of the file that selectively omits at least a portion of any exploitable content contained within the file. Various other methods, systems, and computer-readable media are also disclosed.
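As one illustrative instance of the format-conversion approach, the sketch below re-encodes an untrusted image with Pillow so that only decoded pixel data survives; it is not the claimed method, just a familiar example of conversion-based sterilization:

```python
from io import BytesIO
from PIL import Image   # Pillow, used here only to illustrate re-encoding

def sterilize_image(untrusted_bytes: bytes) -> bytes:
    """Rebuild an attached image so that only its decoded pixels are kept.
    Malformed structures or trailing payloads in the original file do not
    survive the round trip through a freshly constructed image."""
    original = Image.open(BytesIO(untrusted_bytes)).convert("RGB")
    clean = Image.new("RGB", original.size)
    clean.putdata(list(original.getdata()))   # copy pixel data only
    out = BytesIO()
    clean.save(out, format="PNG")             # emit a brand-new PNG
    return out.getvalue()
```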
Abstract:
Various embodiments of a network protocol that utilizes a congestion control algorithm that distinguishes between congestion loss and damage loss are described. In response to a packet loss on a network, a delay-based detection algorithm may be performed based on RTT (Round-Trip Time) information to determine whether the network is congested. If the delay-based detection algorithm does not determine that the network is congested, then a consistency-based detection algorithm may be performed based on packet loss rate information. If either the delay-based detection algorithm or the consistency-based detection algorithm determines that the network is congested, then the rate of data transmission may be reduced, e.g., by reducing a congestion window size.
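A hedged sketch of the loss-classification decision, with assumed thresholds (an RTT inflation factor and an expected damage-loss rate) and a conventional window-halving response standing in for whatever the protocol actually uses:

```python
def delay_indicates_congestion(rtt_samples, base_rtt, factor=1.5):
    """Delay-based check: recent RTTs well above the baseline suggest
    queue build-up, i.e., congestion (factor is an assumed threshold)."""
    recent = sum(rtt_samples[-4:]) / min(len(rtt_samples), 4)
    return recent > factor * base_rtt

def loss_rate_indicates_congestion(loss_rate, expected_damage_rate=0.01):
    """Consistency-based check: a loss rate far above the link's expected
    damage-loss rate is treated as congestion (rate is an assumption)."""
    return loss_rate > 3 * expected_damage_rate

def on_packet_loss(cwnd, rtt_samples, base_rtt, loss_rate):
    """Shrink the congestion window only if either detector signals
    congestion; otherwise treat the loss as damage and keep the rate."""
    if delay_indicates_congestion(rtt_samples, base_rtt) or \
       loss_rate_indicates_congestion(loss_rate):
        return max(cwnd // 2, 1)
    return cwnd
```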
Abstract:
A computer-implemented method for providing increased scalability in deduplication storage systems may include (1) identifying a database that stores a plurality of reference objects, (2) determining that at least one size-related characteristic of the database has reached a predetermined threshold, (3) partitioning the database into a plurality of sub-databases capable of being updated independently of one another, (4) identifying a request to perform an update operation that updates one or more reference objects stored within at least one sub-database, and then (5) performing the update operation on less than all of the sub-databases to avoid processing costs associated with performing the update operation on all of the sub-databases. Various other systems, methods, and computer-readable media are also disclosed.
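A minimal sketch of a partitioned reference database, assuming hash-based partitioning so that an update touches only the sub-database owning the affected reference object; the partitioning key and class names are illustrative:

```python
class PartitionedRefDB:
    """Reference-object store split into sub-databases that can be
    updated independently; hash partitioning is an assumed scheme."""

    def __init__(self, num_partitions: int):
        self.parts = [dict() for _ in range(num_partitions)]

    def _part_for(self, ref_id) -> dict:
        return self.parts[hash(ref_id) % len(self.parts)]

    def update(self, ref_id, value):
        # Only the sub-database owning this reference object is touched;
        # the remaining partitions incur no cost for this update.
        self._part_for(ref_id)[ref_id] = value

    def get(self, ref_id):
        return self._part_for(ref_id).get(ref_id)
```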
Abstract:
A system and method for managing a resource reclamation reference list at a coarse level. A storage device is configured to store a plurality of storage objects in a plurality of storage containers, each of said storage containers being configured to store a plurality of said storage objects. A storage container reference list is maintained, wherein for each of the storage containers the storage container reference list identifies which files of a plurality of files reference a storage object within a given storage container. In response to detecting deletion of a given file that references an object within a particular storage container of the storage containers, a server is configured to update the storage container reference list by removing from the storage container reference list an identification of the given file. A reference list associating segment objects with the files that reference those segment objects may not be updated in response to the deletion.
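A small sketch of such a coarse-grained reference list, assuming container IDs and file IDs as keys; deleting a file removes one entry per container it touched, while any per-segment reference list is left alone, as described above:

```python
from collections import defaultdict

class ContainerRefList:
    """Coarse reference list: for each storage container, the set of files
    that reference at least one object inside it. Per-segment reference
    lists are deliberately not updated on delete."""

    def __init__(self):
        self.refs = defaultdict(set)   # container_id -> {file_id, ...}

    def add_reference(self, container_id, file_id):
        self.refs[container_id].add(file_id)

    def on_file_deleted(self, file_id):
        # One removal per container the file touched, instead of one
        # update per segment the file referenced.
        for file_set in self.refs.values():
            file_set.discard(file_id)

    def container_is_unreferenced(self, container_id) -> bool:
        return not self.refs[container_id]   # candidate for reclamation
```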