Abstract:
For scalable data deduplication working with small data chunks in a computing environment, a signature is generated for each small data chunk by combining a representation of the characters that appear in the chunk with a representation of their frequencies. The signature is then used to help select the data to be deduplicated.
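The abstract does not say how the two representations are combined. A minimal sketch of one possible per-chunk signature, assuming a bitmap of occurring byte values combined with the most frequent byte values (the function name and parameters are illustrative, not the patented scheme):

```python
from collections import Counter

def chunk_signature(chunk: bytes, top_k: int = 8) -> bytes:
    """Sketch: combine which byte values appear in the chunk with which
    byte values appear most often, yielding a compact per-chunk signature."""
    counts = Counter(chunk)
    # Representation of the characters that appear in the chunk: a 256-bit bitmap.
    present = bytearray(32)
    for b in counts:
        present[b // 8] |= 1 << (b % 8)
    # Representation of frequencies: the top_k most frequent byte values.
    frequent = bytes(b for b, _ in counts.most_common(top_k))
    return bytes(present) + frequent

# Chunks with similar content yield similar signatures, which can guide the
# selection of candidate data for deduplication.
sig = chunk_signature(b"abracadabra" * 128)
```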
Abstract:
Exemplary method, system, and computer program product embodiments for an incremental modification of an error detection code operation are provided. In one embodiment, by way of example only, for a data block that requires a first error detection code (EDC) value to be calculated and verified, and that is undergoing modification of at least one randomly positioned sub-block that becomes available and is modified in independent time intervals, a second EDC value is calculated for each of the randomly positioned sub-blocks. The incremental effect of each second EDC value is applied when calculating the first EDC value and when recalculating the first EDC value after one of the randomly positioned sub-blocks is replaced. The resource consumption is proportional to the size of the randomly positioned sub-blocks that are added or modified. Additional system and computer program product embodiments are disclosed and provide related advantages.
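As a hedged illustration of the incremental idea (not the patented EDC), consider an EDC defined as the block, read as a big-endian integer, reduced modulo a fixed prime. Such an EDC is linear in the sub-blocks, so the effect of replacing a randomly positioned sub-block can be folded into the block-level value at a cost proportional to the sub-block's size:

```python
M = (1 << 61) - 1  # assumed modulus; any fixed prime works for this sketch

def edc(data: bytes) -> int:
    """EDC of a (sub-)block: the bytes, read as a big-endian number, mod M."""
    return int.from_bytes(data, "big") % M

def apply_incremental(block_edc: int, old_sub: bytes, new_sub: bytes,
                      offset: int, block_len: int) -> int:
    """Fold the effect of replacing old_sub with new_sub (same length,
    starting at byte `offset`) into a previously computed block-level EDC."""
    assert len(old_sub) == len(new_sub)
    tail = block_len - offset - len(new_sub)   # bytes to the right of the sub-block
    weight = pow(256, tail, M)                 # positional weight of the sub-block
    delta = (edc(new_sub) - edc(old_sub)) % M
    return (block_edc + delta * weight) % M

# A sub-block modified in its own time interval updates the block-level EDC
# without re-reading the whole block.
block = bytearray(b"0123456789" * 10)
value = edc(bytes(block))
old, new = bytes(block[30:40]), b"ABCDEFGHIJ"
block[30:40] = new
assert apply_incremental(value, old, new, 30, len(block)) == edc(bytes(block))
```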
Abstract:
A deduplication storage system enables new input data to be deduplicated with data of synthetic backups already constructed, and for this purpose efficiently calculates deduplication digests for synthetic backups being constructed, based on already existing digests of the data referenced by the synthetic backups. For each input data segment of the plurality of input data segments of a synthetic backup being constructed, a plurality of deduplication digests of stored data segments referenced by the input data segment is retrieved from an index. Each input data segment is partitioned into a plurality of fixed-size data sub-segments. A calculation is performed that produces a deduplication digest for each data sub-segment, where the calculation is based on the retrieved deduplication digests of the stored data sub-segments referenced by the input data sub-segment.
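A hedged sketch of the combining idea (the polynomial digest and the helper names are assumptions, not the system's actual digest): with a digest whose value for a concatenation can be derived from the digests of its pieces, the digest of a sub-segment of a synthetic backup can be built from the already-stored digests retrieved from the index, without re-reading the stored data.

```python
B, M = 257, (1 << 61) - 1   # assumed base and modulus for the sketch

def digest(data: bytes) -> int:
    """Polynomial (Rabin-style) digest of a byte string."""
    d = 0
    for byte in data:
        d = (d * B + byte) % M
    return d

def combine(left_digest: int, right_digest: int, right_len: int) -> int:
    """Digest of left||right, computed from the digests of the two pieces."""
    return (left_digest * pow(B, right_len, M) + right_digest) % M

def sub_segment_digest(referenced: list[tuple[int, int]]) -> int:
    """Digest of one fixed-size sub-segment of a synthetic backup, built from
    the (digest, length) pairs of the stored pieces it references."""
    d = 0
    for piece_digest, piece_len in referenced:
        d = combine(d, piece_digest, piece_len)
    return d

full = b"data referenced by the synthetic backup"
parts = [full[:16], full[16:]]
assert sub_segment_digest([(digest(p), len(p)) for p in parts]) == digest(full)
```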
Abstract:
A remainder by division of a sequence of bytes interpreted as a first number by a second number is calculated. A first remainder by division associated with a first subset of the sequence of bytes is calculated with a first processor. A second remainder by division associated with a second subset of the sequence of bytes is calculated with a second processor. The calculating of the second remainder by division may occur at least partially during the calculating of the first remainder by division. A third remainder by division is calculated based on the calculating of the first remainder by division and the calculating of the second remainder by division.
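A minimal sketch of the splitting idea (function names are illustrative): for a sequence A||B and divisor m, the remainder satisfies (A||B) mod m = ((A mod m) * 256**len(B) + (B mod m)) mod m, so the two partial remainders can be computed concurrently and then merged.

```python
from concurrent.futures import ProcessPoolExecutor

def remainder(data: bytes, m: int) -> int:
    """Remainder of the bytes, read as a big-endian number, divided by m."""
    return int.from_bytes(data, "big") % m

def parallel_remainder(data: bytes, m: int) -> int:
    mid = len(data) // 2
    first, second = data[:mid], data[mid:]
    with ProcessPoolExecutor(max_workers=2) as pool:
        # First and second remainders are computed at least partially in parallel.
        r1 = pool.submit(remainder, first, m)
        r2 = pool.submit(remainder, second, m)
        # Third remainder: shift the first subset past the second and merge.
        return (r1.result() * pow(256, len(second), m) + r2.result()) % m

if __name__ == "__main__":
    data = bytes(range(256)) * 100
    assert parallel_remainder(data, 65521) == int.from_bytes(data, "big") % 65521
```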
Abstract:
Methods, computer systems, and computer program products for calculating a remainder by division of a sequence of bytes interpreted as a first number by a second number are provided. A pseudo-remainder by division associated with a first subsequence of the sequence of bytes is calculated. A property of this pseudo-remainder is that the first subsequence of the sequence of bytes, interpreted as a third number, and the pseudo-remainder by division have the same remainder by division when divided by the second number. A second subsequence of the sequence of bytes interpreted as the first number is appended to the pseudo-remainder, interpreted as a sequence of bytes, so as to create a sequence of bytes interpreted as a fourth number. The first number and the fourth number have the same remainder by division when divided by the second number.
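A minimal sketch of the stated property (names are illustrative): any byte string congruent to the first subsequence modulo the divisor can stand in for it, and appending the untouched second subsequence to that pseudo-remainder yields a much shorter sequence with the same remainder as the original.

```python
def pseudo_remainder(prefix: bytes, m: int) -> bytes:
    """A short byte string congruent (mod m) to the prefix it replaces.
    Here the true remainder is used, which is the shortest such value."""
    r = int.from_bytes(prefix, "big") % m
    return r.to_bytes(max(1, (r.bit_length() + 7) // 8), "big")

def remainder_via_pseudo(seq: bytes, split: int, m: int) -> int:
    prefix, suffix = seq[:split], seq[split:]
    # Append the unprocessed suffix to the pseudo-remainder; the resulting
    # shorter byte string has the same remainder as the full sequence.
    reduced = pseudo_remainder(prefix, m) + suffix
    return int.from_bytes(reduced, "big") % m

data = b"a long byte sequence whose remainder by division we want" * 50
assert remainder_via_pseudo(data, 1000, 65521) == int.from_bytes(data, "big") % 65521
```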
Abstract:
Systems and methods are provided for transactional processing within a clustered file system, wherein user-defined transactions operate on data segments of the file system data. The users are provided with an interface for using a transactional mechanism, namely services for opening, writing, and rolling back transactions. A distributed shared memory technology is utilized to facilitate efficient and coherent cache management within the clustered file system based on the granularity of data segments (rather than files).
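A hypothetical sketch of the kind of interface described, with services for opening, writing to, committing, and rolling back transactions at data-segment granularity (class and method names are assumptions; the distributed-shared-memory layer that keeps cached segments coherent across nodes is not modeled here):

```python
class Transaction:
    """User-defined transaction that buffers writes to data segments."""

    def __init__(self, fs: "ClusteredFS"):
        self.fs = fs
        self.pending: dict[int, bytes] = {}   # segment_id -> buffered contents
        self.open = True

    def write(self, segment_id: int, data: bytes) -> None:
        assert self.open
        self.pending[segment_id] = data       # not yet visible to other readers

    def commit(self) -> None:
        self.fs.segments.update(self.pending) # publish all buffered writes
        self.open = False

    def rollback(self) -> None:
        self.pending.clear()                  # discard all buffered writes
        self.open = False

class ClusteredFS:
    """Stand-in for the clustered file system's segment store."""

    def __init__(self):
        self.segments: dict[int, bytes] = {}

    def open_transaction(self) -> Transaction:
        return Transaction(self)
```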
Abstract:
Systems, methods, and computer program products are provided for performing concurrent checkpoints from file system agents residing on different nodes within a clustered file system (CFS). Responsibility to checkpoint a modified and committed data segment to a final storage location is assigned to one of the file system agents. The assigned file system agent is the one whose associated distributed shared memory (DSM) agent is the owner of the data segment.
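A hedged sketch of the assignment rule (the data structures are assumptions): each modified, committed data segment is checkpointed by the file system agent on the node whose DSM agent currently owns that segment.

```python
def assign_checkpoint_agents(modified_segments, dsm_owner, fs_agent_on_node):
    """Map each segment id to the file system agent responsible for
    checkpointing it to its final storage location.

    dsm_owner:        segment_id -> node id of the owning DSM agent
    fs_agent_on_node: node id    -> file system agent residing on that node
    """
    return {seg: fs_agent_on_node[dsm_owner[seg]] for seg in modified_segments}

# Segments owned by DSM agents on different nodes are checkpointed
# concurrently by the file system agents on those nodes.
owners = {1: "node-A", 2: "node-B"}
agents = {"node-A": "fs-agent-A", "node-B": "fs-agent-B"}
print(assign_checkpoint_agents([1, 2], owners, agents))
```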