Abstract:
The described implementations relate to distributed network management and more particularly to enhancing distributed network utility. One technique selects multiple trees to distribute content to multiple receivers in a session where individual receivers can receive the distributed content at one of a plurality of rates. The technique further adjustably allocates content distribution across the multiple trees to increase a sum of utilities of the multiple receivers.
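As a rough illustration of adjustably allocating content across multiple distribution trees, the following Python sketch maximizes a sum of logarithmic receiver utilities with a simple projected-gradient update; the tree capacities, receiver-to-tree reachability, logarithmic utility, and step size are all illustrative assumptions rather than details taken from the abstract.

```python
# Sketch: allocate per-tree sending rates to increase the sum of receiver
# utilities. Utility is assumed logarithmic in each receiver's aggregate rate.

def allocate(trees, receivers, step=0.01, iterations=2000):
    """trees: {tree: capacity}; receivers: {receiver: set of trees that reach it}.
    Returns per-tree rates after projected gradient ascent on sum(log(rate_r))."""
    rate = {t: 0.1 for t in trees}                  # small positive starting rates
    for _ in range(iterations):
        grad = {t: 0.0 for t in trees}
        for reach in receivers.values():
            total = sum(rate[t] for t in reach)     # receiver's aggregate rate
            for t in reach:
                grad[t] += 1.0 / total              # d log(total) / d rate_t
        for t in trees:
            # Gradient step, projected into [0, capacity].
            rate[t] = min(trees[t], max(1e-6, rate[t] + step * grad[t]))
    return rate

if __name__ == "__main__":
    trees = {"tree_a": 2.0, "tree_b": 1.0}          # hypothetical capacities (Mbps)
    receivers = {"r1": {"tree_a"}, "r2": {"tree_a", "tree_b"}}
    print(allocate(trees, receivers))
```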
Abstract:
The subject disclosure is directed towards a data deduplication technology in which a hash index service's index and/or indexing operations are adaptable to balance deduplication performance savings, throughput and resource consumption. The indexing service may employ hierarchical chunking using different levels of granularity corresponding to chunk size, a sampled compact index table that contains compact signatures for less than all of the hash index's (or subspace's) hash values, and/or selective subspace indexing based on similarity of a subspace's data to another subspace's data and/or to incoming data chunks.
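One way to picture the sampled compact index table is the small Python sketch below: only a sampled fraction of chunk hashes is indexed, and each indexed entry keeps a short signature (a hash prefix) instead of the full hash. The sampling rate, signature length, and SHA-256 chunk hash are illustrative assumptions.

```python
# Sketch of a sampled compact index: compact signatures for less than all hashes.
import hashlib

SAMPLE_MASK = 0x3          # assume roughly 1 in 4 hashes is indexed
SIG_BYTES = 4              # assumed compact-signature length

def chunk_hash(chunk: bytes) -> bytes:
    return hashlib.sha256(chunk).digest()

class SampledCompactIndex:
    def __init__(self):
        self.table = {}    # compact signature -> location of full hash/metadata

    def maybe_insert(self, full_hash: bytes, location: int):
        # Deterministic sampling on the low bits of the hash.
        if full_hash[-1] & SAMPLE_MASK == 0:
            self.table[full_hash[:SIG_BYTES]] = location

    def lookup(self, full_hash: bytes):
        # A hit is only a hint; the caller verifies against the full hash on disk.
        return self.table.get(full_hash[:SIG_BYTES])
```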
Abstract:
Techniques are described for sharing content among peers. Locality domains are treated as first order network units. Content is located at the level of a locality domain using a hierarchical DHT in which nodes correspond to locality domains. A peer searches for a given piece of content in a proximity guided manner and terminates at the earliest locality domain (in the hierarchy) which has the content. Locality domains are organized into hierarchical clusters based on their proximity.
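The hierarchy of locality domains and the proximity-guided lookup can be sketched as follows in Python; the tree of domains, the publish-up-the-hierarchy step, and the domain names are illustrative assumptions standing in for the hierarchical DHT.

```python
# Sketch: locality domains as nodes in a hierarchy; a lookup walks upward and
# terminates at the earliest (lowest) domain that knows the content.

class LocalityDomain:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.content = {}                  # content_id -> peers holding it in this subtree

    def publish(self, content_id, peer):
        # Advertise the content at this domain and at every ancestor.
        node = self
        while node is not None:
            node.content.setdefault(content_id, set()).add(peer)
            node = node.parent

    def locate(self, content_id):
        # Proximity-guided search: stop at the first ancestor with the content.
        node = self
        while node is not None:
            if content_id in node.content:
                return node.name, node.content[content_id]
            node = node.parent
        return None

if __name__ == "__main__":
    root = LocalityDomain("global")
    east = LocalityDomain("east", parent=root)
    campus1 = LocalityDomain("campus-1", parent=east)
    campus1.publish("video-42", "peer-a")
    print(LocalityDomain("campus-2", parent=east).locate("video-42"))  # found at "east"
```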
Abstract:
Difficulties associated with choosing advantageous network routes between a server and clients are mitigated by a routing system devised to use many routing path sets, where each set comprises routing paths that together cover all of the clients, including paths routed through other clients. A server may then apportion a data stream among the routing path sets. The server may also detect the performance of the computer network while sending the data stream between clients, and may adjust how the data stream is apportioned among the routing path sets accordingly. The clients may also be configured to operate as servers of other data streams, such as in a videoconferencing session, for example, and may be configured to send detected route performance information along with the portions of the various data streams.
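A simple way to illustrate adjusting the apportionment from detected performance is the Python sketch below; the path-set names, the throughput-proportional target, and the smoothing factor are illustrative assumptions, not the routing system's actual update rule.

```python
# Sketch: shift the apportionment of a data stream toward better-performing
# routing path sets, based on measured throughput per set.

def reapportion(weights, measured_throughput, smoothing=0.5):
    """weights: {path_set: fraction of stream}; measured_throughput: {path_set: Mbps}."""
    total = sum(measured_throughput.values()) or 1.0
    target = {p: measured_throughput[p] / total for p in weights}
    new = {p: (1 - smoothing) * weights[p] + smoothing * target[p] for p in weights}
    norm = sum(new.values())
    return {p: w / norm for p, w in new.items()}

if __name__ == "__main__":
    weights = {"set_direct": 0.5, "set_via_client_b": 0.5}
    print(reapportion(weights, {"set_direct": 8.0, "set_via_client_b": 2.0}))
```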
Abstract:
Described is using flash memory (or other secondary storage), RAM-based data structures and mechanisms to access key-value pairs stored in the flash memory using only a low RAM space footprint. A mapping (e.g., hash) function maps each key-value pair to a slot in a RAM-based index. The slot includes a pointer to a bucket of records on flash memory whose keys map to that slot. The bucket of records is arranged as a linear-chained linked list, e.g., with pointers from the most recently written record back to the earliest written record. Also described are compacting non-contiguous records of a bucket onto a single flash page, and garbage collection. Still further described are load balancing to reduce variation in bucket sizes, using a Bloom filter per slot to avoid unnecessary searching, and splitting a slot into sub-slots.
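The RAM-slot index and the flash-resident linked chains can be sketched directly in Python; flash is simulated with an append-only list, and the slot count and SHA-1 slot hash are illustrative assumptions.

```python
# Sketch: each RAM slot points to the most recently written record of its bucket
# on flash; each flash record points back to the previous record in the chain.
import hashlib

NUM_SLOTS = 8                                  # assumed RAM index size

def slot_of(key: bytes) -> int:
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % NUM_SLOTS

class FlashKVStore:
    def __init__(self):
        self.flash = []                        # simulated append-only flash log
        self.index = [None] * NUM_SLOTS        # RAM: slot -> flash address of newest record

    def put(self, key: bytes, value: bytes):
        s = slot_of(key)
        record = {"key": key, "value": value, "prev": self.index[s]}  # link to older record
        self.flash.append(record)
        self.index[s] = len(self.flash) - 1    # newest record becomes the chain head

    def get(self, key: bytes):
        addr = self.index[slot_of(key)]
        while addr is not None:                # walk the chain, newest to earliest
            rec = self.flash[addr]
            if rec["key"] == key:
                return rec["value"]
            addr = rec["prev"]
        return None

if __name__ == "__main__":
    kv = FlashKVStore()
    kv.put(b"alpha", b"1")
    kv.put(b"beta", b"2")
    print(kv.get(b"alpha"), kv.get(b"missing"))
```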
Abstract:
The subject disclosure is directed towards partitioning a file into chunks that satisfy a chunk size restriction, such as maximum and minimum chunk sizes, using a sliding window. For file positions within the chunk size restriction, a signature representative of a window fingerprint is compared with a target pattern, with a chunk boundary candidate identified if matched. Other signatures and patterns are then checked to determine a highest ranking signature (corresponding to a lowest numbered Rule) to associate with that chunk boundary candidate, or set an actual boundary if the highest ranked signature is matched. If the maximum chunk size is reached without matching the highest ranked signature, the chunking mechanism regresses to set the boundary based on the candidate with the next highest ranked signature (if no candidates, the boundary is set at the maximum). Also described is setting chunk boundaries based upon pattern detection (e.g., runs of zeros).
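A much-simplified Python sketch of the sliding-window idea follows, with one higher-ranked and one lower-ranked target pattern; the window size, the CRC32 fingerprint (recomputed rather than rolled), the masks, and the min/max sizes are illustrative assumptions and not the patented fingerprint or rules.

```python
# Sketch: content-defined chunking with a primary (higher-ranked) pattern, a
# fallback candidate, and a forced boundary at the maximum chunk size.
import zlib

MIN_CHUNK, MAX_CHUNK, WINDOW = 2048, 16384, 48
PRIMARY_MASK, FALLBACK_MASK = 0x3FFF, 0x0FFF   # primary matches more rarely

def chunk_boundaries(data: bytes):
    boundaries, start = [], 0
    while start < len(data):
        end = min(start + MAX_CHUNK, len(data))
        boundary, fallback = end, None          # default: forced at max size
        for pos in range(start + MIN_CHUNK, end):
            fp = zlib.crc32(data[max(pos - WINDOW, start):pos])
            if fp & PRIMARY_MASK == 0:          # highest-ranked signature matched
                boundary = pos
                break
            if fallback is None and fp & FALLBACK_MASK == 0:
                fallback = pos                  # remember a lower-ranked candidate
        else:
            if fallback is not None:
                boundary = fallback             # regress to the next-best candidate
        boundaries.append(boundary)
        start = boundary
    return boundaries
```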
Abstract:
Described is using flash memory, RAM-based data structures and mechanisms to provide a flash store for caching data items (e.g., key-value pairs) in flash pages. A RAM-based index maps data items to flash pages, and a RAM-based write buffer maintains data items to be written to the flash store, e.g., when a full page can be written. A recycle mechanism makes used pages in the flash store available by destaging a data item to a hard disk or reinserting it into the write buffer, based on its access pattern. The flash store may be used in a data deduplication system, in which the data items comprise chunk-identifier, metadata pairs, in which each chunk-identifier corresponds to a hash of a chunk of data. The RAM and flash are accessed with the chunk-identifier (e.g., as a key) to determine whether a chunk is a new chunk or a duplicate.
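The write buffer, page-granular writes, and access-pattern-based recycling can be pictured with the Python sketch below; the page capacity, the single access bit, and the dictionary standing in for the hard disk are illustrative assumptions.

```python
# Sketch: RAM write buffer flushed as full flash pages, RAM index mapping keys
# to pages, and a recycle step that destages cold items and reinserts hot ones.

ITEMS_PER_PAGE = 4                             # assumed flash page capacity

class FlashCache:
    def __init__(self):
        self.write_buffer = {}                 # RAM: items waiting for a full page
        self.flash_pages = []                  # simulated flash, one item list per page
        self.index = {}                        # RAM: key -> flash page number
        self.accessed = set()                  # keys read since being written to flash
        self.disk = {}                         # cold items destaged to hard disk

    def put(self, key, value):
        self.write_buffer[key] = value
        if len(self.write_buffer) >= ITEMS_PER_PAGE:        # write only full pages
            page_no = len(self.flash_pages)
            self.flash_pages.append(list(self.write_buffer.items()))
            for k in self.write_buffer:
                self.index[k] = page_no
            self.write_buffer.clear()

    def get(self, key):
        if key in self.write_buffer:
            return self.write_buffer[key]
        if key in self.index:
            self.accessed.add(key)
            return dict(self.flash_pages[self.index[key]])[key]
        return self.disk.get(key)

    def recycle(self, page_no):
        # Make a used page available: recently accessed items go back to the
        # write buffer; the rest are destaged to disk (simplified policy).
        for k, v in self.flash_pages[page_no]:
            if self.index.get(k) != page_no:
                continue                        # a newer copy lives elsewhere
            del self.index[k]
            if k in self.accessed:
                self.put(k, v)
            else:
                self.disk[k] = v
        self.flash_pages[page_no] = []          # the page can now be reused
```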
Abstract:
Transmission delays are minimized when packets are transmitted from a source computer over a network to a destination computer. The source computer measures the network's available bandwidth, forms a sequence of output packets from a sequence of data packets, and transmits the output packets over the network to the destination computer, where the transmission rate is ramped up to the measured bandwidth. In conjunction with the transmission, the source computer monitors a transmission delay indicator which it computes using acknowledgement packets it receives from the destination computer. Whenever the indicator specifies that the transmission delay is increasing, the source computer reduces the transmission rate until the indicator specifies that the delay is unchanged. The source computer dynamically decides whether each output packet will be a forward error correction packet or a single data packet, where the decision is based on minimizing the expected transmission delays.
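The sender-side behavior can be summarized with the Python sketch below; the ramp and backoff factors and the loss-threshold rule for choosing forward error correction over a data packet are illustrative assumptions, not the expected-delay computation itself.

```python
# Sketch: ramp the sending rate toward measured bandwidth, back off while the
# delay indicator reports growing delay, and choose FEC vs. data per packet.

def adjust_rate(current_rate, measured_bandwidth, delay_increasing,
                ramp=1.05, backoff=0.9):
    if delay_increasing:
        return current_rate * backoff          # reduce until delay stops growing
    return min(current_rate * ramp, measured_bandwidth)

def choose_packet(loss_estimate, pending_unprotected, fec_threshold=0.02):
    """Send an FEC packet when estimated loss is high enough that waiting for a
    retransmission would likely delay the stream more than the FEC overhead."""
    if pending_unprotected > 0 and loss_estimate > fec_threshold:
        return "fec"
    return "data"

if __name__ == "__main__":
    rate = 1.0
    for delay_up in (False, False, True, False):
        rate = adjust_rate(rate, measured_bandwidth=10.0, delay_increasing=delay_up)
        print(round(rate, 3), choose_packet(loss_estimate=0.05, pending_unprotected=3))
```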
Abstract:
The subject disclosure is directed towards a data deduplication technology in which a hash index service's index is partitioned into subspace indexes, with less than the entire hash index service's index cached to save memory. The subspace index is accessed to determine whether a data chunk already exists or needs to be indexed and stored. The index may be divided into subspaces based on criteria associated with the data to index, such as file type, data type, time of last usage, and so on. Also described is subspace reconciliation, in which duplicate entries in subspaces are detected so as to remove entries and chunks from the deduplication system. Subspace reconciliation may be performed at off-peak time, when more system resources are available, and may be interrupted if resources are needed. Subspaces to reconcile may be based on similarity, including via similarity of signatures that each compactly represents the subspace's hashes.
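A compact Python sketch of subspace signatures and reconciliation follows; the min-hash-style signature (the k smallest hashes), the Jaccard similarity estimate, and the choice to reconcile only the most similar pair are illustrative assumptions.

```python
# Sketch: partition the hash index into subspaces, compare subspaces via compact
# signatures, and reconcile the most similar pair by removing duplicate entries.

SIG_SIZE = 8   # assumed number of sampled hashes per subspace signature

def signature(hashes):
    return set(sorted(hashes)[:SIG_SIZE])      # compact representation of the subspace

def similarity(sig_a, sig_b):
    return len(sig_a & sig_b) / max(1, len(sig_a | sig_b))

def reconcile(subspaces):
    """subspaces: {name: set of chunk hashes}. Reconciles the most similar pair
    by dropping entries from one subspace that already exist in the other."""
    names = list(subspaces)
    pairs = [(similarity(signature(subspaces[a]), signature(subspaces[b])), a, b)
             for i, a in enumerate(names) for b in names[i + 1:]]
    if not pairs:
        return None
    _, a, b = max(pairs)
    duplicates = subspaces[a] & subspaces[b]
    subspaces[b] -= duplicates                 # keep one copy; free the other entries/chunks
    return a, b, len(duplicates)

if __name__ == "__main__":
    spaces = {"docx": {1, 2, 3, 4}, "pptx": {3, 4, 5}, "vhd": {9, 10}}
    print(reconcile(spaces), spaces)
```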