Abstract:
Peer-to-peer communications sessions involve the transmission of one or more data streams from a source to a set of receivers that may redistribute portions of the data stream via a set of routing trees. Achieving a comparatively high, sustainable throughput for the data stream(s) may be difficult due to the large number of available routing trees, as well as pertinent variations in the nature of the communications session (e.g., upload communications caps, network link caps, the presence or absence of helpers, and the full or partial interconnectedness of the network). The selection of routing trees may be facilitated by representing the node set according to a linear programming model, such as a primal model or a linear programming dual model, and by iterative processes that apply such models and identify low-cost routing trees during each iteration.
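As a rough illustration of the dual-flavored idea only (the function name, price-update rule, and constants below are assumptions, not the claimed procedure), the sketch prices each relay node against its upload cap, routes a small amount of flow on the currently cheapest tree in each iteration, and raises the prices of the nodes that tree loads:

def select_trees(trees, capacity, rounds=100, eps=0.1):
    """trees: list of relay-node lists, one per candidate routing tree.
    capacity: dict node -> upload capacity (assumed positive).
    Returns the flow assigned to each tree index."""
    price = {n: 1.0 / capacity[n] for n in capacity}       # dual "prices"
    flow = {i: 0.0 for i in range(len(trees))}
    for _ in range(rounds):
        costs = [sum(price[n] for n in t) for t in trees]  # cost under current prices
        best = min(range(len(trees)), key=costs.__getitem__)
        delta = eps * min(capacity[n] for n in trees[best])
        flow[best] += delta                                # low-cost tree for this iteration
        for n in trees[best]:
            price[n] *= 1.0 + eps * delta / capacity[n]    # penalize the loaded nodes
    return flow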
Abstract:
A number of techniques are described for routing methods that improve resistance to faults affecting groups of links subject to common risks. One of these techniques accounts for failure potentials in physical networks by considering shared risk link groups separately from performance and cost metrics in determining a primary routing path and a backup path. A shared risk link group (SRLG) is an attribute attached to a link to identify edges that have physical links in common and can therefore be simultaneously disrupted by a single fault. Another technique considers node disjointness and provides a solution of two paths that are as node disjoint as possible and minimizes administrative costs. The techniques may further be combined in a priority order, thereby providing a solution of at least two paths that are strictly SRLG disjoint, as node-disjoint as possible, and have minimum administrative costs. Due to the priority order of evaluation and the typical physical configuration of network links, in which links are associated with common-fault SRLGs, the priority-ordering technique is very efficient in determining at least two paths for routing between a source node and a destination node.
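A minimal sketch of the priority ordering, assuming a directed adjacency map {u: [(v, cost, srlg_set), ...]} and a textbook Dijkstra search: it enforces strict SRLG disjointness first and minimum cost for the backup, while the "as node-disjoint as possible" tie-breaking is omitted for brevity. The helper names are illustrative, not the patented algorithm.

import heapq

def dijkstra(adj, src, dst, banned=frozenset()):
    """adj: {u: [(v, cost, srlg_set), ...]}. Returns (cost, path) or None."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == dst:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, c, _srlgs in adj.get(u, []):
            if (u, v) not in banned and v not in seen:
                heapq.heappush(pq, (cost + c, v, path + [v]))
    return None

def srlg_disjoint_pair(adj, src, dst):
    primary = dijkstra(adj, src, dst)          # minimum-cost primary path
    if primary is None:
        return None
    p_path = primary[1]
    p_links = set(zip(p_path, p_path[1:]))
    p_srlgs = {g for u, v in p_links
               for w, _c, srlgs in adj[u] if w == v for g in srlgs}
    # Priority 1: ban every link that shares an SRLG with the primary path.
    banned = {(u, v) for u in adj for v, _c, srlgs in adj[u]
              if srlgs & p_srlgs} | p_links
    backup = dijkstra(adj, src, dst, banned)   # min cost among SRLG-disjoint survivors
    return p_path, (backup[1] if backup else None)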
Abstract:
A method for supporting recovery from failure of a node in a network of nodes interconnected by links, wherein the failed node is in a path providing a service level between an ingress point and an egress point of the network, comprises: (a) selecting a set of one or more intermediate nodes between the ingress point and the egress point, the set excluding the failed node; (b) determining, based on available bandwidth of the network, a non-zero fraction of the service level to route from the ingress point to each intermediate node; (c) implementing, during a first routing phase, a first routing method to determine one or more paths from the ingress point to each intermediate node for routing the corresponding fraction of the service level; and (d) implementing, during a second routing phase, a second routing method to determine one or more paths from each intermediate node to the egress point for routing the corresponding fraction of the service level.
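A hedged sketch of steps (a) through (d), assuming the non-zero fractions are split in proportion to the available bandwidth toward each intermediate node and that both phases use a caller-supplied shortest_path routine; the disclosure leaves the split rule and the per-phase routing methods open.

def split_service_level(service_level, available_bw, failed_node):
    """available_bw: dict intermediate-node -> spare bandwidth toward it."""
    usable = {n: bw for n, bw in available_bw.items() if n != failed_node}
    total = float(sum(usable.values()))
    return {n: service_level * bw / total for n, bw in usable.items()}

def two_phase_paths(graph, ingress, egress, fractions, shortest_path):
    """Returns (fraction, phase-1 path, phase-2 path) per intermediate node."""
    return [(frac,
             shortest_path(graph, ingress, mid),   # phase 1: ingress -> intermediate
             shortest_path(graph, mid, egress))    # phase 2: intermediate -> egress
            for mid, frac in fractions.items()]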
Abstract:
A packet network of interconnected nodes employs a method of routing with service-level guarantees to determine a path through the network for a requested multicast, label-switched path. Each of the nodes includes one or more routers that forward packets based on a forwarding table constructed from a directed tree determined in accordance with the method of multicast routing with service-level guarantees. For a first implementation, a heuristic algorithm uses a scaling phase that iteratively adjusts a maximum arc capacity, determines the resulting tree for the iteration, and selects the tree as the routing tree that provides the “maximum” flow. For a second implementation, the heuristic algorithm computes maximum multicast flows and determines links in the network that are “critical” to satisfy future multicast routing requests. A multicast routing tree is selected such that provisioning the flows over its links “minimally interferes” with capacity of paths needed for future demands.
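For the first implementation, the scaling phase might look roughly like the sketch below: halve a capacity threshold, prune arcs below it, and keep the first tree that still reaches every receiver. The plain BFS tree construction and the halving schedule are assumptions standing in for the heuristic, not its actual form.

from collections import deque

def widest_multicast_tree(arcs, source, receivers):
    """arcs: list of (u, v, capacity). Returns (threshold, parent map) or None."""
    threshold = max(c for _, _, c in arcs)
    floor = min(c for _, _, c in arcs)
    while True:
        adj = {}
        for u, v, c in arcs:
            if c >= threshold:                   # prune arcs below the current bar
                adj.setdefault(u, []).append(v)
        parent, queue = {source: None}, deque([source])
        while queue:                             # BFS tree on the pruned graph
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        if all(r in parent for r in receivers):
            return threshold, parent             # every receiver is reachable
        if threshold <= floor:
            return None                          # infeasible even with all arcs
        threshold = max(threshold / 2.0, floor)  # scaling phase: lower the capacity bar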
Abstract:
The subject disclosure is directed towards a data deduplication technology in which a hash index service's index is partitioned into subspace indexes, with less than the entire hash index service's index cached to save memory. The subspace index is accessed to determine whether a data chunk already exists or needs to be indexed and stored. The index may be divided into subspaces based on criteria associated with the data to index, such as file type, data type, time of last usage, and so on. Also described is subspace reconciliation, in which duplicate entries in subspaces are detected so as to remove entries and chunks from the deduplication system. Subspace reconciliation may be performed at off-peak time, when more system resources are available, and may be interrupted if resources are needed. Subspaces to reconcile may be based on similarity, including via similarity of signatures that each compactly represents the subspace's hashes.
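A hedged sketch of the subspace idea, with hypothetical names (SubspaceIndex, store.append, store.free) and a file-type-like criterion that selects which subspace an incoming chunk's hash is looked up in; only cached subspaces are consulted, and reconciliation removes entries duplicated across two subspaces.

class SubspaceIndex:
    def __init__(self, cached_criteria):
        # criterion (e.g., a file type) -> {chunk hash: chunk location}
        self.cached = {c: {} for c in cached_criteria}

    def lookup_or_add(self, chunk_hash, chunk, criterion, store):
        subspace = self.cached.get(criterion)
        if subspace is None:
            return store.append(chunk)      # subspace not cached; index offline
        if chunk_hash in subspace:
            return subspace[chunk_hash]     # duplicate: reuse the stored chunk
        subspace[chunk_hash] = location = store.append(chunk)
        return location

def reconcile(subspace_a, subspace_b, store):
    """Off-peak reconciliation: drop entries duplicated across two subspaces."""
    for h in set(subspace_a) & set(subspace_b):
        store.free(subspace_b.pop(h))       # keep a single copy of the chunk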
Abstract:
Described is using flash memory, RAM-based data structures and mechanisms to provide a flash store for caching data items (e.g., key-value pairs) in flash pages. A RAM-based index maps data items to flash pages, and a RAM-based write buffer maintains data items to be written to the flash store, e.g., when a full page can be written. A recycle mechanism makes used pages in the flash store available by destaging a data item to a hard disk or reinserting it into the write buffer, based on its access pattern. The flash store may be used in a data deduplication system, in which the data items comprise chunk-identifier, metadata pairs, and each chunk-identifier corresponds to a hash of a chunk of data. The RAM and flash are accessed with the chunk-identifier (e.g., as a key) to determine whether a chunk is a new chunk or a duplicate.
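The RAM/flash split might be organized roughly as below; the page size, the flash and disk interfaces (write_page, read_page, erase, disk.write) and the recently-used test are all assumptions, not the described implementation.

PAGE_ITEMS = 64                              # assumed key-value items per flash page

class FlashStore:
    def __init__(self, flash, disk):
        self.ram_index = {}                  # key -> (page number, slot) on flash
        self.write_buffer = []               # RAM buffer of items awaiting a full page
        self.flash, self.disk = flash, disk

    def put(self, key, value):
        self.write_buffer.append((key, value))
        if len(self.write_buffer) == PAGE_ITEMS:       # a full page can be written
            page_no = self.flash.write_page(self.write_buffer)
            for slot, (k, _v) in enumerate(self.write_buffer):
                self.ram_index[k] = (page_no, slot)
            self.write_buffer.clear()

    def get(self, key):
        for k, v in reversed(self.write_buffer):       # newest items may still be in RAM
            if k == key:
                return v
        location = self.ram_index.get(key)
        return self.flash.read(location) if location else None

    def recycle(self, page_no, recently_used):
        """Make a used flash page available again, based on access pattern."""
        for key, value in self.flash.read_page(page_no):
            if key in recently_used:
                self.put(key, value)                   # reinsert hot item into the buffer
            else:
                self.disk.write(key, value)            # destage cold item to hard disk
                self.ram_index.pop(key, None)
        self.flash.erase(page_no)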
Abstract:
In various embodiments, methods and systems are disclosed for a hybrid rate-plus-window-based congestion protocol that controls the rate of packet transmission into the network and provides low queuing delay, practically zero packet loss, fair allocation of network resources amongst multiple flows, and full link utilization. In one embodiment, a congestion window may be used to control the maximum number of outstanding bits, a transmission rate may be used to control the rate of packets entering the network (packet pacing), a queuing-delay-based rate update may be used to keep queuing delay within tolerated bounds and minimize packet loss, aggressive ramp-up/graceful back-off may be used to fully utilize the link capacity, and additive-increase, multiplicative-decrease (AIMD) rate control may be used to provide fairness amongst multiple flows.
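A toy version of the control loop, with made-up constants and a simplified update rule: queuing delay below a target triggers ramp-up/additive increase, delay above it triggers multiplicative back-off, and the window is derived from the paced rate.

def update_rate(rate, rtt, base_rtt, target_delay,
                alpha=0.05, beta=0.5, gamma=1.05):
    """Returns the new paced rate and the congestion window (in bits)."""
    queuing_delay = rtt - base_rtt               # queue build-up seen by this flow
    if queuing_delay < target_delay:
        rate = max(rate * gamma, rate + alpha)   # aggressive ramp-up / additive increase
    else:
        rate *= beta                             # graceful back-off / multiplicative decrease
    cwnd = rate * rtt                            # window caps outstanding bits
    return rate, cwnd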
Abstract:
Described is a technology by which a consistent hashing table of bins maintains values representing nodes of a distributed system. An assignment stage uses a consistent hashing function and a selection algorithm to assign values that represent the nodes to the bins. In an independent mapping stage, a mapping mechanism deterministically maps an object identifier/key to one of the bins as a mapped-to bin.
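A small sketch under assumed choices (a SHA-1 hash and 1024 bins): the assignment stage hashes node identifiers onto the bin ring and fills each bin with the nearest node clockwise, and the independent mapping stage hashes an object key straight to its bin.

import hashlib

NUM_BINS = 1024                                   # assumed fixed number of bins

def _h(value):
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

def assign_nodes(nodes):
    """Assignment stage: hash node identifiers onto the bin ring and fill
    each bin with the nearest node clockwise."""
    ring = sorted((_h(n) % NUM_BINS, n) for n in nodes)
    return [next((n for pos, n in ring if pos >= b), ring[0][1])
            for b in range(NUM_BINS)]

def map_key(key, bins):
    """Independent mapping stage: deterministically map an object key to a bin."""
    return bins[_h(key) % NUM_BINS]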
Abstract:
Difficulties associated with choosing advantageous network routes between a server and clients are mitigated by a routing system that is devised to use many routing path sets, where respective sets comprise a number of routing paths covering all of the clients, including through other clients. A server may then apportion a data stream among all of the routing path sets. The server may also monitor the performance of the computer network while sending the data stream among the clients, and may adjust the apportionment for routing path sets that include a given route. The clients may also be configured to operate as servers of other data streams, such as in a videoconferencing session, for example, and may be configured to send detected route performance information along with the portions of the various data streams.
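One plausible apportionment rule, sketched below: weight each routing path set by its recently reported throughput and smooth toward the new split as measurements arrive. The proportional weighting and the smoothing factor are assumptions, since the disclosure does not fix a particular adjustment rule.

def apportion(stream_rate, measured_throughput, prev_share=None, smooth=0.3):
    """measured_throughput: dict routing-path-set id -> reported throughput."""
    total = sum(measured_throughput.values()) or 1.0
    share = {}
    for path_set, throughput in measured_throughput.items():
        target = stream_rate * throughput / total     # proportional split
        if prev_share:                                # smooth toward the new split
            target = (1 - smooth) * prev_share.get(path_set, 0.0) + smooth * target
        share[path_set] = target
    return share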
Abstract:
In one embodiment, a method is provided for supporting recovery from failure of a node in a network of nodes interconnected by links. A set of two or more intermediate nodes (excluding the failed node) between an ingress point and an egress point is selected. Next, based on available bandwidth of the network, a non-zero fraction of the service level to route from the ingress point to each intermediate node is determined. Packets are then routed in two phases by: (1) determining one or more paths from the ingress point to each intermediate node for routing the corresponding fraction of the service level, and (2) determining one or more paths from each intermediate node to the egress point for routing the corresponding fraction of the service level.
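On the forwarding side, one way to honor the per-intermediate-node fractions is to steer each packet to an intermediate node with probability equal to its fraction and send it along the concatenated phase-1 and phase-2 paths. The weighted random choice below is illustrative; any load-balancing rule that respects the fractions would serve.

import random

def forward(packet, fractions, phase1_paths, phase2_paths, send):
    """fractions: {intermediate node: fraction of the service level};
    the path dictionaries are keyed by the same intermediate nodes."""
    nodes, weights = zip(*fractions.items())
    mid = random.choices(nodes, weights=weights, k=1)[0]
    # Concatenate the phase-1 (ingress -> mid) and phase-2 (mid -> egress) paths.
    route = phase1_paths[mid] + phase2_paths[mid][1:]
    send(packet, route)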