Abstract:
In various embodiments, methods and systems are disclosed for a hybrid rate-plus-window-based congestion protocol that controls the rate of packet transmission into the network and provides low queuing delay, practically zero packet loss, fair allocation of network resources amongst multiple flows, and full link utilization. In one embodiment, a congestion window may be used to control the maximum number of outstanding bits, a transmission rate may be used to control the rate of packets entering the network (packet pacing), a queuing-delay-based rate update may be used to keep queuing delay within tolerated bounds and minimize packet loss, aggressive ramp-up/graceful back-off may be used to fully utilize the link capacity, and additive-increase, multiplicative-decrease (AIMD) rate control may be used to provide fairness amongst multiple flows.
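The sketch below illustrates one way these elements could fit together; it is not the disclosed implementation, and the class name, the delay threshold rule, and the constants (delay_target_s, alpha_bps, beta, the 1.5x ramp-up factor) are assumptions for exposition only.

```python
# A minimal sketch (illustrative assumptions, not the patented protocol) of a
# hybrid rate-plus-window sender: the congestion window caps outstanding bits,
# a pacing rate spaces packets out, queuing delay drives the rate update,
# and AIMD provides fairness amongst flows.

class HybridRateWindowSender:
    def __init__(self, link_capacity_bps, base_rtt_s,
                 delay_target_s=0.005, alpha_bps=50_000, beta=0.5):
        self.rate_bps = link_capacity_bps * 0.1    # start conservatively
        self.capacity_bps = link_capacity_bps
        self.base_rtt_s = base_rtt_s
        self.delay_target_s = delay_target_s       # tolerated queuing delay
        self.alpha_bps = alpha_bps                 # additive increase step
        self.beta = beta                           # multiplicative decrease factor

    @property
    def cwnd_bits(self):
        # Window = rate x RTT caps the maximum number of outstanding bits.
        return self.rate_bps * (self.base_rtt_s + self.delay_target_s)

    def packet_spacing_s(self, packet_bits):
        # Packet pacing: time between transmissions at the current rate.
        return packet_bits / self.rate_bps

    def on_rtt_sample(self, measured_rtt_s, loss_detected):
        queuing_delay = max(0.0, measured_rtt_s - self.base_rtt_s)
        if loss_detected or queuing_delay > self.delay_target_s:
            # Graceful back-off / multiplicative decrease keeps delay bounded.
            self.rate_bps *= self.beta
        elif queuing_delay < 0.5 * self.delay_target_s:
            # Aggressive ramp-up toward full link utilization.
            self.rate_bps = min(self.capacity_bps, self.rate_bps * 1.5)
        else:
            # Additive increase near the operating point (AIMD fairness).
            self.rate_bps = min(self.capacity_bps, self.rate_bps + self.alpha_bps)

sender = HybridRateWindowSender(link_capacity_bps=10_000_000, base_rtt_s=0.05)
sender.on_rtt_sample(measured_rtt_s=0.052, loss_detected=False)
print(round(sender.rate_bps), round(sender.cwnd_bits), sender.packet_spacing_s(12_000))
```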
Abstract:
Described is a technology by which a consistent hashing table of bins maintains values representing nodes of a distributed system. An assignment stage uses a consistent hashing function and a selection algorithm to assign values that represent the nodes to the bins. In an independent mapping stage, a mapping mechanism deterministically maps an object identifier/key to one of the bins as a mapped-to bin.
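A rough illustration of the two stages follows, assuming a fixed bin count, SHA-1 hashing, and a highest-hash-wins selection rule; these choices are stand-ins for the assignment function and selection algorithm described above, not the disclosed ones.

```python
# A minimal sketch: an assignment stage hashes node identifiers onto a fixed
# table of bins, and an independent mapping stage deterministically maps an
# object key to one of those bins. Bin count, hash, and tie-breaking rule are
# illustrative assumptions.

import hashlib

NUM_BINS = 64

def _hash(value: str) -> int:
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

def assign_nodes_to_bins(nodes):
    """Assignment stage: each bin holds the node whose hash 'wins' that bin."""
    bins = []
    for b in range(NUM_BINS):
        # Selection: pick the node with the highest combined hash for this bin.
        winner = max(nodes, key=lambda n: _hash(f"{n}:{b}"))
        bins.append(winner)
    return bins

def map_key_to_bin(key: str) -> int:
    """Mapping stage: deterministic, independent of the node assignment."""
    return _hash(key) % NUM_BINS

# Usage: look up which node is responsible for an object key.
bins = assign_nodes_to_bins(["node-a", "node-b", "node-c"])
node = bins[map_key_to_bin("object-123")]
print(node)
```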
Abstract:
Difficulties associated with choosing advantageous network routes between server and clients are mitigated by a routing system that is devised to use many routing path sets, where respective sets comprise a number of routing paths covering all of the clients, including through other clients. A server may then apportion a data stream among all of the routing path sets. The server may also detect the performance of the computer network while sending the data stream between clients, and may adjust the apportionment of the routing path sets including the route. The clients may also be configured to operate as servers of other data streams, such as in a videoconferencing session, for example, and may be configured to send detected route performance information along with the portions of the various data streams.
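One way to picture the apportionment and its adjustment is sketched below; the path-set names, the use of measured throughput as the performance signal, and the smoothing factor are illustrative assumptions rather than the described system's mechanics.

```python
# A minimal sketch of apportioning a data stream among routing path sets and
# re-weighting them from detected route performance.

def apportion(path_sets, weights, chunk_bytes):
    """Split the next chunk of the stream across path sets by weight."""
    total = sum(weights.values())
    return {ps: chunk_bytes * weights[ps] / total for ps in path_sets}

def adjust_weights(weights, measured_throughput, smoothing=0.3):
    """Shift load toward path sets that report better performance."""
    for ps, thr in measured_throughput.items():
        weights[ps] = (1 - smoothing) * weights[ps] + smoothing * thr
    return weights

path_sets = ["direct", "via-client-B", "via-client-C"]
weights = {ps: 1.0 for ps in path_sets}
print(apportion(path_sets, weights, chunk_bytes=64_000))
weights = adjust_weights(weights, {"direct": 2.0, "via-client-B": 0.5, "via-client-C": 1.0})
print(apportion(path_sets, weights, chunk_bytes=64_000))
```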
Abstract:
Disclosed are an ISP-friendly rate allocation system and method that reduce network traffic across ISP boundaries in a peer-to-peer (P2P) network. Embodiments of the system and method continuously solve a global optimization problem and dictate accordingly how much bandwidth is allocated on each connection. Embodiments of the system and method minimize load on a server in communication with the P2P network, minimize ISP-unfriendly traffic while keeping the minimum server load unaffected, and maximize peer prefetching. Two different techniques are used to compute rate allocation: a utility function optimization technique and a minimum cost flow formulation technique. The utility function optimization technique constructs a utility function and optimizes that utility function. The minimum cost flow formulation technique generates a minimum cost flow formulation using a bipartite graph having a vertex set and an edge set. A distributed minimum cost flow formulation is solved using Lagrangian multipliers.
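As a rough illustration of the utility-optimization flavor of the idea, the sketch below allocates a rate to each overlay connection with a log utility on received rate and a penalty on connections that cross ISP boundaries; the utility form, penalty weight, and projected-gradient solver are assumptions for exposition, not the patent's formulation.

```python
# A minimal sketch of utility-based rate allocation with an ISP-unfriendliness
# penalty; all modeling choices here are illustrative assumptions.

def allocate_rates(connections, capacity, isp_penalty=0.5, steps=1000, lr=0.005):
    """connections: list of (src, dst, crosses_isp); capacity: per-connection cap."""
    rate = {c: 0.1 for c in connections}
    for _ in range(steps):
        # Total rate currently received by each destination peer.
        received = {}
        for (s, d, x), r in rate.items():
            received[d] = received.get(d, 0.0) + r
        for c in connections:
            s, d, crosses_isp = c
            # d(log utility)/d(rate), minus a cost for ISP-unfriendly traffic.
            grad = 1.0 / max(received[d], 1e-6) - (isp_penalty if crosses_isp else 0.0)
            # Projected gradient step, clipped to [0, capacity].
            rate[c] = min(capacity[c], max(0.0, rate[c] + lr * grad))
    return rate

conns = [("server", "p1", False), ("p2", "p1", True), ("p1", "p2", False)]
caps = {c: 1.0 for c in conns}
print(allocate_rates(conns, caps))
```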
Abstract:
In one embodiment, a method is provided for supporting recovery from failure of a path in a network of nodes interconnected by links. An intermediate node between an ingress point and an egress point of the network is selected to minimize the sum of (i) a capacity constraint between the ingress point and the intermediate node and (ii) a capacity constraint between the intermediate node and the egress point. The selection identifies two link-disjoint path sets, each comprising a backup path and at least one primary path, with a first path set between the ingress point and the intermediate node, and a second path set between the intermediate node and the egress point. To maximize network throughput, packets are routed in two phases, first to the intermediate node via the first path set in predetermined proportions, and then from the intermediate node to the final destination via the second path set.
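The selection criterion and the two-phase split can be pictured with the small sketch below; the form of the "capacity constraint" values and the proportional split are illustrative assumptions.

```python
# A minimal sketch: pick the intermediate node minimizing the sum of the two
# capacity constraints, then split phase-1 traffic in predetermined proportions.

def choose_intermediate(candidates, constraint_in, constraint_out):
    """constraint_in[m]: ingress->m constraint; constraint_out[m]: m->egress."""
    return min(candidates, key=lambda m: constraint_in[m] + constraint_out[m])

def two_phase_split(total_demand, proportions):
    """Phase-1 amounts sent toward the intermediate node over each primary path."""
    assert abs(sum(proportions.values()) - 1.0) < 1e-9
    return {path: total_demand * frac for path, frac in proportions.items()}

m = choose_intermediate(["m1", "m2"],
                        constraint_in={"m1": 3.0, "m2": 2.0},
                        constraint_out={"m1": 1.0, "m2": 2.5})
print(m, two_phase_split(10.0, {"primary-1": 0.6, "primary-2": 0.4}))
```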
Abstract:
In one embodiment, a method is provided for supporting recovery from failure of a link in a network of nodes interconnected by links. An intermediate node between an ingress point and an egress point of the network is selected to minimize the sum of (i) a capacity constraint between the ingress point and the intermediate node and (ii) a capacity constraint between the intermediate node and the egress point. The selection identifies two path structures, each comprising a primary path and one or more link backup detours protecting each link on the primary path, with a first path structure between the ingress point and the intermediate node, and a second path structure between the intermediate node and the egress point. To maximize network throughput, packets are routed in two phases, first to the intermediate node via the first path structure in predetermined proportions, and then from the intermediate node to the final destination via the second path structure.
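The per-link backup detours within one path structure can be illustrated as below; the path and detour data are assumptions used only to show the local-reroute idea.

```python
# A minimal sketch: traffic follows the primary path, and when one of its
# links fails the flow is locally rerouted over that link's pre-computed detour.

def reroute_on_link_failure(primary_path, detours, failed_link):
    """primary_path: list of nodes; detours: {link: node list replacing it}."""
    out = [primary_path[0]]
    for u, v in zip(primary_path, primary_path[1:]):
        if (u, v) == failed_link:
            out.extend(detours[(u, v)][1:])   # splice in the backup detour
        else:
            out.append(v)
    return out

path = ["ingress", "a", "b", "intermediate"]
detours = {("a", "b"): ["a", "x", "b"]}
print(reroute_on_link_failure(path, detours, failed_link=("a", "b")))
```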
Abstract:
A given network of nodes that are interconnected by links having corresponding capacities has each link's capacity divided into working capacity and restoration capacity without a priori information about network traffic characteristics. Allocation of working capacity and restoration capacity for the network might be optimized by characterizing the network in accordance with a linear programming problem (LPP) subject to network constraints and then generating a solution to the LPP, either exactly or with an approximation. Partitioning the capacity of each link in the network into working and restoration capacities minimizes the restoration capacity overhead in the network to allow for higher network utilization.
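A toy instance of such an LPP is sketched below, assuming a two-link network in which the restoration capacity of the surviving link must cover the working traffic of the failed one; the topology, numbers, and use of scipy's LP solver are illustrative assumptions, not the disclosed formulation.

```python
# A minimal sketch of the LPP view: split each link's capacity into working
# and restoration parts and minimize total restoration overhead.

from scipy.optimize import linprog

c1, c2, demand = 10.0, 10.0, 8.0       # link capacities and working demand
# Variables x = [w1, w2, r1, r2] (working / restoration on links 1 and 2).
objective = [0, 0, 1, 1]               # minimize r1 + r2
A_ub = [
    [1, 0, 1, 0],    # w1 + r1 <= c1
    [0, 1, 0, 1],    # w2 + r2 <= c2
    [1, 0, 0, -1],   # if link 1 fails: w1 <= r2
    [0, 1, -1, 0],   # if link 2 fails: w2 <= r1
    [-1, -1, 0, 0],  # w1 + w2 >= demand
]
b_ub = [c1, c2, 0, 0, -demand]

res = linprog(objective, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
w1, w2, r1, r2 = res.x
print(f"working: {w1:.1f}, {w2:.1f}  restoration: {r1:.1f}, {r2:.1f}")
```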
Abstract:
Improved p-cycle restoration techniques using a signaling protocol are disclosed. For example, a technique for use in at least one node of a data communication network for recovering from a failure, wherein the data communication network includes multiple nodes and multiple links for connecting the multiple nodes, comprises the following steps/operations. Notification of the failure is obtained at the at least one node. A determination is made whether the failure is a single link failure or one of a node failure and a multiple link failure. A pre-configured protection cycle (p-cycle) plan is implemented when the failure is a single link failure but not when the failure is one of a node failure and a multiple link failure, such that two independent paths in the network are not connected when implementing the pre-configured protection cycle plan. Implementation of the pre-configured protection cycle plan may further comprise the node sending at least one message to another node in the data communication network and/or receiving at least one message from another node in the data communication network.
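The decision logic, reduced to a sketch, might look like the following; the failure classification, the p-cycle plan representation, and the message format are assumptions made for illustration rather than the disclosed signaling protocol.

```python
# A minimal sketch: on a failure notification, implement the pre-configured
# p-cycle plan only for a single link failure, signaling the nodes on the cycle.

from enum import Enum

class FailureType(Enum):
    SINGLE_LINK = 1
    NODE = 2
    MULTI_LINK = 3

def classify(failed_links, failed_nodes):
    if failed_nodes:
        return FailureType.NODE
    return FailureType.SINGLE_LINK if len(failed_links) == 1 else FailureType.MULTI_LINK

def on_failure_notification(failed_links, failed_nodes, pcycle_plan, send_message):
    """pcycle_plan: {failed_link: list of (node, 'switch'|'pass') actions}."""
    kind = classify(failed_links, failed_nodes)
    if kind is not FailureType.SINGLE_LINK:
        return False        # do not connect two independent protection paths
    link = next(iter(failed_links))
    for node, action in pcycle_plan.get(link, []):
        send_message(node, {"failed_link": link, "action": action})
    return True

# Usage with a stub signaling function:
sent = []
on_failure_notification({("A", "B")}, set(),
                        {("A", "B"): [("A", "switch"), ("C", "pass"), ("B", "switch")]},
                        send_message=lambda node, msg: sent.append((node, msg)))
print(sent)
```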
Abstract:
A scheme for a carrier to route one or more packets of traffic to their destination after ensuring that they pass through a pre-determined intermediate node also in the carrier's domain permits the carrier to handle all permissible traffic patterns without knowledge of the traffic matrix, subject to edge-link capacity constraints. A method of routing data through a network of nodes interconnected by links and having at least one ingress point and at least one egress point, comprises the steps of: receiving a request for a path with a service demand for routing data between the ingress point and the egress point; selecting a set of one or more intermediate nodes between the ingress point and the egress point; determining, based on a bandwidth of said network, respective fractions of the data to send from the ingress point to each node of the set of one or more intermediate nodes; routing the data in the determined respective fractions from the ingress point to each node of the set of one or more intermediate nodes; and routing the data from each node of the set of one or more intermediate nodes to the egress point.
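The two-phase routing step can be sketched as follows, assuming for illustration that the fractions are set proportionally to a per-node bandwidth share; that proportional rule and the data types are assumptions, not the claimed method of determining the fractions.

```python
# A minimal sketch of two-phase routing: send a fraction of the demand from the
# ingress to each intermediate node, then forward from each intermediate node
# to the egress.

def split_fractions(node_bandwidth):
    total = sum(node_bandwidth.values())
    return {n: bw / total for n, bw in node_bandwidth.items()}

def two_phase_route(demand, node_bandwidth, ingress, egress):
    fractions = split_fractions(node_bandwidth)
    phase1 = [(ingress, n, demand * f) for n, f in fractions.items()]
    phase2 = [(n, egress, demand * f) for n, f in fractions.items()]
    return phase1 + phase2

for hop in two_phase_route(9.0, {"m1": 10, "m2": 20}, ingress="in", egress="out"):
    print(hop)
```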