Abstract:
A scheduling method for a multi-level class hierarchy includes inserting all queues containing at least one packet into a first scheduler, and inserting into a second scheduler those queues from the first scheduler that do not exceed their maximum rate. The first scheduler is dequeued until a queue that exceeds its maximum rate is reached, at which time a queue of the second scheduler is dequeued.
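As a rough illustration only (not the claimed implementation), the two-scheduler dequeue step could be sketched in Python as below, modeling both schedulers as deques of active queues and treating max_rate as an illustrative per-interval packet budget; all class and method names are assumptions.

from collections import deque

class Queue:
    """Illustrative per-class queue with a simple maximum-rate check."""
    def __init__(self, name, max_rate):
        self.name = name
        self.max_rate = max_rate   # illustrative per-interval packet budget
        self.sent = 0              # packets dequeued in the current interval
        self.packets = deque()

    def exceeds_max_rate(self):
        return self.sent >= self.max_rate

def dequeue_one(first_sched, second_sched):
    # Dequeue from the first scheduler until a queue over its maximum rate is
    # reached; at that point dequeue a queue from the second scheduler instead.
    # By construction every queue in either scheduler holds at least one packet.
    if first_sched and not first_sched[0].exceeds_max_rate():
        q = first_sched[0]
        first_sched.rotate(-1)     # simple round-robin among active queues
    elif second_sched:
        q = second_sched[0]
        second_sched.rotate(-1)
    else:
        return None
    q.sent += 1
    return q.packets.popleft()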
Abstract:
Schedules may use burst tolerance values to adjust the scheduling in a time-based schedule, such as, but not limited to, adjusting for accumulated but unused bandwidth and/or adjusting the eligibility of schedule entries. A best schedule item associated with an eligible schedule entry of a schedule is identified. Whether a particular schedule entry is eligible is typically determined from the relationship of an associated timestamp to the current scheduling time, such as the timestamp being less than or equal to the current time. A burst tolerance time bound might also be used to allow certain priorities and/or types of items to be considered eligible even if the timestamp exceeds the current time, provided the excess is less than or equal to the burst tolerance time bound. When a schedule entry that has been dormant becomes active, its one or more timestamps are typically initialized, which may include setting at least one of these timestamps behind the current time by a wakeup burst tolerance value to guarantee its immediate eligibility for one or more consecutive scheduling iterations.
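A minimal Python sketch of the eligibility test and wake-up initialization described above, assuming a single timestamp per entry and purely illustrative names:

def is_eligible(timestamp, current_time, burst_tolerance=0):
    # An entry is eligible when its timestamp is at or behind the current
    # scheduling time; a non-zero burst tolerance lets certain priorities or
    # item types stay eligible even when the timestamp is slightly ahead.
    return timestamp <= current_time + burst_tolerance

def wake_up(entry, current_time, wakeup_burst_tolerance):
    # When a dormant entry becomes active, set its timestamp behind the
    # current time so it is immediately eligible for one or more
    # consecutive scheduling iterations.
    entry["timestamp"] = current_time - wakeup_burst_tolerance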
Abstract:
A rate-based scheduling system and method are disclosed. The rate-based system generally includes a first scheduler operable to limit the maximum rate of each of a plurality of queues. The first scheduler is configured as a work-conserving scheduler shaped at the aggregate rate of the active queues of the plurality of queues. The system further includes a second scheduler operable to provide a minimum rate to each of the plurality of queues, and a rate controller configured to modulate the rate of at least one of the first and second schedulers.
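One way to picture the rate controller's role, purely as an assumption-laden sketch (the set_shaping_rate hook and per-queue rate attribute are invented for illustration):

class RateController:
    """Re-shapes the work-conserving (maximum-rate) scheduler at the
    aggregate rate of the currently active queues."""
    def __init__(self, max_rate_scheduler, min_rate_scheduler):
        self.max_rate_scheduler = max_rate_scheduler  # limits each queue's maximum rate
        self.min_rate_scheduler = min_rate_scheduler  # guarantees each queue's minimum rate

    def on_queue_activity_change(self, active_queues):
        # Shape the first scheduler at the sum of the active queues' rates,
        # modulating it as queues become active or go idle.
        aggregate_rate = sum(q.rate for q in active_queues)
        self.max_rate_scheduler.set_shaping_rate(aggregate_rate)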
Abstract:
In one embodiment, a primary tunnel is established from a head-end node to a destination along a path including one or more protected network elements for which a fast reroute path is available to pass traffic around those network elements in the event of their failure. A first path quality measurement is taken prior to failure of the one or more protected network elements. A second path quality measurement is taken subsequent to their failure, while the fast reroute path is being used to pass traffic of the primary tunnel. In response to a difference between the first path quality and the second path quality, a determination is made whether to reestablish the primary tunnel over a new path that does not include the one or more failed protected network elements, or to continue to utilize the existing path with the fast reroute path.
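The reestablish-or-stay decision could be reduced to a comparison like the following sketch, where the quality values and tolerance are hypothetical scalar metrics (higher meaning better):

def should_reestablish(pre_failure_quality, frr_quality, tolerance):
    # Reestablish the primary tunnel over a new path only when the quality
    # drop observed on the fast reroute path exceeds the tolerated difference;
    # otherwise keep passing traffic over the fast reroute path.
    return (pre_failure_quality - frr_quality) > tolerance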
Abstract:
In one embodiment, an apparatus generally comprises one or more input interfaces for receiving a plurality of flows, a plurality of output interfaces, and a processor operable to identify large flows and select one of the output interfaces for each of the large flows to load-balance the large flows over the output interfaces. The apparatus further includes memory for storing a list of the large flows, a pinning mechanism for pinning the large flows to the selected interfaces, and a load-balance mechanism for selecting one of the output interfaces for each of the remaining flows. A method for local placement of large flows to assist in load-balancing is also disclosed.
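A simplified per-packet placement routine in this spirit might look as follows; flow_key(), size, load, and the byte threshold are illustrative assumptions rather than the patented mechanism:

def select_output_interface(packet, flow_table, output_interfaces,
                            large_flow_bytes, hash_fn):
    key = packet.flow_key()
    entry = flow_table.setdefault(key, {"bytes": 0, "pinned_if": None})
    entry["bytes"] += packet.size
    if entry["pinned_if"] is not None:
        return entry["pinned_if"]          # large flow already pinned
    if entry["bytes"] > large_flow_bytes:
        # Newly detected large flow: place it on the least-loaded interface
        # and pin it there so subsequent packets keep using that interface.
        entry["pinned_if"] = min(output_interfaces, key=lambda i: i.load)
        return entry["pinned_if"]
    # Remaining (small) flows are load-balanced by the ordinary hash mechanism.
    return output_interfaces[hash_fn(key) % len(output_interfaces)]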
Abstract:
In one embodiment, an apparatus includes a processor for mapping packets associated with network flows to policy profiles independent of congestion level at the apparatus, and enforcing the policy profiles for the packets based on a congestion state. Packets associated with the same network flow are mapped to the same policy profile and at least some of the network flows are protected during network congestion. The apparatus further includes memory for storing the policy profiles. A method for protecting network flows during network congestion is also disclosed.
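The separation between congestion-independent mapping and congestion-dependent enforcement can be sketched roughly as below; the profile names and actions are assumptions for illustration:

def handle_packet(packet, profile_of_flow, congested):
    # Every packet of a given flow maps to the same policy profile,
    # regardless of the congestion level at the time of arrival.
    profile = profile_of_flow(packet.flow_key())
    # Enforcement depends on the congestion state: protected flows are
    # forwarded even under congestion, others may be dropped or marked.
    if congested and profile != "protected":
        return "drop_or_mark"
    return "forward"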
Abstract:
In one embodiment, an intermediate node computes paths for a set of tunnels that do not include a particular link (e.g., and possibly a scaled-down bandwidth for each tunnel), considering all of the tunnels of the set. The intermediate node informs head-end nodes of the tunnels of the computed paths (e.g., and scaled bandwidth) and/or a time to reroute the tunnels.
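A coarse sketch of such a set-wide computation, with invented helpers (without_link, reserve, constrained_shortest_path) standing in for a real constrained path computation:

def reroute_plan(tunnels, topology, excluded_link, scale=1.0):
    # Compute paths for the whole set of tunnels at once, avoiding the
    # excluded link and optionally scaling down each tunnel's bandwidth;
    # the resulting plan (and a reroute time) can be sent to the head-ends.
    pruned = topology.without_link(excluded_link)
    plan = {}
    for t in sorted(tunnels, key=lambda t: t.bandwidth, reverse=True):
        bw = t.bandwidth * scale
        path = constrained_shortest_path(pruned, t.src, t.dst, bw)
        if path is not None:
            pruned.reserve(path, bw)   # account for this tunnel before placing the next
            plan[t.name] = (path, bw)
    return plan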
Abstract:
In one embodiment, head-end nodes receive a list of tunnels to be rerouted away from a particular link of an intermediate node. If a head-end node is unable to reroute, using conventional distributed routing, a tunnel for which it is the head-end, each head-end node executes the same algorithm to compute paths for all tunnels in the list (e.g., potentially applying bandwidth scaling).
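Because every head-end runs the same deterministic computation over the same list, their results agree, and each one then acts only on the tunnels it heads. Reusing the hypothetical reroute_plan() sketch above:

def reroute_own_tunnels(my_router_id, tunnel_list, topology, excluded_link):
    plan = reroute_plan(tunnel_list, topology, excluded_link)
    for t in tunnel_list:
        if t.head_end == my_router_id and t.name in plan:
            path, bw = plan[t.name]
            t.signal_new_path(path, bw)   # assumed head-end signaling hook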
Abstract:
A virtual overlay backup network is established to provide Fast Reroute capability with guaranteed bandwidth protection to a network that employs end-to-end circuits such as label switched paths (LSPs). In some implementations, backup bandwidth is allocated from an available backup bandwidth pool, as defined herein, available on each link. Complete bandwidth protection may be provided rapidly upon detection of a failure while available backup bandwidth is shared between independent failures. In one embodiment, this is accomplished by provisioning backup tunnels to protect all links and nodes, wherein total available backup bandwidth on any link is not exceeded by the requirements of backup tunnels protecting any single node but backup tunnels protecting different nodes may share bandwidth.
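The sharing rule can be expressed as a per-link admission check like the following sketch (data structures are assumptions): backup tunnels protecting the same node must fit within the link's backup pool, while tunnels protecting different nodes may overlap because independent failures are not expected to occur simultaneously.

def admit_backup_tunnel(link_backup_pool, per_node_usage, protected_node, bw):
    # Admit the new backup tunnel only if, together with the other backup
    # tunnels protecting the same node, it stays within this link's
    # available backup bandwidth pool.
    used_for_node = per_node_usage.get(protected_node, 0)
    if used_for_node + bw > link_backup_pool:
        return False
    per_node_usage[protected_node] = used_for_node + bw
    return True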
Abstract:
A technique dynamically determines whether to reestablish a Fast Rerouted primary tunnel based on path quality feedback of a utilized backup tunnel in a computer network. According to the novel technique, a head-end node establishes a primary tunnel to a destination, and a point of local repair (PLR) node along the primary tunnel establishes a backup tunnel around one or more protected network elements of the primary tunnel, e.g., for Fast Reroute protection. Once one of the protected network elements fails, the PLR node “Fast Reroutes,” i.e., diverts, the traffic received on the primary tunnel onto the backup tunnel, and sends notification of backup tunnel path quality (e.g., with one or more metrics) to the head-end node. The head-end node then analyzes the path quality metrics of the backup tunnel to determine whether to continue utilizing the backup tunnel or to reestablish a new primary tunnel.
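The head-end's reaction to the PLR's notification might be reduced to a threshold test like this sketch; the metric names, thresholds, and the reestablish hook are illustrative assumptions:

def on_plr_notification(primary_tunnel, backup_metrics, thresholds):
    # Keep riding the backup tunnel while its reported metrics stay within
    # the configured bounds; otherwise reestablish the primary tunnel over
    # a new path that avoids the failed elements.
    acceptable = all(backup_metrics[name] <= limit
                     for name, limit in thresholds.items())
    if not acceptable:
        primary_tunnel.reestablish_avoiding(primary_tunnel.failed_elements)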