Abstract:
A method and system for providing a delay bound and prioritized packet dropping are disclosed. The system limits the size of a queue configured to deliver packets in FIFO order using a threshold based on a specified delay bound. Received packets are queued if the threshold is not exceeded. If the threshold is exceeded, a queued packet having a precedence level lower than that of the received packet is dropped. If the threshold is exceeded and all packets in the queue have a precedence level greater than that of the received packet, then the received packet itself is dropped.
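
A minimal Python sketch of the described enqueue behavior follows; the class name, the derivation of the byte threshold from a drain rate, and the convention that a larger number means higher precedence are illustrative assumptions, not taken from the abstract:

```python
from collections import deque

class DelayBoundQueue:
    """FIFO queue whose depth is bounded by a byte threshold derived from a
    specified delay bound (illustrative model; larger number = higher precedence)."""

    def __init__(self, delay_bound_s, drain_rate_bps):
        # Assumed threshold derivation: the most data that can be queued and
        # still drain within the delay bound at the configured service rate.
        self.threshold_bytes = int(delay_bound_s * drain_rate_bps / 8)
        self.queued_bytes = 0
        self.queue = deque()              # entries: (precedence, size, payload)

    def enqueue(self, precedence, size, payload):
        # Drop lower-precedence packets already queued until the arrival fits.
        while self.queued_bytes + size > self.threshold_bytes:
            if not self.queue:
                return False              # arrival alone exceeds the threshold
            victim = min(range(len(self.queue)), key=lambda i: self.queue[i][0])
            if self.queue[victim][0] >= precedence:
                return False              # no lower-precedence victim: drop arrival
            self.queued_bytes -= self.queue[victim][1]
            del self.queue[victim]
        self.queue.append((precedence, size, payload))
        self.queued_bytes += size
        return True

    def dequeue(self):
        # Surviving packets are still delivered in FIFO order.
        precedence, size, payload = self.queue.popleft()
        self.queued_bytes -= size
        return payload
```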
Abstract:
Disclosed are, inter alia, methods, apparatus, data structures, computer-readable media, and mechanisms, which may include or be used with a hierarchy of schedules with propagation of minimum guaranteed scheduling rates among scheduling layers in a hierarchical schedule. The minimum guaranteed scheduling rate for a parent schedule entry is typically based on the summation of the minimum guaranteed scheduling rates of its immediate child schedule entries. This propagation of minimum rate scheduling guarantees for a class of traffic can be dynamic (e.g., based on the active traffic for this class of traffic, active services for this class of traffic), or statically configured. One embodiment also includes multiple scheduling lanes for scheduling items, such as, but not limited to, packets or indications thereof, such that different categories of traffic (e.g., propagated minimum guaranteed scheduling rate, non-propagated minimum guaranteed scheduling rate, high priority, excess rate, etc.) of scheduled items can be propagated through the hierarchy of schedules accordingly without being blocked behind lower-priority or different types of traffic.
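
The rate-propagation idea can be illustrated with a small Python sketch; the class and method names are hypothetical, and the abstract's statically configured variant is not modeled here:

```python
class ScheduleEntry:
    """Node in a hierarchy of schedules; a parent's minimum guaranteed rate is
    derived from the sum of the minimum rates of its active children."""

    def __init__(self, own_min_rate=0.0, parent=None):
        self.own_min_rate = own_min_rate      # configured guarantee (leaf)
        self.children = []
        self.parent = parent
        self.active = False
        if parent is not None:
            parent.children.append(self)

    def propagated_min_rate(self):
        # Leaves contribute their configured rate only while active; interior
        # entries sum over their active children (dynamic propagation).
        if not self.children:
            return self.own_min_rate if self.active else 0.0
        return sum(c.propagated_min_rate() for c in self.children if c.active)

    def set_active(self, active):
        # Activation/deactivation ripples upward so ancestors re-derive
        # whether they are active and what their propagated guarantee is.
        self.active = active
        node = self.parent
        while node is not None:
            node.active = any(c.active for c in node.children)
            node = node.parent
```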
Abstract:
Schedules may use burst tolerance values to adjust the scheduling in a time-based schedule, such as, but not limited to, adjusting for accumulated but unused bandwidth, and/or adjusting eligibility of schedule entries. A best schedule item associated with an eligible schedule entry of a schedule is identified. Whether or not a particular schedule entry is eligible is typically determined based on the relationship of an associated timestamp with a current scheduling time, such as its timestamp being less than or equal to the current time. A burst tolerance time bound might also be used to allow certain priorities and/or types of items to be considered eligible even if their timestamps exceed the current time, provided the excess is less than or equal to the burst tolerance time bound. When a schedule entry which has been dormant becomes active, its one or more timestamps are typically initialized, which may include setting at least one of these timestamps behind the current time by a wakeup burst tolerance value to guarantee its immediate eligibility for one or more consecutive scheduling iterations.
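
A Python sketch of the eligibility and wakeup rules might look as follows; the field and method names are assumptions for illustration:

```python
class TimedEntry:
    """Schedule entry in a time-based schedule (illustrative field names)."""

    def __init__(self, timestamp=0.0, burst_tolerance=0.0):
        self.timestamp = timestamp
        self.burst_tolerance = burst_tolerance   # per-priority/type time bound

    def is_eligible(self, now):
        # Eligible if the timestamp is at or behind the current scheduling
        # time, or ahead of it by no more than the burst tolerance time bound.
        return self.timestamp <= now + self.burst_tolerance

    def wake_up(self, now, wakeup_burst_tolerance):
        # A dormant entry that becomes active has its timestamp initialized
        # behind the current time, guaranteeing immediate eligibility for one
        # or more consecutive scheduling iterations.
        self.timestamp = now - wakeup_burst_tolerance
```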
Abstract:
Eligible entries are scheduled using an approximated finish delay identified for an entry based on an associated speed group. One implementation maintains schedule entries, each respectively associated with a start time and a speed group. Each speed group is associated with an approximated finish delay. From the eligible schedule entries, an approximated earliest-finishing entry is determined, i.e., the entry with the earliest approximated finish time, where an entry's approximated finish time is determined from its start time and the approximated finish delay of its associated speed group. The scheduled action corresponding to the approximated earliest-finishing entry is then typically performed. The action performed may, for example, correspond to the forwarding of one or more packets, an amount of processing associated with a process or thread, or any activity associated with an item.
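
One way to picture the selection rule is the following Python sketch; the eligibility test on start time and all names are illustrative assumptions:

```python
class SpeedGroup:
    """Group of entries sharing one approximated finish delay."""
    def __init__(self, approx_finish_delay):
        self.approx_finish_delay = approx_finish_delay

class Entry:
    def __init__(self, start_time, speed_group, action):
        self.start_time = start_time
        self.speed_group = speed_group
        self.action = action          # e.g., forward a packet, run a thread

def pick_next(entries, now):
    # Only entries whose start time has been reached are considered eligible.
    eligible = [e for e in entries if e.start_time <= now]
    if not eligible:
        return None
    # Approximated finish time = start time + the speed group's fixed
    # approximated finish delay; choose the earliest approximated finisher.
    return min(eligible,
               key=lambda e: e.start_time + e.speed_group.approx_finish_delay)
```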
Abstract:
A hierarchical multi-rate multi-precedence policer is disclosed. The policer discards packets based on assigned precedence levels. When traffic exceeds an available service rate, the policer drops packets of lower precedence levels to make room for packets of higher precedence levels. In certain implementations, the policer also guarantees bandwidth to each level, thus preventing complete loss of lower precedence traffic when there is a large amount of higher precedence traffic.
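
One plausible, much-simplified realization of the drop-lower-precedence-first behavior is sketched below in Python; the cumulative-bucket scheme and all names are assumptions rather than the patented mechanism, and the per-level bandwidth guarantee mentioned above is omitted for brevity:

```python
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0            # refill rate in bytes per second
        self.burst = float(burst_bytes)
        self.tokens = float(burst_bytes)
        self.last = 0.0

    def has_room(self, size, now):
        # Refill based on elapsed time, then report whether the packet fits.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        return self.tokens >= size

    def debit(self, size):
        # Clamp at zero for simplicity in this sketch.
        self.tokens = max(0.0, self.tokens - size)

class HierarchicalPolicer:
    """One bucket per precedence level (0 = lowest), all refilled at the
    service rate. A packet is admitted only if its own level's bucket has
    tokens, but its bytes are debited from its level and every lower level,
    so sustained high-precedence traffic starves the buckets that
    lower-precedence packets depend on."""

    def __init__(self, service_rate_bps, burst_bytes, num_levels):
        self.buckets = [TokenBucket(service_rate_bps, burst_bytes)
                        for _ in range(num_levels)]

    def admit(self, precedence, size, now):
        if not self.buckets[precedence].has_room(size, now):
            return False                      # drop: level exceeds service rate
        for bucket in self.buckets[:precedence + 1]:
            bucket.has_room(size, now)        # refresh the bucket's fill state
            bucket.debit(size)
        return True
```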
Abstract:
Priority propagation is achieved in the context of a rate-based scheduling hierarchy. Priority traffic is not delayed by non-priority traffic by more than the duration required for transmission of the maximum packet length at the physical interface speed. Multiple sibling priority levels are supported. To achieve these objectives, the scheduling hierarchy tree is divided into sub-trees corresponding to non-priority traffic and the different levels of priority. At each scheduling decision, a packet is selected from the highest priority non-empty sub-tree. Scheduling decisions within each sub-tree exploit the usual rate-based scheduling method but without priority propagation. When a packet from a priority sub-tree is chosen, scheduling state in the non-priority sub-tree is updated.
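
The selection loop could be sketched in Python as follows; the sub-tree interface (empty(), pick(), charge()) is an assumed abstraction of the rate-based scheduler within each sub-tree:

```python
def schedule_next(subtrees, link_rate_bps, now):
    """Pick the next packet from the highest-priority non-empty sub-tree.
    `subtrees` is ordered highest priority first, with the non-priority
    sub-tree last; each sub-tree is assumed to expose empty(), pick(now)
    (the usual rate-based selection, without priority propagation), and
    charge(duration) to advance its rate-based scheduling state."""
    non_priority = subtrees[-1]
    for subtree in subtrees:
        if subtree.empty():
            continue
        packet = subtree.pick(now)
        if subtree is not non_priority:
            # A packet chosen from a priority sub-tree also updates the
            # scheduling state of the non-priority sub-tree, as described.
            non_priority.charge(packet.size_bytes * 8.0 / link_rate_bps)
        return packet
    return None
```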
Abstract:
In one embodiment, a primary tunnel is established from a head-end node to a destination along a path including one or more protected network elements for which a fast reroute path is available to pass traffic around the one or more network elements in the event of their failure. A first path quality measurement captures path quality prior to failure of the one or more protected network elements. A second path quality measurement captures path quality subsequent to failure of the one or more protected network elements, while the fast reroute path is being used to pass traffic of the primary tunnel. A determination is made whether to reestablish the primary tunnel over a new path that does not include the one or more failed protected network elements, or to continue to utilize the existing path with the fast reroute path, in response to a difference between the first and second path quality measurements.
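
The reoptimization decision reduces to a comparison of the two measurements, sketched here in Python; the metric orientation and the use of a threshold are assumptions:

```python
def should_reoptimize(pre_failure_quality, post_failure_quality,
                      degradation_threshold):
    """Compare path quality measured before the failure with path quality
    measured while the fast reroute path carries the primary tunnel's
    traffic; reestablish the tunnel over a new path only if the degradation
    exceeds a tolerance (here quality is a cost-like metric, e.g. delay in
    milliseconds, where larger values are worse)."""
    return (post_failure_quality - pre_failure_quality) > degradation_threshold
```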
Abstract:
In one embodiment, an apparatus generally comprises one or more input interfaces for receiving a plurality of flows, a plurality of output interfaces, and a processor operable to identify large flows and select one of the output interfaces for each of the large flows to load-balance the large flows over the output interfaces. The apparatus further includes memory for storing a list of the large flows, a pinning mechanism for pinning the large flows to the selected interfaces, and a load-balance mechanism for selecting one of the output interfaces for each of the remaining flows. A method for local placement of large flows to assist in load-balancing is also disclosed.
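
A Python sketch of the large-flow pinning and hash-based fallback might look like this; the byte threshold, the least-loaded placement policy, and all names are illustrative assumptions:

```python
import zlib

class LargeFlowBalancer:
    """Identify large flows by observed byte count, pin each to an explicitly
    selected (least-loaded) output interface, and hash all remaining flows
    across the interfaces."""

    def __init__(self, interfaces, large_flow_bytes=10 * 1024 * 1024):
        self.interfaces = interfaces
        self.large_flow_bytes = large_flow_bytes
        self.flow_bytes = {}                  # flow id -> bytes seen so far
        self.pinned = {}                      # flow id -> pinned interface
        self.load = {ifc: 0 for ifc in interfaces}

    def select_interface(self, flow_id, packet_len):
        self.flow_bytes[flow_id] = self.flow_bytes.get(flow_id, 0) + packet_len
        if flow_id in self.pinned:
            out = self.pinned[flow_id]
        elif self.flow_bytes[flow_id] >= self.large_flow_bytes:
            # Newly detected large flow: pin it to the least-loaded interface.
            out = min(self.interfaces, key=lambda ifc: self.load[ifc])
            self.pinned[flow_id] = out
        else:
            # Remaining (small) flows use stateless hash-based selection.
            out = self.interfaces[zlib.crc32(flow_id.encode()) % len(self.interfaces)]
        self.load[out] += packet_len
        return out
```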