Abstract:
Methods and systems for preemption in a network having a core device with at least one egress interface are disclosed. In one embodiment, the method includes performing flow-based hashing using a plurality of hash-buckets, each set to a first state or a second state, and computing a load based on a rate measurement that excludes flows hashing into hash-buckets set to the second state. The computed load is compared to a preemption threshold; if it exceeds the threshold, the state of at least one of the hash-buckets is changed from the first state to the second state. An action, such as dropping all packets or marking all packets, is performed on flows hashing into a hash-bucket in the second state.
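The bucket mechanism described above can be sketched in a few lines. The bucket count, the SHA-256 flow hash, and the rate bookkeeping below are illustrative assumptions rather than details taken from the abstract:

```python
import hashlib

NUM_BUCKETS = 64                      # assumed bucket count
ACTIVE, PREEMPT = 0, 1                # the "first" and "second" states

bucket_state = [ACTIVE] * NUM_BUCKETS

def bucket_for(flow_id: str) -> int:
    """Flow-based hash: map a flow identifier to one hash-bucket."""
    return hashlib.sha256(flow_id.encode()).digest()[0] % NUM_BUCKETS

def computed_load(flow_rates: dict) -> float:
    """Rate measurement that excludes flows hashing into PREEMPT buckets."""
    return sum(rate for fid, rate in flow_rates.items()
               if bucket_state[bucket_for(fid)] == ACTIVE)

def enforce(flow_rates: dict, preemption_threshold: float) -> None:
    """If the computed load exceeds the threshold, flip one bucket to PREEMPT."""
    if computed_load(flow_rates) > preemption_threshold:
        for b, state in enumerate(bucket_state):
            if state == ACTIVE:
                bucket_state[b] = PREEMPT
                break

def action_for(flow_id: str) -> str:
    """Drop (or mark) packets of flows hashing into a PREEMPT bucket."""
    return "drop" if bucket_state[bucket_for(flow_id)] == PREEMPT else "forward"
```

Excluding preempted flows from the rate measurement matters: once a bucket's traffic is being dropped, it no longer counts toward the load, so the mechanism stops flipping buckets as soon as the surviving load falls below the threshold.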
Abstract:
A method and apparatus for setting admission and preemption thresholds in a computer network are disclosed. In one embodiment, a method includes receiving traffic information including a first bandwidth utilization on each link located between ingress nodes and egress nodes based on a traffic matrix with no failures at the nodes or the links, and a second bandwidth utilization on each of the links based on the traffic matrix with planned failures at one or more of the links or the nodes. A preemption-to-admission ratio is calculated based on the first and second bandwidth utilizations on the links. An admission threshold is calculated at one of the links based on the second bandwidth utilization on the link and the preemption-to-admission ratio. At least one of the preemption-to-admission ratio and admission threshold is transmitted to a network device for use in flow admission.
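As a numeric illustration of how such a ratio might combine the two utilizations, here is one hypothetical formulation. The max-ratio definition and the division used for the admission threshold are assumptions made for illustration, not the patent's formulas:

```python
def preemption_to_admission_ratio(u_no_fail, u_fail):
    """Hypothetical: worst-case growth of per-link load when moving from the
    no-failure traffic matrix to the planned-failure traffic matrix."""
    return max(f / n for n, f in zip(u_no_fail, u_fail) if n > 0)

def admission_threshold(preemption_threshold, ratio):
    """Hypothetical: admit only 1/ratio of what preemption tolerates, leaving
    headroom on the link for traffic rerouted after a failure."""
    return preemption_threshold / ratio

# Two links: failure reroutes can raise link 2's load from 50 to 100 units.
ratio = preemption_to_admission_ratio([40.0, 50.0], [60.0, 100.0])  # -> 2.0
```

Under this sketch, a link whose preemption threshold is 80 units would admit new flows only up to 40 units, reserving the rest for rerouted traffic.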
Abstract:
In one embodiment, an apparatus generally comprises one or more input interfaces for receiving a plurality of flows, a plurality of output interfaces, and a processor operable to identify large flows and select one of the output interfaces for each of the large flows to load-balance the large flows over the output interfaces. The apparatus further includes memory for storing a list of the large flows, a pinning mechanism for pinning the large flows to the selected interfaces, and a load-balance mechanism for selecting one of the output interfaces for each of the remaining flows. A method for local placement of large flows to assist in load-balancing is also disclosed.
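A minimal sketch of the large-flow placement idea, assuming a byte-rate threshold identifies "large" flows and that hashing handles the remainder; the class name and its fields are hypothetical:

```python
class FlowBalancer:
    """Sketch: pin identified large flows to the least-loaded output
    interface; hash the remaining (small) flows across all outputs."""

    def __init__(self, num_outputs: int, large_flow_threshold: float):
        self.num_outputs = num_outputs
        self.threshold = large_flow_threshold   # bytes/sec, assumed metric
        self.pinned = {}                        # the stored list of large flows
        self.load = [0.0] * num_outputs         # estimated load per output

    def select_output(self, flow_id: str, rate: float) -> int:
        if flow_id in self.pinned:              # pinning mechanism
            return self.pinned[flow_id]
        if rate >= self.threshold:              # identified as a large flow
            out = min(range(self.num_outputs), key=lambda i: self.load[i])
            self.pinned[flow_id] = out
            self.load[out] += rate
            return out
        # default load-balance mechanism for remaining flows
        return hash(flow_id) % self.num_outputs
```

Pinning keeps a large flow on one interface so its packets are not reordered, while placing each new large flow on the currently least-loaded interface spreads the heavy traffic.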
Abstract:
Systems and methods for estimating aggregate bandwidths of primary traffic flows are provided. Tighter upper bounds on the aggregate bandwidth of arbitrary combinations of primary traffic flows may be computed. These tighter bounds are highly useful in configuring backup tunnels to protect a node against failure, because the total backup bandwidth burden on individual links can be determined more accurately, permitting optimal use of available backup bandwidth capacity.
Abstract:
Load balancing enables the use of linear programming techniques to reduce the complexity of computing backup tunnel placement for guaranteed bandwidth protection. The ability to load balance among multiple backup tunnels transforms the placement problem into one that may be characterized as a series of linear constraints usable as input to a linear programming procedure such as the simplex method. Each node may compute its own backup tunnels and signal the tunnels to its neighbors with zero bandwidth to allow implicit sharing of backup bandwidth.
Abstract:
Load balancing among fast reroute backup tunnels in a label switched network is achieved. M backup tunnels may be used to protect N parallel paths. A single backup tunnel may protect multiple parallel paths, saving on utilization of network resources such as router state and signaling information. A single path may be protected by multiple backup tunnels, assuring that bandwidth guarantees are met under failure conditions even when no single backup tunnel with sufficient bandwidth can be found. A packing algorithm is used to associate individual label switched paths (LSPs) with individual backup tunnels. If an LSP cannot be assigned to a backup tunnel, it may be rejected, additional bandwidth may be allocated to existing backup tunnels, or a new backup tunnel may be established.
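The packing step can be illustrated with a first-fit sketch. First-fit is an assumed policy (the abstract does not name the packing algorithm), and the rejection branch corresponds to the first of the three fallbacks listed above:

```python
def pack_lsps(lsp_bandwidths: dict, tunnel_capacities: list):
    """First-fit packing sketch: assign each LSP to the first backup tunnel
    with enough remaining capacity. Unassignable LSPs are returned as
    rejected; a real system could instead grow an existing tunnel or
    establish a new one."""
    remaining = list(tunnel_capacities)
    assignment, rejected = {}, []
    for lsp, bw in lsp_bandwidths.items():
        for t, cap in enumerate(remaining):
            if bw <= cap:
                assignment[lsp] = t
                remaining[t] -= bw
                break
        else:
            rejected.append(lsp)
    return assignment, rejected
```

For example, with tunnels of capacity 10 and 8, LSPs of 5 and 7 units fit, but a third LSP of 6 units fits in neither remainder and is rejected.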
Abstract:
A method of scheduling a plurality of data flows in a shared resource in a computer system is disclosed, each of the data flows containing a plurality of data cells. The method includes the steps of: providing a scheduler in the shared resource; initializing the scheduler to receive the plurality of data flows; receiving a first data flow having a first flow rate and a second data flow having a second flow rate; scheduling, by the scheduler, the first and second data flows such that the first and second flow rates together are less than the available bandwidth in the shared resource and the relative error between the actual scheduling time and the ideal scheduling time is minimized on a per-cell basis; and repeating the steps of receiving and scheduling.
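A compact sketch of relative-error scheduling under these constraints. Normalizing the rates by their sum and breaking ties toward the lower-indexed flow are implementation assumptions:

```python
def re_schedule(rates: list, num_slots: int) -> list:
    """Relative Error scheduling sketch: in each cell slot, serve the flow
    whose accumulated error (ideal service minus actual service) is largest."""
    total = sum(rates)                  # rates must fit in the bandwidth
    weights = [r / total for r in rates]
    error = [0.0] * len(rates)
    order = []
    for _ in range(num_slots):
        for i, w in enumerate(weights):
            error[i] += w               # ideal service accrues fractionally
        pick = max(range(len(rates)), key=lambda i: error[i])
        error[pick] -= 1.0              # one actual cell transmitted
        order.append(pick)
    return order
```

Because each slot serves the flow furthest behind its ideal schedule, the per-cell deviation between actual and ideal scheduling time stays bounded; over 4 slots, flows with rates 3 and 1 receive 3 and 1 cells respectively.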
Abstract:
The rate-based end-to-end system provides feasible transmission rates for end source stations. As an extension to the rate-based end-to-end system, a hybrid link-by-link flow control system is disclosed. The link-by-link control system is built upon the end-to-end, rate-based traffic control system and utilizes bandwidth unaccounted for by the end-to-end system. The link-by-link system uses the feasible transmission rates obtained by the end-to-end system to determine the size of the buffers required for overbooking and to update credit information to sustain the calculated rate.
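One conventional way to size such buffers is by the bandwidth-delay product; the function below is a hypothetical sizing rule in that spirit, not a formula from the disclosure:

```python
import math

def credit_buffer_cells(feasible_rate_cells_per_s: float,
                        link_rtt_s: float,
                        overbook_factor: float = 1.0) -> int:
    """Hypothetical sizing rule: to sustain the feasible transmission rate
    over a link with the given round-trip time, the downstream buffer must
    cover one bandwidth-RTT product, scaled by any overbooking factor."""
    return math.ceil(feasible_rate_cells_per_s * link_rtt_s * overbook_factor)
```

For instance, sustaining 10,000 cells/s over a 2 ms round trip requires at least 20 cells of buffering, and overbooking by 1.5x raises that to 30.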
Abstract:
A novel scheduling method is provided which may be used for rate-based scheduling (e.g., scheduling flows at assigned rates in a computer network) or for weighted fair sharing of a common resource (e.g., scheduling weighted jobs on a processor). The method is based on hierarchical application of Relative Error (RE) scheduling. The resulting Hierarchical RE (HRE) scheme has complexity O(log(N)), where N is the maximum number of jobs supported by the scheduler.
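Hierarchical application of RE can be sketched as a binary tree whose internal nodes each run two-way RE between their children, so one scheduling decision touches O(log N) nodes. The class and tree construction below are illustrative assumptions, not the patented structure:

```python
class HRENode:
    """One node of a hierarchical RE scheduler: leaves hold jobs; each
    internal node runs two-way RE between its children."""

    def __init__(self, left=None, right=None, job=None, weight=0.0):
        self.left, self.right, self.job = left, right, job
        self.weight = weight if job is not None else left.weight + right.weight
        self.error = 0.0

    def pick(self):
        if self.job is not None:        # leaf: return the scheduled job
            return self.job
        # RE between the two children: serve the child whose ideal share
        # is furthest ahead of its actual service
        for child in (self.left, self.right):
            child.error += child.weight / self.weight
        winner = self.left if self.left.error >= self.right.error else self.right
        winner.error -= 1.0
        return winner.pick()

def build(jobs):
    """Build a balanced tree over (name, weight) jobs by pairing nodes."""
    nodes = [HRENode(job=n, weight=w) for n, w in jobs]
    while len(nodes) > 1:
        nodes = [HRENode(left=nodes[i], right=nodes[i + 1])
                 if i + 1 < len(nodes) else nodes[i]
                 for i in range(0, len(nodes), 2)]
    return nodes[0]
```

Each call to `pick()` descends one root-to-leaf path, doing constant work per level, which is where the O(log(N)) complexity comes from.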
Abstract:
A computational method and apparatus allocate a transmission rate to source end nodes, reducing both the computational complexity and the state information that must be retained for each VC, without significantly degrading the convergence properties of the network. The computational method is useful with either interval-based or proportional schemes of flow control. A plurality of virtual circuits is established between source end stations and destination end stations, the virtual circuits passing through an intermediate node. The source end stations transmit data packets at a plurality of discrete transmission rates. The intermediate node counts the number of virtual circuits using each of the discrete transmission rates, maintains an indication that a given virtual circuit has been counted, and does not count any virtual circuit more than once during a switch time interval. Responsive to these counts, the intermediate node calculates a rate allocation value for the plurality of virtual circuits; the calculation is performed periodically during the switch time interval. The rate allocation value is written into a field of the data packet to signal the calculated value to the source end station and any intervening intermediate node.
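The per-rate counting idea can be sketched as follows. The discrete rate levels and the max-min style allocation rule are hypothetical choices made for illustration; the point is that the node keeps one counter per rate level rather than per-VC state:

```python
from collections import Counter

DISCRETE_RATES = [1, 2, 4, 8, 16, 32, 64]   # hypothetical rate levels (Mb/s)

class IntermediateNode:
    """Sketch: count VCs per discrete rate once per switch time interval and
    derive a single allocation value, instead of keeping per-VC state."""

    def __init__(self, link_capacity: float):
        self.capacity = link_capacity
        self.counts = Counter()   # rate level -> number of VCs at that rate
        self.seen = set()         # marks VCs already counted this interval

    def observe(self, vc_id: str, rate: int) -> None:
        if vc_id not in self.seen:    # count each VC at most once per interval
            self.seen.add(vc_id)
            self.counts[rate] += 1

    def allocate(self) -> int:
        """Hypothetical max-min style rule: the largest discrete rate at which
        every counted VC could send without exceeding link capacity."""
        n = sum(self.counts.values())
        if n == 0:
            return DISCRETE_RATES[-1]
        best = DISCRETE_RATES[0]
        for r in DISCRETE_RATES:
            if r * n <= self.capacity:
                best = r
        return best

    def end_interval(self) -> None:
        self.counts.clear()
        self.seen.clear()
```

With a 100 Mb/s link and 10 counted VCs, the allocation is 8 Mb/s, the largest level whose aggregate (80 Mb/s) fits; the node would write this value into the packet's rate field.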