Abstract:
The present invention provides improved methods and devices for managing network congestion. Preferred implementations of the invention allow congestion to be pushed from congestion points in the core of a network to reaction points, which may be edge devices, host devices or components thereof. Preferably, rate limiters shape individual flows of the reaction points that are causing congestion. Parameters of these rate limiters are preferably tuned based on feedback from congestion points, e.g., in the form of backward congestion notification (“BCN”) messages. In some implementations, such BCN messages include congestion change information and at least one instantaneous measure of congestion. The instantaneous measure(s) of congestion may be relative to a threshold of a particular queue and/or relative to a threshold of a buffer that includes a plurality of queues.
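As a minimal illustrative sketch (not the claimed method), the Python fragment below shows one way a reaction point could tune a per-flow rate limiter from BCN feedback carrying a congestion measure and a congestion change. The combination of queue offset and queue change into a single feedback value, the weight w, and the gain constants are assumptions made for illustration only.

    # Illustrative sketch: a per-flow rate limiter at a reaction point adjusts its
    # sending rate from a backward congestion notification (BCN) message.
    class FlowRateLimiter:
        def __init__(self, rate_bps, min_rate_bps=1_000_000, max_rate_bps=10_000_000_000):
            self.rate = float(rate_bps)
            self.min_rate = float(min_rate_bps)
            self.max_rate = float(max_rate_bps)

        def on_bcn(self, q_offset, q_delta, w=2.0, gain_inc=0.05, gain_dec=0.002):
            # q_offset: instantaneous queue excess over its threshold (congestion measure).
            # q_delta:  change in queue length since the last sample (congestion change).
            feedback = -(q_offset + w * q_delta)
            if feedback >= 0:
                # Congestion easing: additive increase scaled by the positive feedback.
                self.rate += gain_inc * feedback * self.min_rate
            else:
                # Congestion worsening: multiplicative decrease scaled by |feedback|.
                self.rate *= max(0.1, 1.0 + gain_dec * feedback)
            self.rate = min(self.max_rate, max(self.min_rate, self.rate))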
Abstract:
In one embodiment, apparatus and methods for fair bandwidth allocation are disclosed. A method includes (i) determining a drop probability for each of a plurality of classes of packets being dropped or admitted to a queue, wherein each drop probability is based on a weighted fair bandwidth allocation process that is performed with respect to the plurality of classes, a plurality of packet arrival rates, and predefined weights for such classes; and (ii) dropping a particular packet or admitting such particular packet to the queue based on the drop probability for such particular packet's class, wherein such dropping or admitting operation is further based on one or more drop precedence factors that are also determined periodically for each class if such one or more drop precedence factors are selected for each such class. In other embodiments, the invention pertains to an apparatus having one or more processors and one or more memories, wherein at least one of the processors and memories is adapted to perform the above-described method operations.
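The following sketch shows one possible periodic computation of per-class drop probabilities from measured arrival rates and weights, using a water-filling style weighted fair share; the function name, the assumption of positive weights, and the max-min style share computation are illustrative choices, not the disclosed process.

    # Illustrative sketch: periodically compute a drop probability per class so that
    # each class's admitted rate approaches its weighted fair share of the queue's
    # drain capacity.
    def weighted_fair_drop_probs(arrival_rates, weights, capacity):
        # arrival_rates, weights: dicts keyed by class (weights > 0); capacity: drain rate.
        shares = {}
        remaining = dict(arrival_rates)
        cap_left = capacity
        while remaining:
            total_w = sum(weights[c] for c in remaining)
            # Classes demanding less than their weighted share keep everything they send.
            fitted = {c: r for c, r in remaining.items()
                      if r <= cap_left * weights[c] / total_w}
            if not fitted:
                # Leftover capacity is split by weight among the still-congested classes.
                for c in remaining:
                    shares[c] = cap_left * weights[c] / total_w
                break
            for c, r in fitted.items():
                shares[c] = r
                cap_left -= r
                del remaining[c]
        # Drop probability is the fraction of each class's arrivals above its fair share.
        return {c: 0.0 if arrival_rates[c] <= shares[c]
                else 1.0 - shares[c] / arrival_rates[c]
                for c in arrival_rates}

For example, with arrival rates {A: 5, B: 10, C: 20}, equal weights, and capacity 24, class A is admitted in full while B and C each receive a share of 9.5, giving drop probabilities of 0, 0.05, and 0.525 respectively.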
Abstract:
Techniques for improving the performance of flow control mechanisms such as Pause are provided. The techniques maintain a fair distribution of available bandwidth in a distributed system while also reducing packet drops and maximizing link utilization. For example, in one embodiment, techniques are provided for achieving a fair share allocation of an egress port's bandwidth across a plurality of ingress ports contending for the same egress port.
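Below is a small sketch of how such a fair-share decision might look: each ingress port contending for an egress port is paused once its measured arrival rate exceeds its (optionally weighted) share of the egress bandwidth. The threshold-based pause decision, function name, and parameters are assumptions for illustration, not the disclosed technique.

    # Illustrative sketch: decide which contending ingress ports to pause so that
    # each one is held to its fair share of the egress port's bandwidth.
    def ports_to_pause(ingress_rates, egress_capacity, weights=None):
        # ingress_rates: measured rate per ingress port (bps); egress_capacity: bps.
        if weights is None:
            weights = {p: 1.0 for p in ingress_rates}
        total_w = sum(weights.values())
        paused = []
        for port, rate in ingress_rates.items():
            fair_share = egress_capacity * weights[port] / total_w
            if rate > fair_share:
                paused.append(port)   # a PAUSE / per-priority pause frame would target this port
        return paused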
Abstract:
Various improvements are provided for prior art policing methods, including token bucket methods and virtual time policing methods. Some preferred methods of the invention involve assigning a non-zero drop probability even when the packet would otherwise have been transmitted according to a prior art policing method. For example, a non-zero drop probability may be assigned even when there are sufficient tokens in a token bucket to allow transmission of the packet. A non-zero drop probability may be assigned, for example, when a token bucket level is at or below a predetermined threshold or according to a rate at which a token bucket is being emptied. Some implementations involve treating a token bucket as a virtual queue wherein the number of free elements in the virtual queue is proportional to the number of remaining tokens in the token bucket. Such implementations may involve predicting a future virtual queue size according to a previous virtual queue size and using this predicted value to calculate a drop probability.
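A minimal sketch of a policer in this spirit is shown below: a token bucket whose free-token level plays the role of free space in a virtual queue, and which applies a non-zero drop probability once the token level falls below a threshold even though enough tokens remain to send the packet. The linear probability ramp, the class name, and the threshold parameter are illustrative assumptions.

    # Illustrative sketch: token-bucket policer with probabilistic early drops.
    import random

    class ProbabilisticTokenBucket:
        def __init__(self, rate_tokens_per_s, depth_tokens, drop_threshold_fraction=0.5):
            self.rate = float(rate_tokens_per_s)
            self.depth = float(depth_tokens)
            self.tokens = float(depth_tokens)
            # Below this token level the bucket is treated like a filling virtual queue.
            self.threshold = drop_threshold_fraction * self.depth
            self.last = 0.0

        def admit(self, packet_tokens, now):
            # Refill for the elapsed time, capped at the bucket depth.
            self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens < packet_tokens:
                return False          # classic policing drop: not enough tokens
            if self.tokens < self.threshold:
                # Drop probability ramps from 0 at the threshold to 1 at an empty bucket,
                # i.e. it grows as the virtual queue fills.
                p_drop = (self.threshold - self.tokens) / self.threshold
                if random.random() < p_drop:
                    return False      # probabilistic drop despite sufficient tokens
            self.tokens -= packet_tokens
            return True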
Abstract:
Media-aware and TCP-compatible bandwidth sharing may be provided. In various embodiments, a network node may periodically update a virtual congestion level for a transmission stream in a network. The transmission stream may comprise at least one video stream and at least one data stream. The network node may then calculate, based at least in part on the virtual congestion level, a random packet marking probability or a random packet drop probability. In turn, the network node may either drop or mark transmission packets according to the calculated marking or dropping probability. The network node may further calculate an optimal video transmission rate for the at least one video stream and adjust a video transmission rate for the at least one video stream accordingly. Rate-distortion parameters for the at least one video stream may influence the optimal video transmission rate calculation for the at least one video stream.
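As a rough sketch only, the fragment below shows one way a node could maintain a virtual congestion level for a shared stream and derive a random mark/drop probability from it; the virtual-queue update, the exponential smoothing, the exponential marking function, and all parameter names are assumptions, not the described embodiment.

    # Illustrative sketch: virtual congestion level and marking/dropping probability
    # for a link shared by video and data streams.
    import math

    class MediaAwareMarker:
        def __init__(self, link_capacity_bps, gamma=0.95, smoothing=0.1):
            # The virtual queue drains slightly slower than the real link, so it
            # signals congestion before the real queue builds up.
            self.virtual_capacity_bps = gamma * link_capacity_bps
            self.smoothing = smoothing
            self.virtual_congestion = 0.0   # virtual backlog, in bytes

        def update(self, arrival_bytes, interval_s):
            # Periodic update: arrivals fill the virtual queue, the virtual capacity
            # drains it; an EWMA keeps single bursts from dominating.
            drained_bytes = self.virtual_capacity_bps * interval_s / 8.0
            backlog = max(0.0, self.virtual_congestion + arrival_bytes - drained_bytes)
            self.virtual_congestion = ((1.0 - self.smoothing) * self.virtual_congestion
                                       + self.smoothing * backlog)

        def mark_or_drop_probability(self, scale_bytes=50_000.0):
            # Probability grows smoothly with the virtual congestion level.
            return 1.0 - math.exp(-self.virtual_congestion / scale_bytes)

An optimal video transmission rate could then be chosen by evaluating each video stream's rate-distortion curve at the current probability; that step is not sketched here.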