Abstract:
The present invention provides improved methods and devices for managing network congestion. Preferred implementations of the invention allow congestion to be pushed from congestion points in the core of a network to reaction points, which may be edge devices, host devices or components thereof. Preferably, rate limiters shape individual flows of the reaction points that are causing congestion. Parameters of these rate limiters are preferably tuned based on feedback from congestion points, e.g., in the form of backward congestion notification (“BCN”) messages. In some implementations, such BCN messages include congestion change information and at least one instantaneous measure of congestion. The instantaneous measure(s) of congestion may be relative to a threshold of a particular queue and/or relative to a threshold of a buffer that includes a plurality of queues.
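As an illustration only, the following minimal sketch shows how a reaction point might tune one flow's rate limiter from such BCN feedback, combining a congestion-change term with an instantaneous offset from a queue threshold. The QCN-style feedback formula, the gain constants, and all names (RateLimiter, on_bcn_message, W, GAIN_INC, GAIN_DEC) are assumptions made for this sketch, not details taken from the abstract.

    # Hypothetical sketch of BCN-driven rate limiting at a reaction point (Python).
    # The feedback formula and the gain constants are illustrative assumptions.

    class RateLimiter:
        def __init__(self, rate_bps, min_rate_bps=1_000_000, max_rate_bps=10_000_000_000):
            self.rate_bps = rate_bps
            self.min_rate_bps = min_rate_bps
            self.max_rate_bps = max_rate_bps

    W = 2.0             # weight of the congestion-change term
    GAIN_INC = 0.1      # additive-increase gain (recovery)
    GAIN_DEC = 1 / 128  # multiplicative-decrease gain (backoff)

    def on_bcn_message(limiter, q_offset, q_delta):
        """Tune one flow's rate limiter from one BCN message.

        q_offset: instantaneous queue occupancy relative to its threshold
                  (positive when the queue is below the threshold).
        q_delta:  change in occupancy since the previous sample
                  (positive when the queue is draining).
        """
        feedback = q_offset + W * q_delta
        if feedback > 0:
            # Congestion is easing: recover the rate additively.
            limiter.rate_bps += GAIN_INC * feedback * 1_000_000
        else:
            # Congestion is building: back off multiplicatively.
            limiter.rate_bps *= max(0.5, 1.0 + GAIN_DEC * feedback)
        limiter.rate_bps = min(limiter.max_rate_bps,
                               max(limiter.min_rate_bps, limiter.rate_bps))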
Abstract:
Novel methods and devices are provided for active queue management (“AQM”) of input-buffered network devices. Preferred implementations of the invention control overall buffer occupancy while protecting uncongested individual virtual output queues (“VOQs”). The probability of setting a “global drop flag” (which is not necessarily used to trigger packet drops, but may also be used to trigger other AQM responses) may depend, at least in part, on the lesser of a running average of buffer occupancy and instantaneous buffer occupancy. In some preferred embodiments, this probability also depends on the number of active VOQs. Moreover, a global drop flag is set in conjunction with a drop threshold M associated with the VOQs. Whether an AQM response is made may depend on whether a global drop flag has been set and whether a destination VOQ contains M or more packets. Different M values may be established for different classes of traffic, e.g., with higher M values for higher-priority traffic. AQM responses (e.g., dropping packets) may be taken more aggressively when a larger number of VOQs is active.
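A minimal sketch of this decision logic follows, assuming a RED-like mapping from buffer occupancy to a flag-setting probability. The thresholds, the scaling by the number of active VOQs, and the per-class M values (M_BY_CLASS) are illustrative assumptions, not values from the abstract.

    # Hypothetical sketch of the described AQM decision (Python).
    import random

    MIN_TH = 0.25  # fraction of buffer at which flag-setting may begin (assumed)
    MAX_TH = 0.75  # fraction of buffer at which probability saturates (assumed)
    MAX_P = 0.1    # base probability at MAX_TH (assumed)

    # Higher M for higher-priority traffic (illustrative values).
    M_BY_CLASS = {"best_effort": 4, "priority": 16}

    def global_drop_flag(avg_occ, inst_occ, active_voqs, total_buffer):
        # Use the lesser of the running-average and instantaneous occupancy.
        occ = min(avg_occ, inst_occ) / total_buffer
        if occ <= MIN_TH:
            return False
        p = MAX_P if occ >= MAX_TH else MAX_P * (occ - MIN_TH) / (MAX_TH - MIN_TH)
        # Respond more aggressively when more VOQs are active.
        p = min(1.0, p * max(1, active_voqs) / 8)
        return random.random() < p

    def take_aqm_response(flag_set, dest_voq_len, traffic_class):
        # Act only if the flag is set and the destination VOQ holds M or more
        # packets; this protects uncongested individual VOQs.
        return flag_set and dest_voq_len >= M_BY_CLASS.get(traffic_class, 4)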
Abstract:
In one embodiment, a method comprises the following steps: receiving a first set of inputs comprising a first plurality of entities and a first traffic behavior; determining a first region of a buffer corresponding to the first traffic behavior; assigning the first plurality of entities to the first region; determining hierarchical relationships between at least some of the first plurality of entities; determining a first shared buffer space of the first region; and assigning at least one threshold for each of the first plurality of entities. The threshold may comprise a maximum amount of the first shared buffer space that may be allocated to an entity. The method may also involve configuring a logic device to allocate the first shared buffer space dynamically according to the hierarchical relationships and the thresholds.
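The following sketch illustrates those configuration steps under assumed data structures; the names (Entity, BufferRegion, configure_region, allocate), the traffic-behavior strings, and the sizes are hypothetical and not drawn from the abstract.

    # Hypothetical sketch of buffer-region configuration and dynamic allocation (Python).
    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        name: str
        threshold: int = 0                            # max share of the region's shared space
        children: list = field(default_factory=list)  # hierarchical relationships

    @dataclass
    class BufferRegion:
        traffic_behavior: str                         # e.g. "no-drop" or "drop" (assumed labels)
        shared_space: int                             # bytes of shared buffer in this region
        entities: list = field(default_factory=list)

    def configure_region(entities, traffic_behavior, shared_space, thresholds):
        # Assign the entities to the region and give each its threshold.
        region = BufferRegion(traffic_behavior, shared_space)
        for e in entities:
            e.threshold = thresholds[e.name]
            region.entities.append(e)
        return region

    def allocate(region, entity, requested):
        # Allocate dynamically from the region's shared space, never granting
        # more than the entity's threshold or the space that remains.
        grant = min(requested, entity.threshold, region.shared_space)
        region.shared_space -= grant
        return grant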
Abstract:
The present invention provides methods and devices for implementing a Low Latency Ethernet (“LLE”) solution, also referred to herein as a Data Center Ethernet (“DCE”) solution, which simplifies the connectivity of data centers and provides a high-bandwidth, low-latency network for carrying Ethernet and storage traffic. Some aspects of the invention involve transforming Fibre Channel (“FC”) frames into a format suitable for transport on an Ethernet. Some preferred implementations of the invention implement multiple virtual lanes (“VLs”) in a single physical connection of a data center or similar network. Some VLs are “drop” VLs, with Ethernet-like behavior, and others are “no-drop” VLs, with FC-like behavior. Some preferred implementations of the invention provide guaranteed bandwidth based on credits and VLs. Active buffer management allows for both high reliability and low latency while using small frame buffers. Preferably, the rules for active buffer management differ for drop and no-drop VLs.
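Below is a minimal sketch of per-VL behavior on one physical link, assuming a simple credit counter and a small per-lane frame buffer; the names (VirtualLane, enqueue, transmit) and the credit and buffer sizes are illustrative assumptions rather than details of the described solution.

    # Hypothetical sketch of drop vs. no-drop virtual lanes with credits (Python).
    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class VirtualLane:
        vl_id: int
        no_drop: bool                                # FC-like (no-drop) vs Ethernet-like (drop)
        credits: int                                 # transmit credits granted by the receiver
        queue: deque = field(default_factory=deque)
        max_queue: int = 64                          # small per-lane frame buffer (assumed size)

    def enqueue(vl, frame):
        """Returns False only when a no-drop lane must back-pressure the sender."""
        if len(vl.queue) < vl.max_queue:
            vl.queue.append(frame)
            return True
        if vl.no_drop:
            # No-drop lane: never discard; signal back-pressure (e.g., per-VL pause).
            return False
        # Drop lane: Ethernet-like behavior, discard on overflow.
        return True

    def transmit(vl):
        # Send only when the receiver has granted credit for this lane.
        if vl.queue and vl.credits > 0:
            vl.credits -= 1
            return vl.queue.popleft()
        return None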