Abstract:
Novel methods and devices are provided for active queue management ("AQM") of input-buffered network devices. Preferred implementations of the invention control overall buffer occupancy while protecting uncongested individual virtual output queues ("VOQs"). The probability of setting a "global drop flag" (which is not necessarily used to trigger packet drops, but may also be used to trigger other AQM responses) may depend, at least in part, on the lesser of a running average of buffer occupancy and the instantaneous buffer occupancy. In some preferred embodiments, this probability also depends on the number of active VOQs. Moreover, the global drop flag may be used in conjunction with a drop threshold M associated with the VOQs: whether an AQM response is made may depend on whether the global drop flag has been set and whether the destination VOQ contains M or more packets. Different M values may be established for different classes of traffic, e.g., with higher M values for higher-priority traffic. AQM responses (e.g., dropping packets) may be taken more aggressively when a larger number of VOQs are active.
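A minimal Python sketch of the decision logic described above, assuming illustrative names and a simple linear probability curve; p_max, the scaling by active-VOQ count, and all identifiers are assumptions, not values from the source:

```python
import random

def aqm_should_respond(buffer_avg, buffer_now, buffer_size, num_active_voqs,
                       dest_voq_len, drop_threshold_m, p_max=0.1):
    """Return True if an AQM response (e.g., a drop or mark) should be made.

    The "global drop flag" is set with a probability driven by the lesser of
    the running-average and instantaneous buffer occupancy, made more
    aggressive as more VOQs become active.  A response is taken only when the
    flag is set AND the destination VOQ already holds M or more packets,
    which protects uncongested VOQs.
    """
    occupancy = min(buffer_avg, buffer_now)                 # lesser of the two measures
    base_prob = p_max * occupancy / buffer_size             # grows with overall occupancy
    prob = min(1.0, base_prob * max(1, num_active_voqs))    # scale with active-VOQ count
    global_drop_flag = random.random() < prob
    return global_drop_flag and dest_voq_len >= drop_threshold_m
```

Keeping the per-VOQ check separate from the flag is what lets lightly loaded VOQs escape the response even when the shared buffer is full.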
Abstract:
Various improvements are provided for prior art policing methods, including token bucket methods and virtual time policing methods. Some preferred methods of the invention involve assigning a non-zero drop probability even when the packet would otherwise have been transmitted according to a prior art policing method. For example, a non-zero drop probability may be assigned even when there are sufficient tokens in a token bucket to allow transmission of the packet. A non-zero drop probability may be assigned, for example, when a token bucket level is at or below a predetermined threshold or according to a rate at which a token bucket is being emptied. Some implementations involve treating a token bucket as a virtual queue wherein the number of free elements in the virtual queue is proportional to the number of remaining tokens in the token bucket. Such implementations may involve predicting a future virtual queue size according to a previous virtual queue size and using this predicted value to calculate a drop probability.
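A short Python sketch of the threshold-based variant described above; the linear ramp, the low_threshold parameter, and p_max are illustrative assumptions, and the virtual-queue prediction step is omitted:

```python
def policing_drop_probability(tokens, low_threshold, p_max=0.05):
    """Return a drop probability for the next packet under early-drop policing.

    A conventional token-bucket policer drops only when tokens run out.  Here
    the bucket is viewed as a virtual queue whose free space is proportional
    to the remaining tokens, and a non-zero drop probability is assigned once
    the token level falls to or below a threshold, even though tokens remain.
    """
    if tokens <= 0:
        return 1.0                   # bucket empty: drop, as a conventional policer would
    if tokens > low_threshold:
        return 0.0                   # comfortably above threshold: never drop early
    # Linear ramp from 0 at the threshold toward p_max as the bucket nears empty.
    return p_max * (low_threshold - tokens) / low_threshold
```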
Abstract:
Class-based bandwidth partitioning of a sequence of packets of varying packet classes is performed, such as, but not limited to, determining whether or not to admit a packet to a queue based on a probability corresponding to the class of packets associated with the packet, with this probability derived from measured arrival traffic and a fair share that depends on the length of the queue. Data path processing is performed on each packet to determine whether to admit or drop the packet and to record the measured received traffic. Control path processing is performed periodically to update these probabilities based on the determined arrival rates and fair shares for each class of packets. In this manner, a relatively small amount of processing and resources is required to partition bandwidth for a scalable number of classes of packets.
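The data-path/control-path split might look roughly like the following Python sketch; the 1 Gb/s link rate, the linear fair-share rule, and all identifiers are illustrative assumptions rather than the method as claimed:

```python
import random

class ClassBandwidthPartitioner:
    """Sketch of class-based admission using per-class probabilities.

    Data path: each packet is admitted or dropped according to the current
    probability for its class, and its bytes are recorded as measured arrival
    traffic.  Control path: probabilities are periodically recomputed from the
    measured arrival rates and a fair share derived from the queue length.
    """

    LINK_RATE_BPS = 1e9  # assumed 1 Gb/s link; illustrative only

    def __init__(self, classes):
        self.admit_prob = {c: 1.0 for c in classes}    # updated on the control path
        self.arrived_bytes = {c: 0 for c in classes}   # measured on the data path

    def data_path(self, pkt_class, pkt_len):
        """Return True to admit the packet, False to drop it."""
        self.arrived_bytes[pkt_class] += pkt_len
        return random.random() < self.admit_prob[pkt_class]

    def control_path(self, interval_s, queue_len, queue_capacity):
        """Recompute per-class admission probabilities for the next interval."""
        # Fair share shrinks as the shared queue fills (an illustrative choice).
        fair_share_bps = max(0.0, 1.0 - queue_len / queue_capacity) \
            * self.LINK_RATE_BPS / len(self.admit_prob)
        for c, nbytes in self.arrived_bytes.items():
            arrival_bps = 8 * nbytes / interval_s
            self.admit_prob[c] = (1.0 if arrival_bps <= fair_share_bps
                                  else fair_share_bps / arrival_bps)
            self.arrived_bytes[c] = 0
```

Per-packet work is a counter update and one comparison; the heavier rate and fair-share arithmetic happens only on the periodic control path, which is what keeps the scheme scalable in the number of classes.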
Abstract:
The present invention provides improved methods and devices for managing network congestion. Preferred implementations of the invention allow congestion to be pushed from congestion points in the core of a network to reaction points, which may be edge devices, host devices or components thereof. Preferably, rate limiters shape individual flows of the reaction points that are causing congestion. Parameters of these rate limiters are preferably tuned based on feedback from congestion points, e.g., in the form of backward congestion notification (“BCN”) messages. In some implementations, such BCN messages include congestion change information and at least one instantaneous measure of congestion. The instantaneous measure(s) of congestion may be relative to a threshold of a particular queue and/or relative to a threshold of a buffer that includes a plurality of queues.
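A hedged Python sketch of a reaction-point rate limiter driven by BCN-style feedback; the gains, the additive-increase/multiplicative-decrease rule, and the field names are assumptions and not the actual BCN algorithm:

```python
class BcnRateLimiter:
    """Illustrative per-flow rate limiter tuned by BCN-style feedback.

    Each BCN message is assumed to carry an instantaneous measure of
    congestion (queue occupancy relative to a threshold) and a congestion
    change term; the reaction point nudges the flow's rate accordingly.
    """

    def __init__(self, rate_bps, min_rate_bps=1e6, max_rate_bps=10e9,
                 w_offset=0.25, w_change=0.5):
        self.rate_bps = rate_bps
        self.min_rate_bps = min_rate_bps
        self.max_rate_bps = max_rate_bps
        self.w_offset = w_offset      # weight on the instantaneous measure (assumed)
        self.w_change = w_change      # weight on the congestion change (assumed)

    def on_bcn(self, queue_offset, queue_change):
        """queue_offset: occupancy minus threshold; queue_change: recent occupancy delta."""
        feedback = self.w_offset * queue_offset + self.w_change * queue_change
        if feedback > 0:
            # Above threshold and/or growing: back off multiplicatively.
            self.rate_bps *= max(0.5, 1.0 - 0.01 * feedback)
        else:
            # Below threshold and/or shrinking: recover additively.
            self.rate_bps += 1e6 * min(10.0, -feedback)
        self.rate_bps = min(self.max_rate_bps, max(self.min_rate_bps, self.rate_bps))
```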
Abstract:
The present invention provides methods and devices for implementing a Low Latency Ethernet ("LLE") solution, also referred to herein as a Data Center Ethernet ("DCE") solution, which simplifies the connectivity of data centers and provides a high-bandwidth, low-latency network for carrying Ethernet and storage traffic. Some aspects of the invention involve transforming Fibre Channel ("FC") frames into a format suitable for transport on an Ethernet. Some preferred implementations of the invention implement multiple virtual lanes ("VLs") in a single physical connection of a data center or similar network. Some VLs are "drop" VLs, with Ethernet-like behavior, and others are "no-drop" VLs with FC-like behavior. Some preferred implementations of the invention provide guaranteed bandwidth based on credits and VLs. Active buffer management allows for both high reliability and low latency while using small frame buffers. Preferably, the rules for active buffer management are different for drop and no-drop VLs.
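The per-VL behavior could be modeled roughly as in the Python sketch below; the data-structure fields and the transmit test are illustrative assumptions, not the DCE specification:

```python
from dataclasses import dataclass

@dataclass
class VirtualLane:
    """Per-VL state for one physical DCE link (field names are illustrative)."""
    vl_id: int
    no_drop: bool            # True: FC-like, credit-gated; False: Ethernet-like, may drop
    guaranteed_bw_pct: int   # share of the physical link reserved for this VL
    credits: int = 0         # buffer credits granted by the receiver (used when no_drop)

def can_transmit(vl: VirtualLane, frame_len: int, local_buffer_free: int) -> bool:
    """Decide whether a frame may be sent on this VL right now."""
    if vl.no_drop:
        # No-drop VL: send only while the receiver has granted credits.
        return vl.credits > 0
    # Drop VL: send whenever local buffering allows; active buffer management
    # (e.g., early drops) handles congestion separately.
    return local_buffer_free >= frame_len
```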
Abstract:
A method is provided in one example and can include receiving a source data stream, generating a base layer sub-stream from the source data stream, and generating an enhancement layer sub-stream from the source data stream. The method further includes communicating the base layer sub-stream to a client device using a first communication protocol, and communicating the enhancement layer sub-stream to the client device using a second communication protocol. In a particular example, the first communication protocol is a one-to-many (multicast) communication protocol and the second communication protocol is a unicast communication protocol. In another example, the base layer sub-stream is sent to the client device via a first network connection and the enhancement layer sub-stream is sent to the client device via a second network connection.
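A minimal Python sketch of the delivery split, assuming UDP multicast for the base layer and TCP unicast for the enhancement layer; the group address, port numbers, and function names are placeholders:

```python
import socket

MCAST_GROUP, MCAST_PORT = "239.1.2.3", 5004   # placeholder multicast group/port

def open_base_layer_socket() -> socket.socket:
    """UDP socket used to multicast the base-layer sub-stream to all clients."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
    return sock

def open_enhancement_connection(client_addr: str, port: int) -> socket.socket:
    """TCP connection used to unicast the enhancement-layer sub-stream to one client."""
    conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    conn.connect((client_addr, port))
    return conn

def send_chunks(base_chunk: bytes, enh_chunk: bytes,
                mcast_sock: socket.socket, ucast_conn: socket.socket) -> None:
    mcast_sock.sendto(base_chunk, (MCAST_GROUP, MCAST_PORT))  # first protocol: multicast
    ucast_conn.sendall(enh_chunk)                             # second protocol: unicast
```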
Abstract:
A method in one example embodiment includes receiving a set of data in real time from a plurality of machine devices associated with at least one vehicle, providing a set of reference data corresponding to a machine device of the plurality of machine devices, comparing the set of data with the set of reference data, and detecting a deviation of the set of data from the set of reference data. The method further includes initiating an operation associated with the deviation. The set of reference data could be a trend of previous data received from the machine device, or a common trend based on a previous set of data for that machine device. More specific embodiments include receiving a plurality of data containing the set of data from the plurality of machine devices and identifying a state of the machine device using the set of data.
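One simple way to detect such a deviation is a threshold on distance from the reference trend, as in the Python sketch below; the k-sigma rule and the identifiers are illustrative assumptions:

```python
from statistics import mean, stdev

def detect_deviations(samples, reference, k=3.0):
    """Return the readings in `samples` that deviate from the reference trend.

    `reference` is a window of previous readings from the same machine device;
    a new sample is flagged when it lies more than k standard deviations from
    the reference mean, at which point an operation (alert, diagnostic, etc.)
    could be initiated.
    """
    mu = mean(reference)
    sigma = stdev(reference) if len(reference) > 1 else 0.0
    flagged = []
    for s in samples:
        deviated = (s != mu) if sigma == 0.0 else (abs(s - mu) > k * sigma)
        if deviated:
            flagged.append(s)
    return flagged
```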
Abstract:
A method for communicating optically between nodes of an optical network, including forming, between a first node and a second node of the network, a set of lightpaths, each of the set of lightpaths having a respective configuration, and transferring communication traffic between the first and second nodes via the set of lightpaths. The method also includes forming a determination for the set of lightpaths that a communication traffic level associated therewith is less than a predetermined threshold, and in response to the determination, removing a lightpath having a given configuration from the set of lightpaths to form a reduced set of lightpaths. The method further includes transferring the communication traffic between the first and second nodes via the reduced set of lightpaths, while reducing a level of power consumption in the removed lightpath and while maintaining the given configuration of the removed lightpath.
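A rough Python sketch of the traffic-triggered removal, assuming the set of lightpaths is held as a list and that "configuration" means the retained wavelength and route fields; all names and the single-lightpath removal policy are assumptions:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Lightpath:
    wavelength_nm: float             # part of the lightpath's configuration
    route: Tuple[str, ...]           # node sequence; also part of the configuration
    powered: bool = True

def trim_lightpaths(lightpaths: List[Lightpath], traffic_gbps: float,
                    threshold_gbps: float) -> Optional[Lightpath]:
    """Power down one lightpath when the set's traffic falls below the threshold.

    The removed lightpath's transceivers are put in a low-power state, while
    its wavelength and route (its configuration) are retained so it can be
    restored quickly when traffic grows again.  Remaining traffic is carried
    by the reduced set.
    """
    if traffic_gbps < threshold_gbps and len(lightpaths) > 1:
        removed = lightpaths.pop()   # take one lightpath out of the active set
        removed.powered = False      # reduce power; configuration fields unchanged
        return removed
    return None
```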