Abstract:
An appliance for controlling data transmission is described. The appliance includes a packet engine configured to acquire data regarding a flow of first data packets over a link and to determine transmission control protocol (TCP) characteristics for the flow. The appliance also includes a data transmission controller configured to receive second data packets, determine a rate of transmission based on the TCP characteristics, and determine, based on one or more criteria, whether to use a rate-based data transmission control to control a transmission of the second data packets. The data transmission controller is also configured to, responsive to determining that a rate-based data transmission control is to be used to control a transmission of the second data packets, cause the packet engine to transmit the second data packets in groups, wherein transmission times of each group of second data packets are determined based on the rate of transmission.
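The group-wise pacing described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the function name `schedule_groups`, the fixed group size, and the single `rate_bps` parameter (a rate derived from observed TCP characteristics) are all assumptions for the example.

```python
def schedule_groups(packet_sizes, rate_bps, group_size, start_time=0.0):
    """Split packets into fixed-size groups and compute each group's
    send time so the long-run transmission rate matches rate_bps.

    packet_sizes: list of packet lengths in bytes
    rate_bps:     target transmission rate in bytes per second
    """
    schedule = []
    t = start_time
    for i in range(0, len(packet_sizes), group_size):
        group = packet_sizes[i:i + group_size]
        schedule.append((t, group))
        # The next group is delayed by the time this group occupies
        # the link at the target rate.
        t += sum(group) / rate_bps
    return schedule
```

Spacing transmissions per group rather than per packet reduces timer overhead while still keeping the aggregate rate at the target.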
Abstract:
An apparatus and method for improving throughput under delay-based congestion control, comprising a packet engine and a delay-based congestion controller, are provided. The packet engine, upon detecting delay jitter caused by a layer 2 retransmission of a data packet, is configured to measure a round trip time (RTT) value. The delay-based congestion controller is configured to receive the RTT value and to determine a smoothed RTT (SRTT) value using the RTT value and one or more moving average functions. The delay-based congestion controller is also configured to, if the SRTT value is smaller than a set minimum SRTT value, set the SRTT value to the set minimum SRTT value. The delay-based congestion controller is further configured to, if the SRTT value is larger than a set maximum SRTT value, set the SRTT value to the set maximum SRTT value.
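The smoothing-and-clamping step can be sketched as below, assuming an exponentially weighted moving average as the "moving average function" and illustrative default bounds; the constant `alpha=0.125` follows common TCP practice (RFC 6298) and is not taken from the patent.

```python
def update_srtt(srtt, rtt_sample, alpha=0.125, srtt_min=0.01, srtt_max=1.0):
    """EWMA smoothing of RTT samples (seconds), then clamping into
    [srtt_min, srtt_max] so a delay spike caused by a layer-2
    retransmission cannot push the smoothed value outside the band."""
    srtt = (1 - alpha) * srtt + alpha * rtt_sample
    return min(max(srtt, srtt_min), srtt_max)
```

Clamping keeps one jitter-inflated sample from dominating the congestion controller's delay estimate.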
Abstract:
The embodiments are directed to methods and appliances for scheduling a packet transmission. The methods and appliances can assign received data packets or a representation of data packets to one or more connection nodes of a classification tree having a link node and first and second intermediary nodes associated with the link node via one or more semi-sorted queues, wherein the one or more connection nodes correspond with the first intermediary node. The methods and appliances can process the one or more connection nodes using a credit-based round robin queue. The methods and appliances can authorize the sending of the received data packets based on the processing.
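A credit-based round robin pass over connection queues, as referenced above, can be sketched with a deficit round robin variant. This is an assumption about the credit mechanism; the classification tree and semi-sorted queues are omitted, and names (`drr_schedule`, `quantum`) are illustrative.

```python
from collections import deque

def drr_schedule(queues, quantum, rounds=1):
    """Credit-based (deficit) round robin over per-connection queues:
    each round a non-empty queue earns `quantum` bytes of credit and
    may send packets while its credit covers the head-of-line packet.

    queues: list of deques of packet sizes (bytes)
    Returns a list of (queue_index, packet_size) in send order."""
    credits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                credits[i] = 0  # idle queues accumulate no credit
                continue
            credits[i] += quantum
            while q and q[0] <= credits[i]:
                pkt = q.popleft()
                credits[i] -= pkt
                sent.append((i, pkt))
    return sent
```

Carrying unspent credit across rounds gives each connection a fair byte share even with variable packet sizes.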
Abstract:
Described embodiments improve the performance of a computer network by selectively forwarding packets to bypass quality of service (QoS) processing, avoiding processing delays during critical periods of high demand; throughput and efficiency may be increased by sacrificing a small amount of QoS accuracy. QoS processing may be applied to a subset of packets of a flow or connection, referred to herein as “lazy” processing or lazy byte batching. Packets that bypass QoS processing may be immediately forwarded with the same QoS settings as packets of the flow for which QoS processing is applied, resulting in substantial overhead savings with only a minimal decline in accuracy. In case of backlog, packets may be collected together into an aggregated or ‘uber’ packet, with QoS processing applied based on a virtual size of the aggregated packet.
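The lazy byte batching idea can be sketched as follows: fully process the first packet of each byte batch and let the rest of the batch bypass QoS with the cached settings. The batching rule and the name `lazy_qos` are assumptions for illustration, not the patented algorithm.

```python
def lazy_qos(packets, batch_bytes):
    """Run full QoS processing on the first packet of each batch, then
    let subsequent packets of the flow bypass QoS (reusing the cached
    settings) until batch_bytes have been forwarded.

    packets: list of packet sizes (bytes) in arrival order
    Returns (processed, bypassed) packet-size lists."""
    processed, bypassed = [], []
    remaining = 0
    for size in packets:
        if remaining <= 0:
            processed.append(size)   # full QoS processing path
            remaining = batch_bytes  # start a new batch
        else:
            bypassed.append(size)    # fast path with cached QoS settings
        remaining -= size
    return processed, bypassed
```

Only one packet per `batch_bytes` of traffic pays the QoS processing cost, which is where the overhead saving comes from.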
Abstract:
A method comprising: receiving, by a first network packet scheduler, from each other network packet scheduler of a plurality of network packet schedulers, a virtual packet for each traffic class of a plurality of traffic classes defining relative transmission priority of network packets; receiving, by the first network packet scheduler, a network packet of a first traffic class of the plurality of traffic classes; transmitting, by the first network packet scheduler, each virtual packet into a virtual connection of a plurality of virtual connections created for each traffic class; scheduling, by the first network packet scheduler, a network packet or a virtual packet as a next packet in a buffer for transmission; determining, by the first network packet scheduler, that the next packet in the buffer is a virtual packet; and discarding, by the first network packet scheduler, the virtual packet, responsive to the determination that the next packet in the buffer is a virtual packet.
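The final scheduling steps above, where a scheduled entry that turns out to be a virtual packet is discarded rather than transmitted, can be sketched as below. The buffer representation and the name `drain` are illustrative assumptions; the multi-scheduler exchange of virtual packets is omitted.

```python
def drain(buffer):
    """Walk a scheduled transmit buffer in order. Real packets are
    transmitted; virtual packets (placeholders that keep per-class
    scheduling state aligned across schedulers) are discarded when
    they reach the head of the buffer.

    buffer: list of ("real" | "virtual", payload) tuples in schedule order
    Returns the payloads actually transmitted."""
    transmitted = []
    for kind, pkt in buffer:
        if kind == "virtual":
            continue  # determined to be virtual: discard, do not send
        transmitted.append(pkt)
    return transmitted
```

The virtual packets consume scheduling slots, so each traffic class advances at the same relative rate in every scheduler, but they never reach the wire.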
Abstract:
A system and method are provided for scheduling data packets. The system includes one or more packet engines configured to provide one or more congestion indications for a plurality of connections of a communication link. The system also includes a packet scheduler configured to receive the one or more congestion indications, estimate a link rate of the communication link using the one or more congestion indications and classification information, and schedule the data packets for transmission via the plurality of connections using the estimated link rate and the classification information.
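One simple way to estimate a link rate from congestion indications, sketched below, is an AIMD-style probe: back off when any connection reports congestion, probe upward otherwise. The multiplicative factors, bounds, and the name `estimate_link_rate` are illustrative assumptions, not the patented estimator.

```python
def estimate_link_rate(current_rate, congested, increase=1.05, decrease=0.85,
                       floor=1e5, ceiling=1e9):
    """Update a link-rate estimate (bytes/sec) from a boolean congestion
    indication aggregated over the link's connections: decrease the
    estimate on congestion, otherwise probe gently upward, and keep the
    result within [floor, ceiling]."""
    rate = current_rate * (decrease if congested else increase)
    return min(max(rate, floor), ceiling)
```

The scheduler would then apportion the estimated rate across connections according to the classification information.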
Abstract:
Described embodiments provide systems and methods for CPU load and priority based early drop packet processing. A device can establish a priority level for each traffic class of a plurality of traffic classes. The device can receive a plurality of packets. The device can determine a processing level of one or more processors of the device prior to processing the plurality of packets. The device can select one or more packets of the plurality of packets to drop responsive to the priority level of one or more traffic classes associated with the one or more packets and the processing level of the one or more processors.
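The selection step can be sketched as a per-priority load threshold: packets whose traffic class tolerates less CPU load than is currently present are dropped before processing. The threshold map and the name `select_drops` are assumptions for illustration.

```python
def select_drops(packets, cpu_load, thresholds):
    """Early-drop selection: drop a packet before processing when the
    current CPU load exceeds the maximum load tolerated by the packet's
    traffic-class priority.

    packets:    list of (priority_level, packet) tuples
    cpu_load:   current processor utilization in [0.0, 1.0]
    thresholds: priority_level -> max tolerated load (default 1.0)
    Returns (kept, dropped) packet lists."""
    keep, drop = [], []
    for prio, pkt in packets:
        if cpu_load > thresholds.get(prio, 1.0):
            drop.append(pkt)   # shed low-priority work under load
        else:
            keep.append(pkt)
    return keep, drop
```

Dropping early, before any per-packet processing, is what frees CPU cycles for the higher-priority classes.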