Abstract:
A method for queueing packets in a packet-switched network, the packets belonging to a number of packet flows, the packet flows being subject to transfer delay requirements, the method comprising obtaining a packet; identifying a packet flow associated with the obtained packet; obtaining transfer delay information representing transfer delay for the identified packet flow; determining a load indication representing at least one of a filling level and a sojourn time of a first queue of a plurality of queues comprising at least the first and a second queue both configured for buffering packets; and based on the obtained transfer delay information and the determined load indication, selecting a queue from the plurality of queues.
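A minimal sketch of the queue-selection step described in this abstract, in Python. The per-flow delay budgets, the filling-level threshold, and the two-queue layout are illustrative assumptions, not details taken from the claim text.

```python
from collections import deque
import time

class Queue:
    def __init__(self):
        self.buf = deque()  # buffered (enqueue_time, packet) tuples

    def filling_level(self):
        return len(self.buf)

    def sojourn_time(self):
        # Age of the oldest buffered packet, i.e. how long the head has waited.
        return time.monotonic() - self.buf[0][0] if self.buf else 0.0

# Hypothetical per-flow transfer delay budgets in seconds.
FLOW_DELAY_BUDGET = {"flow-a": 0.005, "flow-b": 0.100}

def select_queue(flow_id, q_first, q_second, fill_limit=64):
    """Pick a queue based on the flow's delay budget and the first queue's load."""
    delay_budget = FLOW_DELAY_BUDGET.get(flow_id, 0.100)
    load_exceeds = (q_first.filling_level() >= fill_limit or
                    q_first.sojourn_time() > delay_budget)
    # Divert to the second queue when the first queue's load indication
    # would violate the flow's transfer delay requirement.
    return q_second if load_exceeds else q_first

def enqueue(packet, flow_id, q_first, q_second):
    q = select_queue(flow_id, q_first, q_second)
    q.buf.append((time.monotonic(), packet))
```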
Abstract:
According to an aspect the invention relates to a networking device (100) such as a router or switch for scheduling the transmission of packets over a network interface (130). The device comprises a plurality of queues (122-124, 125-127) configured to buffer packets of a first Quality of Service or QoS class (143, 144). Each queue is configured to buffer packets supporting a certain congestion control type. The device further comprises a scheduler (150) for retrieving these packets from the queues according to a scheduling policy and comprises a forwarding module (110) for forwarding the packets to the queues by inspecting the QoS class and the congestion control type in the packets.
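A sketch of the forwarding/scheduling split described above, assuming one queue per (QoS class, congestion-control type) pair and a simple round-robin scheduling policy; the key names and the round-robin choice are assumptions for illustration.

```python
from collections import deque
from itertools import cycle

class Device:
    def __init__(self, qos_classes=("QoS1",), cc_types=("classic", "scalable")):
        # One queue per (QoS class, congestion-control type) pair.
        self.queues = {(q, c): deque() for q in qos_classes for c in cc_types}
        self._rr = cycle(self.queues.keys())  # placeholder round-robin policy

    def forward(self, packet):
        """Forwarding module: inspect the markings and buffer in the matching queue."""
        key = (packet["qos_class"], packet["cc_type"])
        self.queues[key].append(packet)

    def schedule(self):
        """Scheduler: retrieve the next buffered packet according to the policy."""
        for _ in range(len(self.queues)):
            key = next(self._rr)
            if self.queues[key]:
                return self.queues[key].popleft()
        return None

dev = Device()
dev.forward({"qos_class": "QoS1", "cc_type": "scalable", "payload": b"..."})
print(dev.schedule())
```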
Abstract:
A method is provided for transporting data packets over a telecommunications transport network. The data packets are carried by a plurality of bearers, and are sent over the transport network from a serving node. Information is received relating to a current capacity of the transport network. A current maximum total information rate for the serving node is dynamically adjusted based on this information. A current maximum information rate for each of the bearers is determined based on the current maximum total information rate. Bandwidth profiling is applied to the data packets of each of the bearers, independently of the other bearers, to identify the data packets of each bearer that are conformant with the determined current maximum information rate for that bearer. The data packets are forwarded for transport through the transport network. If there is insufficient bandwidth available in the transport network, data packets not identified by the profiling as being conformant are discarded.
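A simplified sketch of the per-bearer bandwidth profiling described above, using a token-bucket profiler. The equal split of the total rate across bearers and the bucket depth are illustrative assumptions.

```python
import time

class BearerProfiler:
    def __init__(self, rate_bps, burst_bytes=15000):
        self.rate = rate_bps / 8.0   # bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def set_rate(self, rate_bps):
        self.rate = rate_bps / 8.0

    def is_conformant(self, pkt_len):
        # Refill tokens for the elapsed time, capped at the burst size.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return True   # packet marked as conformant
        return False      # candidate for discard under congestion

def adjust_rates(profilers, current_total_rate_bps):
    # Dynamically derive per-bearer maxima from the current maximum total rate
    # (an equal split is assumed here purely for illustration).
    per_bearer = current_total_rate_bps / max(1, len(profilers))
    for p in profilers.values():
        p.set_rate(per_bearer)
```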
Abstract:
An input-buffered multipoint switch having input channels and output channels includes multilevel request buffers (122, 124, 126, and 128), a data path multiplexer (130), and a scheduler (132). The switch has a distinct multilevel request buffer associated with each input channel and each request buffer has multiple request registers (160, 162, 164, and 166) of a different request buffer priority. The request registers (160, 162, 164, and 166) store data cell transfer requests that have been assigned quality of service (QoS) priorities, where the QoS priorities are related to packet source, destination, and/or application type. The multilevel request registers (160, 162, 164, and 166) are linked in parallel to the scheduler (132) to allow arbitration among requests of different input channels and different request buffer priority levels. The preferred arbitration process involves generating QoS priority-specific masks that reflect the output channels required by higher QoS priority requests and arbitrating (256) among requests of the same QoS priority in QoS priority-specific multilevel schedulers. Sorting requests by QoS priority allows the switch to schedule a high throughput of packets while adhering to QoS requirements.
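A sketch of the mask-based arbitration idea described above: higher QoS priority requests reserve their output channels, and requests of lower priority are then granted only on outputs left unmasked. The data layout and first-come tie-breaking are assumptions.

```python
def arbitrate(requests_by_priority):
    """requests_by_priority: list (highest QoS priority first) of lists of
    (input_channel, output_channel) requests. Returns the granted requests."""
    granted = []
    busy_outputs = set()   # the "mask" of outputs taken by higher-priority grants
    busy_inputs = set()
    for requests in requests_by_priority:
        for in_ch, out_ch in requests:
            # Skip outputs masked by higher-priority grants and inputs already served.
            if out_ch in busy_outputs or in_ch in busy_inputs:
                continue
            granted.append((in_ch, out_ch))
            busy_outputs.add(out_ch)
            busy_inputs.add(in_ch)
    return granted

# Example: the priority-0 request wins output 2, so the priority-1 request
# for the same output is masked out and only (2, 3) is granted at that level.
print(arbitrate([[(0, 2)], [(1, 2), (2, 3)]]))  # [(0, 2), (2, 3)]
```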
Abstract:
A communications controller is provided. The communications controller includes a flow manager that classifies a packet flow serviced by more than one transmission point (TP) as one of a plurality of slices in accordance with at least one of a nature of the packet flow, a load status of each of the plurality of slices, and feedback information provided by the TPs, and alters the classification of the packet flow in accordance with the load status of each of the plurality of slices and the feedback information provided by the TPs served by the communications controller. The communications controller also includes a memory coupled to the flow manager, wherein the memory stores a packet of the packet flow in one of a plurality of packet queues in accordance with the classification of the packet flow.
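A minimal sketch of the slice classification described above. How the nature of the flow, the slice load, and the TP feedback are folded into a single choice (least-loaded healthy slice) is an illustrative assumption.

```python
from collections import deque

class FlowManager:
    def __init__(self, num_slices):
        self.queues = [deque() for _ in range(num_slices)]  # one packet queue per slice
        self.flow_to_slice = {}

    def classify(self, flow_id, is_delay_sensitive, tp_feedback):
        # Prefer the least-loaded slice; delay-sensitive flows are restricted
        # to slices the TP feedback reports as healthy, when any exist.
        candidates = [i for i, ok in enumerate(tp_feedback)
                      if ok or not is_delay_sensitive]
        candidates = candidates or list(range(len(self.queues)))
        slice_id = min(candidates, key=lambda i: len(self.queues[i]))
        self.flow_to_slice[flow_id] = slice_id
        return slice_id

    def enqueue(self, flow_id, packet, is_delay_sensitive, tp_feedback):
        slice_id = self.flow_to_slice.get(flow_id)
        if slice_id is None or len(self.queues[slice_id]) > 128:
            # (Re)classify when the flow is unknown or its slice becomes overloaded.
            slice_id = self.classify(flow_id, is_delay_sensitive, tp_feedback)
        self.queues[slice_id].append(packet)
```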
Abstract:
A method in a network node is provided, relating to a process of controlling a data transfer of video data of a video streaming service from a server to a wireless device. The network node and the wireless device operate in a wireless communications network. The network node determines a scheduling weight value for the wireless device to be used in the data transfer, based on a target rate scheduling weight value and a proportional fair rate weight value. The network node then determines a size of a data segment to be used in the data transfer based on at least part of the scheduling weight value. The network node further determines a pending data volume for the transfer of the video data to a playback buffer of the wireless device, based on at least part of the scheduling weight value.
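A sketch of how the two weights might be combined and then used to size segments and the pending data volume. The product combination, the scale factors, and the playback-buffer target are illustrative assumptions, not the claimed formulas.

```python
def scheduling_weight(target_rate_weight, proportional_fair_weight):
    # One simple way to combine the two contributions into a single weight.
    return target_rate_weight * proportional_fair_weight

def segment_size(weight, base_segment_bytes=50_000, max_segment_bytes=1_000_000):
    # Larger weights allow larger video segments per transfer.
    return min(max_segment_bytes, int(base_segment_bytes * weight))

def pending_data_volume(weight, playback_buffer_bytes, buffer_target_bytes=2_000_000):
    # Transfer more data when the playback buffer is short of its target,
    # scaled down for devices with a small scheduling weight.
    deficit = max(0, buffer_target_bytes - playback_buffer_bytes)
    return int(deficit * min(1.0, weight))

w = scheduling_weight(target_rate_weight=1.2, proportional_fair_weight=0.8)
print(segment_size(w), pending_data_volume(w, playback_buffer_bytes=500_000))
```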
Abstract:
A technique allows stations to utilize an equal share of resources (e.g., airtime or throughput). This prevents slow stations from consuming too many resources (e.g., using up too much air time). Fairness is ensured by selective dropping after a multicast packet is converted to unicast. This prevents slow stations from using more than their share of buffer resources. Multicast conversion aware back-pressure into the network layer can be used to prevent unnecessary dropping of packets after multicast to unicast (1:n) conversion by considering duplicated transmit buffers. This technique helps achieve airtime/resource fairness among stations.
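A sketch of the selective-drop idea described above: after a multicast frame is converted to per-station unicast copies, each copy counts against that station's buffer share, and copies destined for over-quota (slow) stations are dropped. The per-station quota and the back-pressure threshold are illustrative assumptions.

```python
from collections import defaultdict, deque

class McastToUcastQueue:
    def __init__(self, per_station_quota=32):
        self.quota = per_station_quota
        self.queues = defaultdict(deque)  # per-station unicast transmit buffers

    def convert_and_enqueue(self, mcast_frame, stations):
        """Duplicate a multicast frame into unicast copies, dropping copies for
        stations that already hold their fair share of buffers."""
        dropped = []
        for sta in stations:
            if len(self.queues[sta]) >= self.quota:
                dropped.append(sta)   # slow station: drop its copy
                continue
            self.queues[sta].append(("ucast", sta, mcast_frame))
        return dropped

    def backpressure(self):
        # Signal the network layer when the duplicated copies approach the total
        # buffer budget, so it can throttle instead of forcing later drops.
        total = sum(len(q) for q in self.queues.values())
        return total >= 0.9 * self.quota * max(1, len(self.queues))
```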
Abstract:
The present invention relates to a method and an arrangement for scheduling data packets, each belonging to a particular traffic class associated with a certain quality of service (QoS) level and transmitted between a first communication network node and a second communication network node. Initially, a token rate for assigning tokens to each traffic class is set, and an incoming traffic rate of each traffic class is measured by counting the number of incoming data packets during a pre-determined period of time. Then, based on the measured incoming traffic rate, the token rate is adjusted in order to obtain fair scheduling of data packets belonging to different traffic classes.
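A sketch of the token-rate adjustment described above: incoming packets per traffic class are counted over a measurement period, and each class's token rate is then reset in proportion to its measured share of the traffic. The proportional rule and the equal initial split are assumptions for illustration.

```python
from collections import defaultdict

class FairTokenScheduler:
    def __init__(self, classes, link_rate_tokens):
        self.link_rate = link_rate_tokens
        self.counts = defaultdict(int)  # packets counted in the current period
        # Initial token rate: an equal share of the link's token budget per class.
        self.token_rate = {c: link_rate_tokens / len(classes) for c in classes}

    def on_packet(self, traffic_class):
        # Measure the incoming traffic rate by counting arrivals per class.
        self.counts[traffic_class] += 1

    def end_of_period(self):
        # Adjust each class's token rate according to its measured share.
        total = sum(self.counts.values())
        if total:
            for c in self.token_rate:
                self.token_rate[c] = self.link_rate * self.counts[c] / total
        self.counts.clear()
```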