Abstract:
Various embodiments are disclosed for techniques to perform channel access decisions (315) and to select a transmit queue. These decisions may be performed, for example, based upon the age (305) and number (310) of packets in a queue. These techniques may allow a node to increase the length of data bursts transmitted by the node, although the invention is not limited thereto.
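For illustration only, a minimal Python sketch of one way such a policy could be expressed, assuming hypothetical thresholds min_burst_packets and max_packet_age and an oldest-head-of-line tie-break for queue selection; the disclosed embodiments are not limited to this logic.

```python
import time
from collections import deque

class TransmitQueue:
    """Holds (enqueue_timestamp, packet) pairs for one traffic class."""
    def __init__(self):
        self.packets = deque()

    def enqueue(self, packet):
        self.packets.append((time.monotonic(), packet))

    def depth(self):
        return len(self.packets)

    def oldest_age(self):
        # Age of the head-of-line packet, or 0 if the queue is empty.
        return time.monotonic() - self.packets[0][0] if self.packets else 0.0

def should_access_channel(queue, min_burst_packets=8, max_packet_age=0.005):
    """Hypothetical policy: contend for the channel once enough packets have
    accumulated to form a long burst, or once the head-of-line packet ages out."""
    return queue.depth() >= min_burst_packets or queue.oldest_age() >= max_packet_age

def select_transmit_queue(queues):
    """Among queues that currently justify channel access, prefer the one with
    the oldest head-of-line packet (illustrative tie-break only)."""
    eligible = [q for q in queues if should_access_channel(q)]
    return max(eligible, key=lambda q: q.oldest_age(), default=None)
```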
Abstract:
Provided are an HQoS scheduling method and device. A received uplink data packet is encapsulated and stored in a queue in the uplink direction, and an uplink queue scheduling component is requested to perform scheduling. In this manner, HQoS scheduling in the uplink direction is implemented, and personalized user demands can be met by scheduling uplink data, allowing more flexible function customization. According to the method and device, the data packet may further be sent to the downlink direction after the HQoS scheduling in the uplink direction is completed, and HQoS scheduling can then be performed on the data in the downlink direction, so that HQoS scheduling is performed separately on the data in the uplink direction and in the downlink direction. In this manner, true bidirectional HQoS scheduling control is implemented, and the QoS of the user service can be guaranteed in both directions.
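A hedged sketch of the data path described above, assuming a toy HQoSScheduler with per-user, per-priority leaf queues and a hypothetical on_uplink_packet() entry point; the real scheduling hierarchy and encapsulation format are not specified by the abstract.

```python
from collections import defaultdict, deque

class HQoSScheduler:
    """Illustrative hierarchical scheduler: one FIFO per (user, priority)
    leaf, strict priority within a user, round-robin across users."""
    def __init__(self):
        self.queues = defaultdict(lambda: defaultdict(deque))  # user -> prio -> FIFO

    def enqueue(self, user, priority, packet):
        self.queues[user][priority].append(packet)

    def schedule_round(self):
        """One scheduling round: emit up to one packet per user, taken from
        the highest-priority non-empty leaf of that user."""
        out = []
        for user, prios in self.queues.items():
            for prio in sorted(prios):
                if prios[prio]:
                    out.append((user, prio, prios[prio].popleft()))
                    break
        return out

uplink, downlink = HQoSScheduler(), HQoSScheduler()

def on_uplink_packet(user, priority, packet):
    # Encapsulate the received uplink packet, store it in a queue in the
    # uplink direction, and request the uplink scheduler to run.
    uplink.enqueue(user, priority, ("encapsulated", packet))
    for usr, prio, pkt in uplink.schedule_round():
        # After uplink HQoS scheduling completes, hand the packet to the
        # downlink direction, where downlink HQoS scheduling is applied.
        downlink.enqueue(usr, prio, pkt)
```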
Abstract:
The present invention provides a method and an apparatus for controlling a scheduling packet. The method is applied to an HFC network system. The method includes: determining, by a network device, a transmission bandwidth of a first scheduling packet, where the first scheduling packet includes an IE used to carry resource allocation information for a first uplink period, and the resource allocation information for the first uplink period is used to indicate a transmission resource to be used by user equipment to send uplink data in the first uplink period; determining a target quantity according to a first control threshold when the transmission bandwidth of the first scheduling packet is greater than or equal to the first control threshold, where the target quantity is less than or equal to a quantity of IEs included in the first scheduling packet; and generating a second scheduling packet according to the target quantity, where a quantity of IEs included in the second scheduling packet is less than the target quantity, the second scheduling packet includes an IE used to carry resource allocation information for a second uplink period, and the second uplink period follows the first uplink period. Therefore, downlink data transmission is less affected by scheduling packets, downlink bandwidth utilization is improved, and system performance is improved.
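As a rough illustration, the bandwidth check and IE-count limiting could be sketched as below; per_ie_bits, the derivation of the target quantity, and the deferral of remaining IEs to later uplink periods are assumptions, not details taken from the claimed method.

```python
def cap_scheduling_packet(first_packet_ie_count, candidate_ies_second_period,
                          per_ie_bits, first_control_threshold_bits):
    """Hypothetical sketch: limit the number of IEs placed into the second
    scheduling packet when the first one consumed too much downlink bandwidth."""
    tx_bandwidth = first_packet_ie_count * per_ie_bits
    if tx_bandwidth < first_control_threshold_bits:
        # First scheduling packet was small enough; carry all pending IEs.
        return list(candidate_ies_second_period)
    # Target quantity derived from the threshold, bounded by the IE count of
    # the first scheduling packet.
    target_quantity = min(first_packet_ie_count,
                          first_control_threshold_bits // per_ie_bits)
    # The second scheduling packet carries fewer IEs than the target quantity;
    # the remaining allocations would be deferred to later uplink periods.
    keep = max(int(target_quantity) - 1, 0)
    return candidate_ies_second_period[:keep]
```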
Abstract:
Micro-schedulers control bandwidth allocation for clients, each client subscribing to a respective predefined portion of bandwidth of an outgoing communication link. A macro-scheduler controls the micro-schedulers by allocating, by a predefined first deadline, the respective subscribed portion of bandwidth associated with each respective client that is active. Residual bandwidth that is unused by the respective clients is shared proportionately among the respective active clients by a predefined second deadline. Coordination among the micro-schedulers is minimized by the macro-scheduler periodically adjusting the respective bandwidth allocation of each micro-scheduler.
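One way to picture a single periodic adjustment round, under the assumption that each micro-scheduler reports a current demand and that the subscribed portions sum to at most the link capacity; the deadline mechanics themselves are not modeled here.

```python
def macro_schedule(link_capacity, subscriptions, demands):
    """Hedged sketch of one adjustment round by the macro-scheduler.

    subscriptions: client -> subscribed share of the outgoing link (bits/s).
    demands: client -> bandwidth the client's micro-scheduler currently wants.
    Returns client -> allocation for this round; zero-demand clients are
    treated as inactive.
    """
    active = {c for c, d in demands.items() if d > 0}
    # First deadline: guarantee each active client its subscribed portion,
    # capped by what it actually wants.
    alloc = {c: min(subscriptions[c], demands[c]) for c in active}
    # Residual bandwidth left unused by inactive or under-demanding clients.
    residual = link_capacity - sum(alloc.values())
    # Second deadline: share the residual proportionally to subscriptions
    # among active clients that still want more.
    hungry = {c: subscriptions[c] for c in active if demands[c] > alloc[c]}
    total = sum(hungry.values())
    for c, weight in hungry.items():
        extra = residual * weight / total if total else 0.0
        alloc[c] += min(extra, demands[c] - alloc[c])
    return alloc
```

For example, with a 10 Gb/s link, two active clients subscribed to 4 Gb/s each, and one idle client subscribed to 2 Gb/s, the sketch gives each active client its 4 Gb/s plus a proportional share of the idle client's 2 Gb/s, up to its demand.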
Abstract:
The present invention is directed to a method and apparatus for scheduling a resource (35) to meet quality of service guarantees. In one embodiment with three levels of priority, if a channel of the first priority level (15) is within its bandwidth allocation, then a request is issued from that channel. If there are no requests in channels at the first priority level that are within their allocation, requests from channels at the second priority level (20) that are within their bandwidth allocation are chosen. If there are no requests of this type, requests from channels at the third priority level (25), or requests from channels at the first and second levels that are outside of their bandwidth allocation, are issued. The system may be implemented using rate-based scheduling.
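The three-level arbitration described above can be captured roughly as follows; the channel field names ('priority', 'has_request', 'within_allocation') are placeholders, and in practice the within-allocation test would come from a rate-based credit or token mechanism.

```python
def pick_request(channels):
    """Illustrative arbitration over channels with three priority levels.

    Each channel is a dict with keys 'priority' (1, 2 or 3), 'has_request'
    (bool), and 'within_allocation' (bool). Field names are assumptions."""
    pending = [c for c in channels if c['has_request']]
    # First: priority-1 channels that are within their bandwidth allocation.
    for c in pending:
        if c['priority'] == 1 and c['within_allocation']:
            return c
    # Then: priority-2 channels within their allocation.
    for c in pending:
        if c['priority'] == 2 and c['within_allocation']:
            return c
    # Otherwise: priority-3 requests, or priority-1/2 requests that have
    # exceeded their allocation.
    for c in pending:
        if c['priority'] == 3 or not c['within_allocation']:
            return c
    return None
```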
Abstract:
A device provides a flow table. The device receives a data unit, determines a data flow associated with the data unit, and determines whether the flow table includes an entry corresponding to the data flow. The device also determines a current utilization of a group of output ports of the device, selects an output port of the group for the data flow based on the current utilization of the group of output ports when the flow table does not store an entry corresponding to the data flow, and stores the data unit in a queue associated with the selected output port.
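A compact sketch of the per-flow port pinning, assuming a dictionary-based flow table, a 5-tuple-like flow key, and queued bytes as the utilization metric; the abstract does not fix any of these details.

```python
class FlowTable:
    """Sketch of per-flow output-port pinning with utilization-aware selection."""
    def __init__(self, ports):
        self.flow_to_port = {}
        self.queues = {p: [] for p in ports}      # one output queue per port
        self.utilization = {p: 0 for p in ports}  # e.g. queued bytes per port

    def flow_key(self, data_unit):
        # Assumed flow key; a real device would parse packet headers here.
        return (data_unit['src'], data_unit['dst'], data_unit['proto'])

    def handle(self, data_unit):
        flow = self.flow_key(data_unit)
        if flow not in self.flow_to_port:
            # No entry for this flow yet: pick the least-utilized output port
            # and record the choice so later data units follow the same port.
            port = min(self.utilization, key=self.utilization.get)
            self.flow_to_port[flow] = port
        port = self.flow_to_port[flow]
        self.queues[port].append(data_unit)
        self.utilization[port] += data_unit.get('length', 1)
        return port
```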
Abstract:
A method is provided for serving, in a communication network node, an aggregate flow that includes a plurality of individual flows. The method includes identifying in the aggregate flow, based on the serving resources allocated to the network node, individual flows that may be served without substantial detriment to perceived performance, and serving the identified individual flows with priority with respect to the remaining individual flows in the aggregate flow. The method also allows an external control entity to be notified of individual flows that may not be served without substantial detriment to perceived performance due to a shortage of serving resources.
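As an illustration only, the partition into servable and non-servable individual flows might look like the following greedy admission sketch; the admission rule, the (flow_id, required_rate) representation, and the notify callback standing in for the external control entity are all assumptions.

```python
def partition_and_serve(individual_flows, serving_capacity, notify):
    """individual_flows: list of (flow_id, required_rate) pairs.
    serving_capacity: serving resources allocated to the network node.
    notify: callback standing in for the external control entity."""
    # Greedily admit flows whose requirements fit in the remaining capacity;
    # this smallest-first rule is only one possible identification strategy.
    admitted, starved, remaining = [], [], serving_capacity
    for flow_id, required_rate in sorted(individual_flows, key=lambda f: f[1]):
        if required_rate <= remaining:
            admitted.append(flow_id)
            remaining -= required_rate
        else:
            starved.append(flow_id)
    if starved:
        # Report flows that cannot be served without degrading perceived
        # performance so the control entity can react (e.g. re-route, police).
        notify(starved)
    # Admitted flows are served with priority over the rest of the aggregate.
    return admitted, starved
```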
Abstract:
A packet transfer device that can be easily realized even when the number of input ports is large. Each input buffer temporarily stores entered packets class by class and outputs packets of a selected class specified by the control unit, while the control unit determines the selected class of packets to be outputted from the input buffers according to a packet storage state in the packet storage units of the input buffers as a whole for each class. Each input buffer can temporarily store entered packets while selecting packets to be outputted at a next phase, and the control unit can specify packets to be selected in the input buffers according to an output state of packets previously selected in the input buffers as a whole. Packets stored in the buffer can be managed in terms of a plurality of groups, and each packet entered at the buffer can be distributed among a plurality of groups so that packets are distributed fairly among flows. The packets belonging to one of the plurality of groups are then outputted from the buffer toward the output port. A packet transfer at the buffer is controlled by issuing a packet transfer command according to a log of packet transfer commands with respect to the buffer and a packet storage state of the buffer.
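A small sketch of the class-selection step performed by the control unit, assuming the selected class is simply the one with the most packets stored across all input buffers as a whole; the actual storage-state criterion, the group-based fair distribution, and the command-log control are not modeled here.

```python
from collections import defaultdict, deque

class InputBuffer:
    """Per-input-port buffer that stores entered packets class by class."""
    def __init__(self):
        self.per_class = defaultdict(deque)

    def store(self, packet_class, packet):
        self.per_class[packet_class].append(packet)

    def output(self, selected_class):
        # Output one packet of the class specified by the control unit.
        q = self.per_class[selected_class]
        return q.popleft() if q else None

def select_class(input_buffers):
    """Control-unit sketch: pick the class with the largest total number of
    packets stored across all input buffers (an assumed criterion)."""
    totals = defaultdict(int)
    for buf in input_buffers:
        for cls, q in buf.per_class.items():
            totals[cls] += len(q)
    return max(totals, key=totals.get) if totals else None
```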