Abstract:
Differentiated services are provided through service level agreements (SLAs) between access nodes and some of the clients using a wireless access network. Client devices include internal devices that are compliant with service-related specifications published by the access nodes. Client devices also may include non-compliant external and legacy devices, as well as outside interferers. The access nodes control target SLAs for each client device. The access nodes and the internal client devices perform rate limiting to ensure that a device's target SLA is adhered to. The service-related specifications include schedules to ensure preferential access for preferred internal client devices. The internal client devices send usage and bandwidth availability feedback to the access node they are associated with, enabling the access node to construct better schedules for meeting the preferred internal devices' SLAs in view of the network conditions reported via the feedback.
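The per-device rate limiting described above can be sketched as a token bucket whose rate and burst parameters come from the device's target SLA. The class and parameter names below are illustrative assumptions, not terms from the abstract; the timestamp is passed in explicitly so the logic is deterministic.

```python
class TokenBucketLimiter:
    """Illustrative SLA rate limiter: admits a packet only if the
    device's token bucket (rate + burst from its target SLA) allows it."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # sustained rate allowed by the SLA
        self.burst = burst_bytes      # maximum burst size
        self.tokens = burst_bytes     # start with a full bucket
        self.last = 0.0               # timestamp of the last decision

    def allow(self, packet_bytes, now):
        """Return True if the packet fits within the SLA budget."""
        elapsed = now - self.last
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False
```

In this sketch both the access node and an internal client device would run such a limiter for each device, with the access node updating the parameters as target SLAs change.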
Abstract:
Host machines and other devices performing synchronized operations can be dispersed across multiple racks in a data center to provide additional buffer capacity and to reduce the likelihood of congestion. The level of dispersion can depend on factors such as the level of oversubscription, as it can be undesirable in a highly connected network to push excessive host traffic into the aggregation fabric. As oversubscription levels increase, the amount of dispersion can be reduced and two or more host machines can be clustered on a given rack, or otherwise connected through the same edge switch. By clustering a portion of the machines, some of the host traffic can be redirected by the respective edge switch without entering the aggregation fabric. When provisioning hosts for a customer, application, or synchronized operation, for example, the levels of clustering and dispersion can be balanced to minimize the likelihood for congestion throughout the network.
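The dispersion-versus-clustering trade-off above can be sketched as a simple placement heuristic: as oversubscription rises, more hosts are clustered behind the same edge switch so that their mutual traffic never enters the aggregation fabric. The function name and the rule mapping oversubscription to cluster size are assumptions for illustration only.

```python
def place_hosts(num_hosts, num_racks, oversubscription):
    """Illustrative placement: higher oversubscription -> larger clusters
    per rack (less dispersion), keeping more host traffic behind the
    rack's edge switch instead of in the aggregation fabric."""
    # Assumed heuristic: cluster size grows with the oversubscription
    # ratio, but never exceeds the number of hosts to place.
    cluster = max(1, min(num_hosts, int(oversubscription)))
    placement = {}
    rack = 0
    for host in range(num_hosts):
        placement.setdefault(rack, []).append(host)
        if len(placement[rack]) == cluster:
            rack = (rack + 1) % num_racks   # move on once a cluster is full
    return placement
```

With oversubscription near 1 this degenerates to full dispersion (one host per rack); higher ratios produce progressively larger per-rack clusters.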
Abstract:
Embodiments of the present disclosure describe techniques for handling dual priority for a machine-to-machine device in a wireless communication network. The device may include computer-readable media having instructions and one or more processors coupled with the computer-readable media and configured to execute the instructions to configure, as a default configuration, the device with a first priority level for machine-type communications, receive a notification from an application associated with the device, the notification indicating that the application generated a communication to a network controller, the communication being associated with a second priority level that is higher than the first priority level, and, in response to the notification, configure, as an override configuration, the device with the second priority level for machine-type communications. If a backoff timer is running for a low-priority application and the current communication is not low priority, the communication is allowed to proceed.
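The default/override priority handling and the backoff-timer rule above can be sketched as a small state machine. The class, constants, and method names are illustrative assumptions; they do not come from the abstract or any 3GPP specification.

```python
LOW, HIGH = 1, 2   # assumed priority levels for illustration

class MtcDevice:
    """Illustrative dual-priority handling for a machine-type device."""

    def __init__(self):
        self.priority = LOW            # default configuration
        self.backoff_running = False   # backoff timer for low priority

    def on_app_notification(self, comm_priority):
        # An application generated a communication to the network
        # controller at a higher priority: apply the override configuration.
        if comm_priority > self.priority:
            self.priority = comm_priority

    def may_transmit(self, comm_priority):
        # While the low-priority backoff timer runs, only communications
        # that are not low priority are allowed to proceed.
        if self.backoff_running and comm_priority == LOW:
            return False
        return True
```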
Abstract:
A method for obtaining, by a first node, information relating to congestion on a path allowing the routing of packets from said first node destined for a second node in a packet communications network, said congestion potentially degrading said routing.
Abstract:
Quality of a connection between a terminal and a gateway is monitored in the gateway. The gateway informs a core network element handling signaling relating to the connection about the quality of the connection. The core network element triggers an access network control element to inform the terminal about the quality of the connection. The gateway or the core network element may determine when to trigger the access network control element. Upon receiving triggering information from the core network element, the access network control element informs the terminal about the quality of the connection for indicating changes in the quality of connection between the terminal and the gateway.
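The abstract leaves open how the gateway decides when to trigger the core network element; one plausible reading is a change-threshold rule, sketched below. The class name, callback, and threshold are assumptions for illustration, not details from the abstract.

```python
class Gateway:
    """Illustrative quality monitor: notifies the core network element
    (which in turn triggers the access network control element to inform
    the terminal) only when quality changes by more than a threshold."""

    def __init__(self, core_notify, threshold=10):
        self.core_notify = core_notify   # callback toward the core element
        self.threshold = threshold
        self.last_reported = None

    def on_quality_sample(self, quality):
        # Report the first sample, then only significant changes.
        if (self.last_reported is None
                or abs(quality - self.last_reported) >= self.threshold):
            self.core_notify(quality)
            self.last_reported = quality
```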
Abstract:
A system and method which improve the performance of a wireless transmission system by intelligent use of the control of the flow of data between a radio network controller (RNC) and a Node B. The system monitors certain criteria and, if necessary, adaptively increases or decreases the data flow between the RNC and the Node B. This improves the performance of the transmission system by allowing retransmitted data, signaling procedures and other data to be successfully received at a faster rate, minimizing the amount of data buffered in the Node B. Flow control is exerted to reduce buffering in the Node B upon degradation of channel qualities, and prior to a High Speed Downlink Shared Channel (HS-DSCH) handover.
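The two flow-control triggers named above (degraded channel quality, pending HS-DSCH handover) can be sketched as a credit decision the RNC applies to data it pushes toward the Node B. The function, its parameters, and the proportional scaling rule are illustrative assumptions, not the patented algorithm.

```python
def hsdpa_flow_credits(base_credits, channel_quality, handover_pending,
                       quality_floor=0.5):
    """Illustrative flow-control decision for RNC -> Node B data:
    shrink the allowed in-flight data when channel quality degrades,
    and stop new data before an HS-DSCH handover so Node B buffers drain."""
    if handover_pending:
        return 0    # drain Node B buffers ahead of the handover
    if channel_quality < quality_floor:
        # Assumed rule: scale credits in proportion to degraded quality.
        return int(base_credits * channel_quality / quality_floor)
    return base_credits
```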
Abstract:
A method for reporting bandwidth loss on a network link that couples a switch element to a network is provided. The method includes determining whether credit is unavailable to transmit a packet while a packet is available at a switch port for transmission; determining the bandwidth loss due to the lack of credit; and reporting the bandwidth loss to a processor of the switch element. The switch element includes a processor for executing firmware code; a port for receiving and transmitting network packets; and bandwidth loss logic that determines bandwidth loss when credit is unavailable to transmit a packet and the packet is available at the port, and reports the bandwidth loss to the processor.
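The bandwidth loss logic above can be sketched as per-tick accounting: any interval in which a packet is queued but no transmit credit is available counts as lost link capacity. The class name, tick model, and units are assumptions for illustration.

```python
class BandwidthLossLogic:
    """Illustrative per-port accounting: time is modeled in fixed ticks,
    and a tick counts as lost bandwidth when a packet is queued at the
    port but no transmit credit is available."""

    def __init__(self, link_bps, tick_seconds):
        self.bytes_per_tick = link_bps * tick_seconds / 8
        self.lost_bytes = 0

    def tick(self, packet_waiting, credit_available):
        # Credit-starved while a packet waits: the link capacity of this
        # tick is lost and accumulated for reporting.
        if packet_waiting and not credit_available:
            self.lost_bytes += self.bytes_per_tick

    def report(self):
        """Value the logic would hand to the switch element's processor."""
        return self.lost_bytes
```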
Abstract:
Apparatus and methods for intelligent congestion feedback are disclosed. An example apparatus includes a data interface configured to receive data packets from a source endpoint via an intermediate node. The data packets include a field indicating whether data congestion is occurring for data being sent to a destination endpoint. The example apparatus also includes a timer. The example apparatus further includes a feedback loop interface configured to selectively enable a feedback loop to the source endpoint and to transmit congestion notification (CN) messages to the source endpoint over the feedback loop. Upon receiving a data packet indicating that congestion has occurred due to the data packets from the source endpoint to the destination endpoint, the destination endpoint is configured to set the timer to a preset time value, start the timer counting down from the preset value to zero, enable the feedback loop, and transmit the CN messages.
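The timer-gated feedback loop above can be sketched as follows: a congestion-marked packet (re)arms a countdown timer and enables the loop, and CN messages flow only until the timer reaches zero. The class and method names are illustrative assumptions.

```python
class DestinationEndpoint:
    """Illustrative congestion feedback: a congestion-marked packet arms
    a countdown timer and enables the feedback loop; CN messages are
    sent only while the timer has not expired."""

    def __init__(self, preset_ticks):
        self.preset = preset_ticks
        self.remaining = 0
        self.feedback_enabled = False

    def on_packet(self, congestion_marked):
        if congestion_marked:
            self.remaining = self.preset     # (re)arm the timer
            self.feedback_enabled = True     # enable the feedback loop

    def tick(self):
        """Advance the countdown one tick; disable feedback at zero."""
        if self.remaining > 0:
            self.remaining -= 1
        if self.remaining == 0:
            self.feedback_enabled = False

    def send_cn(self):
        """Return True if a CN message would be sent this tick."""
        return self.feedback_enabled
```

Re-arming on every marked packet keeps the loop alive for as long as congestion persists, and lets it lapse quietly once marks stop arriving.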
Abstract:
Aspects of the disclosure relate to measuring and managing data traffic in one or more networks. In some embodiments, a monitor may measure the traffic at one or more locations within the network(s) or devices associated therewith to determine whether the traffic exceeds a threshold. When the traffic exceeds the threshold, one or more actions may be taken, such as issuing or transmitting a command or directive. The command or directive may advise a device or an application to throttle or reduce an input or stimulus responsible for generating the traffic. In some embodiments, throttling may be effectuated to reduce the data traffic.
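The threshold check and resulting directive above can be sketched as a single pass over measured rates. The function name and the directive fields are assumptions made for illustration.

```python
def monitor_traffic(samples_bps, threshold_bps):
    """Illustrative monitor: emit a throttle directive for each measured
    location whose traffic exceeds the threshold, advising the source to
    reduce the input responsible for the traffic."""
    directives = []
    for location, rate in samples_bps.items():
        if rate > threshold_bps:
            directives.append({
                "target": location,
                "action": "throttle",
                "excess_bps": rate - threshold_bps,   # how far over threshold
            })
    return directives
```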
Abstract:
Techniques for scheduling flows and links for transmission are described. Each link is an oriented source-destination pair and carries one or more flows. Each flow may be associated with throughput, delay, feedback (e.g., acknowledgments (ACKs)) and/or other requirements. A serving interval is determined for each flow based on the requirements for the flow. A serving interval is determined for each link based on the serving intervals for all of the flows sent on the link. Each link is scheduled for transmission at least once in each serving interval, if system resources are available, to ensure that the requirements for all flows sent on the link are met. The links are also scheduled in a manner to facilitate closed loop rate control. The links are further scheduled such that ACKs for one or more layers in a protocol stack are sent at sufficiently fast rates.
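The two-level interval computation above can be sketched directly: each flow's requirements map to a serving interval, and a link's interval is the minimum over the flows it carries, since the link must be served at least as often as its most demanding flow. The mapping from a delay bound to an interval below is an assumed example, not the patented rule.

```python
def flow_serving_interval(delay_bound_ms, packet_ms=1.0):
    """Illustrative mapping from a flow's delay requirement to a serving
    interval: serve often enough to meet the delay bound, leaving one
    packet time of slack (assumed rule for illustration)."""
    return max(packet_ms, delay_bound_ms - packet_ms)

def link_serving_interval(flow_intervals_ms):
    """A link is scheduled at least once per its serving interval, so the
    interval is the minimum over all flows carried on the link."""
    return min(flow_intervals_ms)
```

Scheduling each link once per its computed interval (resources permitting) is what guarantees every flow's requirement on that link is met, including timely delivery of ACK feedback.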