Abstract:
The present invention discloses a method of improving the performance of a server, such as a Web server using HTTP/1.1, that permits connections to clients to persist for a duration equal to a timer value. In accordance with an embodiment of the present invention, the server estimates its load and uses the estimate to modify the timer value. The timer value can be chosen to balance the need to increase the throughput seen by the clients against the server's need to service the largest possible number of clients without running out of resources. The timer value can be set to a longer value when the server load is light and a shorter value when the server load is heavy. In a preferred embodiment of the present invention, the server dynamically selects the largest timer value that guarantees that the server does not run out of resources under the current measured load.
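The abstract's core idea is a keep-alive timeout chosen as a function of measured load: long when the server is lightly loaded, short when it is heavily loaded, ideally the largest value that keeps resources within bounds. Below is a minimal sketch of that idea; the constants, the connection-count load estimate, and the linear interpolation are illustrative assumptions, not the patent's actual formula.

```python
# Illustrative sketch of a load-adaptive HTTP/1.1 keep-alive timeout.
# All names and the interpolation rule are assumptions, not the patent's formula.

MIN_TIMEOUT_S = 1       # shortest keep-alive under heavy load
MAX_TIMEOUT_S = 60      # longest keep-alive under light load
MAX_CONNECTIONS = 1000  # resource budget: connections the server can hold open


def keep_alive_timeout(active_connections: int) -> int:
    """Pick the largest timeout that should not exhaust connection slots
    under the currently measured load."""
    load = min(active_connections / MAX_CONNECTIONS, 1.0)
    # Light load -> long timeout, heavy load -> short timeout.
    timeout = MAX_TIMEOUT_S - load * (MAX_TIMEOUT_S - MIN_TIMEOUT_S)
    return max(MIN_TIMEOUT_S, int(timeout))


if __name__ == "__main__":
    for active in (10, 500, 950):
        print(active, "active connections ->", keep_alive_timeout(active), "s")
```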
Abstract:
The present invention provides improved quality of service through data transmission rate control in a network. Rate control may be applied in the downlink or uplink direction and may be statically or dynamically configured. It may be implemented at various points in the network, including, but not limited to, the wireless host, the access point, a separate device such as a server, or another location within the network. In one example of the present invention, a rate enforcement function is provided for identifying data packets to be rate-enforced or identifying the mapping between each packet and its corresponding access point. A rate decision function is also provided for determining the data rate to be enforced for each access point or each wireless host.
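The abstract separates a rate decision function (choosing the rate per access point or wireless host) from a rate enforcement function (mapping packets to the entity whose rate applies and policing them). The sketch below illustrates that split with a simple token-bucket policer; the class names, the token-bucket choice, and the per-access-point mapping are assumptions for illustration only, not the patented mechanism.

```python
import time

# Illustrative split between a rate decision function and a rate enforcement
# function. The token-bucket policer and all names are assumptions.


class RateDecision:
    """Decides the data rate (bytes/s) to enforce for each access point."""

    def __init__(self, default_rate: float = 1_000_000.0):
        self.rates = {}
        self.default_rate = default_rate

    def rate_for(self, access_point: str) -> float:
        return self.rates.get(access_point, self.default_rate)


class RateEnforcement:
    """Maps each packet to its access point and polices the decided rate."""

    def __init__(self, decision: RateDecision):
        self.decision = decision
        self.buckets = {}  # access point -> (tokens, last refill time)

    def allow(self, access_point: str, packet_len: int) -> bool:
        rate = self.decision.rate_for(access_point)
        tokens, last = self.buckets.get(access_point, (rate, time.monotonic()))
        now = time.monotonic()
        tokens = min(rate, tokens + (now - last) * rate)  # refill, 1 s burst cap
        if tokens >= packet_len:
            self.buckets[access_point] = (tokens - packet_len, now)
            return True
        self.buckets[access_point] = (tokens, now)
        return False


enforcer = RateEnforcement(RateDecision(default_rate=1500.0))
print(enforcer.allow("ap-1", 1000))  # True: within the 1 s burst allowance
print(enforcer.allow("ap-1", 1000))  # False: bucket not yet refilled
```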
Abstract:
Aspects of the invention provide a method and system for managing or coordinating data transmission in a Local Area Network (LAN) so that Quality of Service (QoS) requirements are met. A LAN resource manager (LRM) manages the LAN resources and offers users several levels of QoS. Once the LRM admits a user at a certain QoS level, that level is assured within the LAN for as long as the user remains in the LAN. A user may submit a request to the LRM to transmit data. The LRM may determine whether time allocation is possible and allocate the time slots for the data transmission. The LRM may send the time slot allocation information to an Access Server in the LAN, which may inform the user of the allocation and prepare a queue according to the slot allocation information.
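The admission flow described here is: the user requests transmission time, the LRM checks whether an allocation is possible, allocates slots, and passes the allocation to an Access Server, which informs the user and builds a queue. A minimal sketch of the admission step follows under assumed names (a fixed number of slots per frame, first-fit allocation); it is not the patented scheduler, only an illustration of the admit-or-refuse decision.

```python
# Illustrative sketch of the LRM admission step described in the abstract.
# Slot counts, names, and the first-fit allocation are assumptions.

class LanResourceManager:
    def __init__(self, slots_per_frame: int = 100):
        self.free_slots = list(range(slots_per_frame))
        self.allocations = {}  # user -> list of slot indices

    def request_slots(self, user: str, slots_needed: int):
        """Admit the request only if enough slots remain; otherwise refuse."""
        if slots_needed > len(self.free_slots):
            return None  # admission denied: QoS level could not be assured
        granted = [self.free_slots.pop(0) for _ in range(slots_needed)]
        self.allocations[user] = granted
        return granted  # sent to the Access Server, which informs the user


lrm = LanResourceManager()
print(lrm.request_slots("user-1", 10))   # granted slot indices
print(lrm.request_slots("user-2", 200))  # None: not enough capacity
```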
Abstract:
One or more systems and/or methods for dynamically setting values of Channel Access Parameters, employing a Load Supervision Manager entity, a Quality of Service (QoS) Parameters Manager entity, and an Access Point. The entities work with the Access Point, continuously monitoring network loading conditions and setting Channel Access Parameter values in response to those conditions. The Load Supervision Manager is a controlling and/or supervisory entity that sits at the network level, receives information from the QoS Parameters Manager, which sits at the subnet level, and judges prevailing loading conditions. The prevailing loading conditions include such factors as the number of Mobile Nodes and the applications or Access Categories (ACs) they are running in each subnet. The Load Supervision Manager also assesses the possible near-future loading condition in each subnet, including monitoring hand-off Mobile Nodes, and issues directives to the QoS Parameters Managers.
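The control loop described is two-level: a subnet-level QoS Parameters Manager reports loading (Mobile Node counts, their Access Categories, expected hand-offs) to a network-level Load Supervision Manager, which judges the load and drives Channel Access Parameter values at the Access Point. A hedged sketch follows; the thresholds, the contention-window values, and the class and field names are invented for illustration and are not the patent's parameter tables.

```python
# Illustrative two-level control loop: a subnet-level QoS Parameters Manager
# reports load to a network-level Load Supervision Manager, which picks
# Channel Access Parameter values (here only CWmin/CWmax) for the Access
# Point. Thresholds and values are invented for illustration.

from dataclasses import dataclass


@dataclass
class SubnetLoadReport:
    subnet: str
    mobile_nodes: int
    expected_handoffs: int  # near-future load from hand-off Mobile Nodes


class LoadSupervisionManager:
    def channel_access_params(self, report: SubnetLoadReport) -> dict:
        """Judge prevailing load and choose Channel Access Parameter values."""
        load = report.mobile_nodes + report.expected_handoffs
        if load < 10:
            return {"CWmin": 7, "CWmax": 15}    # light load: aggressive access
        if load < 30:
            return {"CWmin": 15, "CWmax": 63}
        return {"CWmin": 31, "CWmax": 255}       # heavy load: back off harder


lsm = LoadSupervisionManager()
print(lsm.channel_access_params(SubnetLoadReport("subnet-a", 25, 8)))
```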
Abstract:
An access point station responds to a request from a user station for contention-based access of a new traffic flow to a wireless transmission medium by applying a model of the wireless local area network to estimate the delay that data packets will experience when delivered through the wireless network, admitting the new flow upon determining that admission will not violate the quality of service requirements of either the new flow or the already admitted flows. For example, the access point station applies the model by determining an average packet inter-arrival rate, solving a system of nonlinear equations to determine probabilities of successful transmission, applying network stability conditions, computing an upper bound on queuing delay for the packets, computing a service delay budget for the packets, and computing an expected fraction of missed packets from the service delay budget.
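The abstract's admission check chains together a stability condition, a queuing-delay bound, a delay budget, and an expected missed-packet fraction. The skeleton below shows only that decision structure: the real model solves nonlinear equations for transmission success probabilities, whereas here an M/M/1-style delay bound stands in purely for illustration, and all numbers and names are assumptions.

```python
import math

# Skeleton of the admission decision described in the abstract. An
# M/M/1-style bound replaces the patent's nonlinear EDCA model; it only
# shows how the stability, delay-bound, and missed-fraction checks combine.


def admit_flow(arrival_rate: float,        # packets/s of the candidate flow
               existing_rate: float,       # packets/s already admitted
               service_rate: float,        # packets/s the medium can serve
               delay_budget_s: float,      # service delay budget per packet
               max_missed_fraction: float) -> bool:
    total = arrival_rate + existing_rate
    if total >= service_rate:
        return False  # network stability condition violated
    # Stand-in upper bound on queuing delay (M/M/1 mean sojourn time).
    delay_bound = 1.0 / (service_rate - total)
    # Stand-in estimate of the fraction of packets exceeding the budget,
    # assuming exponentially distributed delays.
    missed_fraction = math.exp(-(service_rate - total) * delay_budget_s)
    return delay_bound <= delay_budget_s and missed_fraction <= max_missed_fraction


print(admit_flow(arrival_rate=50, existing_rate=300, service_rate=500,
                 delay_budget_s=0.05, max_missed_fraction=0.01))
```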
Abstract:
Traffic flows of data packets from respective packet queues in wireless stations to a shared transmission medium of a wireless network are scheduled in accordance with Hybrid Controlled Channel Access (HCCA) and Enhanced Distributed Channel Access (EDCA). HCCA is applied by eliminating from consideration for HCCA access those flows for which the sum of the desired minimum age of the oldest data packet in the respective packet queue and the time of creation of the oldest data packet is greater than the present time. For flows that are not eliminated from consideration, HCCA access is granted to the flow having the smallest sum of the desired maximum age of the oldest data packet and the time of creation of the oldest data packet. When all traffic flows are eliminated from consideration for HCCA access, EDCA is applied so that traffic flows compete for access to the medium.
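The scheduling rule stated above can be read as: skip flows whose oldest packet is not yet old enough (creation time plus minimum age still in the future), grant HCCA to the remaining flow whose oldest packet has the earliest deadline (creation time plus maximum age), and fall back to EDCA contention when no flow qualifies. The sketch below follows that reading; the field names and the return convention are assumptions for illustration.

```python
# Sketch of the scheduling rule described in the abstract. Field names and
# the return convention are assumptions; the rule itself follows the text.

import time
from dataclasses import dataclass


@dataclass
class Flow:
    flow_id: str
    oldest_packet_created: float  # creation time of the oldest queued packet
    min_age: float                # desired minimum age before HCCA service
    max_age: float                # desired maximum age (deadline horizon)


def schedule(flows, now=None):
    now = time.monotonic() if now is None else now
    # Eliminate flows whose oldest packet has not yet reached its minimum age.
    eligible = [f for f in flows if f.oldest_packet_created + f.min_age <= now]
    if not eligible:
        return ("EDCA", None)  # all flows eliminated: contend via EDCA
    # Grant HCCA to the flow with the earliest (creation + max_age) deadline.
    chosen = min(eligible, key=lambda f: f.oldest_packet_created + f.max_age)
    return ("HCCA", chosen.flow_id)


flows = [Flow("voice", 0.0, 0.01, 0.05), Flow("video", 0.0, 0.02, 0.10)]
print(schedule(flows, now=0.03))  # ('HCCA', 'voice')
```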
Abstract:
A system, method, and program storage device for using a preference list to manage network load in a multi-network environment are provided. In one aspect, a preference list is generated that includes one or more networks for connecting a device in a multi-network environment. The preference list is adjusted to take into account one or more policy factors and is transmitted to the device for use in selecting a network for communication.
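The flow here is: generate a preference list of candidate networks, adjust the ordering by policy factors, and transmit the list to the device, which picks from it. The sketch below shows one plausible way to realize the adjustment step; the scoring weights and field names are invented for illustration and are not the patent's policy model.

```python
# Illustrative adjustment of a network preference list by policy factors.
# The weighting scheme and field names are assumptions.

from dataclasses import dataclass


@dataclass
class Network:
    name: str
    signal_quality: float  # 0..1, higher is better
    load: float            # 0..1, fraction of capacity in use
    cost_per_mb: float


def adjusted_preference_list(networks, load_weight=1.0, cost_weight=0.5):
    """Return the networks ordered best-first after applying policy factors."""
    def score(n: Network) -> float:
        return n.signal_quality - load_weight * n.load - cost_weight * n.cost_per_mb
    return sorted(networks, key=score, reverse=True)


candidates = [Network("wlan-1", 0.9, 0.8, 0.0),
              Network("cell-1", 0.7, 0.2, 0.1),
              Network("wlan-2", 0.6, 0.1, 0.0)]
print([n.name for n in adjusted_preference_list(candidates)])
```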