Abstract:
A proxy server, a hierarchical network system, and a distributed workload management method. According to one embodiment of this disclosure, the proxy server includes: a rate controller configured to periodically determine a dispatch rate for the requests of each service level, based on measured request-related information and on service-quality parameters associated with the service levels of the requests, wherein the sum of the dispatch rates for the respective service levels is less than or equal to a predetermined rate; and a request dispatcher configured to dispatch the requests of the corresponding service level in accordance with the dispatch rate determined by the rate controller. One aspect of the disclosure realizes a low-overhead, highly scalable, simple, and efficient workload management system that achieves QoS assurance and overload protection.
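The abstract describes a control loop in which per-service-level dispatch rates are recomputed periodically under a global cap, and a dispatcher releases queued requests at those rates. The following is a minimal Python sketch of that scheme under stated assumptions: the proportional split by QoS weight and backlog, and the names (RateController, qos_weights, dispatch_for_interval) are illustrative choices, not the patented algorithm.

```python
from collections import deque

class RateController:
    """Periodically recompute per-service-level dispatch rates.

    The sum of all returned rates never exceeds `total_rate` (the
    predetermined rate in the abstract). The split between levels is a
    simple weighted share of assumed QoS weights and measured backlogs.
    """

    def __init__(self, total_rate, qos_weights):
        self.total_rate = total_rate      # requests/second, global cap
        self.qos_weights = qos_weights    # {level: weight}, assumed QoS parameter

    def compute_rates(self, backlogs):
        # Demand of each level = weight * measured backlog (illustrative choice).
        demand = {lvl: self.qos_weights[lvl] * max(backlogs.get(lvl, 0), 1)
                  for lvl in self.qos_weights}
        total_demand = sum(demand.values())
        # Proportional split; the returned rates sum to total_rate.
        return {lvl: self.total_rate * d / total_demand for lvl, d in demand.items()}


class RequestDispatcher:
    """Dispatch queued requests of each service level at its assigned rate."""

    def __init__(self, levels):
        self.queues = {lvl: deque() for lvl in levels}

    def enqueue(self, level, request):
        self.queues[level].append(request)

    def dispatch_for_interval(self, rates, interval, send):
        # Each level may send at most rate * interval requests this period.
        for lvl, rate in rates.items():
            budget = int(rate * interval)
            while budget > 0 and self.queues[lvl]:
                send(self.queues[lvl].popleft())
                budget -= 1


if __name__ == "__main__":
    controller = RateController(total_rate=100.0,
                                qos_weights={"gold": 3, "silver": 2, "bronze": 1})
    dispatcher = RequestDispatcher(["gold", "silver", "bronze"])
    for i in range(50):
        dispatcher.enqueue("gold" if i % 2 else "bronze", f"req-{i}")
    backlogs = {lvl: len(q) for lvl, q in dispatcher.queues.items()}
    dispatcher.dispatch_for_interval(controller.compute_rates(backlogs),
                                     interval=1.0, send=print)
```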
Abstract:
A method and apparatus for improving SIP server performance are disclosed. The apparatus comprises an enqueuer for determining whether a request packet entering the server is a new request or a retransmitted request and, if retransmitted, how many times it has been retransmitted, and for enqueuing the request packet into one of a number of queues based on the result of that determination; and a dequeuer for dequeuing packets from the queues for processing based on a scheduling policy. The apparatus may further include a policy controller that communicates with the server, the enqueuer, the dequeuer, the queues, and the user, and that dynamically and automatically sets, or sets based on the user's instructions, the scheduling policy, the number of queues, each queue's capacity, the scheduling parameters, etc., based on the network and/or server load and/or on different server applications.
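A minimal sketch of the enqueue/dequeue structure described above, assuming queues keyed by retransmission count and a simple scheduling policy that drains new requests before older retransmissions. The transaction-key classification, queue capacity, and scheduling direction are assumptions for illustration; a real SIP stack would classify on transaction identifiers (e.g., Via branch and CSeq), which the abstract does not specify.

```python
from collections import defaultdict, deque

class RetransmissionAwareQueues:
    """Route SIP request packets into per-retransmission-count queues and
    drain them under a configurable scheduling policy (assumed policy below)."""

    def __init__(self, max_retransmissions=3, capacity=1000):
        self.queues = [deque() for _ in range(max_retransmissions + 1)]
        self.capacity = capacity
        self.seen = defaultdict(int)   # transaction key -> times seen so far

    def enqueue(self, txn_key, packet):
        count = self.seen[txn_key]     # 0 => new request
        self.seen[txn_key] += 1
        idx = min(count, len(self.queues) - 1)
        if len(self.queues[idx]) < self.capacity:
            self.queues[idx].append(packet)
            return True
        return False                   # queue full: drop for overload protection

    def dequeue(self):
        # Assumed scheduling policy: serve new requests first, then
        # progressively older retransmissions.
        for q in self.queues:
            if q:
                return q.popleft()
        return None


if __name__ == "__main__":
    qs = RetransmissionAwareQueues()
    qs.enqueue("call-1", "INVITE #1")
    qs.enqueue("call-1", "INVITE #1 (retransmit)")
    qs.enqueue("call-2", "REGISTER #2")
    while (pkt := qs.dequeue()) is not None:
        print(pkt)
```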
Abstract:
A method and apparatus for power-efficiency management in a virtualized cluster system. The virtualized cluster system includes a front-end physical host and at least one back-end physical host, and each back-end physical host comprises at least one virtual machine and a virtual machine manager. Flow characteristics of the virtualized cluster system are detected at a regular time cycle, a power-efficiency management policy is then generated for each back-end physical host based on the detected flow characteristics, and finally the power-efficiency management policies are performed. The method detects the real-time flow characteristics of the virtualized cluster system and derives the power-efficiency management policies from them, so as to control the power consumption of the system and perform admission control on the overall flow, thereby realizing optimal power saving while meeting quality-of-service requirements.
Abstract:
A method for power-efficiency management in a virtualized cluster system is disclosed, wherein the virtualized cluster system comprises a front-end physical host and at least one back-end physical host, and each back-end physical host comprises at least one virtual machine and a virtual machine manager for managing the at least one virtual machine. In the method, flow characteristics of the virtualized cluster system are detected at a regular time cycle, a power-efficiency management policy is then generated for each back-end physical host based on the detected flow characteristics, and finally the power-efficiency management policies are performed. The method detects the real-time flow characteristics of the virtualized cluster system and derives the power-efficiency management policies from them, so as to control the power consumption of the system and perform admission control on the overall flow, thereby realizing optimal power saving while meeting quality-of-service requirements, so that a virtualized cluster system with high power efficiency can be provided.
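The two abstracts above describe the same control loop: periodically detect traffic (flow) characteristics at the front-end host, generate a per-back-end-host power-efficiency policy, and apply it. The sketch below is a minimal illustration assuming a simple heuristic (run just enough virtual machines to cover the measured rate with headroom, and cap admission at the resulting capacity); the FlowStats/PowerPolicy structures, the 20% headroom, and the host dictionaries are hypothetical, not taken from the abstracts.

```python
import time
from dataclasses import dataclass

@dataclass
class FlowStats:
    request_rate: float       # requests/second observed at the front-end host
    avg_response_time: float  # seconds

@dataclass
class PowerPolicy:
    active_vms: int           # how many VMs this back-end host should keep running
    admit_rate: float         # per-host admission cap (requests/second)

def make_policy(stats, vms_per_host, capacity_per_vm):
    """Generate a power-efficiency policy from the detected flow characteristics.

    Assumed heuristic: enough VMs to cover the measured rate plus 20% headroom,
    with admission capped at the capacity of the VMs left running.
    """
    needed = max(1, -(-int(stats.request_rate * 1.2) // capacity_per_vm))  # ceiling division
    active = min(needed, vms_per_host)
    return PowerPolicy(active_vms=active, admit_rate=active * capacity_per_vm)

def management_cycle(detect, backend_hosts, apply_policy, period=30.0, cycles=None):
    """Detect flow characteristics at a regular time cycle and apply the
    generated policy on every back-end host (e.g. via its VM manager)."""
    n = 0
    while cycles is None or n < cycles:
        stats = detect()                              # front-end measurement
        for host in backend_hosts:
            policy = make_policy(stats, host["vms"], host["capacity_per_vm"])
            apply_policy(host, policy)                # suspend/resume VMs, set admission cap
        time.sleep(period)
        n += 1

if __name__ == "__main__":
    hosts = [{"name": "backend-1", "vms": 4, "capacity_per_vm": 50}]
    management_cycle(detect=lambda: FlowStats(request_rate=70.0, avg_response_time=0.1),
                     backend_hosts=hosts,
                     apply_policy=lambda h, p: print(h["name"], p),
                     period=0.0, cycles=1)
```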
Abstract:
A method, system, and program product for controlling memory overload in a computer system. The invention determines the heap utilization of a server; determines a maximum session lifetime at a configured percentile of at least one session; determines a traffic rate (comprising an average traffic rate and a traffic-rate variance, both received from a proxy server); and calculates a maximum traffic rate, the maximum traffic rate being the traffic rate at which heap utilization reaches a maximum heap percentage.
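The relationship described here can be reasoned about with a Little's-law style argument: concurrent sessions ≈ traffic rate × session lifetime, and the heap they hold ≈ concurrent sessions × per-session footprint, so the maximum admissible rate is the one at which that product reaches the maximum heap percentage. The sketch below is a worked illustration under assumptions: the per-session footprint parameter and the variance-based safety margin are not given in the abstract.

```python
import math

def max_traffic_rate(heap_size_bytes, max_heap_fraction,
                     session_lifetime_s, bytes_per_session,
                     rate_variance=0.0, safety_sigmas=2.0):
    """Maximum request rate (requests/second) such that the heap held by
    in-flight sessions stays at or below max_heap_fraction of the heap.

    Little's law: concurrent_sessions = rate * session_lifetime_s, so
        rate * session_lifetime_s * bytes_per_session
            <= max_heap_fraction * heap_size_bytes.
    A margin of safety_sigmas standard deviations of the measured traffic
    rate (assumed heuristic) is subtracted to absorb bursts.
    """
    budget_bytes = max_heap_fraction * heap_size_bytes
    raw_rate = budget_bytes / (session_lifetime_s * bytes_per_session)
    return max(0.0, raw_rate - safety_sigmas * math.sqrt(rate_variance))

if __name__ == "__main__":
    # 2 GiB heap, keep session data under 60% of it, 95th-percentile session
    # lifetime of 300 s, roughly 50 KiB of heap per session (all illustrative).
    rate = max_traffic_rate(heap_size_bytes=2 * 1024**3, max_heap_fraction=0.6,
                            session_lifetime_s=300, bytes_per_session=50 * 1024,
                            rate_variance=4.0)
    print(f"admit at most {rate:.1f} requests/second")
```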
Abstract:
A method for determining a suspected memory leak, including: sampling the throughput and memory usage of an application server; based on the sampled throughput, monitoring whether the throughput decreases continually, and based on the sampled memory usage, monitoring whether the memory usage remains stable within a predefined range; and, in response to a continual decrease of the throughput while the memory usage remains stable within the predefined range, determining that the application server is suspected of having a memory leak. The solution of the present invention frees an administrator from the burden of manually identifying suspect servers, and can identify a suspect server at runtime and further determine whether it actually has a memory leak.
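A minimal sketch of the detection rule stated in this abstract: sample throughput and memory usage, and flag the server as a memory-leak suspect when throughput decreases continually while memory usage stays within a stable band. The window length and the 5% band chosen here are illustrative assumptions.

```python
def is_leak_suspect(throughput_samples, memory_samples, stable_band=0.05):
    """Return True if the server is suspected of a memory leak.

    Rule from the abstract: throughput decreases continually while memory
    usage remains stable within a predefined range. "Stable" here means the
    spread of memory samples stays within stable_band of their mean
    (illustrative threshold).
    """
    if len(throughput_samples) < 2 or len(memory_samples) < 2:
        return False
    throughput_decreasing = all(b < a for a, b in
                                zip(throughput_samples, throughput_samples[1:]))
    mean_mem = sum(memory_samples) / len(memory_samples)
    memory_stable = (max(memory_samples) - min(memory_samples)) <= stable_band * mean_mem
    return throughput_decreasing and memory_stable

if __name__ == "__main__":
    # Throughput keeps dropping while memory hovers near a plateau: suspect.
    print(is_leak_suspect([950, 900, 820, 700, 550], [3.90, 3.92, 3.91, 3.93, 3.92]))  # True
    # Throughput recovers: not a suspect.
    print(is_leak_suspect([950, 900, 980, 970, 990], [2.1, 2.2, 2.1, 2.3, 2.2]))       # False
```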