Abstract:
An apparatus, system, and method are provided for modeling and analyzing a plurality of computing workloads. The apparatus, system, and method include a data collection module for gathering performance data associated with operation of a computer system. A modeling module executes a plurality of models in parallel, in series, or according to a hierarchical relationship. A data analysis module presents analysis data compiled from the modeling module to a user, typically in the form of a graph. Finally, a framework manages the data collection module, the modeling module, and the data analysis module according to a predefined data and model flow.
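A minimal sketch of how such a framework might coordinate a data collection module, a set of hierarchically related models, and an analysis step under a predefined flow follows; the class names (DataCollector, Model, Framework) and the metrics gathered are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of a data collection / modeling / analysis framework.
# All names and metrics are illustrative; the abstract does not specify an API.
from typing import Callable, Dict, List


class DataCollector:
    """Gathers performance data associated with operation of a computer system."""
    def collect(self) -> Dict[str, float]:
        # In practice this would read OS or middleware performance counters.
        return {"arrival_rate": 120.0, "service_time": 0.004}


class Model:
    """A single performance model; child models give a hierarchical relationship."""
    def __init__(self, name: str, fn: Callable[[Dict[str, float]], float],
                 children: List["Model"] = None):
        self.name, self.fn, self.children = name, fn, children or []

    def run(self, data: Dict[str, float]) -> Dict[str, float]:
        results = {self.name: self.fn(data)}
        for child in self.children:          # hierarchical execution
            results.update(child.run(data))
        return results


class Framework:
    """Manages collection, modeling, and analysis according to a predefined flow."""
    def __init__(self, collector: DataCollector, models: List[Model]):
        self.collector, self.models = collector, models

    def run_flow(self) -> Dict[str, float]:
        data = self.collector.collect()
        analysis: Dict[str, float] = {}
        for model in self.models:            # run in series; could run in parallel
            analysis.update(model.run(data))
        return analysis                      # the analysis module would graph this


if __name__ == "__main__":
    utilization = Model("utilization",
                        lambda d: d["arrival_rate"] * d["service_time"])
    print(Framework(DataCollector(), [utilization]).run_flow())
```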
Abstract:
Provided are a method, system, and program for distributing application transactions among work servers. Application transaction rates are determined for a plurality of applications supplying transactions to process. For each application, available partitions in at least one server are assigned to process the application transactions based on partition transaction rates of partitions in the servers. For each application, a determination is made of weights for each server including partitions assigned to the application based on a number of partitions in the server assigned to the application. The determined weights for each application are used to distribute application transactions among the servers including partitions assigned to the application.
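One way to read the weighting step is that each server's weight for an application is proportional to the number of partitions it has assigned to that application, and transactions are then routed in proportion to those weights. The sketch below illustrates that reading; the data layout and the distribute function are assumptions.

```python
# Hypothetical sketch: weight each server by how many partitions it has
# assigned to an application, then distribute that application's transactions
# in proportion to the weights. Data layout and names are assumed.
import random
from collections import Counter

# Assumed assignment: application -> {server: number of partitions assigned}
assignments = {
    "app_a": {"server1": 3, "server2": 1},
    "app_b": {"server2": 2, "server3": 2},
}


def distribute(app, n_transactions):
    """Route transactions to servers in proportion to assigned partitions."""
    servers = list(assignments[app])
    weights = [assignments[app][s] for s in servers]   # partition counts as weights
    routed = random.choices(servers, weights=weights, k=n_transactions)
    return Counter(routed)


if __name__ == "__main__":
    print(distribute("app_a", 1000))   # roughly a 3:1 split between server1 and server2
```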
Abstract:
The present invention discloses a method, system, and article of manufacture for autonomic identification of an optimum hardware configuration for a Web infrastructure. A plurality of performance objectives and a plurality of best practice rules for the Web infrastructure are established first. Then, a search space and a current configuration performance index within the search space are established. Next, a database of available hardware models is searched for a best-fit configuration based on the established plurality of best practice rules and the established current configuration performance index. The performance data of the found best-fit configuration is calculated using a performance simulator and then compared to the established plurality of performance objectives. If the calculated performance data meet the established plurality of performance objectives, then the best-fit configuration is designated as the optimum hardware configuration. Otherwise, the search space is narrowed and searching is continued until such an optimum hardware configuration is found.
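The search loop can be pictured as: pick a best-fit candidate from the remaining search space, simulate it, and either accept it or narrow the space and retry. The sketch below follows that shape; the hardware records, the simulator stand-in, and the best-fit rule are assumptions, not the patented rules.

```python
# Hypothetical sketch of the narrowing search loop. The hardware records, the
# simulator stand-in, and the best-fit rule are illustrative assumptions.

hardware_models = [
    {"name": "model_small",  "cpus": 4,  "throughput": 500.0},
    {"name": "model_medium", "cpus": 8,  "throughput": 1100.0},
    {"name": "model_large",  "cpus": 16, "throughput": 2300.0},
]


def simulate_performance(config):
    # Stand-in for the performance simulator: predicted requests per second.
    return config["throughput"] * 0.9


def find_optimum(objective_rps, perf_index):
    search_space = list(hardware_models)
    while search_space:
        # Assumed best-fit rule: smallest candidate whose capacity is at least
        # the current configuration performance index.
        candidates = [c for c in search_space if c["throughput"] >= perf_index]
        if not candidates:
            return None
        best_fit = min(candidates, key=lambda c: c["cpus"])
        if simulate_performance(best_fit) >= objective_rps:
            return best_fit                    # meets the performance objectives
        search_space.remove(best_fit)          # narrow the search space and retry
    return None


if __name__ == "__main__":
    print(find_optimum(objective_rps=900.0, perf_index=600.0))
```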
Abstract:
Techniques are provided for automating allocation of resources based on business decisions. An impact of a business decision is quantified in terms of information technology (IT) metrics. The resources that may be needed to address the impact are estimated. The estimated resources are provisioned.
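A minimal sketch of the three-step pipeline, with assumed conversion factors for the business-decision impact and the per-server capacity:

```python
# Hypothetical three-step pipeline: quantify business impact as IT metrics,
# estimate the resources needed, then provision them. All factors are assumed.
import math


def quantify_impact(expected_new_users):
    """Translate a business decision (e.g., a promotion) into IT metrics."""
    return {"extra_requests_per_sec": expected_new_users * 0.2}


def estimate_resources(metrics, rps_per_server=150.0):
    """Estimate how many additional servers the quantified impact requires."""
    return math.ceil(metrics["extra_requests_per_sec"] / rps_per_server)


def provision(n_servers):
    print(f"provisioning {n_servers} additional server(s)")   # placeholder action


if __name__ == "__main__":
    provision(estimate_resources(quantify_impact(expected_new_users=5000)))
```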
Abstract:
A server allocation controller provides an improved distributed data processing system for facilitating dynamic allocation of computing resources. The server allocation controller supports transaction and parallel services across multiple data centers enabling dynamic allocation of computing resources based on the current workload and service level agreements. The server allocation controller provides a method for dynamic re-partitioning of the workload to handle workload surges. Computing resources are dynamically assigned among transaction and parallel application classes, based on the current and predicted workload. Based on a service level agreement, the server allocation controller monitors and predicts the load on the system. If the current or predicted load cannot be handled with the current system configuration, the server allocation controller determines additional resources needed to handle the current or predicted workload. The server cluster is reconfigured to meet the service level agreement.
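The monitor/predict/reconfigure cycle might look roughly like the sketch below; the trend-based predictor, the SLA headroom factor, and the capacity arithmetic are illustrative assumptions rather than the controller's actual method.

```python
# Hypothetical sketch of the monitor/predict/reconfigure cycle. The trend-based
# predictor, SLA headroom, and capacity arithmetic are illustrative assumptions.
import math


def predict_load(recent_loads):
    """Naive trend-based prediction of the next-interval load."""
    if len(recent_loads) < 2:
        return recent_loads[-1]
    trend = recent_loads[-1] - recent_loads[-2]
    return recent_loads[-1] + trend


def additional_servers_needed(predicted_load, current_servers,
                              capacity_per_server, sla_headroom=0.8):
    """Servers to add so the predicted load stays within the SLA headroom."""
    usable = current_servers * capacity_per_server * sla_headroom
    if predicted_load <= usable:
        return 0
    deficit = predicted_load - usable
    return math.ceil(deficit / (capacity_per_server * sla_headroom))


if __name__ == "__main__":
    observed = [800.0, 950.0, 1150.0]            # transactions per second
    predicted = predict_load(observed)           # 1350 tx/s with a simple trend
    print(additional_servers_needed(predicted, current_servers=4,
                                    capacity_per_server=300.0))
```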
Abstract:
Automated or autonomic techniques for managing deployment of one or more resources in a computing environment based on varying workload levels. The automated techniques may comprise predicting a future workload level based on data associated with the computing environment. Then, an estimation is performed to determine whether a current resource deployment is insufficient, sufficient, or overly sufficient to satisfy the future workload level. One or more actions are then caused to be taken when the current resource deployment is estimated to be insufficient or overly sufficient to satisfy the future workload level. Actions may comprise resource provisioning, resource tuning, and/or admission control.
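A minimal sketch of the estimate-then-act step, assuming simple utilization thresholds to classify the deployment as insufficient, sufficient, or overly sufficient:

```python
# Hypothetical sketch: classify the current deployment against a predicted
# workload and pick an action. Thresholds and action names are assumptions.

def classify_deployment(predicted_load, capacity, low=0.4, high=0.85):
    utilization = predicted_load / capacity
    if utilization > high:
        return "insufficient"
    if utilization < low:
        return "overly sufficient"
    return "sufficient"


def act(state):
    actions = {
        "insufficient": "provision resources and/or tighten admission control",
        "overly sufficient": "release resources or retune for efficiency",
        "sufficient": "no action",
    }
    return actions[state]


if __name__ == "__main__":
    state = classify_deployment(predicted_load=900.0, capacity=1000.0)
    print(state, "->", act(state))
```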
Abstract:
Techniques are provided for allocating resources. Performance metrics for a transaction are received. It is determined whether one or more service level objectives are being violated based on the received performance metrics. In response to determining that the one or more service level objectives are being violated, additional resources are allocated to the transaction. In response to allocating the additional resources, a resource allocation event is published.
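The violation-check, allocate, and publish sequence can be sketched as below; the response-time objective, the allocation step, and the in-memory event list standing in for a publish/subscribe channel are all assumptions.

```python
# Hypothetical sketch: check received metrics against a service level objective,
# allocate on violation, then publish an allocation event. The in-memory list
# stands in for a publish/subscribe channel.

events = []


def allocate_additional_resources(transaction):
    return 1                                      # e.g., one extra worker/server


def handle_metrics(transaction, response_time_ms, slo_ms=200.0):
    if response_time_ms <= slo_ms:
        return                                    # objective met, nothing to do
    allocated = allocate_additional_resources(transaction)
    events.append({"type": "resource_allocation",  # publish the allocation event
                   "transaction": transaction, "resources": allocated})


if __name__ == "__main__":
    handle_metrics("checkout", response_time_ms=350.0)
    print(events)
```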
Abstract:
An on-demand manager provides an improved distributed data processing system for facilitating dynamic allocation of computing resources among multiple domains based on a current workload and service level agreements. Based on a service level agreement, the on-demand manager monitors and predicts the load on the system. If the current or predicted load cannot be handled with the current system configuration, the on-demand manager determines additional resources needed to handle the workload. If the service level agreement violations cannot be handled by reconfiguring resources at a domain, the on-demand manager sends a resource request to other domains. These other domains analyze their own commitments and may accept the resource request, reject the request, or counter-propose with an offer of resources and a corresponding service level agreement. Once the requesting domain has acquired resources, workload load balancers are reconfigured to allocate some of the workload from the requesting site to the acquired remote resources.
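The accept/reject/counter-propose exchange might be modeled as in the sketch below, where each peer domain answers a request based on how many servers it has free; the message shapes and the greedy acquisition loop are assumptions.

```python
# Hypothetical sketch of the cross-domain negotiation: a requesting domain asks
# peer domains for servers; each peer accepts, rejects, or counter-proposes
# based on its own free capacity. Message shapes are assumptions.

def respond(free_servers, requested):
    if free_servers >= requested:
        return {"decision": "accept", "servers": requested}
    if free_servers > 0:
        return {"decision": "counter", "servers": free_servers}   # partial offer
    return {"decision": "reject", "servers": 0}


def acquire(requested, peer_domains):
    acquired = 0
    for name, free in peer_domains.items():
        reply = respond(free, requested - acquired)
        print(name, reply)
        acquired += reply["servers"]
        if acquired >= requested:
            break
    return acquired    # load balancers would then shift work to these servers


if __name__ == "__main__":
    print("acquired:", acquire(6, {"domain_b": 2, "domain_c": 0, "domain_d": 5}))
```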
Abstract:
An apparatus and method are provided for modeling queuing systems with highly variable traffic arrival rates. The apparatus and method include a simple, intuitive means to associate a value with a pattern of highly variable arrival rates, and a means to accurately model queuing delays in systems characterized by bursts of arrival activity. The overall queuing delay is determined as a weighted sum of queuing delays: one weighting factor is applied to the queuing delay computed for a random arrival rate, and a different weighting factor is applied to the queuing delay computed for a bursty, highly variable arrival rate. The weighting factors are variants of the server utilization. The model facilitates specification of server characteristics and configurations to meet response time metrics.
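The weighted-sum structure can be illustrated as follows; the M/M/1 waiting-time formula, the burst multiplier, and the choice of utilization-derived weights are placeholders used only to show the shape of the computation, not the patent's actual formulas.

```python
# Hypothetical illustration of a weighted-sum queuing delay. The M/M/1 waiting
# time, the burst multiplier, and the utilization-derived weights are assumed
# placeholders showing only the structure described in the abstract.

def mm1_wait(arrival_rate, service_rate):
    """Mean waiting time for a random (Poisson) arrival stream (M/M/1)."""
    rho = arrival_rate / service_rate
    return rho / (service_rate * (1.0 - rho))


def bursty_wait(arrival_rate, service_rate, burstiness):
    """Illustrative inflated waiting time for a bursty arrival pattern."""
    return mm1_wait(arrival_rate, service_rate) * burstiness


def combined_wait(arrival_rate, service_rate, burstiness):
    rho = arrival_rate / service_rate             # server utilization
    w_random, w_bursty = 1.0 - rho, rho           # assumed utilization-based weights
    return (w_random * mm1_wait(arrival_rate, service_rate)
            + w_bursty * bursty_wait(arrival_rate, service_rate, burstiness))


if __name__ == "__main__":
    print(round(combined_wait(arrival_rate=80.0, service_rate=100.0,
                              burstiness=3.0), 4))   # 0.104 seconds
```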
Abstract:
An apparatus, system, and method are disclosed for provisioning a database resource within a grid database system. The federation apparatus includes an analysis module and a provision module. The analysis module analyzes a data query stream from an application to a database instance and determines if the data query stream exhibits a predetermined performance attribute. The provision module provisions a database resource in response to a determination that the data query stream exhibits the predetermined performance attribute. The provisioned database resource may be a database instance or a cache. Advantageously, the provisioning of the new database resource is substantially transparent to a client on the database system.
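A minimal sketch of the analyze-then-provision decision, assuming (purely for illustration) that the predetermined performance attribute is a high ratio of repeated queries, which favors provisioning a cache:

```python
# Hypothetical sketch: analyze a query stream for a performance attribute
# (assumed here to be a high ratio of repeated queries) and provision either a
# cache or a new database instance in response.

def exhibits_attribute(queries, repeat_threshold=0.5):
    """True when the stream repeats the same queries often (assumed attribute)."""
    repeat_ratio = 1.0 - len(set(queries)) / len(queries)
    return repeat_ratio >= repeat_threshold


def provision(queries):
    if exhibits_attribute(queries):
        return "provision cache"              # repeated reads favor a cache
    return "provision database instance"      # otherwise scale out with an instance


if __name__ == "__main__":
    stream = ["SELECT * FROM items WHERE id = 1"] * 8 + ["SELECT * FROM users"] * 2
    print(provision(stream))   # the client connection is unaffected by the change
```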