Abstract:
A method and apparatus for managing CPU resources of a logically partitioned computing environment without shared memory access. A logical partition needing additional resources sends a message requesting such resources to a central domain manager, which sends messages to other partitions in the same group requesting that they assess their ability to donate resources to the requesting partition. Upon receiving such assessment request, each logical partition assesses its ability to donate resources to the requesting partition and responds accordingly to the domain manager. If at least one partition responds that it can donate resources to the requesting partition, the domain manager sends a message to a selected donor partition requesting that it reconfigure itself to donate resources to the requesting partition. Upon receiving a notification from the donor partition that it has successfully reconfigured itself, the domain manager notifies the requesting partition, which reconfigures itself to accept the donated resources.
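The request/assess/donate message flow reads more concretely as code. The following Python sketch is only an illustration of that flow, assuming illustrative class names, method names, and a trivial donor-selection policy; none of these are taken from the abstract.

```python
# Sketch of the donation protocol described above. All names (DomainManager,
# Partition, assess_donation, etc.) are illustrative assumptions.

class Partition:
    def __init__(self, name, spare_cpus):
        self.name = name
        self.spare_cpus = spare_cpus

    def assess_donation(self, amount):
        # Each partition judges for itself whether it can give up resources.
        return self.spare_cpus >= amount

    def donate(self, amount):
        # Reconfigure to release the resources; report success.
        self.spare_cpus -= amount
        return True


class DomainManager:
    def __init__(self, group):
        self.group = group  # partitions in the same management group

    def handle_request(self, requester, amount):
        # Ask every other partition in the group to assess a donation.
        willing = [p for p in self.group
                   if p is not requester and p.assess_donation(amount)]
        if not willing:
            return False  # no partition responded that it can donate
        donor = willing[0]  # selection policy left open in this sketch
        if donor.donate(amount):
            # Notify the requester so it can reconfigure and accept them.
            requester.spare_cpus += amount
            return True
        return False


# Usage: partition "LPAR1" requests one CPU from its group.
group = [Partition("LPAR1", 0), Partition("LPAR2", 3), Partition("LPAR3", 1)]
manager = DomainManager(group)
print(manager.handle_request(group[0], 1))  # True if a donor was found
```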
Abstract:
A method and apparatus for obtaining a local performance measure for a particular server in a particular tier in a transaction environment in which transactions pass through multiple tiers with multiple servers at each tier. The contribution from the particular server to the total end-to-end response time for a set of transactions is scaled by the ratio of transactions passing through the particular tier to transactions passing through the particular server to obtain a scaled contribution from the particular tier. This is added to the contribution from outside the particular tier to obtain a modified total end-to-end response time from the perspective of the particular server. The modified total end-to-end response time is divided by the number of transactions in the set to obtain a modified average end-to-end response time from the perspective of the particular server, which is used to control allocation of resources to the server.
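The scaling arithmetic can be made concrete with a small worked example. The function below is a hedged sketch; the argument names and the sample figures are assumptions, since the abstract gives the relationships but no concrete values.

```python
# Sketch of the response-time arithmetic described above. The function and
# argument names are illustrative; the abstract does not prescribe them.

def modified_avg_response_time(server_contrib_sec,
                               outside_tier_contrib_sec,
                               tier_txns, server_txns, set_txns):
    # Scale the server's contribution by the tier/server transaction ratio
    # so it stands in for the whole tier from this server's perspective.
    scaled_tier_contrib = server_contrib_sec * (tier_txns / server_txns)
    # Add the contribution from outside the tier to get a modified total.
    modified_total = scaled_tier_contrib + outside_tier_contrib_sec
    # Average over all transactions in the set.
    return modified_total / set_txns


# Example with assumed figures: one server handled 250 of a tier's 1000
# transactions and contributed 50 s; 120 s were spent outside the tier.
print(modified_avg_response_time(50.0, 120.0, 1000, 250, 1000))  # 0.32 s
```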
Abstract:
Techniques for globally managing systems are provided. One or more measurable effects of at least one hypothetical action to achieve a management goal are determined at a first system manager. The one or more measurable effects are sent from the first system manager to a second system manager. At the second system manager, one or more procedural actions to achieve the management goal are determined in response to the one or more received measurable effects. The one or more procedural actions are executed to achieve the management goal.
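A minimal sketch of the two-manager exchange follows, assuming illustrative names and an invented example goal; the abstract specifies the split between measurable effects and procedural actions but not how either is represented.

```python
# Hedged sketch: the first manager states measurable effects, the second
# translates them into local procedural actions. All names are assumptions.

class FirstManager:
    def plan(self, goal):
        # Determine measurable effects of a hypothetical action, e.g.
        # "reduce average response time by 20%", without prescribing how.
        return {"goal": goal, "metric": "avg_response_time",
                "target_reduction": 0.20}


class SecondManager:
    def realize(self, effects):
        # Determine procedural actions that achieve the received effects,
        # then execute them to meet the management goal.
        actions = [f"add capacity until {effects['metric']} "
                   f"drops by {effects['target_reduction']:.0%}"]
        for action in actions:
            print("executing:", action)


effects = FirstManager().plan("meet service-level objective")
SecondManager().realize(effects)
```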
Abstract:
A method and apparatus for enforcing capacity limitations such as those imposed by software license agreements in an information handling system in which a physical machine is divided into a plurality of logical partitions, each of which is allocated a defined portion of processor resources by a logical partition manager. A software license manager specifies a maximum allowed consumption of processor resources by a program executing in one of the logical partitions. A workload manager also executing in the partition measures the actual consumption of processor resources by the logical partition over a specified averaging interval and compares it with the maximum allowed consumption. If the actual consumption exceeds the maximum allowed consumption, the workload manager calculates a capping pattern and interacts with the logical partition manager to cap the actual consumption of processor resources by the partition in accordance with the calculated capping pattern. To provide additional capping flexibility, partitions are assigned phantom weights that the logical partition manager adds to the total partition weight to determine whether the partition has exceeded its allowed share of processor resources for capping purposes. The logical partition thus becomes a “container” for the licensed program with an enforced processing capacity less than that of the entire machine.
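The capping logic can be illustrated with a small sketch. The units, the length of the averaging interval, and the specific formulas below are assumptions; the abstract only states that measured consumption is compared to the licensed maximum, that a capping pattern is calculated when the limit is exceeded, and that phantom weights inflate the total weight used for the capping test.

```python
# Hedged sketch of the capping decision described above. The capacity units
# and both formulas are assumptions chosen for illustration.

def capping_pattern(avg_consumption, licensed_cap):
    """Return the fraction of time the partition should run capped."""
    if avg_consumption <= licensed_cap:
        return 0.0  # within the license limit: no capping needed
    # Cap just enough of the time to bring consumption down to the limit.
    return 1.0 - licensed_cap / avg_consumption


def effective_share(partition_weight, phantom_weight, total_weight):
    # A phantom weight inflates the group total used for the capping test,
    # shrinking the share the partition may consume before being capped.
    return partition_weight / (total_weight + phantom_weight)


# Example: a measured average of 120 capacity units over the averaging
# interval against a licensed maximum of 100 units.
print(capping_pattern(120.0, 100.0))   # ~0.167: capped about 1/6 of the time
print(effective_share(50, 25, 200))    # share shrinks from 0.25 to ~0.222
```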
Abstract:
Allocation of shareable resources of a computing environment is dynamically adjusted to balance the workload of that environment. Workload is managed across two or more partitions of a plurality of partitions of the computing environment. The managing includes dynamically adjusting allocation of a shareable resource of at least one partition of the two or more partitions in order to balance workload goals of the two or more partitions.
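The abstract leaves the adjustment policy open; the sketch below shows just one assumed illustration, shifting a small slice of a shareable resource toward partitions that are missing their workload goals. The policy, step size, and performance-index representation are all assumptions.

```python
# Hedged sketch: rebalance a shareable resource toward partitions whose
# performance index (pi) indicates they are missing their workload goals.

def rebalance(partitions, step=0.05):
    # partitions: name -> {"share": fraction of the resource,
    #                      "pi": performance index, > 1.0 = missing goals}
    missing = [n for n, p in partitions.items() if p["pi"] > 1.0]
    meeting = [n for n, p in partitions.items() if p["pi"] <= 1.0]
    if not missing or not meeting:
        return partitions  # nothing to rebalance
    # Take a small slice from each partition meeting its goals and split it
    # evenly among the partitions missing theirs.
    taken = 0.0
    for n in meeting:
        give = min(step, partitions[n]["share"])
        partitions[n]["share"] -= give
        taken += give
    for n in missing:
        partitions[n]["share"] += taken / len(missing)
    return partitions


parts = {"LPAR1": {"share": 0.50, "pi": 1.30},
         "LPAR2": {"share": 0.50, "pi": 0.80}}
print(rebalance(parts))  # LPAR1 gains the slice taken from LPAR2
```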
Abstract:
According to one embodiment of the present invention, a system, method and computer program product are provided for integrating an external workload manager with a database system. The method according to one embodiment comprises: receiving a request in a database component, the request including a cross component token; starting a new unit of work in workload management software in the database component, in response to the request; determining, from a cross component workload management unit, a transaction class and a synchronization code using the database component; finding an internal workload in the workload management software that matches the transaction class and the synchronization code of the cross component workload management unit; and using the matching internal workload for the new unit of work.
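The enumerated steps map naturally onto a short sketch. Everything below (the token fields, the lookup key, the dictionary of internal workloads) is an assumption chosen only to illustrate the flow.

```python
# Hedged sketch of the integration flow described above; all class names,
# fields, and the lookup structure are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class CrossComponentToken:
    transaction_class: str
    sync_code: int


def handle_request(request, internal_workloads):
    # 1. The database component receives a request carrying the token.
    token: CrossComponentToken = request["cross_component_token"]
    # 2. Start a new unit of work in the workload management software.
    unit_of_work = {"state": "started", "workload": None}
    # 3-4. Find the internal workload whose transaction class and
    #      synchronization code match the cross-component values.
    match = internal_workloads.get((token.transaction_class, token.sync_code))
    if match is None:
        raise LookupError("no internal workload matches the token")
    # 5. Classify the new unit of work under the matching workload.
    unit_of_work["workload"] = match
    return unit_of_work


workloads = {("ONLINE_BANKING", 7): "WLM_HIGH_PRIORITY"}
req = {"cross_component_token": CrossComponentToken("ONLINE_BANKING", 7)}
print(handle_request(req, workloads))
```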
Abstract:
In a computing system having swappable and non-swappable address spaces, wherein the computing system includes an operating system that includes a Real Storage Manager (RSM), a Systems Resource Manager (SRM) and a Region Control Task (RCT), a method for recovering swappable fixed non-preferred memory is provided which includes receiving a request from the operating system to configure an area of real memory to create an intercepted swappable address space, wherein the intercepted swappable address space includes a flagged fixed frame element identified for configuration, examining the intercepted swappable address space so as to determine if the intercepted swappable address space will remain swappable, requesting the SRM to coordinate the swapping process, quiescing the intercepted address space, generating a first return code responsive to the intercepted swappable address space remaining swappable, communicating the first return code to the RCT so as to cause the RCT to respond to the first return code, instructing the RSM to proceed based on the first return code, examining the intercepted swappable address space so as to identify the flagged frame elements, exchanging the flagged frame elements with unflagged frame elements, updating dynamic address translation tables, and returning a performance code to the RCT so as to indicate recovery success or recovery failure. A method for recovering swappable fixed non-preferred memory where the originally swappable address space has been converted into non-swappable address space is also provided.
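A condensed, runnable sketch of the recovery sequence follows. RSM, SRM, and RCT come from the abstract, but every class, attribute, and return value below is an illustrative assumption, and the SRM and RCT interactions are collapsed into comments.

```python
# Hedged sketch: exchange flagged (fixed, non-preferred) frames for
# unflagged ones in an intercepted swappable address space.

class AddressSpace:
    def __init__(self, frames):
        # frames: list of (frame_id, flagged) pairs; flagged marks fixed
        # non-preferred frames identified for configuration.
        self.frames = frames
        self.quiesced = False

    def remains_swappable(self):
        return True  # placeholder for the examination step

    def quiesce(self):
        self.quiesced = True


def recover_fixed_frames(space, free_unflagged_frames):
    """Exchange flagged frames for unflagged ones; return a result code."""
    if not space.remains_swappable():
        return "FAILURE"             # space converted to non-swappable
    space.quiesce()                  # SRM-coordinated quiesce (simplified)
    for i, (frame_id, flagged) in enumerate(space.frames):
        if flagged and free_unflagged_frames:
            # Exchange the flagged frame and (conceptually) update the
            # dynamic address translation tables to point at the new frame.
            space.frames[i] = (free_unflagged_frames.pop(), False)
    return "SUCCESS"                 # performance code returned to the RCT


space = AddressSpace([(10, True), (11, False), (12, True)])
print(recover_fixed_frames(space, [90, 91]), space.frames)
```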
Abstract:
An impact of configuration changes on controllers is projected. This projection quantifies the impact for each controller affected by the change, such that it is known by a quantifiable value how much the change impacts the controller. In order to project the impact, a projected I/O velocity of the controller is determined.
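The abstract does not define the I/O velocity formula, so the sketch below assumes a common form (productive connect time over connect time plus delay) purely to show how a projected velocity can quantify the impact of a configuration change on a controller.

```python
# Hedged sketch: quantify the impact of a configuration change as the shift
# in an assumed I/O velocity measure. The formula and figures are assumptions.

def io_velocity(connect_time, delay_time):
    # Fraction of time the controller spent doing productive I/O work.
    return connect_time / (connect_time + delay_time)


def projected_impact(connect_time, delay_time, projected_delay_time):
    """Quantify how much a configuration change moves the I/O velocity."""
    current = io_velocity(connect_time, delay_time)
    projected = io_velocity(connect_time, projected_delay_time)
    return projected - current   # negative value = projected degradation


# Example with assumed figures: removing a path is expected to raise delay
# from 20 ms to 35 ms per interval against 80 ms of connect time.
print(projected_impact(80.0, 20.0, 35.0))  # about -0.10
```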
Abstract:
A method and apparatus for controlling the number of servers in a multisystem cluster. Incoming work requests are organized into service classes, each of which has a queue serviced by servers across the cluster. Each service class has defined for it a local performance index for each particular system of the cluster and a multisystem performance index for the cluster as a whole. Each system selects one service class as a donor class for donating system resources and another service class as a receiver class for receiving system resources, based upon how well the service classes are meeting their goals. Each system then determines the resource bottleneck causing the receiver class to miss its goals. If the resource bottleneck is the number of servers, each system determines whether and how many servers should be added to the receiver class, based upon whether the positive effect of adding such servers on the performance index for the receiver class outweighs the negative effect of adding such servers on the performance index for the donor class. If a system determines that servers should be added to the receiver class, it then determines the system in the cluster to which the servers should be added, based upon the effect on other work on that system. To make this latter determination, each system first determines whether another system has enough idle capacity and, if so, lets that system add servers. If no system has sufficient idle capacity, each system then determines whether the local donor class will miss its goals if servers are started locally. If not, the servers are started on the local system. Otherwise, each system determines where the donor class will be hurt the least and acts accordingly. To ensure the availability of a server capable of processing each of the work requests in the queue, each system determines whether there is a work request in the queue with an affinity only to a subset of the cluster that does not have servers for the queue and, if so, starts a server for the queue on a system in the subset to which the work request has an affinity.
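The core decision, weighing the receiver class's projected gain against the donor class's projected loss and then choosing where to start servers, can be sketched as follows; the performance-index projections, thresholds, and per-system attributes are assumptions made for illustration.

```python
# Hedged sketch of the server-count decision described above. Performance
# index (PI) > 1.0 means a service class is missing its goals.

def should_add_servers(receiver_pi_now, receiver_pi_projected,
                       donor_pi_now, donor_pi_projected):
    # Add servers only if the receiver class improves by more than the
    # donor class is hurt.
    receiver_gain = receiver_pi_now - receiver_pi_projected
    donor_loss = donor_pi_projected - donor_pi_now
    return receiver_gain > donor_loss


def choose_system(systems, cpu_needed):
    # Prefer a system with enough idle capacity; otherwise pick the system
    # where the local donor class is projected to be hurt the least.
    idle = [s for s in systems if s["idle_cpu"] >= cpu_needed]
    if idle:
        return max(idle, key=lambda s: s["idle_cpu"])["name"]
    return min(systems, key=lambda s: s["donor_pi_increase"])["name"]


systems = [
    {"name": "SYSA", "idle_cpu": 0.1, "donor_pi_increase": 0.30},
    {"name": "SYSB", "idle_cpu": 0.4, "donor_pi_increase": 0.05},
]
if should_add_servers(1.4, 1.1, 0.8, 0.9):
    print("start servers on", choose_system(systems, 0.3))  # SYSB
```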