Abstract:
A snoop coherency method, system, and program are provided for intervening a requested cache line from a plurality of candidate memory sources in a multiprocessor system on the basis of the temperature or power dissipation sensed at each memory source. By providing temperature or power dissipation sensors in each of the candidate memory sources (e.g., cores, cache memories, the memory controller) that share a requested cache line, control logic can use the sensor signals to determine which memory source should source the cache line, signaling only the memory source with acceptable power dissipation to provide the cache line to the requester.
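A minimal sketch of the source-selection step, assuming each candidate holder of the line reports a sensed power value; the CandidateSource structure, the threshold parameter, and select_intervention_source are illustrative names, not from the patent.

```cpp
// Illustrative sketch only: picks which holder of the cache line should
// source it, based on per-source power readings. Names are hypothetical.
#include <optional>
#include <vector>

struct CandidateSource {
    int    id;            // core, cache, or memory-controller identifier
    bool   holds_line;    // snoop response: this source has the line
    double power_watts;   // value reported by the source's power sensor
};

// Among the sources that hold the requested line, choose the one whose
// sensed power dissipation is lowest and within the acceptable limit.
std::optional<int> select_intervention_source(
        const std::vector<CandidateSource>& sources,
        double acceptable_watts) {
    std::optional<int> best;
    double best_power = acceptable_watts;
    for (const auto& s : sources) {
        if (s.holds_line && s.power_watts <= best_power) {
            best_power = s.power_watts;
            best = s.id;
        }
    }
    return best;  // empty => no acceptable holder, fall back to memory
}
```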
Abstract:
A method and apparatus for transparently handling recurring correctable errors and uncorrectable errors in a mirrored memory system prevents costly system shutdowns caused by correctable memory errors and system failures caused by uncorrectable memory errors. When a high number of correctable errors is detected for a given memory location, a memory relocation mechanism in the hypervisor moves the data associated with the memory location to an alternate physical memory location transparently to the partition, so that the partition has no knowledge that the physical memory backing the memory location has changed. When an uncorrectable error occurs, the memory relocation mechanism uses data from the partner mirrored memory block as the data source for the memory block with the uncorrectable error and then relocates the data to a newly allocated memory block to replace the memory block with the uncorrectable error.
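A hypothetical sketch of the two relocation decisions described above, at memory-block granularity; the MirroredBlock structure, the correctable-error threshold, and the method names are assumptions made for illustration.

```cpp
// Sketch only: relocate on too many correctable errors, or recover from the
// mirror partner on an uncorrectable error. Granularity and threshold are
// illustrative, not taken from the patent.
#include <cstddef>
#include <vector>

struct MirroredBlock {
    std::vector<std::byte> primary;
    std::vector<std::byte> mirror;        // partner copy, identical contents
    unsigned correctable_errors = 0;
};

class MemoryRelocator {
public:
    // Called by the error handler on a corrected (CE) event.
    void on_correctable_error(MirroredBlock& blk) {
        if (++blk.correctable_errors >= kCeThreshold)
            relocate(blk, blk.primary);   // local data is still good
    }

    // Called on an uncorrectable (UE) event in the primary copy.
    void on_uncorrectable_error(MirroredBlock& blk) {
        relocate(blk, blk.mirror);        // recover from the partner block
    }

private:
    static constexpr unsigned kCeThreshold = 16;  // illustrative threshold

    // Copy the good data into a freshly allocated block and swap it in; the
    // partition keeps using the same logical address throughout.
    void relocate(MirroredBlock& blk, const std::vector<std::byte>& good) {
        std::vector<std::byte> fresh(good);       // newly allocated block
        blk.primary = std::move(fresh);
        blk.correctable_errors = 0;
    }
};
```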
Abstract:
Replacing a failing physical processor in a computer supporting multiple logical partitions, where the logical partitions include dedicated partitions and shared processor partitions, the dedicated partitions are supported by virtual processors having assigned physical processors, and the shared processor partitions are supported by pools of virtual processors, which pools also have assigned physical processors. Embodiments operate generally by assigning priorities to the dedicated partitions and to the pools of virtual processors; detecting a checkstop of a failing physical processor; retrieving the failing physical processor's state; replacing, by a hypervisor, the failing physical processor with a replacement physical processor assigned to the dedicated partition or pool having the lowest priority among the dedicated partitions and pools; and assigning the retrieved state of the failing physical processor as the state of the replacement physical processor.
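A sketch of the replacement policy only, assuming a lower numeric value means a lower priority; the ProcessorDonor and Replacement structures and the field names are illustrative, not taken from any hypervisor implementation.

```cpp
// Pick the lowest-priority dedicated partition or pool as the donor of a
// replacement processor and hand it the state retrieved at checkstop.
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

struct ProcessorState { /* architected registers captured at checkstop */ };

struct ProcessorDonor {          // a dedicated partition or a shared pool
    std::string name;
    int priority;                // assumption: lower value = lower priority
    int spare_physical_cpu;      // physical processor that can be donated
};

struct Replacement {
    int physical_cpu;
    ProcessorState state;
};

Replacement replace_failing_processor(const std::vector<ProcessorDonor>& donors,
                                      const ProcessorState& failing_state) {
    assert(!donors.empty());
    auto donor = std::min_element(
        donors.begin(), donors.end(),
        [](const ProcessorDonor& a, const ProcessorDonor& b) {
            return a.priority < b.priority;
        });
    // The retrieved state of the failing processor becomes the state of the
    // replacement processor.
    return Replacement{donor->spare_physical_cpu, failing_state};
}
```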
Abstract:
Assigning a processor to a logical partition in a computer supporting multiple logical partitions includes assigning priorities to the partitions, detecting a checkstop of a failing processor of a partition, retrieving the failing processor's state, replacing, by a hypervisor, the failing processor with a replacement processor from a partition having a lower priority than the partition of the failing processor, and assigning the retrieved state of the failing processor as the state of the replacement processor.
Abstract:
The present invention provides a method, system, and computer-usable medium that afford equitable charging of a customer for computer usage time. In a preferred embodiment, the method includes the steps of: tracking an amount of computer resources in a Simultaneous Multithreading (SMT) computer that are available to a customer for a specified period of time; determining whether the computer resources in the SMT computer are operating at a nominal rate; and, in response to determining that the computer resources are operating at a non-nominal rate, adjusting a billing charge to the customer, wherein the billing charge reflects that the computer resources available to the customer in the SMT computer are not operating at the nominal rate during the specified period of time.
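One plausible way to adjust the charge is to prorate the nominal-rate price by the ratio of actual to nominal clock rate; the abstract does not specify the billing formula, so the proportional scaling, the function name, and the figures below are assumptions.

```cpp
// Illustrative proration only; proportional scaling by clock rate is an
// assumption, not the patented billing formula.
#include <cstdio>

// Charge for a usage period, scaling the nominal charge down when the SMT
// resources ran below their nominal clock rate.
double adjusted_charge(double nominal_charge_per_hour,
                       double hours,
                       double actual_clock_ghz,
                       double nominal_clock_ghz) {
    double scale = actual_clock_ghz / nominal_clock_ghz;  // < 1.0 when derated
    return nominal_charge_per_hour * hours * scale;
}

int main() {
    // E.g. a processor derated from 4.0 GHz to 3.0 GHz for 10 hours at
    // $2/hour would be billed $15.00 instead of $20.00.
    std::printf("%.2f\n", adjusted_charge(2.0, 10.0, 3.0, 4.0));
}
```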
Abstract:
An entitlement management system for distributing spare CPU processor resources to a plurality of deployment groups operating in a data processing system. The system comprises a deployment group entitlement component and a management entitlement component. The deployment group entitlement component comprises an allocation component for allocating a plurality of micro-partitions to a deployment group, and a determining component for identifying spare CPU processor cycles from a donor micro-partition and distributing the identified spare CPU processor cycles to a requester micro-partition in the deployment group; the determining component further identifies when there are no further spare CPU processor cycles to be donated to any of the micro-partitions in the deployment group and communicates a request to the management entitlement component. The management entitlement component receives requests from at least two deployment group entitlement components, identifies whether one of the deployment groups has spare CPU processor cycles to donate to a further deployment group, and, on a positive determination, donates the spare CPU cycles to the further deployment group.
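A sketch of the two-level donation flow under simplified assumptions (spare cycles are simply entitlement minus usage, and requests are satisfied first within the group, then from other groups); the structures and grant_cycles are hypothetical names.

```cpp
// Two-level donation sketch; class and function names are illustrative.
#include <algorithm>
#include <cstddef>
#include <vector>

struct MicroPartition {
    int entitled_cycles;   // entitled CPU cycles for the dispatch interval
    int used_cycles;       // cycles actually consumed
    int spare() const { return entitled_cycles - used_cycles; }
};

struct DeploymentGroup {
    std::vector<MicroPartition> parts;

    // Spare cycles available from all donor micro-partitions in this group.
    int spare_cycles() const {
        int total = 0;
        for (const auto& p : parts)
            if (p.spare() > 0) total += p.spare();
        return total;
    }
};

// Management entitlement component: satisfy a requesting group first from
// its own donors, then from another group that still has spare cycles.
int grant_cycles(const std::vector<DeploymentGroup>& groups,
                 std::size_t requester, int needed) {
    int granted = std::min(needed, groups[requester].spare_cycles());
    for (std::size_t g = 0; g < groups.size() && granted < needed; ++g) {
        if (g == requester) continue;
        granted += std::min(needed - granted, groups[g].spare_cycles());
    }
    return granted;        // cycles the requester may actually consume
}
```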
Abstract:
An apparatus, program product, and method support the deallocation of a data structure in a multithreaded computer without requiring the use of computationally expensive semaphores or spin locks. Specifically, access to a data structure is governed by a shared pointer that, when a request is received to deallocate the data structure, is initially set to a value that indicates to any thread that later accesses the pointer that the data structure is not available. In addition, to address any thread that already holds a copy of the shared pointer, and thus is capable of accessing the data structure via the shared pointer after the initiation of the request, all such threads are monitored to determine whether any of them is still executing program code that is capable of using the shared pointer to access the data structure. Once no such thread remains, no thread can potentially access the data structure via the shared pointer, and the data structure may then be deallocated.
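A minimal sketch loosely following the scheme above (it resembles a simplified quiescent-state approach): the shared pointer is swapped to an "unavailable" value, and the deallocator then waits until no thread is still inside code that could use an old copy. Per-thread flags, default sequentially consistent ordering, and the waiting loop stand in for the monitoring described in the abstract; all names are illustrative.

```cpp
#include <array>
#include <atomic>
#include <thread>

struct Data { int value; };

constexpr int kMaxThreads = 8;
std::atomic<Data*> g_shared_data{nullptr};
std::array<std::atomic<bool>, kMaxThreads> in_read_section{};  // per-thread

// Reader: mark the code region that may use the shared pointer, then copy it.
int reader(int tid) {
    in_read_section[tid] = true;
    Data* p = g_shared_data.load();   // may observe the "unavailable" value
    int v = p ? p->value : -1;
    in_read_section[tid] = false;
    return v;
}

// Deallocator: publish "not available", then wait until no thread is still
// executing code that could use an old copy of the pointer, then free it.
void deallocate() {
    Data* old = g_shared_data.exchange(nullptr);
    if (!old) return;
    for (auto& flag : in_read_section)
        while (flag.load()) std::this_thread::yield();
    delete old;
}

int main() {
    g_shared_data = new Data{42};
    reader(0);       // concurrent readers would run on other threads
    deallocate();    // waits for any in-flight readers, then frees
}
```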
Abstract:
A method, system, and article of manufacture for processing virtual interrupts in a logically partitioned system are provided. An intelligent virtual global interrupt queue (virtual GIQ) that may be associated with a plurality of virtual processors running in a logical partition may be utilized. Upon receiving a virtual interrupt, the virtual GIQ may examine the operating states of the associated virtual processors. In an effort to ensure the virtual interrupt is processed as quickly as possible, the virtual GIQ may present the virtual interrupt to one of the associated virtual processors determined to be in an operating state best suited for processing the virtual interrupt.
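A sketch of the routing decision only; the abstract does not define which operating states are "best suited", so the ranking below (running before idle-in-hypervisor before not-dispatched) and the names are assumptions for illustration.

```cpp
// Illustrative routing policy for a virtual global interrupt queue.
#include <algorithm>
#include <vector>

enum class VcpuState { Running = 0, IdleInHypervisor = 1, NotDispatched = 2 };

struct VirtualProcessor {
    int id;
    VcpuState state;
};

// Present the virtual interrupt to the virtual processor expected to handle
// it soonest, per the (assumed) state ranking encoded in the enum values.
int pick_target_vcpu(const std::vector<VirtualProcessor>& vps) {
    if (vps.empty()) return -1;
    auto best = std::min_element(
        vps.begin(), vps.end(),
        [](const VirtualProcessor& a, const VirtualProcessor& b) {
            return static_cast<int>(a.state) < static_cast<int>(b.state);
        });
    return best->id;
}
```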
Abstract:
A logically-partitioned computer, program product and method utilize a flexible and adaptable communication interface between a partition and a partition manager, which permits optimal handling of partition management operations such as state change operations and the like over a wide variety of circumstances. In particular, a partition is permitted to indicate, in connection with a request to perform a partition management operation, whether an asynchronous notification should be generated or suppressed in association with the performance of the partition management operation by a partition manager. As a result, asynchronous notifications are selectively generated in association with the performance of partition management operations based upon indications in the requests made by partitions for such operations.
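A minimal sketch of the interface idea, assuming the partition's request simply carries a flag that tells the partition manager whether to post an asynchronous completion notification; the structure and function names are hypothetical.

```cpp
#include <functional>

struct PartitionMgmtRequest {
    int  target_partition;
    bool suppress_async_notification;   // set by the requesting partition
};

// The partition manager performs the requested operation and generates the
// asynchronous notification only when the request did not suppress it.
void perform_operation(const PartitionMgmtRequest& req,
                       const std::function<void(int)>& notify_partition) {
    // ... carry out the state-change or other management operation ...
    if (!req.suppress_async_notification)
        notify_partition(req.target_partition);
}
```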
Abstract:
A method, apparatus, and computer instructions are provided to autonomically monitor and adjust system characteristics based on a customer optimization goal specified in a policy or profile. An autonomic management component comprising a set of control algorithms is implemented in firmware. In response to reading system characteristics from a plurality of sensors, the autonomic management component selects at least one control algorithm from the set, and the selected control algorithm adjusts the parameters of the system characteristics to optimize performance according to the optimization goal specified by the customer.
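A sketch of the selection step only: a table maps each optimization goal to a control algorithm, and the algorithm chosen for the customer's goal adjusts the system knobs from the current sensor readings. The goals, sensor names, knobs, and adjustment rules are illustrative placeholders, not the patented control algorithms.

```cpp
#include <functional>
#include <map>
#include <string>

struct SensorReadings { double temperature_c; double power_watts; double utilization; };
struct SystemKnobs    { double cpu_frequency_ghz; double fan_speed_pct; };

using ControlAlgorithm = std::function<void(const SensorReadings&, SystemKnobs&)>;

// Firmware keeps a set of control algorithms and applies the one matching
// the optimization goal from the customer's policy or profile.
SystemKnobs autonomic_adjust(const std::string& goal,
                             const SensorReadings& sensors,
                             SystemKnobs knobs) {
    static const std::map<std::string, ControlAlgorithm> algorithms = {
        {"performance", [](const SensorReadings& s, SystemKnobs& k) {
             if (s.temperature_c < 85.0) k.cpu_frequency_ghz += 0.1;
         }},
        {"power", [](const SensorReadings& s, SystemKnobs& k) {
             if (s.utilization < 0.5) k.cpu_frequency_ghz -= 0.1;
         }},
    };
    auto it = algorithms.find(goal);
    if (it != algorithms.end()) it->second(sensors, knobs);
    return knobs;
}
```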