Abstract:
Controlled partition shut-down is provided within a shared memory partition data processing system including a shared memory partition, a paging service partition, a hypervisor and a shared memory pool within physical memory. The hypervisor manages access to logical pages within the pool and page-out of pages from the pool to external paging storage via the paging service partition. A respective paging service stream exists between the paging service partition and the hypervisor for each shared memory partition, with each stream including a stream state. The control method includes: responsive to a shut-down initiating event, notifying the paging service partition to shut down and determining whether a shared memory partition is currently active; if so, signaling the hypervisor to complete paging activity for the active shared memory partition and waiting for its stream state to enter a suspended or completed state before automatically shutting down the paging service partition.
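The following is a minimal sketch of the shut-down sequence the abstract describes: signal the hypervisor for any still-active partition's stream, then wait until every stream is suspended or completed before the paging service partition shuts down. The stream states, class names, and polling loop are illustrative assumptions, not the patented implementation.

    from enum import Enum, auto
    import time

    class StreamState(Enum):
        ACTIVE = auto()
        SUSPENDED = auto()
        COMPLETED = auto()

    class PagingServiceStream:
        """One paging service stream per shared memory partition (hypothetical model)."""
        def __init__(self, partition_id, state=StreamState.ACTIVE):
            self.partition_id = partition_id
            self.state = state

    def shut_down_paging_service(streams, signal_hypervisor, poll_interval=0.1):
        """Ask the hypervisor to finish paging for active partitions, then quiesce."""
        for stream in streams:
            if stream.state is StreamState.ACTIVE:            # partition still paging
                signal_hypervisor(stream.partition_id)        # request completion of page-outs
        # Wait for every stream to reach a suspended or completed state.
        while any(s.state is StreamState.ACTIVE for s in streams):
            time.sleep(poll_interval)
        print("All paging streams quiesced; shutting down paging service partition.")

    if __name__ == "__main__":
        streams = [PagingServiceStream(1), PagingServiceStream(2, StreamState.COMPLETED)]
        # In this toy demo the "hypervisor" completes paging immediately.
        def demo_signal(pid):
            for s in streams:
                if s.partition_id == pid:
                    s.state = StreamState.COMPLETED
        shut_down_paging_service(streams, demo_signal)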
Abstract:
A mechanism is provided for directed resource folding for power management. The mechanism receives a set of static platform characteristics and a set of dynamic platform characteristics for a set of resources associated with a data processing system, thereby forming characteristic information. The mechanism determines whether one or more conditions have been met for each resource in the set of resources using the characteristic information. Responsive to the one or more conditions being met, the mechanism performs a resource optimization to determine at least one of a first subset of resources in the set of resources to keep active and a second subset of resources in the set of resources to dynamically fold. Based on the resource optimization, the mechanism performs either a virtual resource optimization to optimally schedule the first subset of resources or a physical resource optimization to dynamically fold the second subset of resources.
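As an illustration of the folding decision, the sketch below splits resources into a keep-active subset and a fold subset by checking simple conditions against static and dynamic characteristics. The characteristic names, the utilization threshold, and the fold criterion are invented placeholders for the example.

    def partition_resources(resources, static_chars, dynamic_chars, util_threshold=0.2):
        """Split resources into a set to keep active and a set to dynamically fold."""
        keep_active, to_fold = [], []
        for r in resources:
            foldable = static_chars[r].get("foldable", False)        # static platform characteristic
            utilization = dynamic_chars[r].get("utilization", 1.0)   # dynamic platform characteristic
            # Condition check: fold only resources that are foldable and lightly used.
            if foldable and utilization < util_threshold:
                to_fold.append(r)
            else:
                keep_active.append(r)
        return keep_active, to_fold

    keep, fold = partition_resources(
        ["cpu0", "cpu1", "mem_bank3"],
        {"cpu0": {"foldable": True}, "cpu1": {"foldable": True}, "mem_bank3": {"foldable": False}},
        {"cpu0": {"utilization": 0.05}, "cpu1": {"utilization": 0.9}, "mem_bank3": {"utilization": 0.1}},
    )
    print("keep active:", keep, "fold:", fold)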
Abstract:
Accounting charges are assigned to workloads by measuring a relative use of computing resources by the workloads, then scaling the results using a work-rate determined for the corresponding workload. Usage metrics may be selectable for the individual resources being measured, and the work-rates may be determined from an analytical model or from an empirical model that derives work-rates from an indication of processor throughput. Under single-workload conditions on a platform, or other suitable conditions, a workload type may be used to select the particular usage metrics applied to the various resources.
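The scaling step reduces to multiplying each measured relative usage by the workload's work-rate; the numbers below are made up purely to show the arithmetic.

    def charge(relative_usage, work_rate):
        """Charge = measured relative resource use scaled by the workload's work-rate."""
        return {res: use * work_rate for res, use in relative_usage.items()}

    usage = {"cpu": 0.40, "memory_bw": 0.25, "io": 0.10}   # measured relative use per resource
    print(charge(usage, work_rate=1.8))                    # work-rate from an analytical or empirical model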
Abstract:
Methods, systems, and computer program products are provided for dynamic selective memory mirroring in solid state devices. An amount of memory is reserved. Sections of the memory to select for mirroring in the reserved memory are dynamically determined. The selected sections of the memory contain critical areas. The selected sections of the memory are mirrored in the reserved memory.
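A minimal sketch of the selective mirroring flow follows, modeling memory as a list of sections and copying only those flagged as critical into a reserved region; the section layout, the "critical" flag, and the capacity check are assumptions made for the example.

    def mirror_critical_sections(sections, reserved_capacity):
        """Copy sections flagged as critical into reserved memory, up to its capacity."""
        reserved, used = [], 0
        for sec in sections:
            if sec["critical"] and used + len(sec["data"]) <= reserved_capacity:
                reserved.append(bytearray(sec["data"]))   # mirror copy held in the reserved memory
                used += len(sec["data"])
        return reserved

    sections = [
        {"name": "kernel_page_tables", "critical": True,  "data": b"\x00" * 64},
        {"name": "user_heap",          "critical": False, "data": b"\x00" * 256},
    ]
    print(len(mirror_critical_sections(sections, reserved_capacity=128)), "section(s) mirrored")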
Abstract:
An apparatus and method selectively invalidate entries in an address translation cache instead of invalidating all, or nearly all, entries. One or more translation mode bits are provided in each entry in the address translation cache. These translation mode bits may be set according to the addressing mode used to create the cache entry. One or more “hint bits” are defined in an instruction that allow specifying which of the entries in the address translation cache are selectively preserved during an invalidation operation according to the value(s) of the translation mode bit(s). In the alternative, multiple instructions may be defined to preserve entries in the address translation cache that have specified addressing modes. In this manner, more intelligence is used to recognize that some entries in the address translation cache may be valid after a task or partition switch, and may therefore be retained, while other entries are invalidated.
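The sketch below illustrates the selective invalidation idea: each cache entry carries translation mode bits, and an invalidation preserves only entries whose mode is named by the hint. The entry layout and mode names are hypothetical stand-ins for the hardware bits.

    def selective_invalidate(tlb_entries, preserve_modes):
        """Invalidate all entries except those whose translation mode is in preserve_modes."""
        for entry in tlb_entries:
            if entry["mode"] not in preserve_modes:   # hint bits say this mode need not survive
                entry["valid"] = False
        return tlb_entries

    tlb = [
        {"vpn": 0x10, "mode": "hypervisor_real", "valid": True},
        {"vpn": 0x20, "mode": "partition_virtual", "valid": True},
    ]
    # Preserve hypervisor-mode translations across a partition switch.
    print(selective_invalidate(tlb, preserve_modes={"hypervisor_real"}))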
Abstract:
A failing physical processor is replaced in a computer supporting multiple logical partitions, where the logical partitions include dedicated partitions and shared processor partitions, the dedicated partitions are supported by virtual processors having assigned physical processors, and the shared processor partitions are supported by pools of virtual processors. The pools of virtual processors have assigned physical processors. Embodiments operate generally by assigning priorities to the dedicated partitions and to the pools of virtual processors; detecting a checkstop of a failing physical processor; retrieving the failing physical processor's state; replacing, by a hypervisor, the failing physical processor with a replacement physical processor assigned to the dedicated partition or pool having the lowest priority among the dedicated partitions and pools; and assigning the retrieved state of the failing physical processor as the state of the replacement physical processor.
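The replacement selection can be pictured as choosing the lowest-priority donor and moving one of its processors to carry the retrieved state, as in the sketch below; the priority attribute, the candidate structure, and the state payload are invented for illustration.

    def replace_failing_processor(failing_state, candidates):
        """Take a replacement from the lowest-priority dedicated partition or pool."""
        donor = min(candidates, key=lambda c: c["priority"])   # lowest priority donates a processor
        replacement = donor["processors"].pop()
        replacement["state"] = failing_state                   # install the retrieved checkstop state
        return donor["name"], replacement

    candidates = [
        {"name": "dedicated_lpar_A", "priority": 5, "processors": [{"id": 3, "state": None}]},
        {"name": "shared_pool_0",    "priority": 1, "processors": [{"id": 7, "state": None}]},
    ]
    print(replace_failing_processor({"regs": "...saved..."}, candidates))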
Abstract:
Methods, systems, and articles of manufacture for allowing an update to an executable component, such as a logical partitioning operating system, running on a computer system without requiring a reboot (or IPL) of the computer system are provided. Processors or tasks executing in a portion of code being updated may be forced to a known or “quiesced” state (e.g., designated wait points) before applying the update. If any of the processors or tasks are not in their quiesced state, the update is not applied or may be rescheduled for a later time, in an effort to allow the system to reach the quiesced state.
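A small sketch of the quiesce check: the update is applied only if every task is at a designated wait point, and otherwise it is rescheduled. The task states and the apply/reschedule hooks are hypothetical placeholders.

    def try_apply_update(tasks, apply_update, reschedule):
        """Apply the update only if every task is at a designated wait point."""
        if all(t["state"] == "quiesced" for t in tasks):
            apply_update()            # safe: no task is executing in the code being patched
            return True
        reschedule()                  # otherwise defer and retry later
        return False

    tasks = [{"id": 1, "state": "quiesced"}, {"id": 2, "state": "running"}]
    try_apply_update(tasks,
                     apply_update=lambda: print("update applied"),
                     reschedule=lambda: print("not quiesced; rescheduling update"))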
Abstract:
A directory-based coherency method, system and program are provided for intervening a requested cache line from a plurality of candidate memory sources in a multiprocessor system on the basis of the sensed temperature or power dissipation value at each memory source. By providing temperature or power dissipation sensors in each of the candidate memory sources (e.g., at cores, cache memories, memory controller, etc.) that share a requested cache line, control logic may be used to determine which memory source should source the cache line by using the power sensor signals to direct only the memory source with acceptable power dissipation to provide the cache line to the requester.
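One way to picture the selection logic is the sketch below, which picks the coolest sharer whose sensed temperature is acceptable to supply the line; the sensor values and acceptability threshold are made up for the example.

    def select_source(candidate_sources, max_acceptable_temp_c=85.0):
        """Pick the sharer with acceptable (lowest) sensed temperature to intervene the line."""
        acceptable = [s for s in candidate_sources if s["temp_c"] <= max_acceptable_temp_c]
        if not acceptable:
            return None                                    # fall back, e.g. source from memory
        return min(acceptable, key=lambda s: s["temp_c"])  # coolest sharer supplies the cache line

    sources = [{"name": "core0_L2", "temp_c": 92.0}, {"name": "core3_L2", "temp_c": 61.5}]
    print(select_source(sources))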
Abstract:
A method and system for selecting the architecture level to which a processor appears to conform within a computing environment when executing specific logical partitions or programs and performing migration among different levels of processor architecture. The method utilizes a “processor compatibility register” (PCR) that controls the level of the architecture that the processor appears to support. In one embodiment, the PCR is accessible only to super-privileged software. The super-privileged software sets bits in the PCR that specify the architecture level that the processor is to appear to support so that when the program runs on the processor, the processor behaves in accordance with the architecture level for which the program was designed.
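A minimal model of how a processor compatibility register might gate the visible architecture level is sketched below; the level encodings and the feature check are hypothetical placeholders, not the actual PCR bit definitions.

    ARCH_LEVELS = {"v2.05": 1, "v2.06": 2, "v2.07": 3}

    class ProcessorModel:
        def __init__(self):
            self.pcr_level = max(ARCH_LEVELS.values())      # native architecture level by default

        def set_pcr(self, level_name):
            """Super-privileged software selects the architecture level to expose."""
            self.pcr_level = ARCH_LEVELS[level_name]

        def feature_available(self, feature_level_name):
            # A facility is visible only if the PCR exposes an equal or newer level.
            return ARCH_LEVELS[feature_level_name] <= self.pcr_level

    cpu = ProcessorModel()
    cpu.set_pcr("v2.05")                                    # run a partition built for the older level
    print(cpu.feature_available("v2.07"))                   # False: newer facilities are hidden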
Abstract:
A method, apparatus, and computer-usable medium for predicting capacity on a data processing system. According to a preferred embodiment of the present invention, a capacity manager estimates an incremental power value utilized for activating and maintaining an additional processor in the data processing system. The capacity manager calculates a power headroom value. In response to calculating the power headroom value, the capacity manager determines whether an adjustment to the power headroom value is required. The capacity manager calculates a projected capacity value added by the additional processor to the data processing system.
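The headroom and projected-capacity steps amount to simple arithmetic, shown below with invented numbers; the power budget, per-processor power, and capacity-per-processor figure are assumptions for the example only.

    def projected_capacity(power_budget_w, current_draw_w, per_processor_w, capacity_per_processor):
        """Estimate how much capacity additional processors could add within the power headroom."""
        headroom_w = power_budget_w - current_draw_w             # available power headroom
        extra_processors = int(headroom_w // per_processor_w)    # processors the headroom can support
        return extra_processors * capacity_per_processor

    # E.g. 1600 W budget, 1350 W drawn, 60 W per added processor, 1.0 capacity unit each:
    print(projected_capacity(1600, 1350, 60, 1.0))               # -> 4.0 capacity units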