Abstract:
Multiple logical partitions are provided access to a self-virtualizing input/output device of a data processing system via multiple dedicated partition adjunct instances. Access is established by: interfacing each logical partition to one or more associated partition adjunct instances, each partition adjunct instance coupling its associated logical partition to one of a virtual function or a queue pair of the self-virtualizing input/output device, and each partition adjunct instance being a separate dispatchable state and being created employing virtual address space donated from the respective logical partition or a hypervisor of the data processing system, and each partition adjunct instance including a device driver for the virtual function or queue pair of the self-virtualizing input/output device; and providing each logical partition with at least one virtual input/output which is interfaced through the logical partition's respective partition adjunct instance(s) to a virtual function or queue pair of the self-virtualizing input/output device.
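The topology described above can be pictured with a minimal, illustrative data model (not the patented implementation): each logical partition owns one or more partition adjunct instances, each adjunct is built from donated virtual address space and wraps a device driver for a single virtual function or queue pair, and the partition's virtual input/output path is routed through that adjunct. All class and field names below are assumptions made for this sketch.

```python
# Illustrative model only: each logical partition owns partition adjunct
# instances; each adjunct wraps a device driver for one virtual function (VF)
# or queue pair of the self-virtualizing I/O device and exposes a virtual I/O
# path back to its owning partition.
from dataclasses import dataclass, field
from typing import List


@dataclass
class VirtualFunction:
    """One VF or queue pair exported by the self-virtualizing I/O device."""
    vf_id: int


@dataclass
class PartitionAdjunct:
    """Separate dispatchable state created from donated virtual address space."""
    donated_pages: int          # address space donated by the LPAR or hypervisor
    vf: VirtualFunction         # the VF/queue pair this adjunct drives

    def driver_send(self, payload: bytes) -> str:
        # stand-in for the VF/queue-pair device driver living inside the adjunct
        return f"VF{self.vf.vf_id} sent {len(payload)} bytes"


@dataclass
class LogicalPartition:
    name: str
    adjuncts: List[PartitionAdjunct] = field(default_factory=list)

    def virtual_io_send(self, payload: bytes) -> str:
        # the LPAR's virtual I/O is interfaced through its adjunct instance
        return self.adjuncts[0].driver_send(payload)


if __name__ == "__main__":
    lpar = LogicalPartition("LPAR1",
                            [PartitionAdjunct(donated_pages=64,
                                              vf=VirtualFunction(vf_id=3))])
    print(lpar.virtual_io_send(b"hello"))   # -> VF3 sent 5 bytes
```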
Abstract:
A mechanism is provided for temporarily allocating dedicated processors to a shared processor pool. A virtual machine monitor determines whether a temporary allocation associated with an identified dedicated processor is long-term or short-term. Responsive to the temporary allocation being long-term, the virtual machine monitor determines whether an operating frequency of the identified dedicated processor is within a predetermined threshold of an operating frequency of one or more operating systems utilizing the shared processor pool. Responsive to the operating frequency of the identified dedicated processor failing to be within the predetermined threshold, the virtual machine monitor either increases or decreases the frequency of the identified dedicated processor to be within the predetermined threshold of the operating frequency of the one or more operating systems utilizing the shared processor pool and temporarily allocates the identified dedicated processor to the shared processor pool.
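A hedged sketch of this decision flow follows; the threshold value, frequency units, and function name are illustrative and not taken from the abstract.

```python
# Sketch of the long-term/short-term decision and the frequency adjustment
# described above. Threshold and units are assumptions for illustration.
def temporarily_allocate(dedicated_freq_mhz: float,
                         pool_freq_mhz: float,
                         long_term: bool,
                         threshold_mhz: float = 50.0) -> float:
    """Return the frequency at which the dedicated processor joins the pool."""
    if not long_term:
        # short-term allocation: no frequency adjustment is described
        return dedicated_freq_mhz
    # long-term allocation: bring the dedicated processor within the threshold
    # of the operating frequency used by the systems sharing the pool
    if abs(dedicated_freq_mhz - pool_freq_mhz) > threshold_mhz:
        if dedicated_freq_mhz < pool_freq_mhz:
            dedicated_freq_mhz = pool_freq_mhz - threshold_mhz   # increase
        else:
            dedicated_freq_mhz = pool_freq_mhz + threshold_mhz   # decrease
    return dedicated_freq_mhz


if __name__ == "__main__":
    print(temporarily_allocate(3800.0, 3500.0, long_term=True))   # -> 3550.0
    print(temporarily_allocate(3800.0, 3500.0, long_term=False))  # -> 3800.0
```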
Abstract:
Transparent hypervisor pinning of critical memory areas is provided for a shared memory partition data processing system. The transparent hypervisor pinning includes receiving at a hypervisor a hypervisor call initiated by a logical partition to register a logical memory area of the logical partition with the hypervisor. Responsive to this hypervisor call, the hypervisor transparently determines whether the logical memory area is a critical memory area for access by the hypervisor. If the logical memory area is a critical memory area, then the hypervisor automatically pins the logical memory area to physical memory of the shared memory partition data processing system, thereby ensuring that the memory area will not be paged out from physical memory to external storage, and thus ensuring availability of the logical memory area to the hypervisor.
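The registration path can be summarized with a small, hypothetical sketch: on the registration hypervisor call, the hypervisor checks whether the logical memory area is one it must always be able to access and, if so, pins it so it is never paged out. The set of "critical" area types shown is an assumption for illustration.

```python
# Hypothetical registry of area types the hypervisor must always be able to
# reach; the concrete set is not specified in the abstract.
CRITICAL_AREA_TYPES = {"interrupt_vector", "vpa", "dtl"}   # illustrative examples

pinned_areas = set()


def h_register_memory_area(area_id: str, area_type: str) -> bool:
    """Handle the registration hypervisor call; return True if the area was pinned."""
    if area_type in CRITICAL_AREA_TYPES:
        pinned_areas.add(area_id)     # kept in physical memory, never paged out
        return True
    return False                      # non-critical areas remain pageable


if __name__ == "__main__":
    print(h_register_memory_area("lp1-area-7", "vpa"))    # -> True (pinned)
    print(h_register_memory_area("lp1-area-8", "heap"))   # -> False
```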
Abstract:
A mechanism is provided for transparently consolidating resources of logical partitions. Responsive to the existence of a non-folded resource on an originating resource chip, the virtualization mechanism determines whether there is a destination resource chip to either exchange operations of the non-folded resource with a folded resource on the destination chip or migrate operations of the non-folded resource to a non-folded resource on the destination chip. Responsive to the existence of the folded resource on the destination resource chip, the virtualization mechanism transparently exchanges the operations of the non-folded resource from the originating resource chip to the folded resource on the destination resource chip, where the folded resource remains folded on the originating resource chip after the exchange. Responsive to the absence of another non-folded resource on the originating resource chip, the virtualization mechanism places the originating resource chip into a deeper power saving mode.
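A rough sketch of the consolidation step is given below; the data model and function names are assumptions. It shows the exchange case (operations move to the destination chip, the folded resource ends up folded on the originating chip) and the subsequent deeper power-saving transition once the originating chip holds no non-folded resources.

```python
# Each chip is modeled as a list of resource states: 'folded' or 'non-folded'.
# This is an illustrative model of the exchange described above, not the
# patented implementation.
def consolidate(originating: list, destination: list) -> str:
    if "non-folded" not in originating:
        return "originating chip placed into deeper power-saving mode"
    if "folded" in destination:
        # exchange: active operations move to the destination chip, the folded
        # resource moves back to the originating chip and stays folded
        originating[originating.index("non-folded")] = "folded"
        destination[destination.index("folded")] = "non-folded"
        if "non-folded" not in originating:
            return "exchanged; originating chip placed into deeper power-saving mode"
        return "exchanged"
    return "no destination available; nothing consolidated"


if __name__ == "__main__":
    orig, dest = ["non-folded", "folded"], ["folded", "non-folded"]
    print(consolidate(orig, dest))   # -> exchanged; originating chip placed into deeper power-saving mode
    print(orig, dest)                # -> ['folded', 'folded'] ['non-folded', 'non-folded']
```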
Abstract:
An entitlement management system for distributing spare CPU processor resources to a plurality of deployment groups operating in a data processing system, the system comprising: a deployment group entitlement component comprising: an allocation component for allocating a plurality of micro-partitions to a deployment group; a determining component for identifying spare CPU processor cycles from a donor micro-partition and distributing the identified spare CPU processor cycles to a requester micro-partition in the deployment group; the determining component further comprises identifying when there are no further spare CPU processor cycles to be donated to any of the micro-partitions in the deployment group and communicating a request to a management entitlement component; and a management entitlement component receiving requests from at least two deployment group entitlement components and identifying if one of the deployment groups has spare CPU processor cycles to donate to a further deployment group and on a positive determination donating the spare CPU cycles to the further deployment group.
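The two-level flow in this abstract can be sketched as follows: a deployment group entitlement component first redistributes spare cycles among its own micro-partitions, and only when none remain does the management entitlement component look for another deployment group able to donate. Class and method names are illustrative assumptions.

```python
# Illustrative two-level entitlement sketch: local redistribution within a
# deployment group, then cross-group donation via a management component.
class DeploymentGroup:
    def __init__(self, name, spare_by_partition):
        self.name = name
        self.spare = dict(spare_by_partition)   # micro-partition -> spare cycles

    def claim_spare(self, needed):
        """Take up to `needed` spare cycles from donor micro-partitions."""
        granted = 0
        for donor, cycles in self.spare.items():
            take = min(cycles, needed - granted)
            self.spare[donor] -= take
            granted += take
            if granted == needed:
                break
        return granted


def management_entitlement(groups, requester, needed):
    """Satisfy a requester group's need, first locally, then from other groups."""
    granted = requester.claim_spare(needed)
    for group in groups:
        if granted == needed:
            break
        if group is not requester:
            granted += group.claim_spare(needed - granted)
    return granted


if __name__ == "__main__":
    g1 = DeploymentGroup("g1", {"mp1": 0, "mp2": 2})
    g2 = DeploymentGroup("g2", {"mp3": 10})
    print(management_entitlement([g1, g2], g1, needed=5))   # -> 5 (2 local + 3 donated by g2)
```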
Abstract:
A firmware update process for a self-virtualizing IO resource such as an SRIOV adapter is incorporated into a platform firmware update process to systematically update the resource firmware in a manner that is for the most part transparent to the logical partitions sharing the adapter. In particular, resource firmware associated with a self-virtualizing IO resource is bundled with firmware for at least one adjunct partition associated with that self-virtualizing IO resource within a common firmware image so that, upon restart of the adjunct partition to use the updated firmware image, the resource firmware is also updated, with a logical partition that uses the self-virtualizing IO resource maintained in an active state during the restart, and without requiring the self-virtualizing IO resource to be deconfigured from the logical partition.
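A simplified sketch, assuming a single bundled image, is shown below: the updated platform image carries the adapter (resource) firmware alongside the adjunct partition firmware, so restarting the adjunct against the new image also refreshes the adapter firmware while the client logical partition remains active. All names are illustrative.

```python
# Illustrative bundling sketch: adjunct firmware and adapter firmware travel in
# one image, so an adjunct restart updates both without deconfiguring the
# client logical partition.
from dataclasses import dataclass


@dataclass
class FirmwareImage:
    adjunct_version: str
    resource_version: str        # adapter firmware bundled in the same image


class AdjunctPartition:
    def __init__(self, image: FirmwareImage):
        self.image = image

    def restart_with(self, new_image: FirmwareImage, client_lpar_state: str) -> str:
        assert client_lpar_state == "active"     # client LPAR is never deconfigured
        self.image = new_image                   # adjunct and adapter firmware update together
        return (f"adjunct restarted: adjunct fw {new_image.adjunct_version}, "
                f"adapter fw {new_image.resource_version}; client LPAR still active")


if __name__ == "__main__":
    adjunct = AdjunctPartition(FirmwareImage("1.0", "A"))
    print(adjunct.restart_with(FirmwareImage("1.1", "B"), client_lpar_state="active"))
```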
Abstract:
An apparatus, program product and method support the deallocation of a data structure in a multithreaded computer without requiring the use of computationally expensive semaphores or spin locks. Specifically, access to a data structure is governed by a shared pointer that, when a request is received to deallocate the data structure, is initially set to a value that indicates to any thread that later accesses the pointer that the data structure is not available. In addition, to address any thread that already holds a copy of the shared pointer, and thus is capable of accessing the data structure via the shared pointer after the initiation of the request, all such threads are monitored to determine whether any thread is still using the shared pointer by determining whether any thread is executing program code that is capable of using the shared pointer to access the data structure. Once this condition is met, it is ensured that no thread can potentially access the data structure via the shared pointer, and as such, the data structure may then be deallocated.
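The scheme resembles read-copy-update style reclamation and can be sketched roughly as follows. The small lock and busy wait used here are shortcuts to keep the example brief; the abstract's point is precisely that the real mechanism avoids expensive semaphores and spin locks, so treat this only as a model of the ordering. The reader counter is an assumption standing in for "determining whether any thread is executing program code that is capable of using the shared pointer".

```python
# Model of the ordering only: new readers see a sentinel and stop; the
# deallocator waits until no thread is still inside code that could use an
# old copy of the shared pointer, then reclaims the structure.
import threading

UNAVAILABLE = object()        # sentinel stored in the shared pointer

shared_ptr = {"data": 42}     # the shared data structure
readers_in_region = 0         # threads currently in code that may use the pointer
lock = threading.Lock()       # protects only the counter in this simplified model


def reader():
    global readers_in_region
    with lock:
        readers_in_region += 1
    try:
        snapshot = shared_ptr
        if snapshot is not UNAVAILABLE:
            _ = snapshot["data"]          # safe: deallocation waits for us
    finally:
        with lock:
            readers_in_region -= 1


def deallocate():
    global shared_ptr
    shared_ptr = UNAVAILABLE              # later readers see "not available"
    while True:                           # wait out readers that already hold a copy
        with lock:
            if readers_in_region == 0:
                break
    # no thread can reach the old structure any more; it may now be reclaimed


if __name__ == "__main__":
    threads = [threading.Thread(target=reader) for _ in range(4)]
    for t in threads:
        t.start()
    deallocate()
    for t in threads:
        t.join()
    print("deallocated safely")
```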
Abstract:
Controlled partition shut-down is provided within a shared memory partition data processing system including a shared memory partition, a paging service partition, a hypervisor and a shared memory pool within physical memory. The hypervisor manages access to logical pages within the pool and page-out of pages from the pool to external paging storage via the paging service partition. A respective paging service stream exists between the paging service partition and hypervisor for each shared memory partition, with each stream including a stream state. The control method includes: responsive to a shut-down initiating event, notifying the paging service partition to shut down, and determining whether a shared memory partition is currently active, and if so, signaling the hypervisor to complete paging activity for the active memory partition and waiting for its stream state to enter a suspended or a completed state before automatically shutting down the paging service partition.
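A hedged sketch of this shut-down ordering follows; stream states and names are illustrative. On the shut-down event the paging service partition is notified, the hypervisor is signalled to complete paging for each still-active shared memory partition, and the paging service partition waits for each stream to reach a suspended or completed state before shutting itself down.

```python
# Illustrative ordering only; in a real system the wait would block on the
# paging service stream's state transition rather than set it directly.
def controlled_shutdown(streams: dict) -> list:
    """streams maps shared-memory-partition name -> current paging stream state."""
    log = ["paging service partition notified of shut-down"]
    for partition, state in streams.items():
        if state == "active":
            log.append(f"signal hypervisor to complete paging for {partition}")
            streams[partition] = "completed"     # stand-in for waiting on the stream
        log.append(f"{partition} stream state: {streams[partition]}")
    log.append("paging service partition shut down")
    return log


if __name__ == "__main__":
    for line in controlled_shutdown({"smp1": "active", "smp2": "suspended"}):
        print(line)
```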
Abstract:
A mechanism is provided for directed resource folding for power management. The mechanism receives a set of static platform characteristics and a set of dynamic platform characteristics for a set of resources associated with a data processing system, thereby forming characteristic information. The mechanism determines whether one or more conditions have been met for each resource in the set of resources using the characteristic information. Responsive to the one or more conditions being met, the mechanism performs a resource optimization to determine at least one of a first subset of resources in the set of resources to keep active and a second subset of resources in the set of resources to dynamically fold. Based on the resource optimization, the mechanism performs either a virtual resource optimization to optimally schedule the first subset of resources or a physical resource optimization to dynamically fold the second subset of resources.
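The decision pipeline can be illustrated with a rough sketch; the condition tested and the utilization threshold are assumptions, since the abstract leaves the concrete conditions unspecified.

```python
# Illustrative split of resources into a keep-active subset (for scheduling)
# and a fold subset (for dynamic folding), driven by per-resource
# characteristic information.
def directed_folding(resources: dict, utilization_threshold: float = 0.2):
    """resources maps name -> {'static': {...}, 'dynamic': {'utilization': float}}."""
    keep_active, fold = [], []
    for name, info in resources.items():
        conditions_met = info["dynamic"]["utilization"] < utilization_threshold
        if conditions_met:
            fold.append(name)            # candidate for dynamic folding
        else:
            keep_active.append(name)     # candidate for optimal scheduling
    return keep_active, fold


if __name__ == "__main__":
    res = {
        "core0": {"static": {"chip": 0}, "dynamic": {"utilization": 0.75}},
        "core1": {"static": {"chip": 0}, "dynamic": {"utilization": 0.05}},
    }
    active, folded = directed_folding(res)
    print("keep active:", active)   # -> ['core0']
    print("fold:", folded)          # -> ['core1']
```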