Abstract:
Relocating data in a virtualized environment maintained by a hypervisor administering access to memory with a Cache Page Table (‘CPT’) and a Physical Page Table (‘PPT’), the CPT and PPT including virtual to physical mappings. Relocating data includes converting the virtual to physical mappings of the CPT to virtual to logical mappings; establishing a Logical Memory Block (‘LMB’) relocation tracker that includes logical addresses of an LMB, source physical addresses of the LMB, target physical addresses of the LMB, a translation block indicator for each relocation granule, and a pin count associated with each relocation granule; establishing a PPT entry tracker including PPT entries corresponding to the LMB to be relocated; relocating the LMB in a number of relocation granules including blocking translations to the relocation granules during relocation; and removing the logical addresses from the LMB relocation tracker.
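
As a rough illustration of the bookkeeping this abstract describes, the following C sketch lays out hypothetical LMB relocation tracker and PPT entry tracker structures and a per-granule relocation step that blocks translations and waits for the pin count to drain. All names, sizes, and the busy-wait are assumptions made for this example, not the patented implementation.

/* Illustrative sketch only: the structures and field names below are
 * assumptions made for this example, not the patent's implementation. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define GRANULES_PER_LMB 4                 /* assumed granule count per LMB */
#define GRANULE_SIZE     (64 * 1024ULL)    /* assumed relocation granule size */

/* Per-granule state: whether translations are blocked and how many
 * in-flight accesses still pin the granule. */
struct reloc_granule {
    bool     translation_blocked;   /* translation block indicator */
    uint32_t pin_count;             /* pins that must drain before the copy */
};

/* LMB relocation tracker: logical, source-physical and target-physical
 * addresses of the LMB, plus the per-granule state. */
struct lmb_reloc_tracker {
    uint64_t logical_base;
    uint64_t source_phys_base;
    uint64_t target_phys_base;
    struct reloc_granule granule[GRANULES_PER_LMB];
};

/* PPT entry tracker: the PPT entries that map the LMB being relocated
 * and must be repointed once the copy completes. */
struct ppt_entry_tracker {
    uint64_t *ppt_entries;
    size_t    num_entries;
};

/* Relocate one granule: block translations, wait for pins to drain,
 * copy source to target, then unblock translations. */
static void relocate_granule(struct lmb_reloc_tracker *t, size_t i,
                             void (*copy)(uint64_t src, uint64_t dst, uint64_t len))
{
    struct reloc_granule *g = &t->granule[i];

    g->translation_blocked = true;          /* block translations during relocation */
    while (g->pin_count != 0)
        ;                                   /* busy-wait for brevity */
    copy(t->source_phys_base + i * GRANULE_SIZE,
         t->target_phys_base + i * GRANULE_SIZE, GRANULE_SIZE);
    g->translation_blocked = false;         /* translations now resolve to the new copy */
}

static void fake_copy(uint64_t src, uint64_t dst, uint64_t len)
{
    printf("copy %#llx -> %#llx (%llu bytes)\n",
           (unsigned long long)src, (unsigned long long)dst,
           (unsigned long long)len);
}

int main(void)
{
    struct lmb_reloc_tracker t = { .logical_base = 0x10000000,
                                   .source_phys_base = 0x20000000,
                                   .target_phys_base = 0x30000000 };
    for (size_t i = 0; i < GRANULES_PER_LMB; i++)
        relocate_granule(&t, i, fake_copy);
    return 0;
}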
Abstract:
In an embodiment, a target number of discretionary pages is calculated for a first partition. If the target number of discretionary pages for the first partition is less than a number of the discretionary pages that are allocated to the first partition, a result page is found that is allocated to the first partition and the result page is deallocated from the first partition. If the target number of discretionary pages for the first partition is greater than the number of the discretionary pages that are allocated to the first partition, a free page is allocated to the first partition.
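
The allocation rule in this abstract reduces to a three-way comparison between a partition's target and allocated discretionary page counts. A minimal, self-contained C sketch follows; the partition structure, the free-page counter, and the one-page-per-step behavior are assumptions made for illustration.

/* Illustrative sketch of the discretionary-page balancing rule described
 * above; the types and the free-page counter are hypothetical. */
#include <stddef.h>
#include <stdio.h>

/* Hypothetical partition state: how many discretionary pages it holds
 * and what its calculated target is. */
struct partition {
    size_t discretionary_allocated;
    size_t discretionary_target;
};

static size_t free_pages = 128;   /* assumed size of the free pool */

/* Move the partition one page toward its target, per the rule above:
 * over target -> find and deallocate one of its pages; under target ->
 * allocate a free page to it. */
static void balance_discretionary(struct partition *p)
{
    if (p->discretionary_target < p->discretionary_allocated) {
        p->discretionary_allocated--;   /* result page found and deallocated */
        free_pages++;
    } else if (p->discretionary_target > p->discretionary_allocated &&
               free_pages > 0) {
        free_pages--;                   /* free page allocated to the partition */
        p->discretionary_allocated++;
    }
    /* Equal: nothing to do. */
}

int main(void)
{
    struct partition part = { .discretionary_allocated = 10,
                              .discretionary_target    = 7 };
    while (part.discretionary_allocated != part.discretionary_target)
        balance_discretionary(&part);
    printf("allocated=%zu free=%zu\n", part.discretionary_allocated, free_pages);
    return 0;
}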
Abstract:
Dynamic control of memory affinity is provided for a shared memory logical partition within a shared memory partition data processing system having a plurality of nodes. The memory affinity control approach includes: determining one or more home node assignments for the shared memory logical partition, with each assigned home node being one node of the plurality of nodes of the system; determining a desired physical page level per node for the shared memory logical partition; and allowing the shared memory logical partition to run, using the home node assignment(s) and the desired physical page level(s) both in dispatching tasks to physical processors in the nodes and in hypervisor page memory management, to dynamically control memory affinity of the shared memory logical partition in the data processing system.
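
A minimal C sketch of the bookkeeping this abstract implies is shown below. The even split of desired pages across home nodes and the home-node-first dispatch choice are assumed policies for illustration, not the patented method.

/* Illustrative sketch of home-node bookkeeping for a shared memory
 * logical partition; the structures and policies are assumptions. */
#include <stddef.h>
#include <stdio.h>

#define MAX_NODES 4   /* assumed number of nodes in the system */

struct shared_memory_partition {
    int    home_node[MAX_NODES];      /* 1 if the node is an assigned home node */
    size_t desired_pages[MAX_NODES];  /* desired physical page level per node */
    size_t entitled_pages;            /* partition's memory entitlement, in pages */
};

/* Spread the partition's desired physical pages evenly across its home
 * nodes (one simple policy; the abstract does not fix a formula). */
static void set_desired_page_levels(struct shared_memory_partition *p)
{
    size_t homes = 0;
    for (int n = 0; n < MAX_NODES; n++)
        homes += p->home_node[n] ? 1 : 0;
    if (homes == 0)
        return;
    for (int n = 0; n < MAX_NODES; n++)
        p->desired_pages[n] = p->home_node[n] ? p->entitled_pages / homes : 0;
}

/* When dispatching, prefer a physical processor on a home node. */
static int pick_dispatch_node(const struct shared_memory_partition *p,
                              int preferred_node)
{
    if (p->home_node[preferred_node])
        return preferred_node;
    for (int n = 0; n < MAX_NODES; n++)
        if (p->home_node[n])
            return n;
    return preferred_node;   /* no home node: fall back to the caller's choice */
}

int main(void)
{
    struct shared_memory_partition p = { .home_node = {1, 0, 1, 0},
                                         .entitled_pages = 1024 };
    set_desired_page_levels(&p);
    printf("dispatch on node %d, desired pages on node 0: %zu\n",
           pick_dispatch_node(&p, 1), p.desired_pages[0]);
    return 0;
}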
Abstract:
Hypervisor page fault processing logic is provided for a shared memory partition data processing system. The logic, responsive to an executing virtual processor of the shared memory partition data processing system encountering a hypervisor page fault, allocates an input/output (I/O) paging request to the virtual processor from an I/O paging request pool and increments an outstanding I/O paging request count for the virtual processor. A determination is then made whether the outstanding I/O paging request count for the virtual processor is at a predefined threshold; if not, the logic places the virtual processor in a wait state with interrupt wake-up reasons enabled, based on the virtual processor's state; otherwise, it places the virtual processor in a wait state with interrupt wake-up reasons disabled.
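
The fault path this abstract describes can be sketched as follows in C; the pool size, the threshold value, and the wait-state flags are assumptions for illustration.

/* Illustrative sketch of the page-fault path described above; all names
 * (vcpu, the request pool, OUTSTANDING_THRESHOLD) are assumptions. */
#include <stdbool.h>
#include <stdio.h>

#define OUTSTANDING_THRESHOLD 4   /* assumed predefined threshold */

struct io_paging_request { int id; };

struct vcpu {
    int  outstanding_io_paging;       /* outstanding I/O paging request count */
    bool waiting;
    bool wakeup_interrupts_enabled;   /* wake-up reasons enabled in the wait state? */
};

/* Hypothetical pool allocator for I/O paging requests. */
static struct io_paging_request pool[16];
static int pool_next;

static struct io_paging_request *alloc_io_paging_request(void)
{
    return (pool_next < 16) ? &pool[pool_next++] : NULL;
}

/* Handle a hypervisor page fault on the given virtual processor. */
static void hypervisor_page_fault(struct vcpu *v)
{
    struct io_paging_request *req = alloc_io_paging_request();
    if (req == NULL)
        return;                     /* pool exhausted: not covered by this sketch */
    v->outstanding_io_paging++;     /* increment the outstanding count */

    v->waiting = true;
    /* Below the threshold the vcpu may still be woken for interrupts
     * (depending on its state); at the threshold wake-ups are disabled. */
    v->wakeup_interrupts_enabled =
        (v->outstanding_io_paging < OUTSTANDING_THRESHOLD);
}

int main(void)
{
    struct vcpu v = {0};
    for (int i = 0; i < 5; i++)
        hypervisor_page_fault(&v);
    printf("outstanding=%d wakeups_enabled=%d\n",
           v.outstanding_io_paging, v.wakeup_interrupts_enabled);
    return 0;
}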
Abstract:
In response to a hypervisor page fault for memory that is not resident in a shared memory pool, an I/O paging request is sent to an external storage paging space. In response to a paging service partition encountering an I/O paging error, a paging failure indication is sent to the hypervisor. A simulated machine check interrupt instruction is sent from the hypervisor to the shared memory partition and a machine check handler obtains control. The machine check handler performs data analysis utilizing an error log in an attempt to isolate the I/O paging error to a process or a set of processes in the shared memory partition. The process or set of processes associated with the I/O paging error, or the shared memory partition itself, may be terminated. Finally, the shared memory partition may clear or initialize the page associated with the I/O paging error.
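
A hedged C sketch of the machine check handler's decision described above follows; the error-log fields and the helper routines are hypothetical.

/* Illustrative sketch of the error-isolation flow described above; the
 * error-log format and helper names are hypothetical. */
#include <stdio.h>

struct paging_error_log {
    unsigned long faulting_page;   /* page whose I/O paging request failed */
    int           owning_pid;      /* -1 if no single process could be isolated */
};

/* Hypothetical actions taken by the shared memory partition's OS. */
static void terminate_process(int pid)     { printf("terminate pid %d\n", pid); }
static void terminate_partition(void)      { printf("terminate partition\n"); }
static void clear_page(unsigned long page) { printf("clear page %#lx\n", page); }

/* Machine check handler invoked after the hypervisor delivers the
 * simulated machine check interrupt for an I/O paging error. */
static void machine_check_handler(const struct paging_error_log *log)
{
    if (log->owning_pid >= 0) {
        /* Error isolated to a process (or set of processes): terminate it
         * rather than the whole partition. */
        terminate_process(log->owning_pid);
    } else {
        /* Could not isolate: the partition itself may have to be terminated. */
        terminate_partition();
    }
    /* The page backing the failed I/O can then be cleared or initialized. */
    clear_page(log->faulting_page);
}

int main(void)
{
    struct paging_error_log log = { .faulting_page = 0x7f000, .owning_pid = 1234 };
    machine_check_handler(&log);
    return 0;
}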
Abstract:
A method, apparatus, system, and signal-bearing medium that, in an embodiment, associate a persistent indicator with allocated memory and determine, based on the persistent indicator, whether to preserve the contents of the allocated memory during an IPL (Initial Program Load). If the persistent indicator associated with the memory is on, the contents of that memory are preserved; if the persistent indicator is off, the contents of that memory are discarded.
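
The rule is a single conditional. A minimal C sketch is shown below, with an assumed memory-block descriptor standing in for allocated memory.

/* Illustrative sketch of the persistent-indicator rule described above;
 * the memory-block descriptor and the IPL step are assumptions. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct memory_block {
    bool persistent;      /* persistent indicator associated with the allocation */
    char contents[64];    /* stand-in for the allocated memory */
};

/* During IPL, keep the contents of blocks whose persistent indicator is
 * on and discard (here: zero) the contents of blocks whose indicator is off. */
static void ipl_process_block(struct memory_block *b)
{
    if (!b->persistent)
        memset(b->contents, 0, sizeof b->contents);   /* discard */
    /* else: preserve contents across the IPL */
}

int main(void)
{
    struct memory_block keep = { .persistent = true,  .contents = "kept" };
    struct memory_block drop = { .persistent = false, .contents = "gone" };
    ipl_process_block(&keep);
    ipl_process_block(&drop);
    printf("keep='%s' drop='%s'\n", keep.contents, drop.contents);
    return 0;
}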
Abstract:
Hypervisor managed memory paging is provided in a data processing system having multiple logical partitions. The data processing system includes a shared memory pool defined within physical memory. The shared memory pool includes a volume of physical memory with dynamically adjustable sub-volumes or sets of physical pages associated with the multiple logical partitions. Each sub-volume or set is associated with a particular logical partition and includes mapped logical memory pages for that logical partition. A hypervisor memory manager interfaces the multiple logical partitions and the shared memory pool, and manages access to logical memory pages within the shared memory pool. The hypervisor memory manager further manages page-out and page-in of logical memory pages from the shared memory pool to one or more external paging devices. This page-out and page-in managing by the hypervisor memory manager is transparent to the multiple logical partitions.
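
A toy C sketch of the shared-memory-pool bookkeeping follows. The tiny pool size, the naive victim choice, and the structure fields are assumptions for illustration; a real page-out would write to an external paging device rather than print a message, and remains transparent to the partitions.

/* Illustrative sketch of the shared-memory-pool bookkeeping described
 * above; structure names and the page-out policy are assumptions. */
#include <stdio.h>

#define POOL_PAGES 8   /* assumed (tiny) shared memory pool for illustration */

struct pool_page {
    int  owner_lpar;     /* logical partition this physical page backs, -1 if free */
    long logical_page;   /* logical memory page it currently holds */
    int  resident;       /* 0 if paged out to an external paging device */
};

static struct pool_page pool[POOL_PAGES];

/* Page in a logical page for a partition; if the pool is full, page out
 * some other resident page first (naive victim choice for brevity). */
static void hv_page_in(int lpar, long logical_page)
{
    for (int i = 0; i < POOL_PAGES; i++) {
        if (pool[i].owner_lpar == -1) {   /* free physical page available */
            pool[i] = (struct pool_page){ lpar, logical_page, 1 };
            return;
        }
    }
    /* Pool full: page out a victim to the external paging device, then
     * reuse its physical page. The owning partition never sees this. */
    printf("page-out lpar %d logical %ld\n", pool[0].owner_lpar, pool[0].logical_page);
    pool[0] = (struct pool_page){ lpar, logical_page, 1 };
}

int main(void)
{
    for (int i = 0; i < POOL_PAGES; i++)
        pool[i].owner_lpar = -1;
    for (long p = 0; p < 10; p++)
        hv_page_in((int)(p % 3), p);   /* three partitions sharing the pool */
    return 0;
}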