Abstract:
Mechanisms for reducing memory overhead of a page table in a dynamic logical partitioning (LPAR) environment are provided. Each LPAR, upon its creation, is allowed to declare any maximum main memory size for the LPAR as long as the aggregate maximum main memory size for all LPARs does not exceed the total amount of available main memory. A single page table is used for all of the LPARs. Thus, the only page table in the computing system is shared by all LPARs, and every memory access operation from any LPAR must go through the same page table for address translation. As a result, since only one page table is utilized, and the aggregate size of the main memory apportioned to the LPARs is limited to the size of the main memory, the size of the page table cannot exceed the size of the main memory.
Abstract:
A system and method for reducing memory overhead of a page table in a dynamic logical partitioning (LPAR) environment are provided. Each LPAR, upon its creation, is allowed to declare any maximum main memory size for the LPAR as long as the aggregate maximum main memory size for all LPARs does not exceed the total amount of available main memory. A single page table is used for all of the LPARs. Thus, the only page table in the computing system is shared by all LPARs, and every memory access operation from any LPAR must go through the same page table for address translation. As a result, since only one page table is utilized, and the aggregate size of the main memory apportioned to the LPARs is limited to the size of the main memory, the size of the page table cannot exceed the size of the main memory.
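To make the admission rule above concrete, the following C sketch (all names, the 64 GiB figure, and the simple global counter are illustrative assumptions, not the patented implementation) accepts a new LPAR's declared maximum only while the aggregate of declared maxima stays within main memory, which is the condition that bounds the single shared page table.

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define TOTAL_MAIN_MEMORY (64ULL << 30)   /* 64 GiB of installed memory (assumed) */

    static uint64_t aggregate_declared_max;   /* sum of all LPARs' declared maxima    */

    /* Admit a new LPAR only if the aggregate declared maximum stays within the
     * installed main memory; this keeps the single shared page table from ever
     * having to cover more than main memory. */
    static bool create_lpar(const char *name, uint64_t declared_max)
    {
        if (aggregate_declared_max + declared_max > TOTAL_MAIN_MEMORY) {
            printf("reject %s: aggregate maximum would exceed main memory\n", name);
            return false;
        }
        aggregate_declared_max += declared_max;
        printf("created %s with a declared maximum of %llu bytes\n", name,
               (unsigned long long)declared_max);
        return true;
    }

    int main(void)
    {
        create_lpar("lpar0", 32ULL << 30);    /* accepted                              */
        create_lpar("lpar1", 32ULL << 30);    /* accepted: aggregate equals the total  */
        create_lpar("lpar2",  1ULL << 30);    /* rejected: would exceed the total      */
        return 0;
    }
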
Abstract:
A processor thread load balancing manager employs an operating system of an information handling system (IHS) that determines a process tree of data-sharing threads in an application that the IHS executes. The load balancing manager assigns a home processor to each thread of the executing application's process tree and dispatches the process tree to the home processor. The load balancing manager then determines whether a particular poaching processor of a virtual or real processor group is available to execute threads of the executing application that are assigned to the home processor of the processor group. If the ready or run queues of a prospective poaching processor are empty, the load balancing manager may move, or poach, a thread or threads from the home processor's ready queue to the ready queue of the prospective poaching processor. The poaching processor executes the poached threads to provide load balancing for the IHS.
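A minimal C sketch of the poaching step described above, assuming a fixed-size array as the ready queue and hypothetical structure and function names; in a real kernel the queues, locking, and dispatch policy would be far more involved.

    #include <stdio.h>

    #define MAX_THREADS 8

    struct ready_queue {
        int thread_ids[MAX_THREADS];
        int count;
    };

    /* Move up to 'n' threads from the home processor's ready queue to the
     * prospective poaching processor's ready queue, but only when the poacher
     * is idle (its ready queue is empty). */
    static int poach_threads(struct ready_queue *home,
                             struct ready_queue *poacher, int n)
    {
        int moved = 0;

        if (poacher->count != 0)
            return 0;                     /* poacher is busy: do not poach */

        while (moved < n && home->count > 0 && poacher->count < MAX_THREADS) {
            poacher->thread_ids[poacher->count++] = home->thread_ids[--home->count];
            moved++;
        }
        return moved;
    }

    int main(void)
    {
        struct ready_queue home    = { {11, 12, 13, 14}, 4 };
        struct ready_queue poacher = { {0}, 0 };

        int moved = poach_threads(&home, &poacher, 2);
        printf("poached %d thread(s); home queue now %d, poacher queue now %d\n",
               moved, home.count, poacher.count);
        return 0;
    }
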
Abstract:
A mechanism for generating pre-translated segments for use in virtual-to-real address translation is provided, in which segments determined to meet a density threshold are promoted to a pre-translated segment class. The pages of these segments are moved to a contiguous portion of memory, and the segment table entry corresponding to the segment is updated to indicate that the segment is a pre-translated segment and to include the base real address of the contiguous portion of memory. In one embodiment, as each page is moved, its page table entry is updated to point to the new location of the page so that the page remains accessible during promotion of the segment to a pre-translated segment. In this way, virtual-to-real address translation may be performed by concatenating the segment base real address, the page identifier, and a byte offset into the page.
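As a rough illustration of the translation path for a promoted segment, the following C sketch (page size, pages per segment, and all names are assumptions) combines the segment's base real address with the page index and byte offset, avoiding a page-table walk for pre-translated segments.

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT    12u                      /* 4 KiB pages (assumed)       */
    #define PAGES_PER_SEG 16u                      /* pages per segment (assumed) */
    #define PAGE_MASK     ((1u << PAGE_SHIFT) - 1)

    struct segment_entry {
        int      pretranslated;                    /* promoted: pages contiguous  */
        uint64_t base_real_addr;                   /* base of contiguous region   */
    };

    /* Translate a segment-relative virtual address for a pre-translated segment
     * by combining the base real address, the page index, and the byte offset. */
    static uint64_t translate(const struct segment_entry *seg, uint64_t vaddr)
    {
        uint64_t page_index  = (vaddr >> PAGE_SHIFT) % PAGES_PER_SEG;
        uint64_t byte_offset = vaddr & PAGE_MASK;

        if (!seg->pretranslated)
            return UINT64_MAX;                     /* would fall back to a walk   */

        /* Base | page | offset: effectively a concatenation when the base is
         * aligned to the segment size. */
        return seg->base_real_addr + (page_index << PAGE_SHIFT) + byte_offset;
    }

    int main(void)
    {
        struct segment_entry seg = { 1, 0x40000000ULL };

        printf("real address: 0x%llx\n",
               (unsigned long long)translate(&seg, 0x3123));
        return 0;
    }
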
Abstract:
Techniques for preserving memory affinity in a computer system are disclosed. In response to a request for memory access to a page within a memory affinity domain, a determination is made as to whether the request is initiated by a processor associated with the memory affinity domain. If the request is not initiated by a processor associated with the memory affinity domain, a determination is made as to whether there is a page ID match with an entry within a page migration tracking module associated with the memory affinity domain. If there is no page ID match, an entry within the page migration tracking module is selected to be updated with a new page ID and a new memory affinity ID. If there is a page ID match, a further determination is made as to whether there is a memory affinity ID match with the entry having the matching page ID. If there is no memory affinity ID match, the entry is updated with a new memory affinity ID; if there is a memory affinity ID match, an access counter of the entry is incremented.
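The following C sketch illustrates the described match-and-update logic for a per-domain page migration tracking module; the table size, the victim-selection policy, and the counter handling on an affinity change are assumptions made for the example.

    #include <stdio.h>
    #include <stdint.h>

    #define TRACK_ENTRIES 4

    struct track_entry {
        uint64_t page_id;
        int      affinity_id;             /* affinity domain of the remote accessor */
        unsigned access_count;
    };

    static struct track_entry table[TRACK_ENTRIES];

    /* Apply the match-and-update rules for an access to 'page_id' initiated by a
     * processor in the remote affinity domain 'affinity_id'. */
    static void record_remote_access(uint64_t page_id, int affinity_id)
    {
        for (int i = 0; i < TRACK_ENTRIES; i++) {
            if (table[i].page_id == page_id) {
                if (table[i].affinity_id == affinity_id) {
                    table[i].access_count++;        /* both IDs match: count it */
                } else {
                    /* Page ID matches but the accessing domain changed: record the
                     * new memory affinity ID (resetting the counter is an assumption). */
                    table[i].affinity_id  = affinity_id;
                    table[i].access_count = 1;
                }
                return;
            }
        }

        /* No page ID match: select an entry to reuse (entry 0 here for brevity;
         * the victim-selection policy is not specified in the abstract). */
        table[0].page_id      = page_id;
        table[0].affinity_id  = affinity_id;
        table[0].access_count = 1;
    }

    int main(void)
    {
        record_remote_access(0x1000, 2);
        record_remote_access(0x1000, 2);
        printf("page 0x1000: affinity domain %d, %u remote accesses\n",
               table[0].affinity_id, table[0].access_count);
        return 0;
    }
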
Abstract:
A system and method are presented for allowing jobs originating from different partitions to simultaneously utilize different hardware threads on a processor by concatenating partition identifiers with virtual page identifiers within the processor's translation lookaside buffer. The device includes a translation lookaside buffer that translates concatenated virtual addresses to system-wide real addresses. The device generates each concatenated virtual address from a partition identifier, which corresponds to a job's originating partition, and a virtual page identifier, which corresponds to the executing instruction, such as an instruction address or data address. In turn, each concatenated virtual address is distinct, and the translation lookaside buffer translates it to a unique system-wide real address. As such, jobs originating from different partitions are able to execute simultaneously on the device and, therefore, fully utilize each of the device's hardware threads.
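A short C sketch of the tag formation described above, with assumed page size and field widths: the same virtual address in two different partitions produces two distinct concatenated tags, so their TLB entries never collide.

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12u                /* 4 KiB pages (assumed)         */
    #define VPN_BITS   (64u - PAGE_SHIFT) /* virtual page identifier width */

    /* Build the concatenated virtual address used for TLB lookup: the partition
     * identifier is prefixed to the virtual page identifier taken from the
     * instruction or data address. */
    static uint64_t concatenated_tag(uint8_t partition_id, uint64_t vaddr)
    {
        uint64_t vpn = vaddr >> PAGE_SHIFT;
        return ((uint64_t)partition_id << VPN_BITS) | vpn;
    }

    int main(void)
    {
        uint64_t vaddr = 0xdeadb000ULL;   /* same virtual address in two partitions */

        printf("partition 1 tag: 0x%llx\n",
               (unsigned long long)concatenated_tag(1, vaddr));
        printf("partition 2 tag: 0x%llx\n",
               (unsigned long long)concatenated_tag(2, vaddr));
        return 0;
    }
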
Abstract:
An operating system of an information handling system (IHS) determines a process tree of data-sharing threads in an application that the IHS executes. A load balancing manager assigns a home processor to each thread of the executing application's process tree and dispatches the process tree to the home processor. The load balancing manager then determines whether a particular poaching processor of a virtual or real processor group is available to execute threads of the executing application that are assigned to the home processor of the processor group. If the ready or run queues of a prospective poaching processor are empty, the load balancing manager may move, or poach, a thread or threads from the home processor's ready queue to the ready queue of the prospective poaching processor. The poaching processor executes the poached threads to provide load balancing for the IHS.