1. System and method to prioritize large memory page allocation in virtualized systems
    Granted invention patent (in force)

    Publication No.: US09116829B2

    Publication Date: 2015-08-25

    Application No.: US14321174

    Filing Date: 2014-07-01

    Applicant: VMware, Inc.

    Abstract: The prioritization of large memory page mapping is a function of the access bits in the L1 page table. In a first phase of operation, the number of set access bits in each of the L1 page tables is counted periodically and a current count value is calculated therefrom. During the first phase, no pages are mapped large even if identified as such. After the first phase, the current count value is used to prioritize among potential large memory pages to determine which pages to map large. The system continues to calculate the current count value even after the first phase ends. When using hardware assist, the access bits in the nested page tables are used, and when using a software MMU, the access bits in the shadow page tables are used for large page prioritization.

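    As a rough, non-authoritative illustration of the sampling step the abstract describes, the C sketch below counts the set access ("accessed") bits in the leaf-level (L1) page table entries backing one candidate 2 MiB region and folds each sample into a running count value. The structure names, the decaying-average formula, and the bookkeeping layout are assumptions for illustration, not the patented implementation.

```c
#include <stdint.h>

#define ENTRIES_PER_L1_TABLE 512          /* 512 x 4 KiB entries = one 2 MiB region */
#define PTE_ACCESSED_BIT     (1ULL << 5)  /* x86 page table entry "A" (accessed) bit */

/* Hypothetical bookkeeping for one potential large page. */
typedef struct {
    uint64_t *l1_table;       /* the small-page entries backing the region */
    unsigned  current_count;  /* running count of set access bits */
} large_page_candidate;

/* One periodic sampling pass over a candidate's L1 page table. */
static void sample_access_bits(large_page_candidate *c)
{
    unsigned set_this_period = 0;

    for (int i = 0; i < ENTRIES_PER_L1_TABLE; i++) {
        if (c->l1_table[i] & PTE_ACCESSED_BIT) {
            set_this_period++;
            c->l1_table[i] &= ~PTE_ACCESSED_BIT;  /* clear so the next period sees only
                                                     fresh accesses; a real hypervisor
                                                     would also flush TLBs here */
        }
    }

    /* Fold the sample into the running value; a simple decaying average is one
     * plausible way to maintain the "current count value" the abstract mentions. */
    c->current_count = (c->current_count + set_this_period) / 2;
}
```

    Per the abstract, the counting itself is the same whether the access bits come from nested page tables (hardware assist) or shadow page tables (software MMU); only the table being walked differs.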

2. System and method to prioritize large memory page allocation in virtualized systems
    Granted invention patent (in force)

    Publication No.: US08769184B2

    Publication Date: 2014-07-01

    Application No.: US13753322

    Filing Date: 2013-01-29

    Applicant: VMware, Inc.

    Abstract: The prioritization of large memory page mapping is a function of the access bits in the L1 page table. In a first phase of operation, the number of set access bits in each of the L1 page tables is counted periodically and a current count value is calculated therefrom. During the first phase, no pages are mapped large even if identified as such. After the first phase, the current count value is used to prioritize among potential large memory pages to determine which pages to map large. The system continues to calculate the current count value even after the first phase ends. When using hardware assist, the access bits in the nested page tables are used, and when using a software MMU, the access bits in the shadow page tables are used for large page prioritization.

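    Continuing the illustration for this related grant, the sketch below shows the decision side of the same mechanism: while the first phase is active nothing is mapped large, and afterwards candidates are ranked by their current count value so only the hottest ones are promoted. The promote_large_pages and map_region_as_large_page names, the sort-and-budget policy, and the stub are hypothetical, not the claimed method.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Minimal candidate record; current_count is maintained by periodic sampling
 * as in the previous sketch. */
typedef struct {
    unsigned current_count;
} large_page_candidate;

/* Stub standing in for the hypervisor call that installs a large mapping. */
static void map_region_as_large_page(large_page_candidate *c) { (void)c; }

static bool first_phase_active;   /* true while the initial counting phase runs */

/* Order candidates by current count value, hottest first. */
static int by_count_desc(const void *a, const void *b)
{
    const large_page_candidate *x = a, *y = b;
    return (int)y->current_count - (int)x->current_count;
}

/* Map at most `budget` of the highest-count candidates as large pages. */
static void promote_large_pages(large_page_candidate *cands, size_t n, size_t budget)
{
    if (first_phase_active)
        return;                   /* first phase: no pages are mapped large */

    qsort(cands, n, sizeof(cands[0]), by_count_desc);
    for (size_t i = 0; i < n && i < budget; i++)
        map_region_as_large_page(&cands[i]);
}
```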

3. System and method for improving memory locality of virtual machines
    Granted invention patent (in force)

    Publication No.: US08719545B2

    Publication Date: 2014-05-06

    Application No.: US13670223

    Filing Date: 2012-11-06

    Applicant: VMware, Inc.

    CPC classification number: G06F9/5033 G06F9/45558 G06F9/4856 G06F2009/4557

    Abstract: A system and related method of operation for migrating the memory of a virtual machine from one NUMA node to another. Once the VM is migrated to a new node, its memory pages are migrated while giving priority to the most heavily utilized pages, so that accesses to these pages become local as soon as possible. Various heuristics are described to enable different implementations for different situations or scenarios.

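    As a hedged sketch of the hottest-first ordering this abstract describes, the code below sorts a VM's pages by a simple utilization metric and migrates the most utilized ones to the target NUMA node first. The per-page metric, the batching, and all names are assumptions; the patent's actual heuristics may differ.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical per-page record kept by the hypervisor. */
typedef struct {
    uint64_t gpa;          /* guest-physical address of the page */
    unsigned utilization;  /* e.g., sampled access frequency */
} vm_page;

/* Order pages so the most utilized come first. */
static int hotter_first(const void *a, const void *b)
{
    const vm_page *x = a, *y = b;
    return (int)y->utilization - (int)x->utilization;
}

/* Stub standing in for the hypervisor's copy-and-remap primitive. */
static void copy_page_to_node(const vm_page *p, int node)
{
    (void)p; (void)node;
}

/* Migrate up to `batch` pages per pass, hottest pages first, so the most
 * frequently accessed memory becomes local to `target_node` soonest. */
static void migrate_vm_memory(vm_page *pages, size_t n, int target_node, size_t batch)
{
    qsort(pages, n, sizeof(pages[0]), hotter_first);
    for (size_t i = 0; i < n && i < batch; i++)
        copy_page_to_node(&pages[i], target_node);
}
```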
