EFFICIENTLY PURGING NON-ACTIVE BLOCKS IN NVM REGIONS USING POINTER ELIMINATION

    Publication number: US20200133846A1

    Publication date: 2020-04-30

    Application number: US16174264

    Filing date: 2018-10-29

    Applicant: VMware, Inc.

    Abstract: Techniques for efficiently purging non-active blocks in an NVM region of an NVM device using pointer elimination are provided. In one set of embodiments, a host system can, for each level 1 (L1) page table entry of each snapshot of the NVM region, determine whether a data block of the NVM region that is pointed to by the L1 page table entry is a non-active block, and if the data block is a non-active block, remove a pointer to the data block in the L1 page table entry and reduce a reference count parameter associated with the data block by 1. If the reference count parameter has reached zero at this point, the host system purges the data block from the NVM device to a mass storage device.
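The pointer-elimination walk in the abstract can be sketched as follows. This is a minimal illustration, not VMware's implementation: the snapshot layout, the `active_blocks` set, and the `purge_fn` eviction callback are all assumed names.

```python
def purge_non_active_blocks(snapshots, ref_counts, active_blocks, purge_fn):
    """Walk every L1 page table entry of every snapshot; eliminate pointers
    to non-active blocks and purge a block once its reference count hits 0."""
    purged = []
    for snapshot in snapshots:
        for l1_table in snapshot["l1_tables"]:
            for i, block in enumerate(l1_table):
                if block is None:
                    continue                      # empty entry, nothing to do
                if block not in active_blocks:    # non-active block
                    l1_table[i] = None            # remove pointer from L1 entry
                    ref_counts[block] -= 1        # reduce reference count by 1
                    if ref_counts[block] == 0:
                        purge_fn(block)           # evict from NVM to mass storage
                        purged.append(block)
    return purged
```

A block referenced by several snapshots is only purged when the last pointer to it has been eliminated, which is what the reference count tracks.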

    HIERARCHICAL RESOURCE TREE MEMORY OPERATIONS
    Invention application

    Publication number: US20190171390A1

    Publication date: 2019-06-06

    Application number: US15830850

    Filing date: 2017-12-04

    Applicant: VMware, Inc.

    Abstract: Hierarchical resource tree memory operations can include receiving, at a memory scheduler, an indication of a proposed modification to a value of a memory parameter of an object represented by a node of a hierarchical resource tree, wherein the proposed modification is made by a modifying entity, locking the node of the hierarchical resource tree by the memory scheduler, performing the proposed modification by the memory scheduler, wherein performing the proposed modification includes creating a working value of the memory parameter according to the proposed modification, determining whether the proposed modification violates a structural consistency of the hierarchical resource tree based on the working value, and replacing the value of the memory parameter with the working value of the memory parameter in response to determining that the proposed modification does not violate a structural consistency of the hierarchical resource tree based on the working value, and unlocking the node of the hierarchical resource tree by the memory scheduler.
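The lock-modify-validate-commit sequence in the abstract might look like the sketch below. The node layout and the concrete consistency rule (children must fit within the parent's limit) are illustrative assumptions; the patent does not specify them, and a real scheduler would coordinate locking across more of the tree.

```python
import threading

class Node:
    """A node of a hierarchical resource tree holding one memory parameter."""
    def __init__(self, limit, parent=None):
        self.limit = limit                # the memory parameter value
        self.parent = parent
        self.children = []
        self.lock = threading.Lock()
        if parent:
            parent.children.append(self)

def try_modify_limit(node, proposed_limit):
    """Lock the node, build a working value, check structural consistency,
    and replace the value only if the invariant still holds."""
    with node.lock:                                    # lock the node
        working = proposed_limit                       # working value
        consistent = working >= sum(c.limit for c in node.children)
        if node.parent:                                # siblings must still fit
            siblings = sum(c.limit for c in node.parent.children if c is not node)
            consistent = consistent and siblings + working <= node.parent.limit
        if consistent:
            node.limit = working                       # commit working value
        return consistent                              # unlock on scope exit
```

The key point the abstract makes is that the modification is staged as a working value and only replaces the real value after the consistency check passes.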

    HIGH AVAILABILITY FOR PERSISTENT MEMORY
    Invention application

    Publication number: US20180322023A1

    Publication date: 2018-11-08

    Application number: US15586020

    Filing date: 2017-05-03

    Applicant: VMware, Inc.

    Abstract: Techniques for implementing high availability for persistent memory are provided. In one embodiment, a first computer system can detect an alternating current (AC) power loss/cycle event and, in response to the event, can save data in a persistent memory of the first computer system to a memory or storage device that is remote from the first computer system and is accessible by a second computer system. The first computer system can then generate a signal for the second computer system subsequent to initiating or completing the save process, thereby allowing the second computer system to restore the saved data from the memory or storage device into its own persistent memory.
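The save-then-signal handoff can be sketched in a few lines. The shared-storage dictionary and the `notify_peer` callback are stand-ins for real remote storage and inter-host signalling, purely for illustration.

```python
def on_power_loss(pmem, shared_store, notify_peer):
    """Primary host: on an AC power loss/cycle event, save persistent memory
    to storage reachable by the standby, then signal the standby."""
    shared_store["pmem_image"] = dict(pmem)   # save to remote memory/storage
    notify_peer()                             # signal after completing the save

def on_peer_signal(shared_store):
    """Standby host: restore the saved image into its own persistent memory."""
    return dict(shared_store["pmem_image"])
```

The ordering matters: the standby only begins its restore after the primary's signal, so the saved image is complete (or at least initiated, per the abstract) when restoration starts.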

    System and method to prioritize large memory page allocation in virtualized systems
    Invention grant (in force)

    Publication number: US09116829B2

    Publication date: 2015-08-25

    Application number: US14321174

    Filing date: 2014-07-01

    Applicant: VMware, Inc.

    Abstract: The prioritization of large memory page mapping is a function of the access bits in the L1 page table. In a first phase of operation, the number of set access bits in each of the L1 page tables is counted periodically and a current count value is calculated therefrom. During the first phase, no pages are mapped large even if identified as such. After the first phase, the current count value is used to prioritize among potential large memory pages to determine which pages to map large. The system continues to calculate the current count value even after the first phase ends. When using hardware assist, the access bits in the nested page tables are used and when using software MMU, the access bits in the shadow page tables are used for large page prioritization.

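The access-bit counting and ranking described above could be sketched as below. Note the exponentially weighted blend used for the "current count value" is my assumption; the patent only says a current count is calculated from periodic counts of set access bits.

```python
def current_counts(l1_tables, prev_counts, alpha=0.5):
    """Periodically count set access bits per L1 page table and blend with
    the previous value to form a current count (blend formula is assumed)."""
    counts = {}
    for page, access_bits in l1_tables.items():
        raw = sum(access_bits)                 # number of set access bits
        counts[page] = alpha * raw + (1 - alpha) * prev_counts.get(page, 0)
    return counts

def prioritize(candidates, counts):
    """Rank candidate large pages: pages with higher counts are mapped first."""
    return sorted(candidates, key=lambda p: counts.get(p, 0), reverse=True)
```

Under this scheme a frequently accessed 2 MB region (many set access bits across its small pages) wins the large mapping over a mostly idle one.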

    TRACKING GUEST MEMORY CHARACTERISTICS FOR MEMORY SCHEDULING
    Invention application (in force)

    Publication number: US20150161056A1

    Publication date: 2015-06-11

    Application number: US14101796

    Filing date: 2013-12-10

    Applicant: VMware, Inc.

    Abstract: A system and method are disclosed for improving operation of a memory scheduler operating on a host machine supporting virtual machines (VMs) in which guest operating systems and guest applications run. For each virtual machine, the host machine hypervisor categorizes memory pages into memory usage classes and estimates the total number of pages for each memory usage class. The memory scheduler uses this information to perform memory reclamation and allocation operations for each virtual machine. The memory scheduler further selects between ballooning reclamation and swapping reclamation operations based in part on the numbers of pages in each memory usage class for the virtual machine. Calls to the guest operating system provide the memory usage class information. Memory reclamation can not only improve the performance of existing VMs but also permit the addition of a VM on the host machine without substantially impacting the performance of the existing and new VMs.

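The balloon-vs-swap decision could be sketched as a simple policy over the per-class page counts. The class names and the threshold rule here are assumptions for illustration; the patent only states that the selection is based in part on the class counts.

```python
def choose_reclamation(class_counts, target_pages):
    """Prefer ballooning when the guest reports enough cheaply reclaimable
    (free/cold) pages to meet the target; otherwise fall back to swapping."""
    cheap = class_counts.get("free", 0) + class_counts.get("cold", 0)
    return "balloon" if cheap >= target_pages else "swap"
```

Ballooning relies on the guest giving up pages it can spare, so it works well when the free and cold classes are large; hypervisor-level swapping is the fallback when they are not.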

    REMOTE DIRECT MEMORY ACCESS (RDMA)-BASED RECOVERY OF DIRTY DATA IN REMOTE MEMORY

    Publication number: US20230315593A1

    Publication date: 2023-10-05

    Application number: US18331019

    Filing date: 2023-06-07

    Applicant: VMware, Inc.

    Abstract: Techniques for implementing RDMA-based recovery of dirty data in remote memory are provided. In one set of embodiments, upon occurrence of a failure at a first (i.e., source) host system, a second (i.e., failover) host system can allocate a new memory region corresponding to a memory region of the source host system and retrieve a baseline copy of the memory region from a storage backend shared by the source and failover host systems. The failover host system can further populate the new memory region with the baseline copy and retrieve one or more dirty page lists for the memory region from the source host system via RDMA, where the one or more dirty page lists identify memory pages in the memory region that include data updates not present in the baseline copy. For each memory page identified in the one or more dirty page lists, the failover host system can then copy the content of that memory page from the memory region of the source host system to the new memory region via RDMA.
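The recovery flow on the failover host can be summarized in a short sketch. Here `rdma_read` is a plain callback standing in for a real RDMA read verb, and the page-granular region layout is an assumption.

```python
def recover_region(baseline, dirty_page_lists, rdma_read):
    """Failover host: populate the new memory region from the shared-storage
    baseline, then overwrite every page named in the source host's dirty
    page lists with a copy fetched via RDMA."""
    region = dict(baseline)                      # populate with baseline copy
    for dirty_list in dirty_page_lists:          # one or more dirty page lists
        for page_no in dirty_list:
            region[page_no] = rdma_read(page_no) # copy dirty page via RDMA
    return region
```

Only the pages marked dirty are pulled over the network; everything else comes from the shared storage backend, which keeps the RDMA traffic proportional to the updates not yet in the baseline.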
