Cluster resource management using adaptive memory demand

    Publication number: US11669369B2

    Publication date: 2023-06-06

    Application number: US17466185

    Filing date: 2021-09-03

    Applicant: VMware, Inc.

    CPC classification number: G06F9/5016 G06F9/505 G06F2209/5022

    Abstract: Various examples are disclosed for cluster resource management using adaptive memory demands. In some examples, a local memory estimate is determined for a workload. The local memory estimate is determined using a memory reclamation parameter for the workload executed by a current host of the workload. A destination memory estimate is also determined for the workload. The destination memory estimate is determined using a full memory estimate unreduced by memory reclamation parameters. The workload is executed using a host that is selected in view of an analysis that uses the local memory estimate for the current host and the destination memory estimate for at least one destination host.
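
    The host-selection analysis described in this abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration rather than the patented implementation: the Workload and Host classes, the reclamation discount, and the headroom-based selection rule are assumptions made for the example.

        # Hypothetical sketch: adaptive memory demand for host selection.
        # Local estimate: full demand reduced by memory already reclaimed on
        # the current host. Destination estimate: the full, unreduced demand,
        # since a destination host must back all of the workload's memory.
        from dataclasses import dataclass

        @dataclass
        class Workload:
            full_memory_mb: int        # unreduced memory demand
            reclaimed_mb: int          # memory reclaimed on the current host

            def local_estimate(self) -> int:
                return self.full_memory_mb - self.reclaimed_mb

            def destination_estimate(self) -> int:
                return self.full_memory_mb

        @dataclass
        class Host:
            name: str
            free_mb: int

        def choose_host(workload: Workload, current: Host, candidates: list[Host]) -> Host:
            """Keep the workload on its current host unless a candidate has more
            headroom for the full, unreduced demand."""
            best = current
            best_headroom = current.free_mb - workload.local_estimate()
            for host in candidates:
                headroom = host.free_mb - workload.destination_estimate()
                if headroom > best_headroom:
                    best, best_headroom = host, headroom
            return best

        if __name__ == "__main__":
            wl = Workload(full_memory_mb=8192, reclaimed_mb=2048)
            print(choose_host(wl, Host("host-a", 4096), [Host("host-b", 16384)]).name)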

    RESILIENCY AND PERFORMANCE FOR CLUSTER MEMORY

    Publication number: US20230021067A1

    Publication date: 2023-01-19

    Application number: US17481352

    Filing date: 2021-09-22

    Applicant: VMWARE, INC.

    Abstract: Disclosed are various embodiments for improving resiliency and performance of clustered memory. A computing device can acquire a chunk of byte-addressable memory from a cluster memory host. The computing device can then identify an active set of allocated memory pages and an inactive set of allocated memory pages for a process executing on the computing device. Next, the computing device can store the active set of allocated memory pages for the process in the memory of the computing device. Finally, the computing device can store the inactive set of allocated memory pages for the process in the chunk of byte-addressable memory of the cluster memory host.
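
    As a rough illustration of the active/inactive placement described above, the sketch below keeps recently used pages in local memory and writes the rest into a byte-addressable chunk borrowed from a cluster memory host. The recency-based classification and the chunk interface are assumptions, not the claimed mechanism.

        # Hypothetical sketch: tier a process's pages between local memory and a
        # remote, byte-addressable chunk acquired from a cluster memory host.
        PAGE_SIZE = 4096

        class ClusterMemoryChunk:
            """Stand-in for a byte-addressable chunk acquired from a remote host."""
            def __init__(self, size: int):
                self.buf = bytearray(size)

            def write_page(self, offset: int, data: bytes) -> None:
                self.buf[offset:offset + PAGE_SIZE] = data

        def place_pages(pages: dict[int, bytes], recently_used: set[int],
                        local_store: dict[int, bytes],
                        remote: ClusterMemoryChunk) -> None:
            """Active set stays local; inactive set goes to the remote chunk."""
            offset = 0
            for page_no, data in pages.items():
                if page_no in recently_used:
                    local_store[page_no] = data        # active: local DRAM
                else:
                    remote.write_page(offset, data)    # inactive: cluster memory
                    offset += PAGE_SIZE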

    PARALLEL CONTEXT SWITCHING FOR INTERRUPT HANDLING

    Publication number: US20220405121A1

    Publication date: 2022-12-22

    Application number: US17351488

    Filing date: 2021-06-18

    Applicant: VMware, Inc.

    Abstract: Disclosed are various embodiments for decreasing the amount of time spent processing interrupts by switching contexts in parallel with processing an interrupt. An interrupt request can be received during execution of a process in a less privileged user mode. Then, the current state of the process can be saved. Next, a switch from the less privileged mode to a more privileged mode can be made. The interrupt request is then processed while in the more privileged mode. Subsequently or in parallel, and possibly prior to completion of processing the interrupt request, another switch from the more privileged mode to the less privileged mode can be made.
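
    A user-space analogy of the parallelism described here is sketched below, using threads to stand in for privilege levels. It is only a conceptual illustration with invented function names and timings; real interrupt handling happens in the kernel, not in Python threads. The point is that the switch back to the less privileged context need not wait for interrupt processing to finish.

        # Conceptual sketch only: overlap the return context switch with
        # interrupt processing instead of serializing them.
        from concurrent.futures import ThreadPoolExecutor
        import time

        def save_state(process: dict) -> dict:
            return dict(process)              # snapshot of the interrupted process

        def handle_interrupt(irq: int) -> None:
            time.sleep(0.05)                  # simulated handling in privileged mode
            print(f"irq {irq} handled")

        def switch_to_user(state: dict) -> None:
            time.sleep(0.01)                  # simulated restore / mode switch
            print(f"resumed process {state['pid']} in user mode")

        def on_interrupt(irq: int, process: dict) -> None:
            state = save_state(process)       # save state, enter privileged mode
            with ThreadPoolExecutor(max_workers=2) as pool:
                pool.submit(handle_interrupt, irq)   # process the interrupt...
                pool.submit(switch_to_user, state)   # ...while switching back in parallel

        if __name__ == "__main__":
            on_interrupt(irq=7, process={"pid": 1234})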

    Efficiently Purging Non-Active Blocks in NVM Regions Using Virtblock Arrays

    Publication number: US20220129377A1

    Publication date: 2022-04-28

    Application number: US17571417

    Filing date: 2022-01-07

    Applicant: VMware, Inc.

    Abstract: Techniques for efficiently purging non-active blocks in an NVM region of an NVM device using virtblocks are provided. In one set of embodiments, a host system can maintain, in the NVM device, a pointer entry (i.e., virtblock entry) for each allocated data block of the NVM region, where page table entries of the NVM region that refer to the allocated data block include pointers to the pointer entry, and where the pointer entry includes a pointer to the allocated data block. The host system can further determine that a subset of the allocated data blocks of the NVM region are non-active blocks and can purge the non-active blocks from the NVM device to a mass storage device, where the purging comprises updating the pointer entry for each non-active block to point to a storage location of the non-active block on the mass storage device.
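
    The single level of indirection described above can be pictured with the hypothetical sketch below: page table entries reference a per-block pointer entry (virtblock) rather than the data block itself, so purging a non-active block only rewrites that one pointer entry. Class and field names are illustrative, not taken from the patent.

        # Hypothetical sketch: virtblock indirection for purging cold NVM blocks.
        from dataclasses import dataclass
        from typing import Callable, Union

        @dataclass
        class NvmBlock:
            data: bytes                    # allocated data block in the NVM region

        @dataclass
        class DiskLocation:
            lba: int                       # where the purged block lives on mass storage

        @dataclass
        class Virtblock:
            target: Union[NvmBlock, DiskLocation]

        @dataclass
        class PageTableEntry:
            virtblock: Virtblock           # PTEs point at the virtblock, never the block

        def purge(vb: Virtblock, write_to_disk: Callable[[bytes], int]) -> None:
            """Move a non-active block to mass storage; every PTE sharing this
            virtblock now resolves to the on-disk location."""
            if isinstance(vb.target, NvmBlock):
                lba = write_to_disk(vb.target.data)
                vb.target = DiskLocation(lba)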

    Commit coalescing for micro-journal based transaction logging

    Publication number: US11204912B2

    Publication date: 2021-12-21

    Application number: US17073221

    Filing date: 2020-10-16

    Applicant: VMware, Inc.

    Abstract: Techniques for using commit coalescing when performing micro-journal-based transaction logging are provided. In one embodiment, a computer system can maintain, in a volatile memory, a globally ascending identifier, a first list of free micro-journals, and a second list of in-flight micro-journals. The computer system can further receive a transaction comprising a plurality of modifications to data or metadata stored in a byte-addressable persistent memory, select a micro-journal from the first list, obtain a lock on the globally ascending identifier, write a current value of the globally ascending identifier as a journal commit identifier into a header of the micro-journal, and write journal entries into the micro-journal corresponding to the plurality of modifications included in the transaction. The computer system can then commit the micro-journal to the byte-addressable persistent memory, increment the current value of the globally ascending identifier, and release the lock.
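
    The commit sequence in this abstract maps naturally onto a small data structure, sketched below under the assumption that in-memory Python objects stand in for volatile and persistent memory. The list names, lock, and header field are illustrative.

        # Hypothetical sketch: micro-journal commit with a globally ascending
        # identifier, a free list, and an in-flight list.
        import threading
        from dataclasses import dataclass, field

        @dataclass
        class MicroJournal:
            commit_id: int = -1                      # journal header's commit identifier
            entries: list = field(default_factory=list)

        class JournalManager:
            def __init__(self, count: int):
                self.free = [MicroJournal() for _ in range(count)]
                self.in_flight: list[MicroJournal] = []
                self.global_id = 0
                self.id_lock = threading.Lock()

            def log_transaction(self, modifications: list) -> MicroJournal:
                journal = self.free.pop()                # select a micro-journal from the free list
                with self.id_lock:                       # lock the globally ascending identifier
                    journal.commit_id = self.global_id   # write commit id into the header
                    journal.entries = list(modifications)
                    # ...commit the micro-journal to persistent memory here...
                    self.global_id += 1                  # increment, then release the lock
                self.in_flight.append(journal)
                return journal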

    Adaptive CPU NUMA scheduling

    Publication number: US10776151B2

    Publication date: 2020-09-15

    Application number: US16292502

    Filing date: 2019-03-05

    Applicant: VMware, Inc.

    Abstract: Systems and methods for performing selection of non-uniform memory access (NUMA) nodes for mapping of virtual central processing unit (vCPU) operations to physical processors are provided. A CPU scheduler evaluates the latency between candidate processors and the memory associated with the vCPU, as well as the size of the working set of that memory, and selects an optimal processor for executing the vCPU based on the expected memory access latency and the characteristics of the vCPU and the processors. The systems and methods further provide for monitoring system characteristics and rescheduling the vCPUs when other placements provide improved performance and efficiency.
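
    As a sketch of the placement decision, the example below scores candidate NUMA nodes by the memory access latency a vCPU would observe, weighted by how much of its working set resides on each memory node. The latency table, weighting, and function names are assumptions for illustration only.

        # Hypothetical sketch: pick the NUMA node with the lowest expected
        # memory access latency for a vCPU's working set.
        def expected_latency(cpu_node: int, working_set_pages: dict[int, int],
                             latency_ns: dict[tuple[int, int], float]) -> float:
            """working_set_pages: memory node -> pages of the vCPU's working set;
            latency_ns: (cpu node, memory node) -> access latency."""
            total = sum(working_set_pages.values()) or 1
            return sum(latency_ns[(cpu_node, mem_node)] * pages
                       for mem_node, pages in working_set_pages.items()) / total

        def pick_node(nodes: list[int], working_set_pages, latency_ns) -> int:
            return min(nodes, key=lambda n: expected_latency(n, working_set_pages, latency_ns))

        if __name__ == "__main__":
            lat = {(0, 0): 90.0, (0, 1): 140.0, (1, 0): 140.0, (1, 1): 90.0}
            ws = {0: 1000, 1: 4000}            # most of the working set is on node 1
            print(pick_node([0, 1], ws, lat))  # -> 1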

    File system interface for remote direct memory access

    Publication number: US10706005B2

    Publication date: 2020-07-07

    Application number: US15836577

    Filing date: 2017-12-08

    Applicant: VMware, Inc.

    Abstract: Exemplary methods, apparatuses, and systems include a distributed memory agent within a first node intercepting an operating system request to open a file from an application running on the first node. The request includes a file identifier, which the distributed memory agent transmits to a remote memory manager. The distributed memory agent receives, from the remote memory manager, a memory location within a second node for the file identifier and information to establish a remote direct memory access channel between the first node and the second node. In response to the request to open the file, the distributed memory agent establishes the remote direct memory access channel between the first node and the second node. The remote direct memory access channel allows the first node to read directly from or write directly to the memory location within the second node while bypassing an operating system of the second node.
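
    The control path described above is sketched below with invented stand-in classes: an agent intercepts the open, asks a remote memory manager for the file's placement, and records what it needs to set up an RDMA channel. None of this uses a real RDMA API; it only mirrors the sequence of steps in the abstract.

        # Hypothetical sketch: open-file interception and RDMA channel setup.
        from dataclasses import dataclass

        @dataclass
        class Placement:
            node: str             # second node holding the file's memory
            address: int          # memory location within that node
            rkey: int             # credentials for establishing the RDMA channel

        class RemoteMemoryManager:
            def __init__(self, table: dict[str, Placement]):
                self.table = table

            def lookup(self, file_id: str) -> Placement:
                return self.table[file_id]

        class DistributedMemoryAgent:
            def __init__(self, manager: RemoteMemoryManager):
                self.manager = manager
                self.channels: dict[str, Placement] = {}

            def open(self, file_id: str) -> Placement:
                placement = self.manager.lookup(file_id)   # ask the remote manager
                self.channels[file_id] = placement         # "establish" the RDMA channel
                return placement                           # later reads/writes bypass the
                                                           # second node's operating system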

    Hypervisor exchange with virtual machines in memory

    Publication number: US10705867B2

    Publication date: 2020-07-07

    Application number: US15189115

    Filing date: 2016-06-22

    Applicant: VMware, Inc.

    Abstract: A hypervisor-exchange process includes: suspending, by an "old" hypervisor, resident virtual machines; exchanging the old hypervisor for a new hypervisor; and resuming, by the new hypervisor, the resident virtual machines. The suspending can include "in-memory" suspension of the virtual machines until they are resumed by the new hypervisor. Thus, there is no need to load the virtual machines from storage prior to resuming them. As a result, any interruption of the virtual machines is minimized. In some embodiments, the resident virtual machines are migrated onto one or more host virtual machines to reduce the number of virtual machines being suspended.
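
    The exchange sequence can be summarized with the hypothetical orchestration sketch below: virtual machines are suspended in memory, the hypervisor is swapped underneath them, and the new hypervisor resumes them without reloading their state from storage. The Hypervisor and VM interfaces are invented for illustration.

        # Hypothetical sketch: in-memory suspend, hypervisor exchange, resume.
        class VM:
            def __init__(self, name: str):
                self.name = name
                self.state = "running"

        class Hypervisor:
            def __init__(self, version: str, vms: list):
                self.version = version
                self.vms = vms

            def suspend_all(self) -> None:
                for vm in self.vms:
                    vm.state = "suspended-in-memory"   # no write-out to storage

            def resume_all(self) -> None:
                for vm in self.vms:
                    vm.state = "running"

        def exchange(old: Hypervisor, new_version: str) -> Hypervisor:
            old.suspend_all()                          # old hypervisor suspends resident VMs
            new = Hypervisor(new_version, old.vms)     # new hypervisor adopts in-memory state
            new.resume_all()                           # VMs resume without loading from storage
            return new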
