Dynamic virtual machine sizing
    21.
    Granted Patent

    Publication Number: US09785460B2

    Publication Date: 2017-10-10

    Application Number: US13886360

    Application Date: 2013-05-03

    Applicant: VMware, Inc.

    Inventor: Haoqiang Zheng

    CPC classification number: G06F9/45558 G06F9/5077

    Abstract: A technique is described for managing processor (CPU) resources in a host having virtual machines (VMs) executed thereon. A target size of a VM is determined based on its demand and CPU entitlement. If the VM's current size exceeds the target size, the technique dynamically changes the size of a VM in the host by increasing or decreasing the number of virtual CPUs available to the VM. To “deactivate” virtual CPUs, a high-priority balloon thread is launched and pinned to one of the virtual CPUs targeted for deactivation, and the underlying hypervisor deschedules execution of the virtual CPU accordingly. To “activate” virtual CPUs and increase the number of virtual CPUs available to the VM, the launched balloon thread may be killed.
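
    Illustrative sketch (not from the patent): the snippet below models the balloon-thread mechanism described in the abstract, shrinking a VM toward its target size by pinning a marker "balloon thread" to each vCPU slated for deactivation and releasing it to reactivate the vCPU. The VirtualMachine class, its method names, and the target-size formula are assumptions for illustration only.

        # Hypothetical model of balloon-thread based vCPU deactivation; not VMware code.

        class VirtualMachine:
            def __init__(self, name, num_vcpus):
                self.name = name
                self.vcpus = list(range(num_vcpus))   # vCPU ids
                self.balloon_threads = {}             # vCPU id -> pinned balloon thread

            def target_size(self, demand, entitlement):
                # Target vCPU count derived from demand and CPU entitlement.
                return max(1, min(len(self.vcpus), round(min(demand, entitlement))))

            def deactivate_vcpu(self, vcpu):
                # "Deactivate": pin a high-priority balloon thread to the vCPU so the
                # hypervisor deschedules it.
                self.balloon_threads[vcpu] = "balloon-%s-%d" % (self.name, vcpu)

            def activate_vcpu(self, vcpu):
                # "Activate": kill the balloon thread so the vCPU runs guest work again.
                self.balloon_threads.pop(vcpu, None)

            def resize(self, demand, entitlement):
                active = [v for v in self.vcpus if v not in self.balloon_threads]
                target = self.target_size(demand, entitlement)
                while len(active) > target:           # current size exceeds the target
                    self.deactivate_vcpu(active.pop())
                while len(active) < target:           # grow back toward the target
                    idle = next(v for v in self.vcpus if v in self.balloon_threads)
                    self.activate_vcpu(idle)
                    active.append(idle)
                return len(active)

        vm = VirtualMachine("web01", num_vcpus=8)
        print(vm.resize(demand=2.4, entitlement=4.0))  # 2: six vCPUs get balloon threads
        print(vm.resize(demand=6.0, entitlement=8.0))  # 6: four balloon threads are killed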

    Power-Aware Scheduling
    22.
    Patent Application (In Force)

    Publication Number: US20150212860A1

    Publication Date: 2015-07-30

    Application Number: US14167213

    Application Date: 2014-01-29

    Applicant: VMware, Inc.

    CPC classification number: G06F9/5094 G06F9/5083 G06F9/5088 Y02D10/22 Y02D10/32

    Abstract: Systems and techniques are described for power-aware scheduling. One of the techniques includes monitoring execution of a plurality of groups of software threads executing on a physical machine, wherein the physical machine comprises a physical hardware platform that includes a plurality of processor packages having a plurality of package power states, wherein the plurality of package power states includes an independent package power state; obtaining a respective independent power state measure for each of the processor packages, wherein the independent power state measure provides a measure of a percentage of time the processor package spends in the independent package power state; and adjusting an allocation of the plurality of groups of software threads across the plurality of processor packages based in part on the independent power state measures for the packages.
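
    Illustrative sketch (not from the patent): one way to act on the independent power state measure is to vacate packages that spend most of a sampling interval in the independent (deep idle) state, folding their thread groups onto a still-active package so the idle packages can remain asleep. The rebalance function, the 0.7 threshold, and the consolidation policy are assumptions for illustration.

        # Hypothetical consolidation policy driven by independent-state residency.

        def rebalance(groups_by_package, residency_ms, interval_ms, idle_threshold=0.7):
            """groups_by_package: package id -> list of thread-group names.
            residency_ms: package id -> time spent in the independent power state."""
            frac = {p: residency_ms[p] / interval_ms for p in groups_by_package}
            mostly_idle = [p for p in groups_by_package if frac[p] >= idle_threshold]
            active = [p for p in groups_by_package if frac[p] < idle_threshold]
            if not mostly_idle or not active:
                return groups_by_package
            # Fold thread groups from the mostly-idle packages onto the least-idle
            # active package so the vacated packages can stay in the independent state.
            sink = min(active, key=lambda p: frac[p])
            for p in mostly_idle:
                groups_by_package[sink].extend(groups_by_package[p])
                groups_by_package[p] = []
            return groups_by_package

        placement = {0: ["db", "web"], 1: ["batch"], 2: []}
        residency = {0: 50.0, 1: 900.0, 2: 990.0}      # ms in the independent state
        print(rebalance(placement, residency, interval_ms=1000.0))
        # {0: ['db', 'web', 'batch'], 1: [], 2: []}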

    Optimizing Virtual Machine Scheduling on Non-Uniform Cache Access (NUCA) Systems

    Publication Number: US20230026837A1

    Publication Date: 2023-01-26

    Application Number: US17384161

    Application Date: 2021-07-23

    Applicant: VMware, Inc.

    Abstract: Techniques for optimizing virtual machine (VM) scheduling on a non-uniform cache access (NUCA) system are provided. In one set of embodiments, a hypervisor of the NUCA system can partition the virtual CPUs of each VM running on the system into logical constructs referred to as last level cache (LLC) groups, where each LLC group is sized to match (or at least not exceed) the LLC domain size of the system. The hypervisor can then place/load balance the virtual CPUs of each VM on the system’s cores in a manner that attempts to keep virtual CPUs which are part of the same LLC group within the same LLC domain, subject to various factors such as compute load, cache contention, and so on.
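
    Illustrative sketch (not from the patent): the snippet partitions a VM's vCPUs into LLC groups no larger than the LLC domain size and then places each group, preferring a domain with enough free cores to hold the whole group so its vCPUs share one last-level cache. The helper names and the least-loaded fallback rule are assumptions for illustration.

        # Hypothetical LLC-group partitioning and placement.

        def make_llc_groups(vcpus, llc_domain_size):
            # Split the VM's vCPUs into groups no larger than one LLC domain.
            return [vcpus[i:i + llc_domain_size]
                    for i in range(0, len(vcpus), llc_domain_size)]

        def place_groups(groups, domain_load, llc_domain_size):
            """domain_load: LLC domain id -> number of vCPUs already placed there."""
            placement = {}
            for group in groups:
                # Prefer a domain with enough free cores for the whole group so its
                # vCPUs share a single last-level cache; otherwise fall back to the
                # least-loaded domain.
                fits = [d for d, load in domain_load.items()
                        if llc_domain_size - load >= len(group)]
                pool = fits if fits else list(domain_load)
                best = min(pool, key=lambda d: domain_load[d])
                for vcpu in group:
                    placement[vcpu] = best
                domain_load[best] += len(group)
            return placement

        vcpus = ["vcpu%d" % i for i in range(10)]      # a 10-vCPU VM
        groups = make_llc_groups(vcpus, llc_domain_size=8)
        print(place_groups(groups, {0: 0, 1: 5}, llc_domain_size=8))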

    Techniques for Concurrently Supporting Virtual NUMA and CPU/Memory Hot-Add in a Virtual Machine

    Publication Number: US20220075637A1

    Publication Date: 2022-03-10

    Application Number: US17013277

    Application Date: 2020-09-04

    Applicant: VMware, Inc.

    Abstract: Techniques for concurrently supporting virtual non-uniform memory access (virtual NUMA) and CPU/memory hot-add in a virtual machine (VM) are provided. In one set of embodiments, a hypervisor of a host system can compute a node size for a virtual NUMA topology of the VM, where the node size indicates a maximum number of virtual central processing units (vCPUs) and a maximum amount of memory to be included in each virtual NUMA node. The hypervisor can further build and expose the virtual NUMA topology to the VM. Then, at a time of receiving a request to hot-add a new vCPU or memory region to the VM, the hypervisor can check whether all existing nodes in the virtual NUMA topology have reached the maximum number of vCPUs or maximum amount of memory, per the computed node size. If so, the hypervisor can create a new node with the new vCPU or memory region and add the new node to the virtual NUMA topology.
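
    Illustrative sketch (not from the patent): the hot-add check reduces to scanning the existing virtual NUMA nodes for one that is still below the computed per-node limit and creating a new node only when all of them are full. The dataclasses and field names below are assumptions for illustration.

        # Hypothetical virtual NUMA topology with hot-add support.

        from dataclasses import dataclass, field

        @dataclass
        class VirtualNumaNode:
            vcpus: list = field(default_factory=list)
            memory_mb: int = 0

        @dataclass
        class VirtualNumaTopology:
            max_vcpus_per_node: int
            max_memory_mb_per_node: int
            nodes: list = field(default_factory=list)

            def hot_add_vcpu(self, vcpu_id):
                # Reuse an existing node that is still below the computed node size.
                for node in self.nodes:
                    if len(node.vcpus) < self.max_vcpus_per_node:
                        node.vcpus.append(vcpu_id)
                        return node
                # All existing nodes are full, so grow the topology with a new node.
                node = VirtualNumaNode(vcpus=[vcpu_id])
                self.nodes.append(node)
                return node

        topo = VirtualNumaTopology(max_vcpus_per_node=2, max_memory_mb_per_node=4096,
                                   nodes=[VirtualNumaNode(vcpus=[0, 1], memory_mb=4096)])
        topo.hot_add_vcpu(2)       # every existing node is full -> a new node is created
        print(len(topo.nodes))     # 2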

    WORKLOAD PLACEMENT USING CONFLICT COST

    Publication Number: US20210019159A1

    Publication Date: 2021-01-21

    Application Number: US16511308

    Application Date: 2019-07-15

    Applicant: VMware, Inc.

    Abstract: Disclosed are various embodiments that utilize conflict cost for workload placements in datacenter environments. In some examples, a protected memory level is identified for a computing environment. The computing environment includes a number of processor resources. Incompatible processor workloads are prohibited from concurrently executing on parallel processor resources. Parallel processor resources share memory at the protected memory level. A number of conflict costs are determined for a processor workload. Each conflict cost is determined based on a measure of compatibility between the processor workload and a parallel processor resource that shares a particular memory with the respective processor resource. The processor workload is assigned to execute on a processor resource associated with a minimum conflict cost.
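
    Illustrative sketch (not from the patent): the snippet computes a conflict cost for each free processor resource by penalizing incompatibility with workloads already running on parallel resources that share memory at the protected level (for example, hyperthread siblings), then assigns the workload to the minimum-cost resource. The workload tags, the infinite penalty for incompatible pairs, and the sibling map are assumptions for illustration.

        # Hypothetical minimum-conflict-cost placement over cache-sharing siblings.

        INCOMPATIBLE = {("secure", "untrusted"), ("untrusted", "secure")}

        def conflict_cost(workload, resource, siblings, running):
            """running: processor resource -> workload currently placed there."""
            cost = 0.0
            for sib in siblings.get(resource, []):
                other = running.get(sib)
                if other is None:
                    continue
                # Incompatible pairs must never share the protected memory level.
                cost += float("inf") if (workload, other) in INCOMPATIBLE else 1.0
            return cost

        def place(workload, resources, siblings, running):
            costs = {r: conflict_cost(workload, r, siblings, running)
                     for r in resources if r not in running}
            return min(costs, key=costs.get)

        siblings = {"cpu0": ["cpu1"], "cpu1": ["cpu0"], "cpu2": ["cpu3"], "cpu3": ["cpu2"]}
        running = {"cpu0": "untrusted"}
        print(place("secure", list(siblings), siblings, running))   # cpu2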

    RESOURCE OPTIMIZATION FOR VIRTUALIZATION ENVIRONMENTS

    Publication Number: US20200065126A1

    Publication Date: 2020-02-27

    Application Number: US16111582

    Application Date: 2018-08-24

    Applicant: VMware, Inc.

    Abstract: Disclosed are various embodiments for distributing the load of a plurality of virtual machines across a plurality of hosts. A potential new host for a virtual machine executing on a current host is identified. A gain rate and a gain duration associated with migrating the virtual machine from the current host to the potential new host are calculated. A migration cost for the migration, based on the gain rate and the gain duration, is determined. It is then determined whether the migration cost is below a predefined threshold cost. Migration of the virtual machine from the current host to the potential new host is initiated in response to a determination that the migration cost is below the predefined threshold.
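
    Illustrative sketch (not from the patent): one simple way to combine the quantities named in the abstract is to offset a fixed migration overhead by the expected benefit (gain rate times gain duration) and migrate when the resulting cost falls below the threshold. The formula, units, and constants are assumptions for illustration.

        # Hypothetical migration-cost formula combining gain rate and gain duration.

        def migration_cost(gain_rate, gain_duration_s, overhead):
            # The expected benefit accumulates for as long as the gain persists.
            expected_gain = gain_rate * gain_duration_s
            return overhead - expected_gain

        def should_migrate(gain_rate, gain_duration_s, overhead, threshold=0.0):
            return migration_cost(gain_rate, gain_duration_s, overhead) < threshold

        # A VM expected to run 2% more efficiently for ten minutes on the new host,
        # weighed against a migration overhead expressed in the same benefit units.
        print(should_migrate(gain_rate=0.02, gain_duration_s=600, overhead=5.0))  # True
        print(should_migrate(gain_rate=0.02, gain_duration_s=60, overhead=5.0))   # False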

    PERFORMANCE MODELING FOR VIRTUALIZATION ENVIRONMENTS

    Publication Number: US20200065125A1

    Publication Date: 2020-02-27

    Application Number: US16111397

    Application Date: 2018-08-24

    Applicant: VMware, Inc.

    Abstract: Disclosed are various embodiments for distributing the load of a plurality of virtual machines across a plurality of hosts. A first plurality of efficiency ratings for a current host of a virtual machine are calculated. A second plurality of efficiency ratings for a potential new host of the virtual machine are also calculated. The first plurality of efficiency ratings are compared to the second plurality of efficiency ratings to determine that the potential new host for the virtual machine is an optimal host for the virtual machine. Then migration of the virtual machine from the current host to the optimal host is initiated.
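
    Illustrative sketch (not from the patent): the snippet compares per-resource efficiency ratings of the current host and a candidate host through a weighted aggregate and treats the candidate as the optimal host only if it wins by a margin. The metrics, weights, and margin rule are assumptions for illustration.

        # Hypothetical weighted comparison of per-resource efficiency ratings.

        def aggregate(ratings, weights):
            return sum(ratings[k] * weights[k] for k in weights)

        def is_better_host(current_ratings, candidate_ratings, weights, margin=0.05):
            # Require the candidate to beat the current host by a margin so that
            # migrations with negligible benefit are not initiated.
            return aggregate(candidate_ratings, weights) > (1 + margin) * aggregate(
                current_ratings, weights)

        weights = {"cpu": 0.5, "memory": 0.3, "network": 0.2}
        current = {"cpu": 0.60, "memory": 0.70, "network": 0.80}
        candidate = {"cpu": 0.85, "memory": 0.75, "network": 0.80}
        print(is_better_host(current, candidate, weights))   # True -> initiate migration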

    ADAPTIVE CPU NUMA SCHEDULING
    28.
    Patent Application

    Publication Number: US20190205155A1

    Publication Date: 2019-07-04

    Application Number: US16292502

    Application Date: 2019-03-05

    Applicant: VMware, Inc.

    Abstract: Systems and methods are provided for selecting non-uniform memory access (NUMA) nodes when mapping virtual central processing unit (vCPU) operations to physical processors. A CPU scheduler evaluates the latency between candidate processors and the memory associated with a vCPU, as well as the size of the working set in that memory, and selects an optimal processor for executing the vCPU based on the expected memory access latency and the characteristics of the vCPU and the processors. The systems and methods further provide for monitoring system characteristics and rescheduling vCPUs when other placements offer improved performance and efficiency.
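
    Illustrative sketch (not from the patent): the snippet scores each candidate NUMA node by the expected memory access latency, weighting the latency to every node holding part of the vCPU's working set by that node's share of the set, and picks the lowest-scoring node. The latency table and page counts are assumptions for illustration.

        # Hypothetical latency-weighted NUMA node selection for a vCPU.

        def expected_latency(candidate_node, working_set_pages, latency_ns):
            """working_set_pages: memory node -> pages of the vCPU's working set there.
            latency_ns[a][b]: access latency from node a to memory on node b."""
            total = sum(working_set_pages.values())
            if total == 0:
                return 0.0
            return sum(latency_ns[candidate_node][mem_node] * pages / total
                       for mem_node, pages in working_set_pages.items())

        def pick_node(candidate_nodes, working_set_pages, latency_ns):
            return min(candidate_nodes,
                       key=lambda n: expected_latency(n, working_set_pages, latency_ns))

        latency_ns = {0: {0: 80, 1: 140}, 1: {0: 140, 1: 80}}   # local vs. remote access
        working_set = {0: 1000, 1: 9000}                        # most pages are on node 1
        print(pick_node([0, 1], working_set, latency_ns))       # 1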
