Using cache coherent FPGAs to accelerate live migration of virtual machines

    Publication Number: US11099871B2

    Publication Date: 2021-08-24

    Application Number: US16048182

    Filing Date: 2018-07-27

    Applicant: VMware, Inc.

    Abstract: A virtual machine running on a source host is live migrated to a destination host. The source host includes a first processing node with a first processing hardware and a first memory, and a second processing node with a second processing hardware and a second memory. While the virtual machine is running on the first processing hardware, the second processing hardware tracks cache lines of the first processing hardware that become dirty as a result of write operations performed on one or more memory pages of the virtual machine. The dirty cache lines are copied to the destination host in units of a cache line or groups of cache lines.
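
    The control flow suggested by this abstract can be sketched in plain C. The sketch below is only a software model under stated assumptions: the dirty-line tracker is an ordinary bitmap updated by a hypothetical vm_write() hook, whereas the patent has a cache-coherent FPGA learn about dirty lines by observing the coherence interconnect; the pre-copy step then transmits individual 64-byte lines rather than whole pages.

```c
/* Software model only: a bitmap of dirty 64-byte lines, updated by a
 * hypothetical vm_write() hook; the patent instead has a cache-coherent
 * FPGA observe write-backs on the coherence interconnect. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096
#define LINE_SIZE 64
#define NUM_PAGES 4
#define NUM_LINES (NUM_PAGES * PAGE_SIZE / LINE_SIZE)

static uint8_t vm_memory[NUM_PAGES * PAGE_SIZE];   /* guest memory on source */
static uint8_t dst_memory[NUM_PAGES * PAGE_SIZE];  /* copy on destination    */
static uint8_t dirty_line[NUM_LINES];              /* 1 = line was written   */

/* Hypothetical write hook that records which cache line becomes dirty. */
static void vm_write(size_t addr, uint8_t value)
{
    vm_memory[addr] = value;
    dirty_line[addr / LINE_SIZE] = 1;
}

/* One pre-copy pass: transmit only the dirty cache lines. */
static size_t copy_dirty_lines(void)
{
    size_t sent = 0;
    for (size_t i = 0; i < NUM_LINES; i++) {
        if (!dirty_line[i])
            continue;
        memcpy(&dst_memory[i * LINE_SIZE], &vm_memory[i * LINE_SIZE], LINE_SIZE);
        dirty_line[i] = 0;
        sent += LINE_SIZE;
    }
    return sent;
}

int main(void)
{
    vm_write(100, 0xAA);   /* two writes landing in the same cache line */
    vm_write(101, 0xBB);
    vm_write(5000, 0xCC);  /* one write in a second page */

    printf("bytes sent: %zu (vs %d for whole-page copying)\n",
           copy_dirty_lines(), 2 * PAGE_SIZE);
    return 0;
}
```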

    Accelerating replication of page tables for multi-socket machines

    Publication Number: US10929295B2

    Publication Date: 2021-02-23

    Application Number: US16255432

    Filing Date: 2019-01-23

    Applicant: VMware, Inc.

    Abstract: Described herein is a method for tracking changes made by an application. Embodiments include determining, by a processor, a write-back of a cache line from a hardware unit associated with a socket of a plurality of sockets to a page table entry of a page table in a memory location associated with the processor. Embodiments include adding, by the processor, the cache line to a list of dirty cache lines. Embodiments include, for each respective cache line in the list of dirty cache lines, identifying, by the processor, a memory location associated with a respective socket of the plurality of sockets corresponding to the respective cache line and updating, by the processor, an entry of a page table replica at the memory location based on the respective cache line.
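
    A minimal software model of the replication pass described above, assuming the primary page table and its per-socket replicas are plain arrays of 8-byte entries: note_writeback() stands in for detecting a cache-line write-back from a hardware unit, and replicate() copies each dirty line into the same offset of every replica. The names and structure are illustrative, not the patented mechanism.

```c
/* Software model only: arrays stand in for the page table and its replicas,
 * and note_writeback() stands in for observing a cache-line write-back. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_SOCKETS   4
#define NUM_PTES      512                        /* entries per table */
#define LINE_SIZE     64
#define PTES_PER_LINE (LINE_SIZE / sizeof(uint64_t))
#define NUM_LINES     (NUM_PTES / PTES_PER_LINE)

static uint64_t primary_pt[NUM_PTES];                /* table on socket 0    */
static uint64_t replica_pt[NUM_SOCKETS][NUM_PTES];   /* replicas; [0] unused */
static size_t   dirty_lines[NUM_LINES];              /* list of dirty lines  */
static size_t   num_dirty;

/* Record that the cache line holding entry pte_idx was written back. */
static void note_writeback(size_t pte_idx)
{
    size_t line = pte_idx / PTES_PER_LINE;
    for (size_t i = 0; i < num_dirty; i++)
        if (dirty_lines[i] == line)
            return;                               /* already in the list */
    dirty_lines[num_dirty++] = line;
}

/* Copy every dirty cache line into the same offset of each remote replica. */
static void replicate(void)
{
    for (size_t i = 0; i < num_dirty; i++) {
        size_t off = dirty_lines[i] * PTES_PER_LINE;
        for (int s = 1; s < NUM_SOCKETS; s++)
            memcpy(&replica_pt[s][off], &primary_pt[off], LINE_SIZE);
    }
    num_dirty = 0;
}

int main(void)
{
    primary_pt[17] = 0xABCD000ULL | 0x1;   /* update one PTE on the primary  */
    note_writeback(17);                    /* its cache line is written back */
    replicate();

    printf("replica_pt[3][17] = %#llx\n",
           (unsigned long long)replica_pt[3][17]);
    return 0;
}
```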

    Using cache coherent FPGAs to accelerate remote access

    Publication Number: US10761984B2

    Publication Date: 2020-09-01

    Application Number: US16048186

    Filing Date: 2018-07-27

    Applicant: VMware, Inc.

    Abstract: Disclosed are embodiments for running an application on a local processor when the application is dependent on pages not locally present but contained in a remote host. The system is informed that the pages on which the application depends are locally present. While running, the application encounters a cache miss and a cache line satisfying the miss from the remote host is obtained and provided to the application. Alternatively, the page containing the cache line satisfying the miss is obtained and the portion of the page not including the cache line is stored locally while the cache line is provided to the application. The cache miss is discovered by monitoring coherence events on a coherence interconnect connected to the local processor. In some embodiments, the cache misses are tracked and provide a way to predict a set of pages to be pre-fetched in anticipation of the next cache misses.
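
    The miss-handling and prefetch flow can be modelled in ordinary C if "remote memory" is approximated by a second in-process buffer. In the sketch below, handle_cache_miss() stands in for pulling a single cache line over the coherence interconnect, and maybe_prefetch() implements a deliberately naive next-page predictor driven by the recorded misses; both names and the predictor are assumptions for illustration.

```c
/* Software model only: a second buffer plays the role of the remote host,
 * and the prefetch predictor is a toy next-page heuristic. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096
#define LINE_SIZE 64
#define NUM_PAGES 8

static uint8_t remote_mem[NUM_PAGES * PAGE_SIZE];  /* pages on the remote host */
static uint8_t local_mem[NUM_PAGES * PAGE_SIZE];   /* local copies             */
static int     line_present[NUM_PAGES * PAGE_SIZE / LINE_SIZE];
static size_t  miss_history[16];                   /* pages of recent misses   */
static int     miss_count;

/* Fetch one cache line from the "remote host" and record the miss. */
static const uint8_t *handle_cache_miss(size_t addr)
{
    size_t line = addr / LINE_SIZE;
    if (!line_present[line]) {
        memcpy(&local_mem[line * LINE_SIZE],
               &remote_mem[line * LINE_SIZE], LINE_SIZE);
        line_present[line] = 1;
        miss_history[miss_count++ % 16] = addr / PAGE_SIZE;
    }
    return &local_mem[line * LINE_SIZE];
}

/* Toy predictor: two misses on consecutive pages -> prefetch the next page. */
static void maybe_prefetch(void)
{
    if (miss_count < 2)
        return;
    size_t a = miss_history[(miss_count - 2) % 16];
    size_t b = miss_history[(miss_count - 1) % 16];
    if (b == a + 1 && b + 1 < NUM_PAGES) {
        memcpy(&local_mem[(b + 1) * PAGE_SIZE],
               &remote_mem[(b + 1) * PAGE_SIZE], PAGE_SIZE);
        for (size_t l = 0; l < PAGE_SIZE / LINE_SIZE; l++)
            line_present[(b + 1) * PAGE_SIZE / LINE_SIZE + l] = 1;
        printf("prefetched page %zu\n", b + 1);
    }
}

int main(void)
{
    remote_mem[2 * PAGE_SIZE] = 42;
    printf("byte = %d\n", handle_cache_miss(0 * PAGE_SIZE)[0]);
    handle_cache_miss(1 * PAGE_SIZE);
    maybe_prefetch();                   /* misses on pages 0,1 -> prefetch 2 */
    printf("byte on prefetched page = %d\n", local_mem[2 * PAGE_SIZE]);
    return 0;
}
```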

    Provisioning of computer systems using virtual machines

    Publication Number: US10248445B2

    Publication Date: 2019-04-02

    Application Number: US14716746

    Filing Date: 2015-05-19

    Applicant: VMware, Inc.

    Abstract: A provisioning server automatically configures a virtual machine (VM) according to user specifications and then deploys the VM on a physical host. The user may either choose from a list of pre-configured, ready-to-deploy VMs, or he may select which hardware, operating system and application(s) he would like the VM to have. The provisioning server then configures the VM accordingly, if the desired configuration is available, or it applies heuristics to configure a VM that best matches the user's request if it isn't. The invention also includes mechanisms for monitoring the status of VMs and hosts, for migrating VMs between hosts, and for creating a network of VMs.
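
    The selection step ("deploy the desired configuration if it is available, otherwise apply heuristics to find the best match") can be illustrated with a small scoring routine. The catalog entries, field names, and scoring rule below are assumptions chosen for clarity; they are not the heuristics claimed in the patent.

```c
/* Software model only: pick an exact or closest-matching VM template
 * using an illustrative distance score. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct vm_config {
    const char *name;
    const char *os;
    int cpus;
    int mem_gb;
};

/* A catalog of pre-configured, ready-to-deploy templates (illustrative). */
static const struct vm_config catalog[] = {
    { "small-linux",   "linux",   2,  4 },
    { "large-linux",   "linux",   8, 32 },
    { "small-windows", "windows", 2,  8 },
};

/* Lower score = closer match; a mismatched OS is heavily penalized. */
static int score(const struct vm_config *t, const struct vm_config *req)
{
    int s = abs(t->cpus - req->cpus) + abs(t->mem_gb - req->mem_gb);
    if (strcmp(t->os, req->os) != 0)
        s += 1000;
    return s;
}

/* Return the exact match if one exists (score 0), else the best match. */
static const struct vm_config *choose(const struct vm_config *req)
{
    const struct vm_config *best = &catalog[0];
    for (size_t i = 1; i < sizeof(catalog) / sizeof(catalog[0]); i++)
        if (score(&catalog[i], req) < score(best, req))
            best = &catalog[i];
    return best;
}

int main(void)
{
    struct vm_config req = { "requested", "linux", 4, 16 };
    const struct vm_config *t = choose(&req);
    printf("deploying template '%s' (%d vCPUs, %d GB)\n",
           t->name, t->cpus, t->mem_gb);
    return 0;
}
```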

    Cryptographic multi-shadowing with integrity verification

    Publication Number: US10169253B2

    Publication Date: 2019-01-01

    Application Number: US15682056

    Filing Date: 2017-08-21

    Applicant: VMware, Inc.

    Abstract: A virtual-machine-based system that may protect the privacy and integrity of application data, even in the event of a total operating system compromise. An application is presented with a normal view of its resources, but the operating system is presented with an encrypted view. This allows the operating system to carry out the complex task of managing an application's resources, without allowing it to read or modify them. Different views of “physical” memory are presented, depending on a context performing the access. An additional dimension of protection beyond the hierarchical protection domains implemented by traditional operating systems and processors is provided.
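
    The view-selection idea, one physical page presented as plaintext to the application context and as an opaque shadow to the operating-system context, is sketched below. The byte-wise XOR is only a placeholder standing in for real authenticated encryption with integrity verification; it shows the control flow, not the cryptography.

```c
/* Software model only: two views of the same page, selected by context.
 * The XOR "cipher" is a placeholder, not a real cryptographic scheme. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

enum context { CTX_APPLICATION, CTX_OPERATING_SYSTEM };

static uint8_t plaintext_page[PAGE_SIZE];   /* view given to the application */
static uint8_t shadow_page[PAGE_SIZE];      /* view given to the OS          */
static const uint8_t key = 0x5A;            /* placeholder, not a real key   */

/* Refresh the OS-visible shadow from the plaintext page. */
static void update_shadow(void)
{
    for (size_t i = 0; i < PAGE_SIZE; i++)
        shadow_page[i] = plaintext_page[i] ^ key;   /* placeholder transform */
}

/* The monitor maps one of the two views depending on who is accessing. */
static const uint8_t *view_of_page(enum context ctx)
{
    return ctx == CTX_APPLICATION ? plaintext_page : shadow_page;
}

int main(void)
{
    memcpy(plaintext_page, "secret application data", 24);
    update_shadow();

    printf("application view: %s\n",
           (const char *)view_of_page(CTX_APPLICATION));
    printf("OS view (opaque): %#x %#x %#x ...\n",
           (unsigned)view_of_page(CTX_OPERATING_SYSTEM)[0],
           (unsigned)view_of_page(CTX_OPERATING_SYSTEM)[1],
           (unsigned)view_of_page(CTX_OPERATING_SYSTEM)[2]);
    return 0;
}
```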

    Parallel context switching for interrupt handling

    Publication Number: US11726811B2

    Publication Date: 2023-08-15

    Application Number: US17351488

    Filing Date: 2021-06-18

    Applicant: VMware, Inc.

    CPC classification number: G06F9/4812 G06F9/461 G06F9/545

    Abstract: Disclosed are various embodiments for decreasing the amount of time spent processing interrupts by switching contexts in parallel with processing an interrupt. An interrupt request can be received during execution of a process in a less privileged user mode. Then, the current state of the process can be saved. Next, a switch from the less privileged mode to a more privileged mode can be made. The interrupt request is then processed while in the more privileged mode. Subsequently or in parallel, and possibly prior to completion of the processing the interrupt request, another switch from the more privileged mode to the less privileged mode can be made.
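
    The overlap described here can be modelled with POSIX threads (compile with -pthread): the interrupt body runs on one thread while the return toward the less privileged context proceeds on another, so the return-path work need not wait for the handler to finish. Real hardware does this with privilege-mode transitions, not pthreads; the function names and delays are illustrative.

```c
/* Software model only, using POSIX threads in place of privilege-mode
 * hardware transitions; compile with -pthread. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

struct saved_state { unsigned long pc, sp; };   /* stand-in for saved registers */

static void pause_ms(long ms)
{
    struct timespec ts = { .tv_sec = ms / 1000,
                           .tv_nsec = (ms % 1000) * 1000000L };
    nanosleep(&ts, NULL);
}

/* Work done in the more privileged mode. */
static void *process_interrupt(void *arg)
{
    (void)arg;
    printf("[privileged] handling interrupt...\n");
    pause_ms(2);                      /* pretend the handler takes a while */
    printf("[privileged] interrupt handled\n");
    return NULL;
}

/* Work done on the return path toward the less privileged mode. */
static void restore_user_context(const struct saved_state *st)
{
    printf("[return path] restoring user context (pc=%#lx, sp=%#lx)\n",
           st->pc, st->sp);
    pause_ms(1);                      /* pretend restoring state takes time */
    printf("[return path] user context ready\n");
}

int main(void)
{
    struct saved_state st = { 0x400000UL, 0x7ffff000UL };  /* saved on interrupt */
    pthread_t handler;

    printf("[user] interrupt arrives; state saved, entering privileged mode\n");
    pthread_create(&handler, NULL, process_interrupt, NULL);

    /* The switch back toward the less privileged context begins here, in
     * parallel with the handler, and possibly finishing before it does. */
    restore_user_context(&st);

    pthread_join(handler, NULL);
    printf("[user] resuming\n");
    return 0;
}
```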

    High throughput memory page reclamation

    Publication Number: US11650747B2

    Publication Date: 2023-05-16

    Application Number: US17344514

    Filing Date: 2021-06-10

    Applicant: VMware, Inc.

    CPC classification number: G06F3/064 G06F3/0604 G06F3/0679

    Abstract: Disclosed are various embodiments for high throughput reclamation of pages in memory. A first plurality of pages in a memory of the computing device are identified to reclaim. In addition, a second plurality of pages in the memory of the computing device are identified to reclaim. The first plurality of pages are prepared for storage on a swap device of the computing device. Then, a write request is submitted to a swap device to store the first plurality of pages. After submission of the write request, the second plurality of pages are prepared for storage on the swap device while the swap device completes the write request.
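
    The pipelining in this abstract, preparing the next batch of pages while the swap device completes the previous write, can be modelled with a worker thread standing in for the asynchronous swap device (compile with -pthread). Batch sizes and function names are assumptions for illustration.

```c
/* Software model only: a worker thread stands in for the swap device
 * completing an asynchronous write; compile with -pthread. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define BATCH_PAGES 64

struct batch { int id; int pages[BATCH_PAGES]; };

static void pause_ms(long ms)
{
    struct timespec ts = { .tv_sec = ms / 1000,
                           .tv_nsec = (ms % 1000) * 1000000L };
    nanosleep(&ts, NULL);
}

/* Stand-in for preparing pages (e.g. unmapping them) before swap-out. */
static void prepare_batch(struct batch *b, int id)
{
    b->id = id;
    for (int i = 0; i < BATCH_PAGES; i++)
        b->pages[i] = id * BATCH_PAGES + i;   /* pretend page frame numbers */
    printf("prepared batch %d\n", id);
}

/* Stand-in for the swap device completing an asynchronous write request. */
static void *swap_write(void *arg)
{
    struct batch *b = arg;
    pause_ms(5);                              /* simulated device latency */
    printf("swap device finished writing batch %d\n", b->id);
    return NULL;
}

int main(void)
{
    struct batch first, second;
    pthread_t io;

    prepare_batch(&first, 1);                       /* prepare first batch    */
    pthread_create(&io, NULL, swap_write, &first);  /* submit the write       */
    prepare_batch(&second, 2);                      /* overlap with the write */

    pthread_join(io, NULL);                   /* wait for the write to finish */
    printf("batch %d is ready for the next write\n", second.id);
    return 0;
}
```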
