Abstract:
A method of managing remote direct memory access (RDMA) to a virtual computing instance includes suspending locally initiated RDMA operations of the virtual computing instance executing on a first host prior to a migration of the virtual computing instance to a second host. The first host includes a first hypervisor and the second host includes a second hypervisor. The method further includes requesting a peer to suspend remotely initiated RDMA operations that target the virtual computing instance through a first channel, establishing, after the migration, a second channel between the peer and the second hypervisor that supports execution of the virtual computing instance on the second host, configuring a virtual object of the second hypervisor on the second host to use the second channel for the locally initiated RDMA operations, and requesting the peer to resume the remotely initiated RDMA operations using the second channel.
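The ordering of steps in the abstract above lends itself to a small sketch. The following C program is purely illustrative: the helper functions (suspend_local_rdma, request_peer_suspend, establish_channel, and so on) are hypothetical placeholders for the hypervisor operations the abstract names, not an actual hypervisor API, and the program only demonstrates the sequence of quiesce, migrate, re-channel, and resume.

#include <stdio.h>

struct channel { int id; };
struct host    { const char *name; };
struct peer    { const char *name; };
struct vm      { const char *name; struct channel *chan; };

static void suspend_local_rdma(struct vm *vm)
{ printf("suspend locally initiated RDMA on %s\n", vm->name); }

static void request_peer_suspend(struct peer *p, struct channel *c, struct vm *vm)
{ printf("ask %s over channel %d to suspend RDMA targeting %s\n", p->name, c->id, vm->name); }

static void migrate(struct vm *vm, struct host *dst)
{ printf("migrate %s to %s\n", vm->name, dst->name); }

static struct channel *establish_channel(struct peer *p, struct host *dst)
{
    static struct channel second = { 2 };
    printf("establish second channel between %s and the hypervisor on %s\n",
           p->name, dst->name);
    return &second;
}

static void request_peer_resume(struct peer *p, struct channel *c, struct vm *vm)
{ printf("ask %s to resume RDMA targeting %s over channel %d\n", p->name, vm->name, c->id); }

int main(void)
{
    struct channel first = { 1 };
    struct vm   vm   = { "vm0", &first };
    struct host dst  = { "host2" };
    struct peer peer = { "peer" };

    suspend_local_rdma(&vm);                   /* quiesce local RDMA before migration */
    request_peer_suspend(&peer, vm.chan, &vm); /* request sent over the first channel */
    migrate(&vm, &dst);
    vm.chan = establish_channel(&peer, &dst);  /* binding stands in for configuring the
                                                  virtual object to use the second channel */
    request_peer_resume(&peer, vm.chan, &vm);  /* resume over the second channel */
    return 0;
}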
Abstract:
The current document is directed to methods and systems that provide remote direct memory access (“RDMA”) to applications running within execution environments provided by guest operating systems and virtual machines above a virtualization layer. In one implementation, RDMA is accessed by application programs within virtual machines through a paravirtual interface that includes a virtual RDMA driver that transmits RDMA requests through a communications interface to a virtual RDMA endpoint in the virtualization layer.
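A minimal sketch of the paravirtual split described above may help: a guest-side virtual RDMA driver places verb-style requests on a shared queue, and a virtual RDMA endpoint in the virtualization layer drains them. The request layout, ring structure, and all names are assumptions made for illustration; they are not the interface of any real paravirtual RDMA device.

#include <stdint.h>
#include <stdio.h>

enum vrdma_op { VRDMA_REG_MR, VRDMA_POST_SEND, VRDMA_POST_RECV };

struct vrdma_req {            /* request carried over the guest/host channel */
    enum vrdma_op op;
    uint64_t guest_addr;      /* guest-physical address of the data buffer */
    unsigned length;
    unsigned qp_num;          /* target queue pair */
};

#define RING_SIZE 16
struct vrdma_ring {           /* simple shared single-producer request queue */
    struct vrdma_req slots[RING_SIZE];
    unsigned head, tail;
};

/* Guest side: the virtual RDMA driver queues the request and would then
 * ring a doorbell to wake the virtualization layer. */
static int vrdma_driver_post(struct vrdma_ring *ring, const struct vrdma_req *req)
{
    if (ring->head - ring->tail == RING_SIZE)
        return -1;                                   /* channel full */
    ring->slots[ring->head++ % RING_SIZE] = *req;
    printf("guest driver: forwarded op %d to the virtualization layer\n", req->op);
    return 0;
}

/* Host side: the virtual RDMA endpoint drains the queue and would map each
 * request onto the physical RDMA device. */
static void vrdma_endpoint_poll(struct vrdma_ring *ring)
{
    while (ring->tail != ring->head) {
        struct vrdma_req *req = &ring->slots[ring->tail++ % RING_SIZE];
        printf("endpoint: executing op %d on qp %u (%u bytes)\n",
               req->op, req->qp_num, req->length);
    }
}

int main(void)
{
    struct vrdma_ring ring = {0};
    struct vrdma_req send = { VRDMA_POST_SEND, 0x1000, 4096, 7 };
    vrdma_driver_post(&ring, &send);    /* application posts a send via the driver */
    vrdma_endpoint_poll(&ring);         /* endpoint in the virtualization layer services it */
    return 0;
}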
Abstract:
Latency reduction for direct memory access operations involving address translation is disclosed. Example methods disclosed herein to perform direct memory access (DMA) operations include initializing a ring of descriptors, the descriptors to index respective buffers for storing received data in a first memory. Such example methods also include causing prefetching of a first address translation associated with a second descriptor in the ring of descriptors to be performed after a first DMA operation is performed to store first received data in a first buffer indexed by a first descriptor in the ring of descriptors and before second received data to be stored in the first memory is received, the first address translation being associated with a second DMA operation for storing the second received data in the first memory.
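The receive-path prefetching described above can be sketched as follows. The iommu_prefetch_translation() call is a hypothetical placeholder for whatever mechanism warms the I/O address translation (for example, an IOTLB prefetch hint), and the descriptor layout is likewise assumed for illustration only.

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 8

struct rx_desc {
    uint64_t buf_addr;      /* untranslated (e.g. guest-physical) buffer address */
    unsigned len;
};

static struct rx_desc ring[RING_SIZE];
static unsigned next_rx;    /* descriptor the next incoming DMA will use */

/* Hypothetical hook that resolves and caches the translation for a buffer
 * before the DMA operation that needs it arrives. */
static void iommu_prefetch_translation(uint64_t addr)
{
    printf("prefetching translation for 0x%llx\n", (unsigned long long)addr);
}

/* Called after the first DMA operation has stored received data through the
 * descriptor at 'idx', and before the second received data arrives. */
static void rx_dma_complete(unsigned idx)
{
    printf("received %u bytes into buffer 0x%llx\n",
           ring[idx].len, (unsigned long long)ring[idx].buf_addr);

    /* Warm the translation for the next descriptor in the ring so the
     * second DMA operation does not stall on address translation. */
    unsigned next = (idx + 1) % RING_SIZE;
    iommu_prefetch_translation(ring[next].buf_addr);
}

int main(void)
{
    for (unsigned i = 0; i < RING_SIZE; i++)    /* initialize the descriptor ring */
        ring[i] = (struct rx_desc){ .buf_addr = 0x100000ull + i * 0x1000, .len = 1500 };

    rx_dma_complete(next_rx);   /* first DMA done: prefetch for the second */
    return 0;
}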
Abstract:
Latency reduction for direct memory access operations involving address translation is disclosed. Example methods disclosed herein to perform direct memory access (DMA) operations include initializing a ring of descriptors, the descriptors to index respective buffers for storing, in a first memory, data to be transmitted. Such example methods also include causing prefetching of a first address translation associated with a second descriptor in the ring of descriptors to be performed after a first DMA operation is performed to retrieve, for transmission, first data from a first buffer indexed by a first descriptor in the ring of descriptors and before second data is determined to be ready for transmission, the first address translation being associated with a second DMA operation for retrieving the second data from the first memory.
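The transmit-path variant differs only in direction, so a brief sketch with the same caveats as the receive-path example above suffices: prefetch_translation() again stands in for an assumed translation-warming primitive.

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 8

struct tx_desc { uint64_t buf_addr; unsigned len; };
static struct tx_desc ring[RING_SIZE];

/* Hypothetical translation-warming hook, as in the receive-path sketch. */
static void prefetch_translation(uint64_t addr)
{
    printf("prefetching translation for 0x%llx\n", (unsigned long long)addr);
}

/* Called once the first DMA operation has retrieved the data referenced by
 * the descriptor at 'idx' for transmission, and before the second payload
 * is determined to be ready to send. */
static void tx_dma_complete(unsigned idx)
{
    printf("transmitted %u bytes from buffer 0x%llx\n",
           ring[idx].len, (unsigned long long)ring[idx].buf_addr);

    /* Resolve the translation for the following descriptor now, so the
     * second DMA that fetches the next payload does not wait on it. */
    prefetch_translation(ring[(idx + 1) % RING_SIZE].buf_addr);
}

int main(void)
{
    for (unsigned i = 0; i < RING_SIZE; i++)    /* initialize the descriptor ring */
        ring[i] = (struct tx_desc){ 0x200000ull + i * 0x1000, 1500 };

    tx_dma_complete(0);   /* first transmit DMA done: prefetch for the second */
    return 0;
}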
Abstract:
Techniques for tracking, by a host system, virtual machine (VM) memory modified by a physical input/output (I/O) device that supports I/O virtualization are provided. In one embodiment, a hypervisor of the host system can receive a hardware interrupt from the physical I/O device, where the hardware interrupt indicates that a virtual function (VF) of the physical I/O device has completed a direct memory access (DMA) write to a guest memory space of a VM running on the host system. In response to the hardware interrupt, the hypervisor can invoke a function implemented by a physical function (PF) driver of the physical I/O device, where the function is configured to inspect the VF's state in order to identify memory portions modified by the DMA write. The hypervisor can then mark, in a hypervisor-level page table, one or more memory pages corresponding to the identified memory portions as dirty pages.
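The interrupt-driven dirty tracking described above might look roughly like the following. The PF-driver callback, the DMA-range reporting, and the bitmap standing in for hypervisor-level page-table dirty bits are all illustrative assumptions rather than a real SR-IOV or hypervisor API.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define MAX_RANGES 4
#define NPAGES 1024

struct dma_range { uint64_t gpa; uint64_t len; };   /* guest region a VF wrote */

static bool dirty_bitmap[NPAGES];   /* stands in for hypervisor page-table dirty bits */

/* PF-driver function: inspects the given VF's state (e.g. its completed
 * receive descriptors) and reports the guest regions the DMA write touched. */
static int pf_driver_get_written_ranges(int vf_id, struct dma_range *out, int max)
{
    (void)vf_id;
    if (max < 1)
        return 0;
    out[0] = (struct dma_range){ .gpa = 0x3000, .len = 0x2000 };  /* example data */
    return 1;
}

/* Hypervisor interrupt handler: runs when the device signals that a VF has
 * completed a DMA write into a VM's guest memory space. */
static void vf_dma_write_interrupt(int vf_id)
{
    struct dma_range ranges[MAX_RANGES];
    int n = pf_driver_get_written_ranges(vf_id, ranges, MAX_RANGES);

    for (int i = 0; i < n; i++) {
        uint64_t first = ranges[i].gpa >> PAGE_SHIFT;
        uint64_t last  = (ranges[i].gpa + ranges[i].len - 1) >> PAGE_SHIFT;
        for (uint64_t pfn = first; pfn <= last && pfn < NPAGES; pfn++) {
            dirty_bitmap[pfn] = true;            /* mark the page dirty */
            printf("marked guest page %llu dirty\n", (unsigned long long)pfn);
        }
    }
}

int main(void)
{
    vf_dma_write_interrupt(0);   /* simulate the hardware interrupt from VF 0 */
    return 0;
}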
Abstract:
Embodiments provide a network address translation (NAT) service for network devices. A network connection from at least one private network device to the NAT service is received and a network connection from at least one remote device to the NAT service is received. The private network device is positioned within a private network and the remote device is positioned within a public network. A network availability of the remote device is determined. If the remote device is unavailable or a network configuration setting associated with the remote device changes, the private network device is notified and a connection reset message is transmitted to the private network device.
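A small sketch of the availability check and reset notification described above, under the assumption of simple in-memory device and mapping records; the messaging helper is a placeholder for whatever notification and connection-reset protocol a real NAT service would use.

#include <stdbool.h>
#include <stdio.h>

struct device {
    const char *name;
    bool reachable;
    unsigned config_version;      /* bumped when its network settings change */
};

struct nat_mapping {
    struct device *private_dev;   /* device inside the private network */
    struct device *remote_dev;    /* device on the public network */
    unsigned remote_config_seen;  /* remote config version at connect time */
};

/* Placeholder for notifying the private device and sending it a reset. */
static void notify_and_reset(struct device *private_dev, const char *reason)
{
    printf("notify %s: remote peer %s; transmitting connection reset\n",
           private_dev->name, reason);
}

/* Periodic check the NAT service runs for each mapping it maintains. */
static void check_mapping(struct nat_mapping *m)
{
    if (!m->remote_dev->reachable)
        notify_and_reset(m->private_dev, "unavailable");
    else if (m->remote_dev->config_version != m->remote_config_seen)
        notify_and_reset(m->private_dev, "reconfigured");
}

int main(void)
{
    struct device priv   = { "camera-01", true, 0 };
    struct device remote = { "viewer-app", true, 0 };
    struct nat_mapping m = { &priv, &remote, remote.config_version };

    remote.reachable = false;   /* remote device drops off the network */
    check_mapping(&m);          /* private device is notified and reset */
    return 0;
}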