Abstract:
Some embodiments of the invention provide a method for providing flow processing offload (FPO) for a host computer at a physical network interface card (pNIC) connected to the host computer. A set of compute nodes executing on the host computer are each associated with a set of interfaces that are each assigned a locally-unique virtual port identifier (VPID) by a flow processing and action generator. The pNIC includes a set of interfaces that are assigned physical port identifiers (PPIDs) by the pNIC. The method includes receiving a data message at an interface of the pNIC and matching the data message to a stored flow entry that specifies a destination using a VPID. The method also includes identifying, using the VPID, a PPID as a destination of the received data message by performing a lookup in a mapping table storing a set of VPIDs and a corresponding set of PPIDs, and forwarding the data message to an interface of the pNIC associated with the identified PPID.
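A minimal sketch of this fast path follows: match a flow entry, resolve the entry's VPID to a PPID through the mapping table, and forward. The names used here (FlowKey, FlowEntry, vpid_to_ppid, and the stub helpers) are illustrative assumptions standing in for the pNIC's hardware tables, not the patent's actual data structures.

```python
# Sketch of the pNIC receive path described in the abstract (assumed names).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class FlowKey:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: int

@dataclass
class FlowEntry:
    dest_vpid: int                               # destination expressed as a virtual port ID
    actions: list = field(default_factory=list)  # e.g. header rewrites applied before forwarding

flow_table: dict[FlowKey, FlowEntry] = {}   # populated by the flow processing and action generator
vpid_to_ppid: dict[int, int] = {}           # mapping table pushed down to the pNIC

def punt_to_slow_path(key: FlowKey, payload: bytes) -> None:
    print(f"no flow entry for {key}: punt to the flow processing and action generator")

def forward_to_port(ppid: int, payload: bytes) -> None:
    print(f"forward {len(payload)} bytes out PPID {ppid}")

def process_on_pnic(key: FlowKey, payload: bytes) -> None:
    entry = flow_table.get(key)
    if entry is None:
        punt_to_slow_path(key, payload)      # unmatched: handled off the fast path
        return
    ppid = vpid_to_ppid[entry.dest_vpid]     # VPID -> PPID lookup in the mapping table
    forward_to_port(ppid, payload)
```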
Abstract:
Some embodiments provide a method for monitoring the status of a network connection between first and second host computers. The method is performed in some embodiments by a tunnel monitor executing on the first host computer that also separately executes a machine, where the machine uses a tunnel to send and receive messages to and from the second host computer. The method establishes a liveness channel with the machine to iteratively determine whether the machine is operational. The method further establishes a monitoring session with the second host computer to iteratively determine whether the tunnel is operational. When a determination is made through the liveness channel that the machine is no longer operational, the method terminates the monitoring session with the second host computer. When a determination is made that the tunnel is no longer operational, the method notifies the machine through the liveness channel.
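A minimal sketch of the monitor's control logic is shown below. The LivenessChannel and MonitoringSession classes and their method names are hypothetical stand-ins for the liveness channel to the machine and the monitoring session (for example, a BFD-style session) to the second host computer.

```python
# Hypothetical stand-ins for the two channels the tunnel monitor uses.
class LivenessChannel:
    def machine_is_up(self) -> bool:
        return True                       # stub: poll the local machine

    def notify_tunnel_down(self) -> None:
        print("notify machine: tunnel is down")

class MonitoringSession:
    def tunnel_is_up(self) -> bool:
        return True                       # stub: probe the second host over the tunnel

    def terminate(self) -> None:
        print("monitoring session with second host terminated")

class TunnelMonitor:
    def __init__(self, liveness: LivenessChannel, session: MonitoringSession):
        self.liveness = liveness
        self.session = session

    def check_once(self) -> None:
        if not self.liveness.machine_is_up():
            # Machine is gone: stop monitoring the tunnel on its behalf.
            self.session.terminate()
        elif not self.session.tunnel_is_up():
            # Tunnel failed: report it to the machine over the liveness channel.
            self.liveness.notify_tunnel_down()
```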
Abstract:
A method for sender-side assisted flow classification is disclosed. In an embodiment, a method comprises detecting a packet by a network virtualization layer engine implemented in a hypervisor on the sender side of a virtualized computer system; and determining, by the network virtualization layer engine, whether the packet requires special processing. In response to determining that the packet requires special processing, a special processing flag is inserted in a certain field of an outer header of the packet; and the packet is forwarded toward its destination so that a pNIC on the receiver side can process the packet.
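The sender-side marking step might look like the sketch below. The 8-byte outer header, the flag byte offset, and the SPECIAL_PROCESSING_BIT value are assumptions made for illustration; the method only requires that some field of the outer header carry the flag for the receiving pNIC to act on.

```python
# Illustrative constants; the actual outer-header field and bit may differ.
SPECIAL_PROCESSING_BIT = 0x01
FLAGS_OFFSET = 0                          # assumed flag byte within the outer header

def encapsulate(inner_packet: bytes, needs_special_processing: bool) -> bytes:
    outer = bytearray(8)                  # toy 8-byte encapsulation header
    if needs_special_processing:
        # Sender-side hint telling the receiving pNIC to give this flow special handling.
        outer[FLAGS_OFFSET] |= SPECIAL_PROCESSING_BIT
    return bytes(outer) + inner_packet    # forward toward the destination
```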
Abstract:
Techniques for tracking, by a host system, virtual machine (VM) memory modified by a physical input/output (I/O) device that supports I/O virtualization are provided. In one embodiment, a hypervisor of the host system can receive a hardware interrupt from the physical I/O device, where the hardware interrupt indicates that a virtual function (VF) of the physical I/O device has completed a direct memory access (DMA) write to a guest memory space of a VM running on the host system. In response to the hardware interrupt, the hypervisor can invoke a function implemented by a physical function (PF) driver of the physical I/O device, where the function is configured to inspect the VF's state in order to identify memory portions modified by the DMA write. The hypervisor can then mark, in a hypervisor-level page table, one or more memory pages corresponding to the identified memory portions as dirty pages.
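A rough sketch of the hypervisor-side handler is shown below. The PFDriver.dirty_regions_for_vf hook, the dirty-page set, and the 4 KiB page size are assumed names and values standing in for the actual PF driver interface and page-table marking.

```python
PAGE_SIZE = 4096                          # assumed guest page size

class PFDriver:
    """Hypothetical PF driver hook invoked by the hypervisor."""
    def dirty_regions_for_vf(self, vf_id: int) -> list[tuple[int, int]]:
        # Inspect VF state (e.g. completed RX descriptors) and return
        # (guest_physical_address, length) regions written by DMA.
        return []

def on_vf_dma_interrupt(pf_driver: PFDriver, vf_id: int,
                        dirty_pages: set[int]) -> None:
    # Called from the hardware-interrupt path after a VF completes a DMA write.
    for gpa, length in pf_driver.dirty_regions_for_vf(vf_id):
        first = gpa // PAGE_SIZE
        last = (gpa + length - 1) // PAGE_SIZE
        for pfn in range(first, last + 1):
            dirty_pages.add(pfn)          # mark the page dirty in hypervisor-level tracking
```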
Abstract:
Some embodiments of the invention provide a method for offloading one or more data message processing services from a machine executing on a host computer. The method is performed by the machine. The method uses a set of virtual resources allocated to the machine to perform a set of services for a first set of data messages belonging to a particular data message flow. The method determines that for a second set of data messages belonging to the particular data message flow, the set of services should be performed by a virtual network interface card (VNIC) that executes on the host computer and is attached to the machine. Based on the determination, the method directs the VNIC to perform the set of services for the second set of data messages. The VNIC uses resources of the host computer to perform the set of services for the second set of data messages.
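One way to picture the hand-off is the sketch below, which assumes a simple per-flow packet-count policy for deciding when to offload and a hypothetical Vnic.offload_services control call; the actual policy and VNIC interface may differ.

```python
class Vnic:
    """Hypothetical control surface for directing the VNIC to take over services."""
    def offload_services(self, flow_id, services: list) -> None:
        print(f"VNIC performs {services} for flow {flow_id} using host resources")

class Machine:
    OFFLOAD_AFTER = 1000                  # assumed policy: offload long-lived flows

    def __init__(self, vnic: Vnic, services: list):
        self.vnic = vnic
        self.services = services          # e.g. ["encryption", "firewall"]
        self.packet_counts: dict = {}

    def handle_packet(self, flow_id, packet) -> None:
        n = self.packet_counts.get(flow_id, 0) + 1
        self.packet_counts[flow_id] = n
        if n < self.OFFLOAD_AFTER:
            for svc in self.services:     # first set of data messages: use the
                self.apply(svc, packet)   # machine's own virtual resources
        elif n == self.OFFLOAD_AFTER:
            # Second set of data messages: direct the VNIC to perform the services.
            self.vnic.offload_services(flow_id, self.services)

    def apply(self, svc, packet) -> None:
        pass                              # stub for an in-machine service implementation
```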
Abstract:
The disclosure provides an approach for segmenting a user datagram protocol (UDP) packet. A method includes generating the UDP packet, containing UDP data, at a virtual computing instance (VCI) running on a host machine; sending the UDP packet from the VCI to a hypervisor running on the host machine; after sending the UDP packet to the hypervisor, segmenting the UDP packet into a plurality of UDP segments, wherein each of the plurality of UDP segments includes a portion of the UDP data and a UDP header; and transmitting the plurality of UDP segments, over a network, to a destination of the UDP packet.
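A minimal sketch of the segmentation step, assuming the hypervisor splits the UDP payload at a fixed maximum segment size and prepends a fresh UDP header to each piece (IP encapsulation and checksums omitted for brevity):

```python
import struct

def segment_udp(payload: bytes, src_port: int, dst_port: int,
                mss: int = 1472) -> list[bytes]:
    """Split one large UDP payload into UDP segments of at most `mss` data bytes."""
    segments = []
    for offset in range(0, len(payload), mss):
        chunk = payload[offset:offset + mss]
        # UDP header: source port, destination port, length (8-byte header
        # plus data), checksum (left as 0 in this sketch).
        header = struct.pack("!HHHH", src_port, dst_port, 8 + len(chunk), 0)
        segments.append(header + chunk)
    return segments
```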
Abstract:
Some embodiments of the invention provide a method for configuring a physical network card or physical network controller (pNIC) to provide flow processing offload (FPO) for a host computer connected to the pNIC. The host computer hosts a set of compute nodes in a virtual network. The set of compute nodes are each associated with a set of interfaces that are each assigned a locally-unique virtual port identifier (VPID) by a flow processing and action generator. The pNIC includes a set of interfaces that are assigned physical port identifiers (PPIDs) by the pNIC. The method includes providing the pNIC with a set of mappings between VPIDs and PPIDs. The method also includes sending updates to the mappings as compute nodes migrate, connect to different interfaces of the pNIC, are assigned different VPIDs, etc. In some embodiments, the flow processing and action generator executes on processing units of the host computer, while in other embodiments, the flow processing and action generator executes on a set of processing units of a pNIC that includes flow processing hardware and a set of programmable processing units.
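The control-plane side might look like the following sketch, where PnicClient is a hypothetical stand-in for whatever channel the flow processing and action generator uses to program the pNIC's mapping table; the update triggers shown (migration, detach) are examples from the abstract, not an exhaustive list.

```python
class PnicClient:
    """Hypothetical channel for programming the pNIC's VPID->PPID mapping table."""
    def install_mappings(self, mappings: dict[int, int]) -> None:
        print(f"install {len(mappings)} VPID->PPID mappings")

    def update_mapping(self, vpid: int, ppid: int) -> None:
        print(f"map VPID {vpid} -> PPID {ppid}")

    def remove_mapping(self, vpid: int) -> None:
        print(f"unmap VPID {vpid}")

class FlowProcessingAndActionGenerator:
    def __init__(self, pnic: PnicClient):
        self.pnic = pnic
        self.mappings: dict[int, int] = {}

    def bootstrap(self, initial: dict[int, int]) -> None:
        self.mappings = dict(initial)
        self.pnic.install_mappings(self.mappings)     # initial set of mappings

    def on_interface_moved(self, vpid: int, new_ppid: int) -> None:
        # e.g. a compute node migrated or attached to a different pNIC interface
        self.mappings[vpid] = new_ppid
        self.pnic.update_mapping(vpid, new_ppid)

    def on_interface_removed(self, vpid: int) -> None:
        self.mappings.pop(vpid, None)
        self.pnic.remove_mapping(vpid)
```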
Abstract:
Some embodiments of the invention provide a method of upgrading a firewall module executing on a host computer to process traffic sent to and from machines executing on the host computer. While a first version of the firewall module executes on the host computer to process the traffic to and from the machines, the method loads a second version of the firewall module alongside the first version of the firewall module. For each of multiple ports associated with machines executing on the host computer for which the firewall module processes traffic sent to and from the port, the method saves a runtime state of the first version that relates to the port, transfers association of a firewall filter associated with the port from the first version to the second version, and restores the saved runtime state for the port to the second version.
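The per-port hand-off can be sketched as below; the FirewallModule and PortFilter interfaces are illustrative assumptions rather than the module's actual API.

```python
class FirewallModule:
    """Hypothetical per-version firewall module holding runtime state per port."""
    def __init__(self, version: str):
        self.version = version
        self.state: dict = {}             # e.g. connection-tracking tables keyed by port

    def save_port_state(self, port_id: str) -> dict:
        return self.state.get(port_id, {})

    def restore_port_state(self, port_id: str, state: dict) -> None:
        self.state[port_id] = state

class PortFilter:
    def __init__(self, port_id: str, module: FirewallModule):
        self.port_id = port_id
        self.module = module              # which firewall version processes this port

def upgrade(old: FirewallModule, new: FirewallModule,
            filters: list[PortFilter]) -> None:
    # The new version is loaded alongside the old one; ports move one at a
    # time so traffic keeps being processed throughout the upgrade.
    for f in filters:
        state = old.save_port_state(f.port_id)      # save runtime state for the port
        f.module = new                              # transfer the filter's association
        new.restore_port_state(f.port_id, state)    # restore the state in the new version
```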
Abstract:
Described herein are systems, methods, and software to manage the identification of control packets in an encapsulation header. In one implementation, a computing system may receive a Geneve packet at a network interface and determine that the Geneve packet includes an Operations and Management (OAM) flag. Once the OAM flag is identified, the computing system can select a processing queue from a plurality of processing queues for a main processing system of the computing system based on the OAM flag and assign the Geneve packet to the processing queue.
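A small sketch of the receive-side steering decision, assuming the standard Geneve header layout in which the OAM (O) flag is the most significant bit of the second header byte; the queue-selection policy shown (one dedicated queue for OAM, hash-based spreading for the rest) is an illustrative assumption.

```python
GENEVE_OAM_BIT = 0x80                     # O flag: most significant bit of header byte 1
OAM_QUEUE = 0                             # assumed queue reserved for control/OAM packets

def select_queue(geneve_header: bytes, num_queues: int, flow_hash: int) -> int:
    oam = bool(geneve_header[1] & GENEVE_OAM_BIT)
    if oam or num_queues == 1:
        return OAM_QUEUE                  # steer OAM packets to the dedicated queue
    # Data packets stay spread across the remaining queues (RSS-style).
    return 1 + (flow_hash % (num_queues - 1))
```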