Abstract:
In order to facilitate efficient and scalable lookup of current hop limits of transmitted packets, a communications device embeds hop limit values along with other connection parameters in a connection data structure. To transmit a packet for a particular connection, the communications device retrieves the data structure for the particular connection and applies the hop limit value embedded in the data structure to the packet for transmission. To keep track of the hop limits embedded in the data structures of different connections, the communications device uses a binary search tree in which each node corresponds to a different connection. The communications device maintains one such search tree per communications interface.
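A minimal sketch of this lookup structure, in Go, assuming a hypothetical connID key derived from the connection tuple and an IPv6-style hop-limit field; it illustrates the per-interface search tree, not the device's actual implementation.

```go
package main

import "fmt"

// connEntry is a hypothetical per-connection record; the hop limit is
// embedded alongside other connection parameters so that transmit-time
// lookup needs only one retrieval.
type connEntry struct {
	connID   uint64 // key derived from the connection tuple (assumed)
	hopLimit uint8  // current hop limit to stamp on outgoing packets
	left     *connEntry
	right    *connEntry
}

// insert places a connection entry into the per-interface search tree.
func insert(root *connEntry, e *connEntry) *connEntry {
	if root == nil {
		return e
	}
	if e.connID < root.connID {
		root.left = insert(root.left, e)
	} else {
		root.right = insert(root.right, e)
	}
	return root
}

// lookup walks the tree to find the entry for a connection.
func lookup(root *connEntry, connID uint64) *connEntry {
	for root != nil && root.connID != connID {
		if connID < root.connID {
			root = root.left
		} else {
			root = root.right
		}
	}
	return root
}

// iface keeps one search tree per communications interface.
type iface struct {
	name string
	tree *connEntry
}

// transmit stamps the embedded hop limit onto an outgoing packet header.
func (i *iface) transmit(connID uint64, hdr []byte) {
	if e := lookup(i.tree, connID); e != nil {
		hdr[7] = e.hopLimit // IPv6 hop-limit field is byte 7 of the fixed header
	}
}

func main() {
	eth0 := &iface{name: "eth0"}
	eth0.tree = insert(eth0.tree, &connEntry{connID: 42, hopLimit: 64})
	hdr := make([]byte, 40) // IPv6 fixed header
	eth0.transmit(42, hdr)
	fmt.Println("hop limit stamped:", hdr[7])
}
```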
Abstract:
A host computer has a plurality of virtual machines executing therein under the control of a hypervisor, where the host also includes a physical network interface controller (NIC). An interrupt controller detects an interrupt generated by the physical NIC, where the interrupt corresponds to a virtual machine. If the virtual machine has exclusive affinity to one or more physical central processing units (CPUs), then the interrupt is forwarded to the virtual machine. If the virtual machine does not have exclusive affinity, then a process in the hypervisor is invoked to forward the interrupt to the virtual machine.
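The affinity test described above can be sketched as follows; the vm type, field names, and delivery functions are hypothetical placeholders for the hypervisor's actual structures.

```go
package main

import "fmt"

// vm models the scheduling state the interrupt controller consults;
// the names here are illustrative, not the hypervisor's actual data model.
type vm struct {
	id                int
	exclusiveAffinity bool // true if the VM has exclusive use of one or more physical CPUs
}

// postToVM and forwardViaHypervisor stand in for the two delivery paths.
func postToVM(v *vm, vector int) {
	fmt.Printf("interrupt %d posted directly to VM %d\n", vector, v.id)
}

func forwardViaHypervisor(v *vm, vector int) {
	fmt.Printf("interrupt %d handed to a hypervisor process for VM %d\n", vector, v.id)
}

// dispatchInterrupt applies the affinity test from the abstract: interrupts
// for exclusive-affinity VMs skip the hypervisor forwarding step.
func dispatchInterrupt(v *vm, vector int) {
	if v.exclusiveAffinity {
		postToVM(v, vector)
		return
	}
	forwardViaHypervisor(v, vector)
}

func main() {
	dispatchInterrupt(&vm{id: 1, exclusiveAffinity: true}, 33)
	dispatchInterrupt(&vm{id: 2, exclusiveAffinity: false}, 34)
}
```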
Abstract:
Exemplary methods, apparatuses, and systems include receiving time series data for each of a plurality of performance metrics. The time series data is sorted into buckets based upon an amount of variation of time series data values for each performance metric. The time series data in each bucket is divided into first and second clusters of time series data points. The bucket having the greatest distance between its clusters is used to determine the performance metric having the greatest distance between clusters. That performance metric is reported as a potential root cause of a performance issue.
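A rough sketch of this bucketing-and-clustering flow, assuming a max-minus-min spread as the variation measure, a largest-gap split as the two-cluster step, and an arbitrary bucket width; the described system's actual variation measure and clustering method may differ.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// variation is a simple spread measure (max - min) used here to assign a
// metric's time series to a bucket; the real criterion is an assumption.
func variation(xs []float64) float64 {
	lo, hi := xs[0], xs[0]
	for _, x := range xs {
		lo, hi = math.Min(lo, x), math.Max(hi, x)
	}
	return hi - lo
}

func mean(xs []float64) float64 {
	sum := 0.0
	for _, x := range xs {
		sum += x
	}
	return sum / float64(len(xs))
}

// splitTwoClusters divides a sorted series at its largest gap and returns
// the distance between the two cluster centroids, a crude stand-in for the
// clustering step described in the abstract.
func splitTwoClusters(xs []float64) float64 {
	s := append([]float64(nil), xs...)
	sort.Float64s(s)
	gapIdx, gap := 1, 0.0
	for i := 1; i < len(s); i++ {
		if d := s[i] - s[i-1]; d > gap {
			gap, gapIdx = d, i
		}
	}
	return mean(s[gapIdx:]) - mean(s[:gapIdx])
}

func main() {
	metrics := map[string][]float64{
		"cpu.ready":   {1, 1.2, 1.1, 9.8, 10.1, 9.9}, // clear two-cluster split
		"mem.usage":   {40, 41, 42, 41, 40, 42},
		"disk.lat.ms": {2, 2.1, 2.2, 2.0, 2.3, 2.1},
	}

	// Bucket metrics by variation, then report the metric whose two clusters
	// are farthest apart as the potential root cause.
	buckets := map[int][]string{} // bucket index -> metric names
	for name, series := range metrics {
		b := int(variation(series) / 5.0) // 5-unit-wide buckets (assumed width)
		buckets[b] = append(buckets[b], name)
	}

	bestMetric, bestDist := "", 0.0
	for _, names := range buckets {
		for _, name := range names {
			if d := splitTwoClusters(metrics[name]); d > bestDist {
				bestDist, bestMetric = d, name
			}
		}
	}
	fmt.Printf("potential root cause: %s (cluster distance %.2f)\n", bestMetric, bestDist)
}
```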
Abstract:
A method of optimizing network processing in a system comprising a physical host and a set of physical network interface controllers (PNICs) is provided. The physical host includes a forwarding element. The method includes determining that a set of conditions is satisfied to bypass the forwarding element for exchanging packets between a particular data compute node (DCN) and a particular PNIC. The set of conditions includes the particular DCN being the only DCN connected to the forwarding element and the particular PNIC being the only PNIC connected to the forwarding element. The method exchanges packets between the particular DCN and the particular PNIC bypassing the forwarding element. The method determines that at least one condition in said set of conditions is not satisfied. The method utilizes the forwarding element to exchange packets between the particular DCN and the particular PNIC.
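A compact sketch of the bypass decision, with assumed struct and function names; it only illustrates the two conditions (a single DCN and a single PNIC attached) and the fallback to the forwarding element when either stops holding.

```go
package main

import "fmt"

// switchState captures the two conditions from the abstract; the field
// names are illustrative, not the hypervisor's actual data model.
type switchState struct {
	connectedDCNs  int
	connectedPNICs int
}

// canBypass reports whether packets may skip the forwarding element:
// exactly one DCN and exactly one PNIC must be attached to it.
func canBypass(s switchState) bool {
	return s.connectedDCNs == 1 && s.connectedPNICs == 1
}

// datapath picks the packet path; when a condition no longer holds,
// traffic falls back to the forwarding element.
func datapath(s switchState) string {
	if canBypass(s) {
		return "DCN <-> PNIC (forwarding element bypassed)"
	}
	return "DCN <-> forwarding element <-> PNIC"
}

func main() {
	s := switchState{connectedDCNs: 1, connectedPNICs: 1}
	fmt.Println(datapath(s)) // bypass active

	s.connectedDCNs = 2 // a second DCN attaches; the bypass condition is broken
	fmt.Println(datapath(s))
}
```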
Abstract:
Some embodiments provide a queue management system that efficiently and dynamically manages multiple queues that process traffic to and from multiple virtual machines (VMs) executing on a host. This system manages the queues by (1) breaking up the queues into different priority pools, with the higher priority pools reserved for particular types of traffic or VMs (e.g., traffic for VMs that need low latency), (2) dynamically adjusting the number of queues in each pool (i.e., dynamically adjusting the size of the pools), and (3) dynamically reassigning a VM to a new queue based on one or more optimization criteria (e.g., criteria relating to the underutilization or overutilization of the queue).
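The pool-based assignment can be sketched as below; the priority labels, the 0.9 overutilization threshold, and the least-loaded placement rule are assumptions for illustration only.

```go
package main

import "fmt"

// queue and pool are simplified stand-ins for the managed NIC queues.
type queue struct {
	id   int
	vms  []string
	load float64 // fraction of capacity in use
}

type pool struct {
	priority string // e.g. "low-latency" vs. "default"
	queues   []*queue
}

// assign places a VM on the least-loaded queue of the pool matching its
// traffic class, growing the pool if every queue is overutilized.
func (p *pool) assign(vm string) *queue {
	var best *queue
	for _, q := range p.queues {
		if best == nil || q.load < best.load {
			best = q
		}
	}
	if best == nil || best.load > 0.9 { // overutilization threshold (assumed)
		best = &queue{id: len(p.queues)}
		p.queues = append(p.queues, best) // dynamically grow the pool
	}
	best.vms = append(best.vms, vm)
	return best
}

func main() {
	lowLatency := &pool{priority: "low-latency", queues: []*queue{{id: 0, load: 0.95}}}
	q := lowLatency.assign("vm-voip-1") // existing queue overutilized, so the pool grows
	fmt.Printf("vm-voip-1 -> queue %d in %s pool (pool size %d)\n",
		q.id, lowLatency.priority, len(lowLatency.queues))
}
```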
Abstract:
Techniques disclosed herein provide an approach for using receive side scaling (RSS) offloads from a physical network interface controller (PNIC) to improve the performance of a virtual network interface controller (VNIC). In one embodiment, the PNIC is configured to write hash values it computes for RSS purposes to the packets themselves. The VNIC then reads the hash values from the packets and, based on those values, places the packets into VNIC RSS queues, which are processed by respective CPUs. CPU overhead is thereby reduced, as RSS processing by the VNIC no longer requires computing hash values. In another embodiment, in which the number of PNIC RSS queues and VNIC RSS queues is identical, the VNIC may map packets from PNIC RSS queues to VNIC RSS queues using the PNIC RSS queue ID numbers, which also does not require computing RSS hash values.
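Both placement schemes can be sketched as follows, assuming the PNIC-computed hash is already present in the packet descriptor; the types and the modulo placement rule are illustrative, not the VNIC's actual logic.

```go
package main

import "fmt"

// packet carries the RSS hash that the PNIC is assumed to have written
// into the packet, plus the PNIC RSS queue it arrived on.
type packet struct {
	pnicQueueID int
	rssHash     uint32 // computed and stored by the PNIC, not recomputed in software
	payload     []byte
}

// placeByHash steers a packet into a VNIC RSS queue using the PNIC-computed
// hash, so the VNIC never has to hash the packet headers itself.
func placeByHash(vnicQueues [][]packet, p packet) int {
	q := int(p.rssHash % uint32(len(vnicQueues)))
	vnicQueues[q] = append(vnicQueues[q], p)
	return q
}

// placeByQueueID applies the second scheme: when the VNIC has the same
// number of RSS queues as the PNIC, the PNIC queue ID maps one-to-one.
func placeByQueueID(vnicQueues [][]packet, p packet) int {
	q := p.pnicQueueID
	vnicQueues[q] = append(vnicQueues[q], p)
	return q
}

func main() {
	vnicQueues := make([][]packet, 4)
	p := packet{pnicQueueID: 2, rssHash: 0xdeadbeef}
	fmt.Println("by hash ->", placeByHash(vnicQueues, p))
	fmt.Println("by queue ID ->", placeByQueueID(vnicQueues, p))
}
```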
Abstract:
A method of high packet rate network processing in a system that includes a physical host and a set of physical network interface controllers (PNICs) is provided. The physical host is hosting a set of data compute nodes (DCNs). Each DCN includes a virtual network interface controller (VNIC) for communicating with one or more PNICs to exchange packets. The method determines that a rate of packets received from a particular DCN at the VNIC of the particular DCN exceeds a predetermined threshold. The method performs polling to determine the availability of packets received at the VNIC from the particular DCN while the rate of packets received from the DCN at the VNIC is exceeding the threshold. The method utilizes interrupts to determine the availability of packets received at the VNIC from the particular DCN while the rate of packets received from the DCN at the VNIC does not exceed the threshold.
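A minimal sketch of the mode switch, with an assumed threshold of 100k packets per second; the real threshold and the way the packet rate is measured are not specified in the abstract.

```go
package main

import "fmt"

// vnicRxPath toggles between polling and interrupts based on the observed
// packet rate; the threshold value below is an assumption for illustration.
type vnicRxPath struct {
	threshold   float64 // packets per second above which polling is used
	pollingMode bool
}

// updateMode re-evaluates the receive strategy whenever the measured packet
// rate changes: poll under high load, fall back to interrupts when the rate
// drops to or below the threshold.
func (v *vnicRxPath) updateMode(packetsPerSec float64) {
	v.pollingMode = packetsPerSec > v.threshold
}

func (v *vnicRxPath) mode() string {
	if v.pollingMode {
		return "polling (interrupts suppressed)"
	}
	return "interrupt-driven"
}

func main() {
	rx := &vnicRxPath{threshold: 100000} // 100k pps, assumed threshold
	for _, rate := range []float64{5000, 250000, 40000} {
		rx.updateMode(rate)
		fmt.Printf("%8.0f pps -> %s\n", rate, rx.mode())
	}
}
```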