Abstract:
Technologies for scheduling workload submissions for a graphics processing unit (GPU) in a virtualization environment include a GPU scheduler embodied in a computing device. The virtualization environment includes a number of different virtual machines, each configured with a native graphics driver. The GPU scheduler receives GPU commands from the different virtual machines, dynamically selects a scheduling policy, and schedules the GPU commands for processing by the GPU.
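A minimal sketch of the scheduling flow described above, assuming per-VM command queues and two candidate policies (round-robin and priority). All class and method names here (GpuScheduler, submit, select_policy) and the load-based policy-selection heuristic are illustrative assumptions, not the patented design.

```python
from __future__ import annotations
from collections import deque
from dataclasses import dataclass


@dataclass
class GpuCommand:
    vm_id: str
    payload: bytes
    priority: int = 0


class GpuScheduler:
    def __init__(self) -> None:
        self.queues: dict[str, deque[GpuCommand]] = {}  # one queue per VM
        self._rr_order: deque[str] = deque()            # rotation order for round-robin

    def submit(self, cmd: GpuCommand) -> None:
        # Commands arrive from the VMs' native graphics drivers.
        if cmd.vm_id not in self.queues:
            self.queues[cmd.vm_id] = deque()
            self._rr_order.append(cmd.vm_id)
        self.queues[cmd.vm_id].append(cmd)

    def select_policy(self) -> str:
        # Dynamically choose a policy from current queue depths (assumed heuristic):
        # switch to priority scheduling when queues grow very unevenly.
        depths = [len(q) for q in self.queues.values()]
        if depths and max(depths) > 2 * max(min(depths), 1):
            return "priority"
        return "round_robin"

    def next_command(self) -> GpuCommand | None:
        if not any(self.queues.values()):
            return None
        if self.select_policy() == "round_robin":
            # Rotate through VMs, skipping those with empty queues.
            for _ in range(len(self._rr_order)):
                vm_id = self._rr_order[0]
                self._rr_order.rotate(-1)
                if self.queues[vm_id]:
                    return self.queues[vm_id].popleft()
        # Priority policy: dispatch the highest-priority pending command.
        best = max((q for q in self.queues.values() if q),
                   key=lambda q: q[0].priority)
        return best.popleft()
```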
Abstract:
An apparatus, system, method, and machine-readable medium are disclosed. In one embodiment, the apparatus is a network interface controller that includes a first virtual function owned by a virtual machine present in the computer system. The controller includes a simple filtering agent associated with the first virtual function. The agent enforces one or more simple filter rules for received network packets, and the simple filter rules are capable of blocking those packets from reaching the virtual machine. The controller also includes a second virtual function owned by a virtual machine monitor present in the computer system, as well as a side bounce filtering agent that forwards a first network packet to the second virtual function if the first network packet is blocked by at least one of the one or more simple filter rules.
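An illustrative sketch of the packet path described above, assuming a very simple rule format. The class names (Packet, SimpleFilterRule, VirtualFunction) and the match fields are hypothetical stand-ins for the controller's hardware behavior, not its actual interface.

```python
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class Packet:
    src_ip: str
    dst_port: int
    data: bytes = b""


@dataclass
class SimpleFilterRule:
    blocked_src_ip: str | None = None
    blocked_dst_port: int | None = None

    def blocks(self, pkt: Packet) -> bool:
        # A rule blocks a packet when any configured field matches.
        return (pkt.src_ip == self.blocked_src_ip
                or pkt.dst_port == self.blocked_dst_port)


class VirtualFunction:
    def __init__(self, owner: str) -> None:
        self.owner = owner                    # e.g. "VM" or "VMM"
        self.delivered: list[Packet] = []

    def deliver(self, pkt: Packet) -> None:
        self.delivered.append(pkt)


class FilteringAgents:
    """Combines the simple filtering agent and the side bounce filtering agent."""

    def __init__(self, vm_vf: VirtualFunction, vmm_vf: VirtualFunction,
                 rules: list[SimpleFilterRule]) -> None:
        self.vm_vf = vm_vf      # first VF, owned by the guest VM
        self.vmm_vf = vmm_vf    # second VF, owned by the VMM
        self.rules = rules

    def on_receive(self, pkt: Packet) -> None:
        if any(rule.blocks(pkt) for rule in self.rules):
            # Side bounce: a blocked packet is forwarded to the VMM's VF
            # rather than being delivered to the guest VM.
            self.vmm_vf.deliver(pkt)
        else:
            self.vm_vf.deliver(pkt)
```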
Abstract:
Examples may include techniques to run one or more containers on a virtual machine (VM). Examples include cloning a first VM to result in a second VM. The cloned first VM may run at least a set of containers, each capable of separately executing one or more applications. In some examples, some cloned containers are stopped at either the first or second VM to allow at least some of the resources provisioned to support the first or second VM to be reused or recycled at a hosting node. In other examples, the second VM is migrated from the hosting node to a destination hosting node to further enable resources to be reused or recycled at the hosting node.
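A simplified model of the clone-then-stop flow, assuming in-memory stand-ins for the hypervisor and container runtime. The VM and Container classes and the partition_containers helper are hypothetical, used only to make the resource-recycling idea concrete.

```python
from __future__ import annotations
import copy
from dataclasses import dataclass, field


@dataclass
class Container:
    name: str
    running: bool = True


@dataclass
class VM:
    vm_id: str
    containers: list[Container] = field(default_factory=list)


def clone_vm(first_vm: VM, new_id: str) -> VM:
    # Cloning duplicates the first VM's state, including its containers.
    second_vm = copy.deepcopy(first_vm)
    second_vm.vm_id = new_id
    return second_vm


def partition_containers(first_vm: VM, second_vm: VM, keep_on_first: set[str]) -> None:
    # Stop complementary subsets of the cloned containers on each VM so that
    # resources backing the stopped containers can be reused or recycled.
    for c in first_vm.containers:
        c.running = c.name in keep_on_first
    for c in second_vm.containers:
        c.running = c.name not in keep_on_first


# Example: vm-1 keeps app-a running; its clone vm-2 keeps app-b running.
vm1 = VM("vm-1", [Container("app-a"), Container("app-b")])
vm2 = clone_vm(vm1, "vm-2")
partition_containers(vm1, vm2, keep_on_first={"app-a"})
```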
Abstract:
Examples may include a remapping of sessions for a multi-threaded application that may be executed at a server or a client coupled to the server via a plurality of transmission control protocol (TCP) connections. Sessions may be remapped such that the multi-threaded application expects to route sessions through a same TCP connection while the sessions are actually output via separate TCP connections.
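A hedged sketch of the remapping idea: the application addresses every session to what it treats as a single logical connection, while the remapper spreads sessions across a pool of real TCP connections. The SessionRemapper class and the session-ID hashing scheme are assumptions for illustration.

```python
from __future__ import annotations
import socket


class SessionRemapper:
    def __init__(self, remote: tuple[str, int], num_connections: int = 4) -> None:
        # Open a pool of actual TCP connections to the peer.
        self.conns = [socket.create_connection(remote)
                      for _ in range(num_connections)]

    def send(self, session_id: int, payload: bytes) -> None:
        # The application believes all sessions share one connection; the
        # remapper picks the real connection by hashing the session identifier,
        # keeping each session on a consistent underlying TCP connection.
        conn = self.conns[session_id % len(self.conns)]
        conn.sendall(payload)

    def close(self) -> None:
        for conn in self.conns:
            conn.close()
```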
Abstract:
Generally, this disclosure describes systems and methods of moderating interrupts in a virtualization environment. An overflow interrupt interval is defined and used to trigger activation of an inactive guest so that the guest may respond to a critical event. The guest, including a network application, may be active for a first time interval and inactive for a second time interval. A latency interrupt interval may also be defined; it is configured for interrupt moderation when the network application associated with a packet flow is active, i.e., when the guest including the network application is active on a processor. Of course, many alternatives, variations, and modifications are possible without departing from this embodiment.
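A rough model of the two-interval scheme, assuming the "critical event" is a receive queue nearing overflow. The InterruptModerator class, the interval values, and the boolean inputs are illustrative assumptions rather than the disclosed implementation.

```python
from __future__ import annotations
import time


class InterruptModerator:
    def __init__(self, latency_interval_s: float = 0.0001,
                 overflow_interval_s: float = 0.01) -> None:
        self.latency_interval_s = latency_interval_s    # applies while the guest is active
        self.overflow_interval_s = overflow_interval_s  # applies while the guest is inactive
        self._last_interrupt = 0.0

    def should_interrupt(self, guest_active: bool, queue_near_overflow: bool) -> bool:
        now = time.monotonic()
        if guest_active:
            # Latency interrupt interval: moderate interrupts while the guest
            # hosting the network application is running on a processor.
            interval = self.latency_interval_s
        else:
            # Overflow interrupt interval: wake an inactive guest only when a
            # critical event (here, a queue nearing overflow) requires it.
            if not queue_near_overflow:
                return False
            interval = self.overflow_interval_s
        if now - self._last_interrupt >= interval:
            self._last_interrupt = now
            return True
        return False
```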