Abstract:
Apparatuses, systems, and techniques to determine metrics for paths connecting hardware components, select a plurality of groups of the hardware components based at least in part on the metrics, and perform at least a portion of a workload using a selected group of the plurality of groups.
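A minimal sketch of the idea in this abstract, under assumptions the abstract does not state: path metrics are taken to be shortest total link latency over a component graph (computed with Dijkstra's algorithm), candidate groups are scored by their worst pairwise path metric, and the best-scoring group is chosen to run the workload portion. The link table, component names, and scoring rule are all illustrative.

```python
# Hypothetical sketch: a component graph with per-link latencies, a path
# metric (shortest total latency), and a group selector that picks the
# candidate group whose worst pairwise path metric is smallest.
import heapq
from itertools import combinations

LINKS = {  # (component_a, component_b): latency in microseconds (assumed values)
    ("gpu0", "gpu1"): 1, ("gpu1", "gpu2"): 1, ("gpu2", "gpu3"): 1,
    ("gpu0", "gpu3"): 4, ("gpu1", "gpu3"): 2,
}

def shortest_path_metric(src, dst):
    """Dijkstra over the undirected link table; returns total latency."""
    adj = {}
    for (a, b), w in LINKS.items():
        adj.setdefault(a, []).append((b, w))
        adj.setdefault(b, []).append((a, w))
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nxt, w in adj.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")

def select_group(components, group_size):
    """Pick the group whose worst-case pairwise path metric is lowest."""
    best_group, best_score = None, float("inf")
    for group in combinations(components, group_size):
        score = max(shortest_path_metric(a, b) for a, b in combinations(group, 2))
        if score < best_score:
            best_group, best_score = group, score
    return best_group

if __name__ == "__main__":
    group = select_group(["gpu0", "gpu1", "gpu2", "gpu3"], 3)
    print("run workload portion on:", group)  # e.g. ('gpu0', 'gpu1', 'gpu2')
```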
Abstract:
Systems and methods are disclosed for throttling memory bandwidth accessed by virtual machines (VMs). A technique for dynamically throttling the virtual central processing units (vCPUs) assigned to a VM (tenant) controls the memory access rate of the VM. When the memory is shared by multiple VMs in a cloud-computing environment, one VM increasing its memory access rate may cause another VM to suffer memory access starvation. This behavior violates the principle of VM isolation in cloud computing. In contrast to conventional systems, a software solution for dynamically throttling the vCPUs may be implemented within a hypervisor and is therefore portable across CPU families and does not require specialized server-class CPU capabilities or limit the system configuration.
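A minimal control-loop sketch of the throttling idea, not the patented implementation: it assumes the hypervisor can sample a per-VM memory-bandwidth figure (the read_bandwidth_gbps stub below is hypothetical) and can insert idle time into a vCPU's schedule, and it adjusts that idle fraction proportionally to how far the VM exceeds its cap.

```python
# Minimal control-loop sketch: the bandwidth reader and the idle-fraction
# mechanism are placeholders for hypervisor facilities, and the cap and gain
# are assumed policy values.
import random
import time

BANDWIDTH_CAP_GBPS = 10.0   # per-tenant cap (assumed policy value)
GAIN = 0.05                 # proportional gain for the throttle adjustment

def read_bandwidth_gbps(vm_id):
    """Stand-in for a real measurement (e.g., sampled memory controller counters)."""
    return random.uniform(5.0, 20.0)

def control_step(vm_id, throttle):
    """Raise the vCPU idle fraction when the VM exceeds its cap, lower it otherwise."""
    measured = read_bandwidth_gbps(vm_id)
    error = measured - BANDWIDTH_CAP_GBPS
    throttle = min(0.9, max(0.0, throttle + GAIN * error))
    # A real hypervisor would apply this to the vCPU scheduler; here we just report it.
    print(f"vm={vm_id} bw={measured:5.1f} GB/s -> idle fraction {throttle:.2f}")
    return throttle

if __name__ == "__main__":
    throttle = 0.0
    for _ in range(5):          # periodic throttling interval
        throttle = control_step("tenant-a", throttle)
        time.sleep(0.01)
```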
Abstract:
Apparatuses, systems, and techniques to determine that a first group including first hardware components is compatible with a second group including second hardware components based at least on a first label associated with the first group and a second label associated with the second group, and cause at least one workload to be migrated from the first group to the second group based at least on determining the first and second groups are compatible with one another.
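A hedged sketch of label-based compatibility and migration, under assumptions the abstract does not state: groups carry key/value labels, compatibility means agreement on a fixed set of keys (COMPAT_KEYS, chosen here for illustration), and a workload moves only when that check passes.

```python
# Illustrative only: the label schema, compatibility keys, and group contents
# below are assumptions for demonstration.
from dataclasses import dataclass, field

COMPAT_KEYS = ("gpu_arch", "interconnect", "driver_major")  # assumed label keys

@dataclass
class HardwareGroup:
    name: str
    labels: dict
    workloads: list = field(default_factory=list)

def compatible(a: HardwareGroup, b: HardwareGroup) -> bool:
    """Two groups are treated as compatible when their labels agree on COMPAT_KEYS."""
    return all(a.labels.get(k) == b.labels.get(k) for k in COMPAT_KEYS)

def migrate(workload: str, src: HardwareGroup, dst: HardwareGroup) -> bool:
    """Move a workload only after the label-based compatibility check passes."""
    if not compatible(src, dst):
        return False
    src.workloads.remove(workload)
    dst.workloads.append(workload)
    return True

if __name__ == "__main__":
    g1 = HardwareGroup("group-1", {"gpu_arch": "arch-x", "interconnect": "fabric-a",
                                   "driver_major": 535}, ["training-job"])
    g2 = HardwareGroup("group-2", {"gpu_arch": "arch-x", "interconnect": "fabric-a",
                                   "driver_major": 535})
    print(migrate("training-job", g1, g2), g2.workloads)  # True ['training-job']
```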
Abstract:
A computer system includes an operating system having a kernel and configured to launch a plurality of computing processes. The system also includes a plurality of graphics processing units (GPUs), a front-end driver module, and a plurality of back-end driver modules. The GPUs are configured to execute instructions on behalf of the computing processes subject to a GPU service request. The front-end driver module is loaded into the kernel and configured to receive the GPU service request from one of the computing processes. Each back-end driver module is associated with one or more of the GPUs and configured to receive the GPU service request from the front-end driver module and pass the GPU service request to an associated GPU.
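An illustrative user-space sketch of the front-end/back-end split, not actual kernel driver code: one front-end object accepts GPU service requests and forwards each to the back-end module registered for the target GPU. The class names, request format, and "launch_kernel" operation are assumptions.

```python
# Dispatch sketch of the front-end/back-end driver arrangement described above.
class BackEndDriver:
    def __init__(self, gpu_ids):
        self.gpu_ids = set(gpu_ids)

    def handle(self, request):
        # A real back-end would submit work to its associated GPU; here we acknowledge.
        return f"gpu{request['gpu']} executed {request['op']}"

class FrontEndDriver:
    """Single entry point, loaded into the kernel in the abstract's model."""
    def __init__(self):
        self.backends = []

    def register(self, backend):
        self.backends.append(backend)

    def service_request(self, request):
        # Forward the request to whichever back-end owns the target GPU.
        for backend in self.backends:
            if request["gpu"] in backend.gpu_ids:
                return backend.handle(request)
        raise LookupError(f"no back-end driver for gpu {request['gpu']}")

if __name__ == "__main__":
    front = FrontEndDriver()
    front.register(BackEndDriver(gpu_ids=[0, 1]))
    front.register(BackEndDriver(gpu_ids=[2, 3]))
    print(front.service_request({"gpu": 2, "op": "launch_kernel"}))
```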
Abstract:
A device for processing graphics data includes a plurality of graphics processing units. Each graphics processing unit may correspond to a virtualized operating system. Each graphics processing unit may include a configuration register indicating a 3D class code and a command register indicating that I/O cycle decoding is disabled. The device may be configured to transmit a configuration register value to a virtualized operating system indicating a VGA-compatible class code. The device may be configured to transmit a command register value to the virtualized operating system that indicates that I/O cycle decoding is enabled. In this manner, the legacy bus architecture of the device may not limit the number of graphics processing units deployed in the device.
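A sketch of the register-presentation idea, assuming standard PCI encodings (base class 0x03 display controller, with subclass 0x02 for a 3D controller and 0x00 for a VGA-compatible controller; command-register bit 0 enables I/O space decoding): configuration-space reads destined for the virtualized operating system are rewritten so the device appears VGA-compatible with I/O decoding enabled, even though the physical registers report a 3D class code with I/O decoding disabled. The interception layer itself is simplified and hypothetical.

```python
# Register-presentation sketch; only the PCI encodings are standard, the rest
# is an assumed, simplified interception layer.
CLASS_CODE_3D = 0x030200    # base class 0x03 (display), subclass 0x02 (3D controller)
CLASS_CODE_VGA = 0x030000   # subclass 0x00 (VGA-compatible controller)
CMD_IO_SPACE_ENABLE = 0x1   # PCI command register bit 0: I/O space decoding

physical_config = {
    "class_code": CLASS_CODE_3D,
    "command": 0x0006,      # memory space + bus mastering enabled, I/O decoding off
}

def config_read_for_guest(register):
    """Return the value a virtualized OS should see for a config-space read."""
    value = physical_config[register]
    if register == "class_code" and value == CLASS_CODE_3D:
        return CLASS_CODE_VGA
    if register == "command":
        return value | CMD_IO_SPACE_ENABLE
    return value

if __name__ == "__main__":
    print(hex(config_read_for_guest("class_code")))  # 0x30000 (VGA-compatible)
    print(hex(config_read_for_guest("command")))     # 0x7 (I/O decoding enabled)
```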
Abstract:
User inputs are received from end user devices. The user inputs are associated with applications executing in parallel on a computer system. Responsive to the user inputs, data is generated using a graphics processing unit (GPU) configured as multiple virtual GPUs that are concurrently utilized by the applications. The data is then directed to the proper end user devices for display.
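A hedged sketch of the fan-in/fan-out flow this abstract describes: user inputs arrive tagged with the originating device and the application, each application's work is submitted to its assigned virtual GPU (the render_on_vgpu placeholder stands in for real GPU work), and the generated data is routed back to the device that produced the input. The app-to-vGPU assignment and all names are assumptions.

```python
# Fan-in/fan-out routing sketch; the vGPU "render" call is a placeholder,
# not a real GPU API.
from concurrent.futures import ThreadPoolExecutor

APP_TO_VGPU = {"game-a": 0, "game-b": 1}      # assumed assignment of apps to vGPUs

def render_on_vgpu(vgpu_id, app, user_input):
    """Placeholder for work submitted to one virtual GPU partition."""
    return f"[vGPU{vgpu_id}] frame for {app} after input '{user_input}'"

def handle_inputs(inputs):
    """inputs: list of (device_id, app, user_input); returns per-device frames."""
    with ThreadPoolExecutor() as pool:
        futures = {
            device: pool.submit(render_on_vgpu, APP_TO_VGPU[app], app, user_input)
            for device, app, user_input in inputs
        }
    # Direct each generated frame back to the end user device that produced the input.
    return {device: fut.result() for device, fut in futures.items()}

if __name__ == "__main__":
    frames = handle_inputs([
        ("device-1", "game-a", "move-left"),
        ("device-2", "game-b", "jump"),
    ])
    for device, frame in frames.items():
        print(device, "->", frame)
```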