Abstract:
Technologies for scheduling workload submissions for a graphics processing unit (GPU) in a virtualization environment include a GPU scheduler embodied in a computing device. The virtualization environment includes a number of different virtual machines that are configured with a native graphics driver. The GPU scheduler receives GPU commands from the different virtual machines, dynamically selects a scheduling policy, and schedules the GPU commands for processing by the GPU.
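A minimal Python sketch of the idea, assuming a per-VM command queue and a simple heuristic for choosing between FIFO and round-robin scheduling; all names (GpuScheduler, submit, dispatch) are illustrative and not taken from the abstract:

    # Sketch: per-VM command queues plus a dynamically selected scheduling policy.
    from collections import deque

    class GpuScheduler:
        def __init__(self):
            self.queues = {}          # vm_id -> deque of pending GPU commands

        def submit(self, vm_id, command):
            self.queues.setdefault(vm_id, deque()).append(command)

        def select_policy(self):
            # Hypothetical heuristic: plain FIFO when only one VM is active,
            # otherwise round-robin across VMs for fairness.
            active = [q for q in self.queues.values() if q]
            return self.round_robin if len(active) > 1 else self.fifo

        def fifo(self):
            for vm_id, q in self.queues.items():
                while q:
                    yield vm_id, q.popleft()

        def round_robin(self):
            while any(self.queues.values()):
                for vm_id, q in self.queues.items():
                    if q:
                        yield vm_id, q.popleft()

        def dispatch(self, gpu_execute):
            # Drain the queues in the order chosen by the selected policy.
            for vm_id, cmd in self.select_policy()():
                gpu_execute(vm_id, cmd)

    sched = GpuScheduler()
    sched.submit("vm1", "draw"); sched.submit("vm2", "blit")
    sched.dispatch(lambda vm, cmd: print(vm, cmd))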
Abstract:
A memory device comprises an input interface configured to receive an erase request indicating a memory portion to be erased and control circuitry configured to trigger erasing information stored by memory cells of at least a part of the indicated memory portion of the memory device by writing a predefined pattern into the memory cells during an automatic refresh cycle.
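A rough Python model of the described behavior, assuming a row-organized device and an all-zeros erase pattern; the class and method names are invented for illustration and do not come from the abstract:

    # Sketch: rows marked by an erase request are overwritten with a predefined
    # pattern during the automatic refresh cycle instead of being refreshed.
    ERASE_PATTERN = 0x00            # assumed predefined pattern

    class MemoryDevice:
        def __init__(self, num_rows, row_bytes):
            self.rows = [bytearray(b"\xff" * row_bytes) for _ in range(num_rows)]
            self.pending_erase = set()              # row indices awaiting erase

        def erase_request(self, first_row, last_row):
            # Input interface: mark the indicated memory portion for erasure.
            self.pending_erase.update(range(first_row, last_row + 1))

        def refresh_cycle(self):
            # Control circuitry: walk all rows as a normal auto-refresh would,
            # writing the predefined pattern into rows marked for erase.
            for idx, row in enumerate(self.rows):
                if idx in self.pending_erase:
                    row[:] = bytes([ERASE_PATTERN]) * len(row)
                    self.pending_erase.discard(idx)
                # otherwise the row is simply refreshed (contents unchanged)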
Abstract:
Various embodiments are generally directed to virtualized systems. A first guest memory page may be identified based at least in part on a number of accesses to a page table entry for the first guest memory page in a page table by an application executing in a virtual machine (VM) on the processor, the first guest memory page corresponding to a first byte-addressable memory. The execution of the VM and the application on the processor may be paused. The first guest memory page may be migrated to a target memory page in a second byte-addressable memory, the target memory page comprising one of a target host memory page and a target guest memory page, the second byte-addressable memory having an access speed faster than an access speed of the first byte-addressable memory.
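One way the described flow could look, sketched in Python under the assumption that the page table is a list of per-page records and that the two byte-addressable memories are modeled as dictionaries of page frames; every name below is an illustrative assumption:

    # Sketch: pick the most-accessed guest page backed by the slower memory,
    # pause the VM, copy the page into the faster memory, and remap it.
    def migrate_hot_page(page_table, slow_mem, fast_mem, pause_vm, resume_vm):
        # page_table: list of dicts {"guest_pfn", "host_pfn", "tier", "access_count"}
        # slow_mem / fast_mem: dicts mapping host page frame number -> bytearray
        # Assumes at least one page currently resides in the slow tier.
        hot = max((e for e in page_table if e["tier"] == "slow"),
                  key=lambda e: e["access_count"])
        pause_vm()                                    # stop the VM and application
        try:
            new_pfn = max(fast_mem, default=-1) + 1   # allocate a target host page
            fast_mem[new_pfn] = bytearray(slow_mem[hot["host_pfn"]])  # copy contents
            del slow_mem[hot["host_pfn"]]             # release the slow page
            hot.update(host_pfn=new_pfn, tier="fast", access_count=0)
        finally:
            resume_vm()                               # resume VM execution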
Abstract:
Systems and methods for container access to graphics processing unit (GPU) resources are disclosed herein. In some embodiments, a computing system may include a physical GPU and kernel-mode driver circuitry to communicatively couple with the physical GPU and create a plurality of emulated GPUs and a corresponding plurality of device nodes. Each device node may be associated with a single corresponding user-side container to enable communication between the user-side container and the corresponding emulated GPU. Other embodiments may be disclosed and/or claimed.
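A minimal Python sketch of the node-per-container mapping, assuming a hypothetical /dev/vgpuN naming scheme; none of the names or paths come from the abstract:

    # Sketch: one emulated GPU and one device node per container, with each
    # container allowed to open only its own node.
    class EmulatedGpu:
        def __init__(self, index):
            self.index = index

    class GpuVirtualizationDriver:
        def __init__(self, physical_gpu_name):
            self.physical_gpu = physical_gpu_name
            self.nodes = {}                 # node path -> (emulated GPU, container)

        def create_emulated_gpus(self, containers):
            for i, container in enumerate(containers):
                node = f"/dev/vgpu{i}"      # hypothetical device node path
                self.nodes[node] = (EmulatedGpu(i), container)
            return list(self.nodes)

        def open(self, node, container):
            gpu, owner = self.nodes[node]
            if owner != container:          # enforce one container per device node
                raise PermissionError(f"{container} may not open {node}")
            return gpu

    driver = GpuVirtualizationDriver("gpu0")
    driver.create_emulated_gpus(["ctr-a", "ctr-b"])
    print(driver.open("/dev/vgpu0", "ctr-a").index)   # -> 0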
Abstract:
Apparatuses, methods, and storage media associated with in-vehicle computing are disclosed herein. In embodiments, an in-vehicle system computing platform having a hypervisor to host one or more virtual machines (VMs) includes a memory shrink manager and a memory snapshot manager. The memory shrink manager is configured to orchestrate shrinking a memory footprint of one of the one or more VMs for a suspend process invoked in response to the computing platform being powered off. The memory snapshot manager is configured to save the shrunken memory footprint of the one VM to persistent storage during the suspend process, and to reload a subset of the saved shrunken memory footprint during a resume process to resume the one VM from its suspension to persistent storage. The resume process is invoked in response to the computing platform being powered on or cold booted.
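A simplified Python sketch of the suspend/resume flow, assuming free pages are tagged as "free", page frame numbers are string keys, and the snapshot is serialized as JSON; all names and formats are illustrative assumptions:

    # Sketch: shrink the footprint, save it on suspend, reload only a subset on resume.
    import json

    def suspend(vm_pages, storage_path):
        # Shrink: drop pages the guest has marked as free before snapshotting.
        shrunken = {pfn: data for pfn, data in vm_pages.items() if data != "free"}
        with open(storage_path, "w") as f:       # save snapshot to persistent storage
            json.dump(shrunken, f)
        return len(shrunken)

    def resume(storage_path, working_set):
        with open(storage_path) as f:
            snapshot = json.load(f)
        # Reload only the subset needed to resume; the rest can be loaded on demand.
        return {pfn: snapshot[pfn] for pfn in working_set if pfn in snapshot}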
Abstract:
An apparatus and method are described for mediated pass-through and shared memory page merging. For example, one embodiment of a method comprises: generating a page identifier (PI) for each of a set of guest memory pages, wherein equivalent PIs indicate that the corresponding memory pages are the same; upon detecting that a first guest memory page and a second guest memory page have PIs that are equal, merging the first and second guest memory pages into a single memory page; detecting that the first guest memory page is to be used for a direct memory access (DMA) operation; and responsively unmerging the first and second guest memory pages.
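A minimal Python sketch of PI-based merging, assuming a content hash as the page identifier; the class and method names are invented for illustration:

    # Sketch: pages with equal identifiers share one backing copy; a page is
    # unmerged (given a private copy) before it is used for DMA.
    import hashlib

    class PageMerger:
        def __init__(self):
            self.shared = {}            # PI -> shared backing bytes
            self.mapping = {}           # guest pfn -> (PI, private copy or None)

        def page_id(self, data):
            return hashlib.sha256(data).hexdigest()     # one possible PI scheme

        def add_page(self, pfn, data):
            pi = self.page_id(data)
            self.shared.setdefault(pi, bytes(data))      # merge equal pages
            self.mapping[pfn] = (pi, None)

        def read(self, pfn):
            pi, private = self.mapping[pfn]
            return private if private is not None else self.shared[pi]

        def prepare_dma(self, pfn):
            # DMA bypasses CPU page tables, so give this page its own copy again.
            pi, private = self.mapping[pfn]
            if private is None:
                self.mapping[pfn] = (pi, bytearray(self.shared[pi]))
            return self.mapping[pfn][1]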
Abstract:
Examples may include techniques for virtual machine (VM) migration. Examples may include selecting a first VM from among a plurality of VMs hosted by a source node for a first live migration to a destination node based on determined working set patterns and one or more policies.
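A small Python sketch of the selection step, assuming working-set size and dirty rate as the measured patterns and simple threshold policies; the scoring and field names are assumptions, not from the abstract:

    # Sketch: filter VMs by policy, then pick the one most likely to converge
    # quickly during pre-copy live migration.
    def select_vm_for_migration(vms, policies):
        # vms: list of dicts {"name", "working_set_mb", "dirty_rate_mb_s"}
        candidates = [
            vm for vm in vms
            if vm["working_set_mb"] <= policies["max_working_set_mb"]
            and vm["dirty_rate_mb_s"] <= policies["max_dirty_rate_mb_s"]
        ]
        # Prefer the VM whose pages change least.
        return min(candidates, key=lambda vm: vm["dirty_rate_mb_s"], default=None)

    chosen = select_vm_for_migration(
        [{"name": "vm1", "working_set_mb": 512, "dirty_rate_mb_s": 5},
         {"name": "vm2", "working_set_mb": 2048, "dirty_rate_mb_s": 40}],
        {"max_working_set_mb": 1024, "max_dirty_rate_mb_s": 20})
    print(chosen["name"])   # -> vm1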
Abstract:
Embodiments of a graphics instruction instrumentor (“GII”) and a graphics profiler (“GP”) are described. The GII may facilitate profiling of execution of graphics instructions by one or more graphics processors. The GII may identify target graphics instructions for which execution profile information is desired. The GII may store instrumentation graphics instructions in a graphics instruction buffer. The instrumentation graphics instructions may facilitate the GP in collecting graphics profile information. For example, timestamp-storage instructions may store timestamps before and after execution of the target graphics instructions. The GII may also store an interrupt-generation instruction to cause an interrupt to be sent to the GP so that the GP may begin collection of graphics profile data. The GII may store an event-wait instruction to pause the graphics processors until an event is received. Other embodiments may be described and claimed.
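A toy Python sketch of the instrumentation step, modeling the instruction buffer as a list of opcode tuples; the opcode names (STORE_TIMESTAMP, GEN_INTERRUPT, WAIT_EVENT) are invented for illustration:

    # Sketch: wrap each target instruction with timestamp-store instructions,
    # then append an interrupt-generation and an event-wait instruction.
    def instrument(buffer, targets):
        # buffer: list of (opcode, operand) tuples; targets: opcodes to profile
        out = []
        for op, arg in buffer:
            if op in targets:
                out.append(("STORE_TIMESTAMP", f"before:{op}"))
                out.append((op, arg))
                out.append(("STORE_TIMESTAMP", f"after:{op}"))
            else:
                out.append((op, arg))
        out.append(("GEN_INTERRUPT", "notify-profiler"))  # wake the profiler
        out.append(("WAIT_EVENT", "profiler-done"))       # pause GPU until resumed
        return out

    print(instrument([("DRAW", 0), ("BLIT", 1)], {"DRAW"}))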
Abstract:
Multiple operating systems are supported on a computing device by disk virtualization technologies that allow switching between a native operating system and a virtualized guest operating system without performing a format conversion of the native operating system image, which is stored in a partition of a physical data storage device. The disk virtualization technologies establish a virtual storage device in a manner that allows the guest operating system to directly access the partition of the physical storage device that contains the native operating system image.
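A minimal Python sketch of the pass-through mapping, assuming a block-addressed partition and a fixed block size; device paths, offsets, and names are illustrative assumptions, not details from the abstract:

    # Sketch: a virtual disk whose block map sends guest reads straight to the
    # physical partition holding the native OS image, with no format conversion.
    class PassThroughVirtualDisk:
        def __init__(self, physical_dev, partition_start, partition_blocks, block_size=512):
            self.physical_dev = physical_dev      # e.g. a file object opened on the raw disk
            self.start = partition_start          # first block of the native OS partition
            self.blocks = partition_blocks
            self.block_size = block_size

        def read_block(self, virtual_block):
            if not 0 <= virtual_block < self.blocks:
                raise ValueError("block outside the native OS partition")
            # Map the guest's virtual block directly onto the physical partition.
            self.physical_dev.seek((self.start + virtual_block) * self.block_size)
            return self.physical_dev.read(self.block_size)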