Abstract:
Embodiments of systems, apparatuses, and methods for a circular buffer in a redundant virtualization environment are disclosed. In one embodiment, an apparatus includes a head indicator storage location, an outgoing tail indicator storage location, a buffer tail indicator storage location, and fetch hardware. The head indicator, outgoing tail indicator, and buffer tail indicator are to indicate a head, outgoing tail, and buffer tail, respectively, of a circular buffer. The fetch hardware is to fetch from the head of the circular buffer and advance the head no further than the outgoing tail. The buffer tail is to be filled by software and advanced no further than the head.
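As an illustrative sketch only (not part of the disclosure), the C fragment below models the two ordering constraints with hypothetical names (BUF_SIZE, buf_push, buf_fetch): software fills entries and advances the buffer tail toward the head, the fetch side consumes from the head and stops at the outgoing tail, and advancement of the outgoing tail itself is assumed to happen elsewhere.

```c
#include <stdbool.h>
#include <stdint.h>

#define BUF_SIZE 64u  /* hypothetical capacity */

struct circ_buf {
    uint32_t entries[BUF_SIZE];
    uint32_t head;          /* next entry the fetch hardware will consume      */
    uint32_t outgoing_tail; /* limit beyond which the head may not advance;    */
                            /* assumed to be advanced by other logic           */
    uint32_t buffer_tail;   /* next entry to be filled by software             */
};

/* Software side: fill one entry and advance the buffer tail, stopping one
 * slot short of the head (the usual "full" test) so it never passes it. */
static bool buf_push(struct circ_buf *b, uint32_t value)
{
    uint32_t next = (b->buffer_tail + 1u) % BUF_SIZE;
    if (next == b->head)
        return false;                 /* would overtake the head */
    b->entries[b->buffer_tail] = value;
    b->buffer_tail = next;
    return true;
}

/* Fetch side: consume from the head and advance it, but never past the
 * outgoing tail. */
static bool buf_fetch(struct circ_buf *b, uint32_t *out)
{
    if (b->head == b->outgoing_tail)
        return false;                 /* nothing released for fetch yet */
    *out = b->entries[b->head];
    b->head = (b->head + 1u) % BUF_SIZE;
    return true;
}
```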
Abstract:
An apparatus and method are described for implementing memory management in a graphics processing system. For example, one embodiment of an apparatus comprises: a first plurality of graphics processing resources to execute graphics commands and process graphics data; a first memory management unit (MMU) to communicatively couple the first plurality of graphics processing resources to a system-level MMU to access a system memory; a second plurality of graphics processing resources to execute graphics commands and process graphics data; a second MMU to communicatively couple the second plurality of graphics processing resources to the first MMU; wherein the first MMU is configured as a master MMU having a direct connection to the system-level MMU and the second MMU comprises a slave MMU configured to send memory transactions to the first MMU, the first MMU either servicing a memory transaction or sending the memory transaction to the system-level MMU on behalf of the second MMU.
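A minimal sketch of the master/slave arrangement, using hypothetical types (struct mmu, struct mem_txn) in place of the hardware: each MMU either services a transaction from its own translations or sends it upstream, with the slave's upstream being the master MMU and the master's upstream being the system-level MMU.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical memory transaction: a virtual address to translate. */
struct mem_txn {
    uint64_t virt_addr;
    uint64_t phys_addr;     /* filled in once the transaction is serviced */
};

struct mmu;
typedef bool (*translate_fn)(struct mmu *self, struct mem_txn *txn);

struct mmu {
    translate_fn try_translate; /* local translation lookup                    */
    struct mmu  *upstream;      /* master MMU for a slave, system MMU for the  */
                                /* master, NULL at the top of the chain        */
};

/* Walk the MMU chain: each MMU either services the transaction itself or
 * forwards it upstream on behalf of the original requester. */
static bool mmu_service(struct mmu *m, struct mem_txn *txn)
{
    for (; m != NULL; m = m->upstream) {
        if (m->try_translate && m->try_translate(m, txn))
            return true;        /* serviced locally */
        /* miss: fall through and send the transaction upstream */
    }
    return false;               /* no MMU in the chain could translate: fault */
}
```

In this sketch the slave would be wired as slave.upstream = &master and the master as master.upstream = &system_mmu, mirroring the master's direct connection to the system-level MMU described above.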
Abstract:
Embodiments of the invention enable dynamic level boosting of operations across virtualization layers to enable efficient nested virtualization. Embodiments of the invention execute a first virtual machine monitor (VMM) to virtualize system hardware. A nested virtualization environment is created by executing a plurality of upper level VMMs via virtual machines (VMs). These upper level VMMs are used to execute an upper level virtualization layer including an operating system (OS). During operation of the above-described nested virtualization environment, a privileged instruction issued from an OS is trapped and emulated via the respective upper level VMM (i.e., the VMM that creates the VM for that OS). Embodiments of the invention enable the emulation of the privileged instruction via a lower level VMM. In some embodiments, the emulated instruction is executed via the first VMM with little to no involvement of any intermediate virtualization layers residing between the first and upper level VMMs.
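As a hedged sketch of the idea (hypothetical structures, not the patented implementation): an upper-level VMM that traps a privileged instruction can hand it directly to the first (level-0) VMM for emulation, skipping the intermediate layers.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical description of a trapped privileged instruction. */
struct trapped_insn {
    uint32_t opcode;
    uint64_t operand;
};

struct vmm {
    int          level;     /* 0 = first (root) VMM, >0 = upper-level VMM    */
    struct vmm  *lower;     /* next lower virtualization layer, NULL at L0   */
    bool       (*emulate)(struct vmm *self, struct trapped_insn *insn);
};

/* "Level boosting": the trapping upper-level VMM descends straight to the
 * first VMM and lets it emulate the instruction, so intermediate layers
 * are not involved in the emulation path. */
static bool handle_privileged_trap(struct vmm *trapping_vmm,
                                   struct trapped_insn *insn)
{
    struct vmm *root = trapping_vmm;
    while (root->lower != NULL)
        root = root->lower;          /* walk down to the L0 VMM */
    return root->emulate(root, insn);
}
```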
Abstract:
In several embodiments, a graphics processor couples to a virtual machine monitor (VMM) to present a virtual graphics processor to one or more virtual machines. A mediator for the virtual graphics processor synchronously shadows modifications to a guest graphics translation table (GTT) of a virtual machine to a shadow GTT of the VMM using trap and emulate virtualization. If the mediator detects a frequency of modifications to the guest GTT that exceeds a threshold, the mediator may then asynchronously shadow at least a portion of the guest GTT to the shadow GTT and rebuild the shadow GTT prior to submitting commands for the virtual graphics processor to the graphics processor.
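A minimal sketch of that policy, assuming hypothetical sizes and names (GTT_ENTRIES, WRITE_THRESHOLD, on_guest_gtt_write, before_command_submit): below the threshold every trapped guest GTT write is mirrored into the shadow GTT immediately; above it, the mediator stops mirroring per write and instead rebuilds the shadow GTT just before command submission.

```c
#include <stdint.h>
#include <string.h>

#define GTT_ENTRIES     4096u   /* hypothetical guest GTT size              */
#define WRITE_THRESHOLD  256u   /* writes per window before going async     */

struct vgpu_mediator {
    uint64_t guest_gtt[GTT_ENTRIES];   /* guest's view (written by the VM)  */
    uint64_t shadow_gtt[GTT_ENTRIES];  /* VMM shadow used by the real GPU   */
    uint32_t writes_this_window;
    int      async_mode;               /* 0 = trap-and-emulate, 1 = rebuild */
};

/* Called from the write-protection trap on a guest GTT page. */
static void on_guest_gtt_write(struct vgpu_mediator *m,
                               uint32_t idx, uint64_t pte)
{
    m->guest_gtt[idx] = pte;
    if (++m->writes_this_window > WRITE_THRESHOLD)
        m->async_mode = 1;             /* too hot: stop mirroring every write */
    if (!m->async_mode)
        m->shadow_gtt[idx] = pte;      /* synchronous shadowing               */
}

/* Called just before commands for the virtual GPU reach the physical GPU. */
static void before_command_submit(struct vgpu_mediator *m)
{
    if (m->async_mode)                 /* rebuild the shadow from the guest   */
        memcpy(m->shadow_gtt, m->guest_gtt, sizeof(m->shadow_gtt));
    m->writes_this_window = 0;
    m->async_mode = 0;                 /* return to synchronous shadowing     */
}
```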
Abstract:
Generally, this disclosure describes systems (and methods) for moderating interrupts in a virtualization environment. An overflow interrupt interval is defined. The overflow interrupt interval is used for triggering activation of an inactive guest so that the guest may respond to a critical event. The guest, including a network application, may be active for a first time interval and inactive for a second time interval. A latency interrupt interval may be defined. The latency interrupt interval is configured for interrupt moderation when the network application associated with a packet flow is active, i.e., when the guest including the network application is active on a processor. Of course, many alternatives, variations, and modifications are possible without departing from this embodiment.
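A hedged illustration of the two intervals, with made-up values and names (LATENCY_INTERVAL_US, OVERFLOW_INTERVAL_US, moderation_interval_us): when the guest owning a packet flow is scheduled on a processor, the shorter latency interval is used for moderation; when it is inactive, the longer overflow interval bounds how long a critical event can wait before the guest is activated.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical moderation intervals, in microseconds. */
#define LATENCY_INTERVAL_US    50u   /* guest active: keep latency low        */
#define OVERFLOW_INTERVAL_US  500u   /* guest inactive: interrupt only to     */
                                     /* handle a critical event (e.g. queue   */
                                     /* overflow) and trigger activation      */

struct packet_flow {
    bool guest_active;   /* is the guest owning this flow active on a CPU? */
};

/* Pick the interrupt moderation interval to program for a flow. */
static uint32_t moderation_interval_us(const struct packet_flow *flow)
{
    return flow->guest_active ? LATENCY_INTERVAL_US : OVERFLOW_INTERVAL_US;
}
```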
Abstract:
Embodiments of the invention describe a DMA Remapping unit (DRU) to receive, from a virtual machine monitor (VMM), a hot-page swap (HPS) request, the HPS request to include a virtual address, in use by at least one virtual machine (VM), mapped to a first memory page location, and a second memory page location. The DRU further blocks DMA requests to addresses of memory being remapped until the HPS request is fulfilled, copies the content of the first memory page location to the second memory page location, and remaps the virtual address from the first memory page location to the second memory page location.
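The sequence can be sketched as below; the DRU primitives (dru_block_dma, dru_remap, dru_unblock_dma) are hypothetical stand-ins, stubbed here, for operations the hardware would perform.

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096u

/* Hypothetical hot-page-swap request as issued by the VMM. */
struct hps_request {
    uint64_t guest_virt;   /* virtual address in use by at least one VM */
    void    *old_page;     /* first (current) memory page location      */
    void    *new_page;     /* second (destination) memory page location */
};

/* Hypothetical DRU primitives; stubbed so the sketch compiles. */
static void dru_block_dma(uint64_t va)             { (void)va; }
static void dru_unblock_dma(uint64_t va)           { (void)va; }
static void dru_remap(uint64_t va, void *new_page) { (void)va; (void)new_page; }

/* Hot-page swap: hold off DMA to the range being remapped, copy the page
 * contents, switch the translation to the new page, then resume DMA. */
static void dru_hot_page_swap(const struct hps_request *req)
{
    dru_block_dma(req->guest_virt);                    /* 1. block DMA        */
    memcpy(req->new_page, req->old_page, PAGE_SIZE);   /* 2. copy old -> new  */
    dru_remap(req->guest_virt, req->new_page);         /* 3. remap the VA     */
    dru_unblock_dma(req->guest_virt);                  /* 4. resume DMA       */
}
```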
Abstract:
An embodiment may include circuitry to be comprised at least in part in a first host, and being enabled, when the circuitry is in a first mode of operation, to modify, at least in part, first information maintained, at least in part, by the circuitry and associated, at least in part, with at least one operational state. The circuitry may be disabled from initiating modification to the first information when the circuitry is in a second mode. The circuitry may enter the second mode in response to at least one command. When in the second mode, the circuitry may (1) copy, at least in part, the first information to at least one memory region, (2) replace, at least in part, the first information with second information, and (3) enter at least another operational state associated, at least in part, with the second information.
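A rough sketch of the mode transition under invented names (struct op_state, enter_locked_mode): on the command, the circuitry enters the second mode, saves the first information to a memory region, installs the second information in its place, and would then transition to the operational state the new information describes.

```c
#include <string.h>

/* Hypothetical operational state maintained by the circuitry. */
struct op_state {
    unsigned link_speed;
    unsigned flow_control;
};

enum mode { MODE_NORMAL, MODE_LOCKED };   /* first / second mode of operation */

struct device {
    enum mode        mode;
    struct op_state  state;         /* first information (current state)       */
    struct op_state *save_region;   /* memory region receiving the saved copy  */
};

/* In the second (locked) mode the device no longer initiates modification of
 * its own state; in response to the command it saves the current state,
 * replaces it with the supplied state, and adopts the new operational state. */
static void enter_locked_mode(struct device *dev,
                              const struct op_state *second_info)
{
    dev->mode = MODE_LOCKED;
    *dev->save_region = dev->state;   /* (1) copy first info to memory region  */
    dev->state = *second_info;        /* (2) replace it with second info       */
    /* (3) the hardware would now enter the operational state associated
     *     with the second information                                         */
}
```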
Abstract:
Examples may include a remapping of sessions for a multi-threaded application that may be executed at a server or a client coupled to the server via a plurality of transmission control protocol (TCP) connections. Sessions may be remapped such that the multi-threaded application may expect to route sessions through a same TCP connection but the sessions are actually output via separate TCP connections.
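A minimal sketch of such a remapping layer, assuming hypothetical names (NUM_CONNECTIONS, remap_session): the application addresses everything to what it believes is a single connection, while the layer spreads sessions across a small pool of real TCP connections.

```c
#include <stdint.h>

#define NUM_CONNECTIONS 4u   /* hypothetical pool of real TCP connections */

/* Hypothetical handle for an established TCP connection. */
struct tcp_conn {
    int fd;
};

struct conn_pool {
    struct tcp_conn conns[NUM_CONNECTIONS];
};

/* The application expects all sessions to use one connection; the remapping
 * layer instead routes each session to one of the pooled connections. */
static struct tcp_conn *remap_session(struct conn_pool *pool,
                                      uint32_t session_id)
{
    return &pool->conns[session_id % NUM_CONNECTIONS];
}
```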