METHODS AND APPARATUS TO PROCESS COMMANDS FROM VIRTUAL MACHINES

    Publication No.: US20220326979A1

    Publication Date: 2022-10-13

    Application No.: US17845752

    Application Date: 2022-06-21

    Abstract: A disclosed example includes accessing, by a backend block service driver in an input/output virtual machine executing on one or more processors, a first command submitted to a buffer by a paravirtualized input/output frontend block driver executing in a guest virtual machine; generating, by the backend block service driver, a translated command based on the first command by translating a virtual parameter of the first command to a physical parameter associated with a physical resource; submitting, by the backend block service driver, the translated command to an input/output queue to be processed by the physical resource based on the physical parameter; and submitting, by the backend block service driver, a completion status entry to the buffer, the completion status entry indicative of completion of a direct memory access operation that copies data between the physical resource and a guest memory buffer corresponding to the guest virtual machine.
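
    Below is a minimal C sketch of the command flow this abstract describes: a frontend command carrying a guest (virtual) block address is picked up from a shared buffer, translated to a physical parameter, submitted to an I/O queue, and acknowledged with a completion status entry. The structure and function names (vq_command, translate_lba, io_queue_submit) and the fixed LBA offsets are hypothetical illustrations, not the patent's actual interfaces.

        /* Hypothetical sketch of the backend command-translation flow. */
        #include <stdint.h>
        #include <stdio.h>

        #define GUEST_LBA_BASE  0x1000   /* assumed guest-visible block offset */
        #define PHYS_LBA_BASE   0x9000   /* assumed physical block offset      */

        /* Command placed in the shared buffer by the frontend driver in the guest. */
        struct vq_command {
            uint64_t guest_lba;          /* virtual (guest) block address           */
            uint32_t num_blocks;
            void    *guest_buf;          /* guest memory buffer for the DMA copy    */
        };

        /* Completion status entry written back to the shared buffer. */
        struct vq_completion {
            uint64_t phys_lba;
            int      status;             /* 0 = DMA copy completed                  */
        };

        /* Backend step 1: translate the virtual parameter to a physical one. */
        static uint64_t translate_lba(uint64_t guest_lba)
        {
            return guest_lba - GUEST_LBA_BASE + PHYS_LBA_BASE;
        }

        /* Backend step 2: submit the translated command to the physical I/O queue
         * (stubbed as a print; a real backend would enqueue to the device). */
        static void io_queue_submit(uint64_t phys_lba, uint32_t num_blocks)
        {
            printf("I/O queue: %u blocks at physical LBA 0x%llx\n",
                   num_blocks, (unsigned long long)phys_lba);
        }

        int main(void)
        {
            struct vq_command cmd = { .guest_lba = 0x1200, .num_blocks = 8,
                                      .guest_buf = NULL };
            struct vq_completion done;

            uint64_t phys = translate_lba(cmd.guest_lba);   /* translation           */
            io_queue_submit(phys, cmd.num_blocks);          /* processed physically  */

            done.phys_lba = phys;                           /* completion entry      */
            done.status   = 0;
            printf("completion: status=%d (DMA to guest buffer done)\n", done.status);
            return 0;
        }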

    Process address space identifier virtualization using hardware paging hint

    Publication No.: US11461100B2

    Publication Date: 2022-10-04

    Application No.: US17253053

    Application Date: 2018-12-21

    Abstract: Process address space identifier virtualization uses a hardware paging hint. The processing device (100) comprises: a processing core (110); and a translation circuit coupled to the processing core, the translation circuit to: receive a workload instruction from a guest application being executed by the processing device, the workload instruction comprising an untranslated guest process address space identifier (gPASID), a workload for an input/output (I/O) target device, and an identifier of a submission register on the I/O target device (410), access a paging data structure (PDS) associated with the guest application to retrieve a page table entry corresponding to the gPASID and the identifier of the submission register (420), determine a value of an I/O hint bit of the page table entry corresponding to the gPASID and the identifier of the submission register (430), responsive to determining that the I/O hint bit is enabled, keep the untranslated gPASID in the workload instruction (440), and provide the workload instruction to a work queue of the I/O target device (450).
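
    A minimal C sketch of the pass-through decision described above, assuming a hypothetical page-table-entry layout with an I/O hint bit and a host-PASID field: when the hint is enabled the untranslated gPASID is kept, otherwise a translated PASID is substituted before the workload reaches the device work queue. All names and encodings are illustrative, not the patent's actual formats.

        /* Hypothetical sketch of the gPASID pass-through decision. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        struct pte {                      /* page table entry keyed by gPASID + register */
            bool     io_hint;             /* hardware paging hint bit                    */
            uint32_t host_pasid;          /* translated PASID, used when hint is clear   */
        };

        struct workload {
            uint32_t pasid;               /* starts as the untranslated gPASID            */
            uint64_t submit_reg;          /* identifier of the device submission register */
        };

        /* If the I/O hint bit is enabled, keep the untranslated gPASID; otherwise
         * substitute the host PASID before queueing the workload. */
        static void submit_to_work_queue(struct workload *w, const struct pte *e)
        {
            if (!e->io_hint)
                w->pasid = e->host_pasid;
            printf("work queue <- reg 0x%llx, PASID 0x%x (%s)\n",
                   (unsigned long long)w->submit_reg, (unsigned)w->pasid,
                   e->io_hint ? "untranslated gPASID kept" : "translated to host PASID");
        }

        int main(void)
        {
            struct pte entry = { .io_hint = true, .host_pasid = 0x42 };
            struct workload w = { .pasid = 0x07, .submit_reg = 0x410 };
            submit_to_work_queue(&w, &entry);
            return 0;
        }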

    Method and apparatus to use DRAM as a cache for slow byte-addressable memory for efficient cloud applications

    Publication No.: US11307985B2

    Publication Date: 2022-04-19

    Application No.: US17255886

    Application Date: 2018-09-28

    Abstract: Various embodiments are generally directed to virtualized systems. A first guest memory page may be identified based at least in part on a number of accesses to a page table entry for the first guest memory page in a page table by an application executing in a virtual machine (VM) on the processor, the first guest memory page corresponding to a first byte-addressable memory. The execution of the VM and the application on the processor may be paused. The first guest memory page may be migrated to a target memory page in a second byte-addressable memory, the target memory page comprising one of a target host memory page and a target guest memory page, the second byte-addressable memory having an access speed faster than an access speed of the first byte-addressable memory.
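
    A minimal C sketch of the hot-page selection and migration step described above: the most-accessed page still resident in the slower byte-addressable memory is copied into a target page in the faster memory. The tier names, the page list, and the access counters are hypothetical stand-ins; the pause/resume of the VM around the copy is omitted.

        /* Hypothetical sketch of hot-page migration between memory tiers. */
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define PAGE_SIZE 4096

        enum tier { TIER_SLOW, TIER_FAST };   /* e.g. slow byte-addressable memory vs. DRAM */

        struct guest_page {
            uint64_t gfn;          /* guest frame number                        */
            uint64_t accesses;     /* accesses counted via the page table entry */
            enum tier where;
            void    *data;
        };

        /* Pick the most-accessed page still resident in the slow tier. */
        static struct guest_page *hottest_slow_page(struct guest_page *p, size_t n)
        {
            struct guest_page *best = NULL;
            for (size_t i = 0; i < n; i++)
                if (p[i].where == TIER_SLOW && (!best || p[i].accesses > best->accesses))
                    best = &p[i];
            return best;
        }

        /* Copy the page into a target page in the faster memory. */
        static void migrate(struct guest_page *pg, void *fast_target)
        {
            memcpy(fast_target, pg->data, PAGE_SIZE);
            free(pg->data);
            pg->data  = fast_target;
            pg->where = TIER_FAST;
        }

        int main(void)
        {
            struct guest_page pages[2] = {
                { .gfn = 1, .accesses = 10, .where = TIER_SLOW, .data = calloc(1, PAGE_SIZE) },
                { .gfn = 2, .accesses = 90, .where = TIER_SLOW, .data = calloc(1, PAGE_SIZE) },
            };
            struct guest_page *hot = hottest_slow_page(pages, 2);
            migrate(hot, calloc(1, PAGE_SIZE));
            printf("migrated gfn %llu to the fast tier\n", (unsigned long long)hot->gfn);
            return 0;
        }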

    Mechanism for providing multiple screen regions on a high resolution display

    Publication No.: US11069021B2

    Publication Date: 2021-07-20

    Application No.: US16305865

    Application Date: 2016-07-02

    Inventors: Dingyu Pei, Kun Tian

    Abstract: A display engine comprises a surface splitter to generate frame buffer coordinates to split frame buffer data into a plurality of regions, each corresponding to a frame buffer coordinate, a pipeline, including a plurality of pipes, to receive the frame buffer coordinates, wherein two or more of the plurality of pipes operate in parallel to process frame buffer data corresponding to a region of the frame buffer identified by the frame buffer coordinates, a first of a plurality of transcoders to merge the frame buffer data from each of the two or more pipes into an output signal whenever the display engine is operating in a multi-pipe collaboration mode, and a multiplexer (Mux) and multi-stream arbiter to control an order of transmission of the frame buffer data from each of the two or more pipes to the first transcoder based on a fetch order received from the surface splitter.
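
    A minimal C sketch of the split/process/merge flow described above: a splitter divides the frame buffer into regions, each region is handled by a "pipe", and the results for a scanline are merged in the fetch order given by the splitter. The two-pipe horizontal split, the arbiter's fetch order, and the data layout are hypothetical; real pipes would run in parallel in hardware rather than in a loop.

        /* Hypothetical sketch of surface splitting and multi-pipe merging. */
        #include <stdint.h>
        #include <stdio.h>

        #define WIDTH  8
        #define HEIGHT 2
        #define PIPES  2

        struct region { int x0, x1; };   /* frame buffer coordinates for one pipe */

        /* Surface splitter: divide each scanline into PIPES horizontal regions. */
        static void split(struct region r[PIPES])
        {
            int step = WIDTH / PIPES;
            for (int p = 0; p < PIPES; p++) {
                r[p].x0 = p * step;
                r[p].x1 = (p + 1) * step;
            }
        }

        /* One pipe processes its region (a trivial copy stands in for the
         * real per-pipe processing). */
        static void pipe_process(const uint8_t *fb, const struct region *r,
                                 uint8_t *out, int y)
        {
            for (int x = r->x0; x < r->x1; x++)
                out[x] = fb[y * WIDTH + x];
        }

        int main(void)
        {
            uint8_t fb[WIDTH * HEIGHT], line[WIDTH];
            struct region regions[PIPES];
            int fetch_order[PIPES] = { 0, 1 };   /* order imposed by the arbiter */

            for (int i = 0; i < WIDTH * HEIGHT; i++)
                fb[i] = (uint8_t)i;
            split(regions);

            /* Transcoder: merge pipe output for each scanline in fetch order. */
            for (int y = 0; y < HEIGHT; y++) {
                for (int i = 0; i < PIPES; i++)
                    pipe_process(fb, &regions[fetch_order[i]], line, y);
                printf("scanline %d merged from %d pipes\n", y, PIPES);
            }
            return 0;
        }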

    UNIFIED ADDRESS TRANSLATION FOR VIRTUALIZATION OF INPUT/OUTPUT DEVICES

    Publication No.: US20210173790A1

    Publication Date: 2021-06-10

    Application No.: US16651786

    Application Date: 2017-12-29

    Abstract: Embodiments of apparatuses, methods, and systems for unified address translation for virtualization of input/output devices are described. In an embodiment, an apparatus includes first circuitry to use at least an identifier of a device to locate a context entry and second circuitry to use at least a process address space identifier (PASID) to locate a PASID-entry. The context entry is to include at least one of a page-table pointer to a page-table translation structure and a PASID. The PASID-entry is to include at least one of a first-level page-table pointer to a first-level translation structure and a second-level page-table pointer to a second-level translation structure. The PASID is to be supplied by the device. At least one of the apparatus, the context entry, and the PASID entry is to include one or more control fields to indicate whether the first-level page-table pointer or the second-level page-table pointer is to be used.
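
    A minimal C sketch of the two-stage lookup described above: a device identifier selects a context entry, the device-supplied PASID selects a PASID entry, and a control field selects either the first-level or the second-level page-table pointer. The field names, control-field encodings, and table sizes are hypothetical, not the architectural formats.

        /* Hypothetical sketch of unified context/PASID table lookup. */
        #include <stdint.h>
        #include <stdio.h>

        #define FIRST_LEVEL_ONLY   1    /* assumed control-field encodings */
        #define SECOND_LEVEL_ONLY  2

        struct pasid_entry {
            uint64_t fl_pt_ptr;         /* first-level page-table pointer        */
            uint64_t sl_pt_ptr;         /* second-level page-table pointer       */
            uint8_t  mode;              /* control field: which pointer to use   */
        };

        struct context_entry {
            struct pasid_entry *pasid_table;  /* indexed by the supplied PASID   */
        };

        /* First circuitry: device identifier -> context entry (one-entry table here). */
        static struct context_entry *lookup_context(struct context_entry *tbl,
                                                    uint16_t dev_id)
        {
            (void)dev_id;               /* a real table would index by bus/dev/fn */
            return &tbl[0];
        }

        /* Second circuitry: PASID -> PASID entry, then pick a page-table pointer. */
        static uint64_t root_table_for(struct context_entry *ctx, uint32_t pasid)
        {
            struct pasid_entry *pe = &ctx->pasid_table[pasid];
            return (pe->mode == FIRST_LEVEL_ONLY) ? pe->fl_pt_ptr : pe->sl_pt_ptr;
        }

        int main(void)
        {
            struct pasid_entry pasids[4] = {
                [2] = { .fl_pt_ptr = 0x1000, .sl_pt_ptr = 0x2000,
                        .mode = SECOND_LEVEL_ONLY },
            };
            struct context_entry ctx_tbl[1] = { { .pasid_table = pasids } };

            struct context_entry *ctx = lookup_context(ctx_tbl, /*dev_id=*/7);
            printf("translation root: 0x%llx\n",
                   (unsigned long long)root_table_for(ctx, /*pasid=*/2));
            return 0;
        }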

    Apparatus and method for a hybrid layer of address mapping for a virtualized input/output (I/O) implementation

    Publication No.: US10983821B2

    Publication Date: 2021-04-20

    Application No.: US16328062

    Application Date: 2016-09-26

    Abstract: An apparatus and method are described for implementing a hybrid layer of address mapping for an IOMMU implementation. For example, one embodiment of a graphics processing apparatus comprises: virtualization circuitry to implement a virtualized execution environment in which a plurality of guest virtual machines (VMs) are to execute and share execution resources of the graphics processing apparatus; an input/output (I/O) memory management unit (IOMMU) to couple the VMs to one or more I/O devices; a hybrid layer address mapping (HLAM) module to combine entries from a per-process graphics translation table (PPGTT) with entries from a global graphics translation table (GGTT) into a first integrated page table, the first integrated page table mapping PPGTT guest page numbers (GPNs) to host page numbers (HPNs) and mapping GGTT virtual GPNs to HPNs; the HLAM to transform a GGTT GPN into a virtual GPN usable to access a corresponding HPN within the first integrated page table in response to a GGTT read/write operation generated by a first guest virtual machine (VM).
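
    A minimal C sketch of the hybrid-layer mapping idea described above: PPGTT and GGTT entries share one integrated GPN-to-HPN table, with GGTT guest page numbers first shifted into a reserved virtual-GPN window so they do not collide with PPGTT GPNs. The table size, the window offset, and the example mappings are hypothetical.

        /* Hypothetical sketch of a hybrid-layer address mapping (HLAM) table. */
        #include <stdint.h>
        #include <stdio.h>

        #define TABLE_SIZE      32
        #define GGTT_VGPN_BASE  16   /* assumed virtual-GPN window reserved for GGTT */

        static uint64_t integrated_table[TABLE_SIZE];  /* GPN (or virtual GPN) -> HPN */

        /* HLAM transform: move a GGTT GPN into the virtual-GPN window. */
        static uint32_t ggtt_to_virtual_gpn(uint32_t ggtt_gpn)
        {
            return GGTT_VGPN_BASE + ggtt_gpn;
        }

        static void map(uint32_t gpn, uint64_t hpn) { integrated_table[gpn] = hpn; }

        int main(void)
        {
            /* Combine entries from both tables into the single integrated table. */
            map(3, 0xAAA0);                         /* PPGTT GPN 3 -> HPN           */
            map(ggtt_to_virtual_gpn(3), 0xBBB0);    /* GGTT GPN 3 -> HPN, no clash  */

            /* A GGTT read/write from a guest VM is looked up via the virtual GPN. */
            uint32_t vgpn = ggtt_to_virtual_gpn(3);
            printf("PPGTT GPN 3 -> HPN 0x%llx, GGTT GPN 3 -> HPN 0x%llx\n",
                   (unsigned long long)integrated_table[3],
                   (unsigned long long)integrated_table[vgpn]);
            return 0;
        }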
