REGISTER PARTITION AND PROTECTION FOR VIRTUALIZED PROCESSING DEVICE

    Publication Number: US20190004840A1

    Publication Date: 2019-01-03

    Application Number: US15637810

    Application Date: 2017-06-29

    Abstract: A register protection mechanism for a virtualized accelerated processing device (“APD”) is disclosed. The mechanism protects registers of the accelerated processing device designated as physical-function-or-virtual-function registers (“PF-or-VF* registers”), which are single architectural instance registers that are shared among different functions that share the APD in a virtualization scheme whereby each function can maintain a different value in these registers. The protection mechanism for these registers comprises comparing the function associated with the memory address specified by a particular register access request to the “currently active” function for the APD and disallowing the register access request if a match does not occur.
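    As an illustration of the check this abstract describes, the hedged C sketch below compares the function that owns a register address against the currently active function and drops mismatching accesses. The per-function MMIO window size, the address-to-function mapping, and all identifiers are hypothetical, not taken from the patent.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define FUNC_MMIO_STRIDE 0x10000u   /* hypothetical per-function register window size */

        typedef struct {
            uint32_t active_function;       /* function currently scheduled on the APD */
        } apd_state_t;

        /* Derive the owning function from the register address in the request. */
        static uint32_t function_for_address(uint64_t reg_addr)
        {
            return (uint32_t)(reg_addr / FUNC_MMIO_STRIDE);
        }

        /* Allow the access only if the owning function is the currently active one. */
        static bool allow_pf_or_vf_access(const apd_state_t *apd, uint64_t reg_addr)
        {
            return function_for_address(reg_addr) == apd->active_function;
        }

        int main(void)
        {
            apd_state_t apd = { .active_function = 2 };

            /* Access inside the active function's window: allowed (prints 1). */
            printf("%d\n", allow_pf_or_vf_access(&apd, 2 * FUNC_MMIO_STRIDE + 0x40));
            /* Access targeting another function's window: disallowed (prints 0). */
            printf("%d\n", allow_pf_or_vf_access(&apd, 5 * FUNC_MMIO_STRIDE + 0x40));
            return 0;
        }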

    Method and apparatus for managing memory

    Publication Number: US12293092B2

    Publication Date: 2025-05-06

    Application Number: US18083306

    Application Date: 2022-12-16

    Abstract: A method and apparatus of managing memory includes storing a first memory page at a shared memory location in response to the first memory page including data shared between a first virtual machine and a second virtual machine. A second memory page is stored at a memory location unique to the first virtual machine in response to the second memory page including data unique to the first virtual machine. The first memory page is accessed by the first virtual machine and the second virtual machine, and the second memory page is accessed by the first virtual machine and not the second virtual machine.
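    A hedged C sketch of the placement rule above: a page flagged as shared between virtual machines is backed by a single shared allocation, while a page unique to one virtual machine gets its own backing storage and is inaccessible to the other. The structures, allocators, and names are hypothetical simplifications.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define PAGE_SIZE 4096u

        typedef struct {
            bool  shared;    /* data common to both virtual machines? */
            void *storage;   /* backing memory for this page */
        } page_t;

        static void *shared_pool_alloc(void)  { return calloc(1, PAGE_SIZE); }
        static void *vm_private_alloc(int vm) { (void)vm; return calloc(1, PAGE_SIZE); }

        /* Place a page: shared pages go to the shared location, unique pages to the VM. */
        static void place_page(page_t *pg, int owning_vm)
        {
            pg->storage = pg->shared ? shared_pool_alloc() : vm_private_alloc(owning_vm);
        }

        /* A page is accessible to its owner, or to any VM if it is shared. */
        static bool may_access(const page_t *pg, int vm, int owning_vm)
        {
            return pg->shared || vm == owning_vm;
        }

        int main(void)
        {
            page_t shared_pg  = { .shared = true  };
            page_t private_pg = { .shared = false };
            place_page(&shared_pg, 0);
            place_page(&private_pg, 0);

            printf("VM1 reads shared page:  %d\n", may_access(&shared_pg, 1, 0));  /* 1 */
            printf("VM1 reads private page: %d\n", may_access(&private_pg, 1, 0)); /* 0 */
            free(shared_pg.storage);
            free(private_pg.storage);
            return 0;
        }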

    MULTIPLE PROCESSES SHARING GPU MEMORY OBJECTS

    Publication Number: US20240193016A1

    Publication Date: 2024-06-13

    Application Number: US18064170

    Application Date: 2022-12-09

    CPC classification number: G06F9/544 G06F12/023

    Abstract: An apparatus and method for efficiently executing multiple processes by reducing the memory usage of those processes are disclosed. In various implementations, a computing system includes a first processor and a second processor that support parallel data applications stored on a remote server that provides cloud computing services to multiple users. The first processor creates multiple processes, referred to as “instances” in parallel computing platforms, for a particular application as users request to execute the application. When the first processor detects a function call of the application within a particular instance, the first processor searches for shareable data objects to be used by the second processor when executing that instance of the function call, and frees data storage allocated to data objects that are already shared by one or more instances. The amount of memory allocated for the multiple instances of the application is thereby reduced.
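    The hedged C sketch below illustrates the sharing step in this abstract: when a new instance would allocate a data object that another instance already registered, the existing object is reused and the duplicate allocation is freed. The registry, keys, and names are hypothetical, not the patent's data structures.

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        typedef struct {
            uint64_t key;        /* identity of the object's contents (e.g. a hash) */
            void    *mem;        /* allocation backing the object */
            int      refcount;   /* number of instances sharing this object */
        } gpu_object_t;

        #define MAX_OBJECTS 64
        static gpu_object_t registry[MAX_OBJECTS];
        static int registry_len;

        /* Reuse an existing shareable object for `key`, or register `candidate`. */
        static gpu_object_t *share_or_register(uint64_t key, void *candidate)
        {
            for (int i = 0; i < registry_len; i++) {
                if (registry[i].key == key) {
                    registry[i].refcount++;
                    free(candidate);                 /* duplicate storage is released */
                    return &registry[i];
                }
            }
            registry[registry_len] = (gpu_object_t){ key, candidate, 1 };
            return &registry[registry_len++];
        }

        int main(void)
        {
            void *a = malloc(1024), *b = malloc(1024);
            gpu_object_t *first  = share_or_register(0xABCD, a);
            gpu_object_t *second = share_or_register(0xABCD, b);   /* reuses `first` */
            printf("same object: %d, refcount: %d\n", first == second, first->refcount);
            free(first->mem);
            return 0;
        }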

    VARYING FIRMWARE FOR VIRTUALIZED DEVICE

    Publication Number: US20220058048A1

    Publication Date: 2022-02-24

    Application Number: US17453341

    Application Date: 2021-11-02

    Abstract: A technique for varying firmware for different virtual functions in a virtualized device is provided. The virtualized device includes a hardware accelerator and a microcontroller that executes firmware. The device is virtualized in that it performs work for different virtual functions (each associated with a different virtual machine), with each function getting a “time-slice” during which work is performed for that function. To vary the firmware, each time the virtualized device switches from performing work for a current virtual function to performing work for a subsequent virtual function, one or more microcontrollers of the virtualized device examine memory storing the addresses of the firmware for the subsequent virtual function and begin executing that firmware. The addresses for the firmware are provided by the corresponding virtual machine at configuration time.
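    A hedged C sketch of the switch step described above: at each change of the active function, the microcontroller looks up the firmware entry point that the next function's virtual machine registered at configuration time and begins executing it. The address table and the function-pointer model are hypothetical stand-ins for real firmware images.

        #include <stdio.h>

        typedef void (*fw_entry_t)(void);

        static void fw_for_vf0(void) { puts("running firmware configured for VF0"); }
        static void fw_for_vf1(void) { puts("running firmware configured for VF1"); }

        /* Per-function firmware addresses, filled in by each virtual machine at
         * configuration time. */
        static fw_entry_t fw_table[] = { fw_for_vf0, fw_for_vf1 };

        /* Called when the device switches to performing work for `next_vf`. */
        static void world_switch(int next_vf)
        {
            fw_entry_t entry = fw_table[next_vf];   /* examine the stored firmware address */
            entry();                                /* begin executing that firmware */
        }

        int main(void)
        {
            world_switch(0);   /* time-slice for virtual function 0 */
            world_switch(1);   /* time-slice for virtual function 1 */
            return 0;
        }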

    Multiple application cooperative frame-based GPU scheduling

    Publication Number: US11100604B2

    Publication Date: 2021-08-24

    Application Number: US16263709

    Application Date: 2019-01-31

    Abstract: Systems, apparatuses, and methods for scheduling jobs for multiple frame-based applications are disclosed. A computing system executes a plurality of frame-based applications for generating pixels for display. The applications convey signals to a scheduler to notify the scheduler of various events within a given frame being rendered. The scheduler adjusts the priorities of applications based on the signals received from the applications. The scheduler attempts to adjust priorities of applications and schedule jobs from these applications so as to minimize the perceived latency of each application. When an application has enqueued the last job for the current frame, the scheduler raises the priority of the application to high. This results in the scheduler attempting to schedule all remaining jobs for the application back-to-back. Once all jobs of the application have been completed, the priority of the application is reduced, permitting jobs of other applications to be executed.
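    The hedged C sketch below illustrates the priority rule in this abstract: an application's signal that its last job for the frame has been enqueued raises its priority so the remaining jobs are scheduled back-to-back, and completion of those jobs lowers the priority again. The structures and names are hypothetical; a real scheduler tracks far more state.

        #include <stdio.h>

        typedef enum { PRIO_NORMAL, PRIO_HIGH } prio_t;

        typedef struct {
            const char *name;
            int         pending_jobs;   /* jobs enqueued for the current frame */
            prio_t      prio;
        } app_t;

        /* Signal from the application: its final job for this frame was enqueued. */
        static void on_last_job_enqueued(app_t *app)
        {
            app->prio = PRIO_HIGH;      /* schedule the remaining jobs back-to-back */
        }

        /* Scheduler notification: one of the application's jobs finished. */
        static void on_job_completed(app_t *app)
        {
            if (--app->pending_jobs == 0)
                app->prio = PRIO_NORMAL;    /* let other applications' jobs run */
        }

        int main(void)
        {
            app_t game = { "game", 3, PRIO_NORMAL };
            on_last_job_enqueued(&game);
            printf("priority after last enqueue: %d\n", game.prio);   /* PRIO_HIGH */
            for (int i = 0; i < 3; i++)
                on_job_completed(&game);
            printf("priority after frame done:  %d\n", game.prio);    /* PRIO_NORMAL */
            return 0;
        }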

    Maintaining visibility of virtual function in bus-alive, core-off state of graphics processing unit

    Publication Number: US10923082B2

    Publication Date: 2021-02-16

    Application Number: US16177064

    Application Date: 2018-10-31

    Abstract: A processing unit includes a processor core that implements a physical function that supports multiple virtual functions. The processing unit also includes a bus interface that supports communication between an external bus and the physical and virtual functions implemented using the processor core. During a reset of the processing unit, power to the processor core is interrupted while power to the bus interface is maintained. Concurrently with the power interruption, the bus interface responds to requests for the physical and virtual functions received over the external bus, based on state information associated with the virtual functions. Power is restored to the processor core in response to reinitialization of the GPU. Once power to the processor core is restored, the bus interface stops responding to requests for the physical and virtual functions on its own and instead forwards requests received over the external bus to the processor core.
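    A hedged C sketch of the bus-alive, core-off behavior above: while the core is powered down, the bus interface answers requests from state information it holds for each function; once power is restored, it forwards requests to the core instead. All structures and names are hypothetical simplifications.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        typedef struct {
            bool     core_powered;
            uint32_t vf_state[4];   /* state information kept per virtual function */
        } gpu_t;

        static uint32_t core_handle_request(gpu_t *gpu, int vf)
        {
            return gpu->vf_state[vf] + 1;   /* stand-in for work done by the core */
        }

        /* Bus interface entry point for a request targeting virtual function `vf`. */
        static uint32_t bus_handle_request(gpu_t *gpu, int vf)
        {
            if (!gpu->core_powered)
                return gpu->vf_state[vf];        /* respond from saved function state */
            return core_handle_request(gpu, vf); /* forward to the processor core */
        }

        int main(void)
        {
            gpu_t gpu = { .core_powered = false, .vf_state = { 10, 20, 30, 40 } };
            printf("core off, VF1 -> %u\n", bus_handle_request(&gpu, 1));   /* 20 */
            gpu.core_powered = true;             /* power restored after reinitialization */
            printf("core on,  VF1 -> %u\n", bus_handle_request(&gpu, 1));   /* 21 */
            return 0;
        }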

    MULTIPLE APPLICATION COOPERATIVE FRAME-BASED GPU SCHEDULING

    Publication Number: US20200250787A1

    Publication Date: 2020-08-06

    Application Number: US16263709

    Application Date: 2019-01-31

    Abstract: Systems, apparatuses, and methods for scheduling jobs for multiple frame-based applications are disclosed. A computing system executes a plurality of frame-based applications for generating pixels for display. The applications convey signals to a scheduler to notify the scheduler of various events within a given frame being rendered. The scheduler adjusts the priorities of applications based on the signals received from the applications. The scheduler attempts to adjust priorities of applications and schedule jobs from these applications so as to minimize the perceived latency of each application. When an application has enqueued the last job for the current frame, the scheduler raises the priority of the application to high. This results in the scheduler attempting to schedule all remaining jobs for the application back-to-back. Once all jobs of the application have been completed, the priority of the application is reduced, permitting jobs of other applications to be executed.

    SYSTEMS AND METHODS FOR ENSURING PROCESSING UNIT HARDWARE STATE INTEGRITY IN LIVE MIGRATION

    Publication Number: US20250110930A1

    Publication Date: 2025-04-03

    Application Number: US18478895

    Application Date: 2023-09-29

    Abstract: A computer-implemented method for ensuring processing unit hardware state integrity in live migration can include participating as a source, by a processing unit, in a live migration procedure by injecting, into a live migration data package containing a state of the processing unit, a signature verifying the state. The method can additionally include participating as a target, by the processing unit, in an additional live migration procedure migrating an additional live migration data package containing an additional state of an additional processing unit by performing an integrity check based on an additional signature, in the additional live migration data package, verifying the additional state. Various other methods, systems, and computer-readable media are also disclosed.
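    The hedged C sketch below illustrates the signature flow in this abstract: the source injects a signature computed over the device state into the migration package, and the target recomputes it to perform the integrity check. A toy FNV-1a checksum stands in for a real cryptographic signature; all names are hypothetical.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        typedef struct {
            uint8_t  state[32];   /* serialized processing-unit hardware state */
            uint64_t signature;   /* signature verifying that state */
        } migration_package_t;

        /* Toy checksum (FNV-1a); a real design would use a cryptographic signature. */
        static uint64_t sign_state(const uint8_t *state, size_t len)
        {
            uint64_t h = 0xcbf29ce484222325ull;
            for (size_t i = 0; i < len; i++) {
                h ^= state[i];
                h *= 0x100000001b3ull;
            }
            return h;
        }

        /* Source: inject the signature into the live migration data package. */
        static void source_prepare(migration_package_t *pkg)
        {
            pkg->signature = sign_state(pkg->state, sizeof pkg->state);
        }

        /* Target: perform the integrity check before restoring the state. */
        static bool target_verify(const migration_package_t *pkg)
        {
            return pkg->signature == sign_state(pkg->state, sizeof pkg->state);
        }

        int main(void)
        {
            migration_package_t pkg = { .state = { 1, 2, 3 } };
            source_prepare(&pkg);
            printf("intact package verifies:   %d\n", target_verify(&pkg));   /* 1 */
            pkg.state[0] ^= 0xFF;                                             /* corrupt */
            printf("tampered package verifies: %d\n", target_verify(&pkg));   /* 0 */
            return 0;
        }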
