Abstract:
A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
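As a rough illustration of the mapping described above, the sketch below models a surface as a run of physical pages that is entered into two hypothetical page tables, one for the CPU and one for the I/O device (GPU), each with its own virtual addresses. The pte_t, page_table_t, and surface_t types and all addresses are invented for illustration and do not correspond to any real driver or hardware structure.

```c
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

/* Hypothetical page-table model: an array of virtual->physical entries. */
typedef struct {
    uint64_t virt;
    uint64_t phys;
} pte_t;

typedef struct {
    pte_t  *entries;
    size_t  count;
} page_table_t;

/* A "surface" is a run of physical pages backing both mappings. */
typedef struct {
    uint64_t phys_base;   /* base physical address (simulated) */
    size_t   num_pages;
} surface_t;

/* Map every page of the surface into a page table at the given virtual base. */
static void map_surface(page_table_t *pt, const surface_t *s, uint64_t virt_base)
{
    for (size_t i = 0; i < s->num_pages; i++) {
        pt->entries[pt->count].virt = virt_base + i * PAGE_SIZE;
        pt->entries[pt->count].phys = s->phys_base + i * PAGE_SIZE;
        pt->count++;
    }
}

int main(void)
{
    surface_t surf = { .phys_base = 0x100000, .num_pages = 4 };

    pte_t cpu_slots[16], gpu_slots[16];
    page_table_t cpu_pt = { cpu_slots, 0 };   /* CPU page table              */
    page_table_t gpu_pt = { gpu_slots, 0 };   /* I/O device (GPU) page table */

    /* Same physical surface, two distinct sets of virtual addresses. */
    map_surface(&cpu_pt, &surf, 0x7f0000000000ull); /* CPU virtual addresses     */
    map_surface(&gpu_pt, &surf, 0x000040000000ull); /* graphics virtual addresses */

    printf("page 0: CPU VA %#llx and GPU VA %#llx -> same PA %#llx\n",
           (unsigned long long)cpu_pt.entries[0].virt,
           (unsigned long long)gpu_pt.entries[0].virt,
           (unsigned long long)gpu_pt.entries[0].phys);
    return 0;
}
```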
Abstract:
A method and apparatus to facilitate shared pointers in a heterogeneous platform. In one embodiment of the invention, the heterogeneous or non-homogeneous platform includes, but is not limited to, a central processing core or unit, a graphics processing core or unit, a digital signal processor, an interface module, and any other form of processing core. The heterogeneous platform has logic to facilitate sharing of pointers to a location of a memory shared by the central processing unit (CPU) and the graphics processing unit (GPU). By sharing pointers in the heterogeneous platform, the sharing of data or information between its different cores can be simplified.
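The benefit of shared pointers can be sketched with an ordinary pointer-linked structure: once both sides see the same addresses, a structure built on one core can be walked on another without copying or translation. The cpu_build_list/gpu_sum_list split below is purely illustrative, and malloc stands in for whatever shared allocation the platform actually provides.

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy model: both "cores" see the same address space, so a pointer
 * produced on one side is directly usable on the other -- no copy,
 * no offset translation, no marshalling of the linked structure. */
typedef struct node {
    int          value;
    struct node *next;   /* raw pointer, meaningful to both cores */
} node_t;

/* "CPU side": build a pointer-linked structure in shared memory. */
static node_t *cpu_build_list(int n)
{
    node_t *head = NULL;
    for (int i = n; i > 0; i--) {
        node_t *nd = malloc(sizeof(*nd));   /* stands in for a shared allocation */
        nd->value = i;
        nd->next  = head;
        head = nd;
    }
    return head;
}

/* "GPU side": consume the structure through the very same pointers. */
static long gpu_sum_list(const node_t *head)
{
    long sum = 0;
    for (const node_t *p = head; p != NULL; p = p->next)
        sum += p->value;
    return sum;
}

int main(void)
{
    node_t *shared = cpu_build_list(5);
    printf("sum computed on the 'GPU' side: %ld\n", gpu_sum_list(shared));

    while (shared) {                   /* cleanup */
        node_t *next = shared->next;
        free(shared);
        shared = next;
    }
    return 0;
}
```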
Abstract:
A method and system for shared virtual memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a system memory. A CPU virtual address space may be created, and the surface may be mapped to the CPU virtual address space within a CPU page table. The method also includes creating a GPU virtual address space equivalent to the CPU virtual address space, mapping the surface to the GPU virtual address space within a GPU page table, and pinning the surface.
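A minimal sketch of the "equivalent address space" idea follows: the same virtual address is recorded for each surface page in both a CPU and a GPU page table, and the pages are then marked pinned so they stay resident while the GPU uses them. The pte_t layout, the pinned flag, and the addresses are assumptions made for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

/* Hypothetical entry: the "equivalent" VA space means the CPU and GPU
 * page tables carry the same virtual address for each surface page. */
typedef struct {
    uint64_t virt;
    uint64_t phys;
    bool     pinned;   /* pinned pages may not be paged out or relocated */
} pte_t;

int main(void)
{
    enum { NUM_PAGES = 4 };
    const uint64_t phys_base = 0x200000;          /* simulated system memory */
    const uint64_t virt_base = 0x7f2000000000ull;

    pte_t cpu_pt[NUM_PAGES], gpu_pt[NUM_PAGES];

    for (int i = 0; i < NUM_PAGES; i++) {
        uint64_t va = virt_base + (uint64_t)i * PAGE_SIZE;
        uint64_t pa = phys_base + (uint64_t)i * PAGE_SIZE;

        cpu_pt[i] = (pte_t){ va, pa, false };
        gpu_pt[i] = (pte_t){ va, pa, false };   /* same VA as the CPU mapping */
    }

    /* "Pin" the surface so its pages stay resident while the GPU uses it. */
    for (int i = 0; i < NUM_PAGES; i++) {
        cpu_pt[i].pinned = true;
        gpu_pt[i].pinned = true;
    }

    printf("page 0: shared VA %#llx on CPU and GPU, pinned=%d\n",
           (unsigned long long)cpu_pt[0].virt, cpu_pt[0].pinned);
    return 0;
}
```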
Abstract:
Load balancing may be achieved in heterogeneous computing environments by first evaluating the operating environment and workload within that environment. Then, if energy usage is a constraint, energy usage per task for each device may be evaluated for the identified workload and operating environments. Work is scheduled on the device that maximizes the performance metric of the heterogeneous computing environment.
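The scheduling decision can be sketched as a comparison of per-device metrics: when energy is a constraint, the sketch below ranks devices by throughput per joule, otherwise by raw throughput. The device_t fields, the example numbers, and the specific metric are assumptions standing in for whatever the environment actually measures.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-device profile for one identified workload. */
typedef struct {
    const char *name;
    double      tasks_per_second;   /* measured throughput for this workload */
    double      joules_per_task;    /* measured energy usage per task        */
} device_t;

/* Pick the device that maximizes the chosen performance metric.  When
 * energy is a constraint the metric here is throughput per joule;
 * otherwise it is raw throughput.  (The selection rule is an assumption,
 * standing in for whatever metric the scheduler is configured with.) */
static const device_t *pick_device(const device_t *devs, int n, bool energy_constrained)
{
    const device_t *best = &devs[0];
    for (int i = 1; i < n; i++) {
        double cur  = energy_constrained
                    ? devs[i].tasks_per_second / devs[i].joules_per_task
                    : devs[i].tasks_per_second;
        double prev = energy_constrained
                    ? best->tasks_per_second / best->joules_per_task
                    : best->tasks_per_second;
        if (cur > prev)
            best = &devs[i];
    }
    return best;
}

int main(void)
{
    device_t devs[] = {
        { "cpu", 120.0, 0.8 },   /* slower but cheaper per task  */
        { "gpu", 300.0, 2.5 },   /* faster but costlier per task */
    };

    printf("unconstrained  -> %s\n", pick_device(devs, 2, false)->name);
    printf("energy-limited -> %s\n", pick_device(devs, 2, true)->name);
    return 0;
}
```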
Abstract:
A method and system are described herein for an optimization technique addressing two aspects of thread scheduling and dispatch when the driver is allowed to pick the scheduling attributes. The present techniques rely on an enhanced GPGPU Walker hardware command and one-dimensional local identification generation to maximize thread residency.
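The following toy program illustrates why one-dimensional local ID generation helps thread residency: flattening a 2D work-group into linear IDs lets the dispatcher pack work items into full SIMD-width hardware threads regardless of the group's shape. The SIMD width and group dimensions are example values, not properties of any particular GPU.

```c
#include <stdio.h>

/* Illustrative SIMD width; real hardware widths vary. */
#define SIMD_WIDTH 16

int main(void)
{
    const int local_x = 10, local_y = 5;          /* 50 work items */
    const int items   = local_x * local_y;
    const int hw_threads = (items + SIMD_WIDTH - 1) / SIMD_WIDTH;

    printf("%d work items -> %d hardware threads of %d lanes\n",
           items, hw_threads, SIMD_WIDTH);

    /* Each work item gets a single linear local ID; its (x, y) pair can
     * be recovered on demand instead of being generated per dimension. */
    for (int lid = 0; lid < items; lid++) {
        int x = lid % local_x;
        int y = lid / local_x;
        if (lid % SIMD_WIDTH == 0)
            printf("thread %d starts at lid %d (x=%d, y=%d)\n",
                   lid / SIMD_WIDTH, lid, x, y);
    }
    return 0;
}
```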
Abstract:
Apparatuses, systems, and methods may sample a texture, manage a page fault, and/or switch a context associated with the page fault. A three-dimensional (3D) graphics pipeline may provide texture sample location data corresponding to a texture, wherein sampling of the texture is to be executed external to the 3D graphics pipeline. A compute pipeline may execute sampling of the texture utilizing the texture sample location data and provide texture sample result data corresponding to the texture, wherein the 3D graphics pipeline may composite a frame utilizing the texture sample result data. The compute pipeline may manage a page fault, wherein the page fault and/or management of the page fault may be hidden from a graphics application. In addition, the compute pipeline may switch a compute context associated with the page fault to allow a graphics task not associated with the page fault to be executed and/or to prevent a stall.
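The division of labor can be sketched as a producer/consumer exchange: the 3D pipeline hands sample locations to the compute pipeline, which returns sample results and, on a simulated page fault, switches context and retries instead of stalling. Everything below (the structures, the fault condition, the retry) is a toy model of the flow, not the hardware mechanism.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative record types for the exchange between pipelines. */
typedef struct { float u, v; } sample_loc_t;
typedef struct { float r, g, b, a; } sample_result_t;

/* Pretend texture fetch; reports a fault for one "unmapped" location. */
static bool compute_sample(sample_loc_t loc, sample_result_t *out)
{
    if (loc.u > 0.9f)                 /* simulate a page fault */
        return false;
    *out = (sample_result_t){ loc.u, loc.v, 0.0f, 1.0f };
    return true;
}

int main(void)
{
    sample_loc_t locs[] = { {0.1f, 0.2f}, {0.95f, 0.5f}, {0.3f, 0.7f} };
    sample_result_t results[3];

    for (int i = 0; i < 3; i++) {
        while (!compute_sample(locs[i], &results[i])) {
            /* Fault: switch to an unrelated compute context so other
             * work proceeds, then retry once the page is resident. */
            printf("fault on sample %d -> context switch, then retry\n", i);
            locs[i].u = 0.5f;          /* pretend the page is now mapped */
        }
    }
    printf("3D pipeline composites the frame; sample 0 = (%.2f, %.2f, %.2f, %.2f)\n",
           results[0].r, results[0].g, results[0].b, results[0].a);
    return 0;
}
```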
Abstract:
According to some embodiments, a graphics processor may abort a workload without requiring changes to kernel code compilation or intruding upon graphics processing unit execution. Instead, it is possible to read the predicate state only once before starting and once before restarting a workload that has been preempted because the user wishes to abort the work. This avoids the need to read from each execution unit, reducing the drain on memory bandwidth and improving power efficiency and performance in some embodiments.
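The predicate-read discipline can be illustrated with a toy driver loop that consults a single abort flag only before starting the workload and again before restarting it after preemption, rather than polling from every execution unit. The names and the simulated preemption point are illustrative only.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy model: the predicate (abort flag) lives in one memory location and
 * is read only when a workload starts or restarts after preemption,
 * never per execution unit. */
static bool abort_requested = false;

static bool read_predicate_once(void)
{
    return abort_requested;     /* single memory read, shared by all EUs */
}

static void run_workload(int chunks)
{
    if (read_predicate_once()) {             /* check before starting    */
        printf("aborted before start\n");
        return;
    }
    for (int c = 0; c < chunks; c++) {
        printf("executing chunk %d\n", c);
        if (c == 1) {                         /* pretend we get preempted */
            abort_requested = true;           /* user asks to abort       */
            if (read_predicate_once()) {      /* check before restarting  */
                printf("aborted at restart after preemption\n");
                return;
            }
        }
    }
}

int main(void)
{
    run_workload(4);
    return 0;
}
```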