PROCESSING SYSTEM WITH SELECTIVE PRIORITY-BASED TWO-LEVEL BINNING

    Publication No.: WO2022104082A1

    Publication Date: 2022-05-19

    Application No.: PCT/US2021/059172

    Application Date: 2021-11-12

    Abstract: Systems and methods related to priority-based and performance-based selection of a render mode, such as a two-level binning mode, in which to execute workloads with a graphics processing unit (GPU) [102] of a system [100] are provided. A user mode driver (UMD) [110] or kernel mode driver (KMD) [112] executed at a central processing unit (CPU) [104] configures low and medium priority workloads to be executed in a two-level binning mode and selects a binning mode for high priority workloads based on whether performance heuristics indicate that one or more binning conditions or override conditions have been met. High priority workloads are maintained in a high priority queue, while low and medium priority workloads are maintained in a low/medium priority queue, such that execution of low and medium priority workloads at the GPU can be preempted in favor of executing high priority workloads.
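The two-queue arrangement described above can be sketched in a few lines. This is a minimal illustration only, not the patented implementation; the class, the mode names, and the `heavy` heuristic field are all hypothetical.

```python
# Hypothetical priority levels; lower number = higher priority.
HIGH, MEDIUM, LOW = 0, 1, 2

class BinningScheduler:
    """Sketch of the queue split in the abstract: high priority workloads
    live in their own queue and preempt low/medium priority workloads."""

    def __init__(self):
        self.high_queue = []      # high priority workloads
        self.low_med_queue = []   # low and medium priority workloads

    def submit(self, priority, workload):
        # Low and medium priority workloads are always configured for the
        # two-level binning mode; high priority workloads get a mode chosen
        # by a performance heuristic.
        if priority == HIGH:
            self.high_queue.append((workload, self.select_mode(workload)))
        else:
            self.low_med_queue.append((workload, "two-level"))

    def select_mode(self, workload):
        # Placeholder heuristic: a real driver would evaluate binning
        # conditions / override conditions from performance data here.
        return "two-level" if workload.get("heavy") else "single-level"

    def next_workload(self):
        # The high priority queue preempts the low/medium queue.
        if self.high_queue:
            return self.high_queue.pop(0)
        if self.low_med_queue:
            return self.low_med_queue.pop(0)
        return None
```

A high priority workload submitted after a low priority one is still dispatched first, which is the preemption behavior the abstract describes.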

    SYSTEMS AND METHODS FOR DISTRIBUTED RENDERING USING TWO-LEVEL BINNING

    Publication No.: WO2022146928A1

    Publication Date: 2022-07-07

    Application No.: PCT/US2021/065230

    Application Date: 2021-12-27

    Abstract: Systems (100) and methods (300) for distributed rendering using two-level binning include processing primitives (208) of a frame (202) to be rendered at a first graphics processing unit (GPU) chiplet (106-1) in a set of GPU chiplets (106) to generate visibility information (408) of primitives for each coarse bin (204, 510, 512, 514, 516, 518) and providing the visibility information (408) to the other GPU chiplets in the set of GPU chiplets (106). Each coarse bin (204, 510, 512, 514, 516, 518) is assigned to one of the GPU chiplets of the set of GPU chiplets (106) and rendered at the assigned GPU chiplet (106) based on the corresponding visibility information (408).
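The two steps in this abstract can be illustrated with a small sketch: one chiplet derives per-coarse-bin visibility, then each bin is assigned to a chiplet and rendered from that visibility data. The round-robin assignment policy and all names here are assumptions for illustration; the patent does not specify them.

```python
def assign_coarse_bins(coarse_bins, chiplets):
    """Assign each coarse bin to one GPU chiplet (round-robin, as an
    illustrative policy only)."""
    return {b: chiplets[i % len(chiplets)] for i, b in enumerate(coarse_bins)}

def distributed_render(primitives, coarse_bins, chiplets):
    # Step 1: a designated chiplet processes the frame's primitives to
    # produce visibility information per coarse bin.
    visibility = {b: [p for p in primitives if p["bin"] == b]
                  for b in coarse_bins}
    # Step 2: visibility info is shared with the other chiplets; each
    # chiplet renders the coarse bins assigned to it.
    assignment = assign_coarse_bins(coarse_bins, chiplets)
    rendered = {c: [] for c in chiplets}
    for bin_id, chiplet in assignment.items():
        rendered[chiplet].append((bin_id, visibility[bin_id]))
    return rendered
```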

    TEXTURE RESIDENCY CHECKS USING COMPRESSION METADATA

    Publication No.: WO2019040630A1

    Publication Date: 2019-02-28

    Application No.: PCT/US2018/047539

    Application Date: 2018-08-22

    Abstract: A pipeline is configured to access a memory that stores a texture block and metadata that encodes compression parameters of the texture block and a residency status of the texture block. A processor requests access to the metadata in conjunction with requesting data in the texture block to perform a shading operation. The pipeline selectively returns the data in the texture block to the processor depending on whether the metadata indicates that the texture block is resident in the memory. A cache can also be included to store a copy of the metadata that encodes the compression parameters of the texture block. The residency status and the metadata stored in the cache can be modified in response to requests to access the metadata stored in the cache.
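The check described above, where metadata is consulted alongside the texture request and data is returned only for resident blocks, can be sketched as follows. The function and field names are hypothetical, and the compression handling is reduced to a comment.

```python
def fetch_texel_block(memory, metadata, block_id):
    """Sketch of a residency check via compression metadata: the metadata
    entry for the block is read in conjunction with the data request, and
    texture data is returned only if the block is resident in memory."""
    meta = metadata[block_id]
    if not meta["resident"]:
        # Non-resident block: report a miss without touching texture memory.
        return None, meta
    data = memory[block_id]
    # The compression parameters encoded in the metadata would drive
    # decompression of the block here.
    return data, meta
```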

    GRAPHICS PROCESSING UNIT WITH SELECTIVE TWO-LEVEL BINNING

    Publication No.: WO2022031957A1

    Publication Date: 2022-02-10

    Application No.: PCT/US2021/044720

    Application Date: 2021-08-05

    Abstract: Systems and methods related to run-time selection of a render mode in which to execute command buffers with a graphics processing unit (GPU) of a device based on performance data corresponding to the device are provided. A user mode driver (UMD) or kernel mode driver (KMD) executed at a central processing unit (CPU) selects a binning mode based on whether performance data that includes sensor data or performance counter data indicates that an associated binning condition or override condition has been met. The UMD or the KMD causes pending command buffers to be patched to execute in the selected binning mode based on whether the binning mode is enabled or disabled.
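A heuristic of the kind described, choosing a binning mode from sensor data and performance counters, might look like the sketch below. The thresholds, field names, and mode labels are illustrative assumptions, not values from the patent.

```python
def select_binning_mode(sensor_data, counters,
                        temp_limit=90.0, overdraw_limit=2.0):
    """Sketch of run-time binning-mode selection from performance data."""
    # Override condition: thermal pressure forces the binning mode on,
    # trading latency for power.
    if sensor_data.get("temperature_c", 0.0) >= temp_limit:
        return "binning"
    # Binning condition: high overdraw suggests binning will pay off.
    if counters.get("overdraw", 0.0) >= overdraw_limit:
        return "binning"
    return "direct"
```

Pending command buffers would then be patched by the UMD or KMD to execute in whichever mode this returns.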

    GRAPHICS PROCESSING UNIT RENDER MODE SELECTION SYSTEM

    Publication No.: WO2021183545A1

    Publication Date: 2021-09-16

    Application No.: PCT/US2021/021550

    Application Date: 2021-03-09

    Abstract: A processor dynamically selects a render mode for each render pass of a frame based on the characteristics of the render pass. A software driver of the processor receives graphics operations from an application executing at the processor and converts the graphics operations into a command stream that is provided to the graphics pipeline. As the driver converts the graphics operations into the command stream, the driver analyzes each render pass of the frame to determine characteristics of the render passes, and selects a render mode for each render pass based on the characteristics of the render pass.
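Per-pass selection of this kind can be sketched as a driver loop over the frame's render passes. The characteristics checked here (draw count, average primitive area) and the thresholds are hypothetical examples of what such an analysis might consider.

```python
def select_render_modes(frame_passes):
    """Sketch: as the driver builds the command stream, it analyzes each
    render pass and picks a render mode for it based on its characteristics."""
    modes = []
    for rp in frame_passes:
        # Illustrative criterion: many small primitives favor a binned mode,
        # since binning amortizes overdraw across tiles.
        if rp["draw_count"] > 100 and rp["avg_prim_area"] < 64:
            modes.append((rp["name"], "binned"))
        else:
            modes.append((rp["name"], "direct"))
    return modes
```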

    WORKLOAD AWARE VIRTUAL PROCESSING UNITS
    Invention Application

    Publication No.: WO2023004028A1

    Publication Date: 2023-01-26

    Application No.: PCT/US2022/037848

    Application Date: 2022-07-21

    Abstract: A processing unit [100] is configured differently based on an identified workload [200, 225], and each configuration of the processing unit is exposed to software (e.g., to a device driver [103]) as a different virtual processing unit [111, 112]. Using these techniques, a processing system is able to provide different configurations of the processing unit to support different types of workloads, thereby conserving system resources. Further, by exposing the different configurations as different virtual processing units, the processing system is able to use existing device drivers or other system infrastructure to implement the different processing unit configurations.
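The idea of one physical unit enumerating as several workload-specific virtual units can be sketched as below. The configuration names and parameters are invented for illustration; the patent does not enumerate specific configurations.

```python
class PhysicalUnit:
    """Sketch: one physical processing unit exposed as multiple virtual
    processing units, each a workload-specific configuration."""

    def __init__(self):
        # Each entry appears to the device driver as a distinct virtual
        # processing unit (hypothetical configuration parameters).
        self.configs = {
            "vpu_compute":  {"wavefront_size": 64, "cache_split": "compute"},
            "vpu_graphics": {"wavefront_size": 32, "cache_split": "graphics"},
        }
        self.active = None

    def enumerate_virtual_units(self):
        # What a driver would see when probing the device.
        return sorted(self.configs)

    def select(self, vpu_name):
        # Reconfigure the physical unit for the identified workload type.
        self.active = self.configs[vpu_name]
        return self.active
```

Because the driver only ever sees named virtual units, existing driver infrastructure can target a new configuration without modification, which is the reuse benefit the abstract highlights.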

    SYNCHRONIZATION FREE CROSS PASS BINNING THROUGH SUBPASS INTERLEAVING

    Publication No.: WO2022203833A1

    Publication Date: 2022-09-29

    Application No.: PCT/US2022/018794

    Application Date: 2022-03-03

    Abstract: A method of tiled rendering is provided which comprises dividing a frame to be rendered, into a plurality of tiles, receiving commands to execute a plurality of subpasses of the tiles and interleaving execution of same subpasses of multiple tiles of the frame. Interleaving execution of same subpasses of multiple tiles comprises executing a previously ordered first subpass of a second tile between execution of the previously ordered first subpass of a first tile and execution of a subsequently ordered second subpass of the first tile. The interleaving is performed, for example, by executing the plurality of subpasses in an order different from the order in which the commands to execute the plurality of subpasses are stored and issued. Alternatively, interleaving is performed by executing one or more subpasses as skip operations such that the plurality of subpasses are executed in the same order.
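The interleaving rule in the abstract, running subpass N of the next tile between subpass N and subpass N+1 of the current tile, reduces to emitting subpasses in subpass-major order across tiles. A minimal sketch of that reordering (function name hypothetical):

```python
def interleave_subpasses(num_tiles, num_subpasses):
    """Sketch of subpass interleaving: subpass s of tile t+1 executes after
    subpass s of tile t but before subpass s+1 of tile t, i.e. subpass-major
    order across the frame's tiles."""
    order = []
    for s in range(num_subpasses):
        for t in range(num_tiles):
            order.append((t, s))  # (tile index, subpass index)
    return order
```

With two tiles and two subpasses this yields (tile 0, subpass 0), (tile 1, subpass 0), (tile 0, subpass 1), (tile 1, subpass 1): the first subpass of the second tile lands between the first and second subpasses of the first tile, matching the abstract.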
