VISIBILITY-BASED STATE UPDATES IN GRAPHICAL PROCESSING UNITS
    1.
    Invention Publication (Under Examination / Published)

    Publication No.: EP2826024A1

    Publication Date: 2015-01-21

    Application No.: EP13709005.6

    Filing Date: 2013-02-26

    IPC Classes: G06T15/00

    CPC Classes: G06T15/005 G06T11/40

    Abstract: In general, techniques are described for visibility-based state updates in graphical processing units (GPUs). A device that renders image data, comprising a memory configured to store state data and a GPU, may implement the techniques. The GPU may be configured to perform a multi-pass rendering process to render an image from the image data. The GPU determines visibility information for a plurality of objects defined by the image data during a first pass of the multi-pass rendering process. The visibility information indicates whether each of the plurality of objects will be visible in the image rendered from the image data during a second pass of the multi-pass rendering process. The GPU then retrieves the state data from the memory, based on the visibility information, for use by the second pass of the multi-pass rendering process in rendering the plurality of objects of the image data.

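The abstract above describes a two-pass flow: a first pass records per-object visibility, and a second pass fetches state data from memory only for objects marked visible. A minimal sketch of that idea, with all object, depth, and state names being illustrative assumptions rather than anything specified in the patent:

```python
# Hypothetical sketch of the two-pass idea from the abstract: pass 1 records
# per-object visibility; pass 2 loads state data only for visible objects.

def visibility_pass(objects, occluder_depths):
    """Pass 1: mark each object visible unless it lies behind an occluder."""
    return {obj["id"]: obj["depth"] < occluder_depths.get(obj["id"], float("inf"))
            for obj in objects}

def render_pass(objects, visibility, state_memory):
    """Pass 2: fetch state (e.g. textures, shader constants) only when visible."""
    fetched = []
    for obj in objects:
        if visibility[obj["id"]]:
            fetched.append(state_memory[obj["id"]])  # memory access skipped otherwise
    return fetched

objects = [{"id": "a", "depth": 1.0}, {"id": "b", "depth": 5.0}]
occluder_depths = {"b": 2.0}        # object "b" sits behind an occluder at depth 2.0
state_memory = {"a": "state_a", "b": "state_b"}

vis = visibility_pass(objects, occluder_depths)
print(render_pass(objects, vis, state_memory))  # ['state_a']
```

The point of the sketch is only the control flow: the second pass consults the visibility mask before touching memory, so state for occluded objects is never loaded.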

DEFERRED PREEMPTION TECHNIQUES FOR SCHEDULING GRAPHICS PROCESSING UNIT COMMAND STREAMS
    2.
    Invention Publication (In Force)

    Publication No.: EP2875486A1

    Publication Date: 2015-05-27

    Application No.: EP13735503.8

    Filing Date: 2013-06-20

    IPC Classes: G06T1/20 G06F9/48

    Abstract: This disclosure is directed to deferred preemption techniques for scheduling graphics processing unit (GPU) command streams for execution on a GPU. A host CPU is described that is configured to control a GPU to perform deferred-preemption scheduling. For example, a host CPU may select one or more locations in a GPU command stream as being one or more locations at which preemption is allowed to occur in response to receiving a preemption notification, and may place one or more tokens in the GPU command stream based on the selected one or more locations. The tokens may indicate to the GPU that preemption is allowed to occur at the selected one or more locations. This disclosure further describes a GPU configured to preempt execution of a GPU command stream based on one or more tokens placed in a GPU command stream.
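The token mechanism described above can be sketched as a host-side function that marks safe points in a command stream and a GPU-side loop that honors a pending preemption request only when it reaches a token. Command names and the token encoding are hypothetical, chosen for illustration:

```python
# Hypothetical sketch of deferred preemption: the host inserts tokens at
# chosen safe points; the GPU defers a preemption request until the next token.

PREEMPT_TOKEN = "TOKEN"

def place_tokens(commands, safe_points):
    """Host side: insert a token after each command index chosen as safe."""
    out = []
    for i, cmd in enumerate(commands):
        out.append(cmd)
        if i in safe_points:
            out.append(PREEMPT_TOKEN)
    return out

def execute(stream, preempt_requested_at):
    """GPU side: run commands; on a pending request, stop at the next token."""
    executed = []
    pending = False
    for pos, cmd in enumerate(stream):
        if pos == preempt_requested_at:
            pending = True              # notification arrives mid-stream
        if cmd == PREEMPT_TOKEN:
            if pending:
                return executed, pos    # preemption deferred to this safe point
            continue
        executed.append(cmd)            # never interrupted mid-command
    return executed, None

cmds = ["draw0", "draw1", "draw2", "draw3"]
stream = place_tokens(cmds, safe_points={1})
done, stop = execute(stream, preempt_requested_at=0)
print(done, stop)  # ['draw0', 'draw1'] 2
```

Even though the request arrives before "draw0" runs, execution continues to the token after "draw1", which is the behavior the abstract attributes to the tokens.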

    TECHNIQUES FOR REDUCING MEMORY ACCESS BANDWIDTH IN A GRAPHICS PROCESSING SYSTEM BASED ON DESTINATION ALPHA VALUES

    Publication No.: EP2820621B1

    Publication Date: 2018-07-25

    Application No.: EP13708941.3

    Filing Date: 2013-02-08

    Inventor: GRUBER, Andrew

    IPC Classes: G06T15/40

    Abstract: This disclosure describes techniques for reducing memory access bandwidth in a graphics processing system based on destination alpha values. The techniques may include retrieving a destination alpha value from a bin buffer, the destination alpha value being generated in response to processing a first pixel associated with a first primitive. The techniques may further include determining, based on the destination alpha value, whether to perform an action that causes one or more texture values for a second pixel to not be retrieved from a texture buffer. In some examples, the action may include discarding the second pixel from a pixel processing pipeline prior to the second pixel arriving at a texture mapping stage of the pixel processing pipeline. The second pixel may be associated with a second primitive different than the first primitive.
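The bandwidth saving described above comes from testing the destination alpha before the texture fetch. A minimal sketch, assuming an opaque-coverage convention (alpha of 1.0 means the pixel is already fully covered) and illustrative buffer structures not taken from the patent:

```python
# Hypothetical sketch: if the destination alpha left by an earlier primitive
# says the pixel is already fully opaque, discard the later pixel before the
# texture mapping stage, so its texture values are never fetched.

OPAQUE = 1.0

def process_pixel(x, y, bin_buffer, texture_buffer, fetches):
    """Return the texel for (x, y), or None if the pixel is discarded early."""
    dst_alpha = bin_buffer.get((x, y), 0.0)
    if dst_alpha >= OPAQUE:
        return None                 # discarded pre-texturing; no bandwidth spent
    fetches.append((x, y))          # record the texture-buffer access
    return texture_buffer[(x, y)]

bin_buffer = {(0, 0): 1.0}          # first primitive left pixel (0, 0) opaque
texture_buffer = {(0, 0): "texel00", (1, 0): "texel10"}
fetches = []
print(process_pixel(0, 0, bin_buffer, texture_buffer, fetches))  # None
print(process_pixel(1, 0, bin_buffer, texture_buffer, fetches))  # texel10
```

Only the uncovered pixel at (1, 0) causes a texture-buffer access, which is the memory traffic the technique is meant to avoid for occluded pixels.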

DEFERRED PREEMPTION TECHNIQUES FOR SCHEDULING GRAPHICS PROCESSING UNIT COMMAND STREAMS
    5.
    Invention Grant (In Force)

    Publication No.: EP2875486B1

    Publication Date: 2016-09-14

    Application No.: EP13735503.8

    Filing Date: 2013-06-20

    IPC Classes: G06T1/20 G06F9/48

    Abstract: This disclosure is directed to deferred preemption techniques for scheduling graphics processing unit (GPU) command streams for execution on a GPU. A host CPU is described that is configured to control a GPU to perform deferred-preemption scheduling. For example, a host CPU may select one or more locations in a GPU command stream as being one or more locations at which preemption is allowed to occur in response to receiving a preemption notification, and may place one or more tokens in the GPU command stream based on the selected one or more locations. The tokens may indicate to the GPU that preemption is allowed to occur at the selected one or more locations. This disclosure further describes a GPU configured to preempt execution of a GPU command stream based on one or more tokens placed in a GPU command stream.

SYNCHRONIZATION OF SHADER OPERATION
    7.
    Invention Grant (In Force)

    Publication No.: EP2734923B1

    Publication Date: 2018-01-03

    Application No.: EP12737926.1

    Filing Date: 2012-06-25

    Inventor: GRUBER, Andrew

    IPC Classes: G06F9/54 G06F9/52 G06T1/20

    Abstract: The example techniques described in this disclosure may be directed to synchronization between producer shaders and consumer shaders. For example, a graphics processing unit (GPU) may execute a producer shader to produce graphics data. After the completion of the production of graphics data, the producer shader may store a value indicative of the amount of produced graphics data. The GPU may execute one or more consumer shaders, after the storage of the value indicative of the amount of produced graphics data, to consume the produced graphics data.
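The ordering described above (produce, then publish a count, then consume) can be sketched with ordinary threads standing in for shaders. The use of an event plus a count variable is a simplifying assumption for illustration, not the patent's mechanism:

```python
# Hypothetical sketch of producer/consumer shader synchronization: the
# producer writes all its data, then stores a count of how much it produced;
# consumers start reading only after that count has been published.
import threading

produced = []
count = None                        # value indicating the amount produced
count_published = threading.Event()

def producer_shader(n):
    for i in range(n):
        produced.append(i * i)      # stand-in for produced graphics data
    global count
    count = len(produced)           # stored only after production completes
    count_published.set()

def consumer_shader(results):
    count_published.wait()          # consumers wait for the published count
    results.extend(produced[:count])

results = []
consumer = threading.Thread(target=consumer_shader, args=(results,))
prod = threading.Thread(target=producer_shader, args=(4,))
consumer.start()                    # consumer launched first, but it blocks
prod.start()
prod.join()
consumer.join()
print(results)  # [0, 1, 4, 9]
```

Launching the consumer first shows the guarantee: it cannot observe partially produced data, because the count is only stored, and the event only set, after production has finished.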

TECHNIQUES FOR REDUCING MEMORY ACCESS BANDWIDTH IN A GRAPHICS PROCESSING SYSTEM BASED ON DESTINATION ALPHA VALUES
    8.
    Invention Publication (In Force)

    Publication No.: EP2820621A1

    Publication Date: 2015-01-07

    Application No.: EP13708941.3

    Filing Date: 2013-02-08

    Inventor: GRUBER, Andrew

    IPC Classes: G06T15/40

    Abstract: This disclosure describes techniques for reducing memory access bandwidth in a graphics processing system based on destination alpha values. The techniques may include retrieving a destination alpha value from a bin buffer, the destination alpha value being generated in response to processing a first pixel associated with a first primitive. The techniques may further include determining, based on the destination alpha value, whether to perform an action that causes one or more texture values for a second pixel to not be retrieved from a texture buffer. In some examples, the action may include discarding the second pixel from a pixel processing pipeline prior to the second pixel arriving at a texture mapping stage of the pixel processing pipeline. The second pixel may be associated with a second primitive different than the first primitive.


SYNCHRONIZATION OF SHADER OPERATION
    9.
    Invention Publication (In Force)

    Publication No.: EP2734923A1

    Publication Date: 2014-05-28

    Application No.: EP12737926.1

    Filing Date: 2012-06-25

    Inventor: GRUBER, Andrew

    IPC Classes: G06F9/54 G06F9/52

    Abstract: The example techniques described in this disclosure may be directed to synchronization between producer shaders and consumer shaders. For example, a graphics processing unit (GPU) may execute a producer shader to produce graphics data. After the completion of the production of graphics data, the producer shader may store a value indicative of the amount of produced graphics data. The GPU may execute one or more consumer shaders, after the storage of the value indicative of the amount of produced graphics data, to consume the produced graphics data.


COMPUTATIONAL RESOURCE PIPELINING IN GENERAL PURPOSE GRAPHICS PROCESSING UNIT
    10.
    Invention Publication (Under Examination / Published)

    Publication No.: EP2663921A1

    Publication Date: 2013-11-20

    Application No.: EP12704456.8

    Filing Date: 2012-01-13

    IPC Classes: G06F9/44 G06F15/78 G06F15/82

    CPC Classes: G06F15/17325

    Abstract: This disclosure describes techniques for extending the architecture of a general purpose graphics processing unit (GPGPU) with parallel processing units to allow efficient processing of pipeline-based applications. The techniques include configuring local memory buffers connected to parallel processing units operating as stages of a processing pipeline to hold data for transfer between the parallel processing units. The local memory buffers allow on-chip, low-power, direct data transfer between the parallel processing units. The local memory buffers may include hardware-based data flow control mechanisms to enable transfer of data between the parallel processing units. In this way, data may be passed directly from one parallel processing unit to the next parallel processing unit in the processing pipeline via the local memory buffers, in effect transforming the parallel processing units into a series of pipeline stages.
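The pipeline described above can be sketched with threads standing in for the parallel processing units and small bounded queues standing in for the local memory buffers with flow control. The stage functions and buffer sizes are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch: pipeline stages chained by small bounded local buffers.
# A full buffer blocks the upstream stage, modeling hardware flow control, and
# data moves stage-to-stage without a round trip through shared memory.
import queue
import threading

def stage(fn, inbox, outbox):
    """Run fn on items from inbox and forward results; None terminates."""
    while True:
        item = inbox.get()
        if item is None:
            outbox.put(None)        # propagate shutdown downstream
            break
        outbox.put(fn(item))        # blocks if the local buffer is full

# Three "parallel processing units" connected by bounded local buffers.
buf0, buf1, buf2, buf3 = (queue.Queue(maxsize=2) for _ in range(4))
threads = [
    threading.Thread(target=stage, args=(lambda x: x + 1, buf0, buf1)),
    threading.Thread(target=stage, args=(lambda x: x * 2, buf1, buf2)),
    threading.Thread(target=stage, args=(lambda x: x - 3, buf2, buf3)),
]
for t in threads:
    t.start()
for v in [1, 2, 3]:
    buf0.put(v)                     # feed work into the first stage
buf0.put(None)

out = []
while (item := buf3.get()) is not None:
    out.append(item)
for t in threads:
    t.join()
print(out)  # [1, 3, 5]
```

The bounded `maxsize` is what models the hardware flow-control mechanism: a downstream stall backs pressure up the chain instead of overflowing a buffer.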