LOAD/STORE OPERATIONS IN TEXTURE HARDWARE
    3.
    Invention application (in force)

    Publication number: US20150084975A1

    Publication date: 2015-03-26

    Application number: US14038599

    Filing date: 2013-09-26

    CPC classification number: G06T1/60 G06F2212/302 G06T1/20 G06T15/04 G09G5/363

    Abstract: Approaches are disclosed for performing memory access operations in a texture processing pipeline having a first portion configured to process texture memory access operations and a second portion configured to process non-texture memory access operations. A texture unit receives a memory access request. The texture unit determines whether the memory access request includes a texture memory access operation. If the memory access request includes a texture memory access operation, then the texture unit processes the memory access request via at least the first portion of the texture processing pipeline; otherwise, the texture unit processes the memory access request via at least the second portion of the texture processing pipeline. One advantage of the disclosed approach is that the same processing and cache memory may be used for both texture operations and load/store operations to various other address spaces, leading to reduced surface area and power consumption.
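
    A rough sketch of the routing the abstract describes may help: the texture unit examines each incoming memory access request and steers it to either the texture portion or the non-texture (load/store) portion of the pipeline. All names below (AccessKind, MemoryAccessRequest, textureUnitDispatch, and the two process functions) are invented for illustration and do not come from the patent; this is a behavioral model, not the patented hardware.

        // Hypothetical model of the dispatch decision; not the patented implementation.
        #include <cstdint>
        #include <iostream>

        // Kinds of requests the texture unit might receive (illustrative).
        enum class AccessKind { TextureSample, GlobalLoad, GlobalStore };

        struct MemoryAccessRequest {
            AccessKind kind;
            uint64_t   address;   // address or packed texture coordinate
        };

        // First portion of the pipeline: texture-specific processing.
        void processTexturePortion(const MemoryAccessRequest& req) {
            std::cout << "texture portion handles request at 0x" << std::hex << req.address << "\n";
        }

        // Second portion of the pipeline: generic load/store processing that
        // still shares the same downstream cache memory.
        void processLoadStorePortion(const MemoryAccessRequest& req) {
            std::cout << "load/store portion handles request at 0x" << std::hex << req.address << "\n";
        }

        // The routing decision: texture operations go to the first portion,
        // everything else to the second portion.
        void textureUnitDispatch(const MemoryAccessRequest& req) {
            if (req.kind == AccessKind::TextureSample) {
                processTexturePortion(req);
            } else {
                processLoadStorePortion(req);
            }
        }

        int main() {
            textureUnitDispatch({AccessKind::TextureSample, 0x1000});
            textureUnitDispatch({AccessKind::GlobalLoad,    0x2000});
        }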


    UNIFIED CACHE FOR DIVERSE MEMORY TRAFFIC
    4.
    Invention application

    Publication number: US20180322078A1

    Publication date: 2018-11-08

    Application number: US15716461

    Filing date: 2017-09-26

    Abstract: A unified cache subsystem includes a data memory configured as both a shared memory and a local cache memory. The unified cache subsystem processes different types of memory transactions using different data pathways. To process memory transactions that target shared memory, the unified cache subsystem includes a direct pathway to the data memory. To process memory transactions that do not target shared memory, the unified cache subsystem includes a tag processing pipeline configured to identify cache hits and cache misses. When the tag processing pipeline identifies a cache hit for a given memory transaction, the transaction is rerouted to the direct pathway to data memory. When the tag processing pipeline identifies a cache miss for a given memory transaction, the transaction is pushed into a first-in first-out (FIFO) until miss data is returned from external memory. The tag processing pipeline is also configured to process texture-oriented memory transactions.
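
    The flow in this abstract can be pictured as a small behavioral model: shared-memory transactions take a direct pathway to the data memory, other transactions go through a tag lookup, hits are rerouted onto that same direct pathway, and misses wait in a FIFO until fill data comes back. Every identifier below (UnifiedCacheModel, Transaction, missFifo_, the 128-byte line size, and so on) is an assumption made for illustration, not a detail taken from the patent.

        // Hypothetical behavioral model of the unified cache subsystem described above.
        #include <cstdint>
        #include <iostream>
        #include <queue>
        #include <unordered_set>

        enum class Space { Shared, Global };

        struct Transaction {
            Space    space;
            uint64_t address;
        };

        class UnifiedCacheModel {
        public:
            void submit(const Transaction& t) {
                if (t.space == Space::Shared) {
                    accessDataMemory(t);   // direct pathway, no tag lookup
                } else {
                    tagPipeline(t);        // tag lookup for non-shared traffic
                }
            }

            // Simulate miss data returning from external memory for the oldest miss.
            void fillReturn() {
                if (missFifo_.empty()) return;
                Transaction t = missFifo_.front();
                missFifo_.pop();
                presentLines_.insert(lineOf(t.address));  // install the line
                accessDataMemory(t);                      // replay via the direct pathway
            }

        private:
            static uint64_t lineOf(uint64_t addr) { return addr >> 7; }  // assumed 128-byte lines

            void tagPipeline(const Transaction& t) {
                if (presentLines_.count(lineOf(t.address))) {
                    accessDataMemory(t);   // hit: reroute to the direct pathway
                } else {
                    missFifo_.push(t);     // miss: park in the FIFO until the fill returns
                    std::cout << "miss queued for 0x" << std::hex << t.address << std::dec << "\n";
                }
            }

            void accessDataMemory(const Transaction& t) {
                std::cout << "data memory access at 0x" << std::hex << t.address << std::dec << "\n";
            }

            std::unordered_set<uint64_t> presentLines_;
            std::queue<Transaction>      missFifo_;
        };

        int main() {
            UnifiedCacheModel cache;
            cache.submit({Space::Shared, 0x100});  // shared memory: direct pathway
            cache.submit({Space::Global, 0x200});  // global memory: tag lookup, miss
            cache.fillReturn();                    // fill returns, transaction replayed
            cache.submit({Space::Global, 0x210});  // same assumed 128-byte line: hit
        }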

    UNIFIED CACHE FOR DIVERSE MEMORY TRAFFIC
    5.
    Invention application

    Publication number: US20180322077A1

    Publication date: 2018-11-08

    Application number: US15587213

    Filing date: 2017-05-04

    Abstract: A unified cache subsystem includes a data memory configured as both a shared memory and a local cache memory. The unified cache subsystem processes different types of memory transactions using different data pathways. To process memory transactions that target shared memory, the unified cache subsystem includes a direct pathway to the data memory. To process memory transactions that do not target shared memory, the unified cache subsystem includes a tag processing pipeline configured to identify cache hits and cache misses. When the tag processing pipeline identifies a cache hit for a given memory transaction, the transaction is rerouted to the direct pathway to data memory. When the tag processing pipeline identifies a cache miss for a given memory transaction, the transaction is pushed into a first-in first-out (FIFO) until miss data is returned from external memory. The tag processing pipeline is also configured to process texture-oriented memory transactions.
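
    This application shares its abstract with the one above; the point worth illustrating separately is the last sentence, that texture-oriented transactions pass through the same tag processing pipeline as ordinary loads. The sketch below assumes a single shared tag store checked by both kinds of traffic; the names (TagStage, TaggedRequest) are invented for illustration and are not taken from the patent.

        // Hypothetical sketch: texture and load traffic sharing one tag store.
        #include <cstdint>
        #include <iostream>
        #include <string>
        #include <unordered_set>

        struct TaggedRequest {
            std::string kind;   // "texture" or "load"
            uint64_t    line;   // cache-line index computed upstream
        };

        class TagStage {
        public:
            // Both texture and non-texture requests are checked against one tag store.
            bool lookup(const TaggedRequest& r) {
                bool hit = presentLines_.count(r.line) != 0;
                std::cout << r.kind << " request on line " << r.line
                          << (hit ? " hits" : " misses") << " the shared tag store\n";
                if (!hit) presentLines_.insert(r.line);  // model the eventual fill
                return hit;
            }
        private:
            std::unordered_set<uint64_t> presentLines_;
        };

        int main() {
            TagStage tags;
            tags.lookup({"texture", 42});  // texture traffic uses the tag pipeline too
            tags.lookup({"load",    42});  // a later load to the same line now hits
        }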
