MANAGING MEMORY REGIONS TO SUPPORT SPARSE MAPPINGS
    1.
    Invention Application (Granted)

    Publication No.: US20150097847A1

    Publication Date: 2015-04-09

    Application No.: US14046064

    Filing Date: 2013-10-04

    CPC classification number: G09G5/39 G06F12/0897 G06F12/1027 G06T1/60

    Abstract: One embodiment of the present invention includes a memory management unit (MMU) that is configured to manage sparse mappings. The MMU processes requests to translate virtual addresses to physical addresses based on page table entries (PTEs) that indicate a sparse status. If the MMU determines that the PTE does not include a mapping from a virtual address to a physical address, then the MMU responds to the request based on the sparse status. If the sparse status is active, then the MMU determines the physical address based on whether the type of the request is a write operation and, subsequently, generates an acknowledgement of the request. By contrast, if the sparse status is not active, then the MMU generates a page fault. Advantageously, the disclosed embodiments enable the computer system to manage sparse mappings without incurring the performance degradation associated with both page faults and conventional software-based sparse mapping management.

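The translation flow in the abstract above can be sketched in Python. This is a minimal behavioral model, not the patented hardware: the `PTE` fields and the `ZERO_PAGE`/`DISCARD_PAGE` scratch targets are assumptions standing in for whatever physical addresses a real MMU would substitute for unmapped sparse pages.

```python
class PageFault(Exception):
    """Raised when a non-sparse, unmapped PTE is accessed."""
    pass

class PTE:
    def __init__(self, paddr=None, sparse=False):
        self.paddr = paddr      # physical page address, or None if unmapped
        self.sparse = sparse    # sparse status bit carried by the PTE

# Hypothetical stand-in physical targets for unmapped sparse pages.
ZERO_PAGE = 0x0          # reads from sparse pages return zeros
DISCARD_PAGE = 0xFFFF    # writes to sparse pages are silently dropped

def translate(pte, is_write):
    """Resolve a translation request the way the abstract describes."""
    if pte.paddr is not None:
        return pte.paddr                      # normal mapped translation
    if pte.sparse:
        # Sparse and unmapped: choose a physical target by request type
        # and acknowledge the request instead of faulting.
        return DISCARD_PAGE if is_write else ZERO_PAGE
    raise PageFault("unmapped, non-sparse PTE")
```

The key behavioral point is the last branch: only a non-sparse unmapped PTE escalates to a page fault, which is what avoids the fault-handling overhead for sparse regions.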

LOAD/STORE OPERATIONS IN TEXTURE HARDWARE
    3.
    Invention Application (Granted)

    Publication No.: US20150084975A1

    Publication Date: 2015-03-26

    Application No.: US14038599

    Filing Date: 2013-09-26

    CPC classification number: G06T1/60 G06F2212/302 G06T1/20 G06T15/04 G09G5/363

    Abstract: Approaches are disclosed for performing memory access operations in a texture processing pipeline having a first portion configured to process texture memory access operations and a second portion configured to process non-texture memory access operations. A texture unit receives a memory access request. The texture unit determines whether the memory access request includes a texture memory access operation. If the memory access request includes a texture memory access operation, then the texture unit processes the memory access request via at least the first portion of the texture processing pipeline, otherwise, the texture unit processes the memory access request via at least the second portion of the texture processing pipeline. One advantage of the disclosed approach is that the same processing and cache memory may be used for both texture operations and load/store operations to various other address spaces, leading to reduced surface area and power consumption.

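The routing decision in the abstract above reduces to a single dispatch step, sketched below. The request encoding (a dict with `op` and `addr` fields) and the two stub functions are assumptions for illustration; the real pipeline portions are hardware datapaths, not Python functions.

```python
def texture_portion(request):
    # First portion: texture sampling/filtering path (stubbed here).
    return ("texture_path", request["addr"])

def load_store_portion(request):
    # Second portion: load/store path to other address spaces (stubbed here).
    return ("load_store_path", request["addr"])

def is_texture_op(request):
    # Assumed encoding: texture operations carry a "tex_" op prefix.
    return request["op"].startswith("tex_")

def process(request):
    """Route a memory access request to the appropriate pipeline portion."""
    if is_texture_op(request):
        return texture_portion(request)
    return load_store_portion(request)
```

The claimed benefit follows from this shape: both branches sit inside one texture unit, so its processing logic and cache memory are shared rather than duplicated.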

UNIFIED CACHE FOR DIVERSE MEMORY TRAFFIC
    4.
    Invention Application

    Publication No.: US20180322078A1

    Publication Date: 2018-11-08

    Application No.: US15716461

    Filing Date: 2017-09-26

    Abstract: A unified cache subsystem includes a data memory configured as both a shared memory and a local cache memory. The unified cache subsystem processes different types of memory transactions using different data pathways. To process memory transactions that target shared memory, the unified cache subsystem includes a direct pathway to the data memory. To process memory transactions that do not target shared memory, the unified cache subsystem includes a tag processing pipeline configured to identify cache hits and cache misses. When the tag processing pipeline identifies a cache hit for a given memory transaction, the transaction is rerouted to the direct pathway to data memory. When the tag processing pipeline identifies a cache miss for a given memory transaction, the transaction is pushed into a first-in first-out (FIFO) until miss data is returned from external memory. The tag processing pipeline is also configured to process texture-oriented memory transactions.
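The two pathways and the miss FIFO described above can be modeled as follows. This is a simplified sketch: the transaction encoding, the dict-backed data memory, and the `fill` replay method are illustrative assumptions, not the patented design.

```python
from collections import deque

class UnifiedCache:
    """Sketch of the described subsystem: one data memory, two pathways."""

    def __init__(self):
        self.data_memory = {}       # backs both shared memory and cache lines
        self.resident = set()       # tag state: which lines are cached
        self.miss_fifo = deque()    # misses wait here for external-memory fills

    def access(self, txn):
        # Shared-memory traffic takes the direct pathway, skipping tag lookup.
        if txn["space"] == "shared":
            return self.data_memory.get(txn["addr"])
        # All other traffic enters the tag processing pipeline.
        if txn["addr"] in self.resident:
            # Hit: reroute to the direct pathway to data memory.
            return self.data_memory.get(txn["addr"])
        # Miss: park the transaction in the FIFO until the fill returns.
        self.miss_fifo.append(txn)
        return None

    def fill(self, addr, data):
        """Miss data arrived from external memory: install it and replay."""
        self.resident.add(addr)
        self.data_memory[addr] = data
        replayed = [t for t in self.miss_fifo if t["addr"] == addr]
        self.miss_fifo = deque(t for t in self.miss_fifo if t["addr"] != addr)
        return [self.data_memory[t["addr"]] for t in replayed]
```

Note that shared-memory accesses never touch the tag pipeline at all, which is what makes the direct pathway cheap for that traffic class.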

UNIFIED CACHE FOR DIVERSE MEMORY TRAFFIC
    5.
    Invention Application

    Publication No.: US20180322077A1

    Publication Date: 2018-11-08

    Application No.: US15587213

    Filing Date: 2017-05-04

    Abstract: A unified cache subsystem includes a data memory configured as both a shared memory and a local cache memory. The unified cache subsystem processes different types of memory transactions using different data pathways. To process memory transactions that target shared memory, the unified cache subsystem includes a direct pathway to the data memory. To process memory transactions that do not target shared memory, the unified cache subsystem includes a tag processing pipeline configured to identify cache hits and cache misses. When the tag processing pipeline identifies a cache hit for a given memory transaction, the transaction is rerouted to the direct pathway to data memory. When the tag processing pipeline identifies a cache miss for a given memory transaction, the transaction is pushed into a first-in first-out (FIFO) until miss data is returned from external memory. The tag processing pipeline is also configured to process texture-oriented memory transactions.

APPROACH TO CACHING DECODED TEXTURE DATA WITH VARIABLE DIMENSIONS
    6.
    Invention Application (Pending, Published)

    Publication No.: US20150097851A1

    Publication Date: 2015-04-09

    Application No.: US14049557

    Filing Date: 2013-10-09

    CPC classification number: G06T1/60

    Abstract: A texture processing pipeline is configured to store decoded texture data within a cache unit in order to expedite the processing of texture requests. When a texture request is processed, the texture processing pipeline queries the cache unit to determine whether the requested data is resident in the cache. If the data is not resident in the cache unit, a cache miss occurs. The texture processing pipeline then reads encoded texture data from global memory, decodes that data, and writes different portions of the decoded memory into the cache unit at specific locations according to a caching map. If the data is, in fact, resident in the cache unit, a cache hit occurs, and the texture processing pipeline then reads decoded portions of the requested texture data from the cache unit and combines those portions according to the caching map.

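The hit and miss paths above can be sketched as one lookup function. The string-based `decode` stand-in and the caching map's list-of-slots representation are assumptions made so the sketch is self-contained; real hardware would decompress texture blocks and address cache lines.

```python
def decode(encoded):
    # Stand-in decoder: real hardware would decompress a texture block.
    return encoded.upper()

def fetch_texture(cache, caching_map, global_memory, tex_id):
    """Return decoded texture data, decoding from global memory on a miss."""
    locations = caching_map[tex_id]   # cache slots for each decoded portion
    if all(loc in cache for loc in locations):
        # Cache hit: read the decoded portions and recombine per the map.
        return "".join(cache[loc] for loc in locations)
    # Cache miss: read encoded data from global memory, decode it, and
    # scatter the decoded portions into the specific locations the map dictates.
    decoded = decode(global_memory[tex_id])
    size = len(decoded) // len(locations)
    for i, loc in enumerate(locations):
        cache[loc] = decoded[i * size:(i + 1) * size]
    return decoded
```

On the second request for the same texture, every mapped slot is resident, so the function returns the recombined decoded portions without touching global memory.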
