COMPUTE TASK STATE ENCAPSULATION
    2.
    Invention Application

    Publication No.: US20210019185A1

    Publication Date: 2021-01-21

    Application No.: US17063705

    Filing Date: 2020-10-05

    IPC Classification: G06F9/48 G06F9/46 G06F9/50

    Abstract: One embodiment of the present invention sets forth a technique for encapsulating compute task state that enables out-of-order scheduling and execution of the compute tasks. The scheduling circuitry organizes the compute tasks into groups based on priority levels. The compute tasks may then be selected for execution using different scheduling schemes. Each group is maintained as a linked list of pointers to compute tasks that are encoded as task metadata (TMD) stored in memory. A TMD encapsulates the state and parameters needed to initialize, schedule, and execute a compute task.
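
    The priority-grouped linked lists of TMD pointers described in the abstract can be modeled with a few lines of host code. The sketch below is illustrative only: the TaskMetadata fields, the number of priority levels, and the FIFO order within each group are assumptions, not the patented hardware encoding.

        #include <array>
        #include <cstdint>

        struct TaskMetadata {            // "TMD": state and parameters for one compute task
            uint64_t programAddress;     // entry point of the compute kernel (assumed field)
            uint32_t gridDim[3];         // launch dimensions (assumed field)
            uint32_t priority;           // priority level used to pick this task's group
            TaskMetadata* next;          // next TMD in the same priority group
        };

        constexpr uint32_t kPriorityLevels = 4;   // assumed number of priority groups

        struct TaskScheduler {
            // One linked list of TMD pointers per priority level.
            std::array<TaskMetadata*, kPriorityLevels> head{};
            std::array<TaskMetadata*, kPriorityLevels> tail{};

            void enqueue(TaskMetadata* tmd) {
                uint32_t p = tmd->priority;
                tmd->next = nullptr;
                if (head[p] == nullptr) head[p] = tmd;   // first task at this priority
                else                    tail[p]->next = tmd;
                tail[p] = tmd;
            }

            // Select from the highest-priority non-empty group, so tasks may execute
            // out of their overall submission order.
            TaskMetadata* selectNext() {
                for (int p = kPriorityLevels - 1; p >= 0; --p) {
                    if (head[p] != nullptr) {
                        TaskMetadata* tmd = head[p];
                        head[p] = tmd->next;
                        if (head[p] == nullptr) tail[p] = nullptr;
                        return tmd;
                    }
                }
                return nullptr;   // nothing runnable
            }
        };

        int main() {
            TaskScheduler sched;
            TaskMetadata a{0, {1, 1, 1}, 0, nullptr};   // low priority, submitted first
            TaskMetadata b{0, {1, 1, 1}, 3, nullptr};   // high priority, submitted later
            sched.enqueue(&a);
            sched.enqueue(&b);
            return sched.selectNext() == &b ? 0 : 1;    // b is selected first
        }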

    PCIE TRAFFIC TRACKING HARDWARE IN A UNIFIED VIRTUAL MEMORY SYSTEM

    Publication No.: US20190340145A1

    Publication Date: 2019-11-07

    Application No.: US16450830

    Filing Date: 2019-06-24

    IPC Classification: G06F13/40 G06F12/123

    Abstract: Techniques are disclosed for tracking memory page accesses in a unified virtual memory system. An access tracking unit detects a memory page access generated by a first processor for accessing a memory page in a memory system of a second processor. The access tracking unit determines whether a cache memory includes an entry for the memory page. If so, the access tracking unit increments the associated access counter. Otherwise, the access tracking unit attempts to find an unused entry in the cache memory that is available for allocation. If such an entry is found, the access tracking unit associates it with the memory page and sets the access counter associated with that entry to an initial value. Otherwise, the access tracking unit selects a valid entry in the cache memory, clears its valid bit, associates the entry with the memory page, and initializes the associated access counter.
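
    The hit / allocate / evict flow described above can be sketched as a small software model of the access tracking unit. The entry count, the round-robin victim choice, and the field names are assumptions for illustration, not the patented hardware design.

        #include <array>
        #include <cstdint>

        struct TrackingEntry {
            bool     valid   = false;
            uint64_t pageNum = 0;      // memory page being tracked
            uint32_t counter = 0;      // accesses observed from the first processor
        };

        class AccessTracker {
            std::array<TrackingEntry, 64> entries_{};   // assumed cache size
            uint32_t victim_ = 0;                       // round-robin victim index (assumed policy)

        public:
            // Called for each access by the first processor to a page resident in the
            // second processor's memory system.
            void recordAccess(uint64_t pageNum) {
                // 1) Hit: increment the existing access counter.
                for (auto& e : entries_) {
                    if (e.valid && e.pageNum == pageNum) { ++e.counter; return; }
                }
                // 2) Miss: try to allocate an unused (invalid) entry.
                for (auto& e : entries_) {
                    if (!e.valid) { e = {true, pageNum, 1}; return; }
                }
                // 3) No free entry: pick a valid entry, clear its valid bit, then
                //    reassociate it with the new page and reset its counter.
                TrackingEntry& e = entries_[victim_];
                victim_ = (victim_ + 1) % entries_.size();
                e.valid = false;
                e = {true, pageNum, 1};
            }
        };

        int main() {
            AccessTracker tracker;
            tracker.recordAccess(0x1000);   // allocates an entry, counter = 1
            tracker.recordAccess(0x1000);   // hit, counter = 2
            return 0;
        }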

    MICROCONTROLLER FOR MEMORY MANAGEMENT UNIT
    6.
    Invention Application (Granted)

    Publication No.: US20140281356A1

    Publication Date: 2014-09-18

    Application No.: US14011655

    Filing Date: 2013-08-27

    IPC Classification: G06F12/10

    CPC Classification: G06F12/1009 G06F2212/301

    Abstract: One embodiment of the present invention includes a microcontroller coupled to a memory management unit (MMU). The MMU is coupled to a page table included in a physical memory, and the microcontroller is configured to perform one or more virtual memory operations associated with the physical memory and the page table. In operation, the microcontroller receives a page fault generated by the MMU in response to an invalid memory access via a virtual memory address. To remedy such a page fault, the microcontroller performs actions to map the virtual memory address to an appropriate location in the physical memory. By contrast, in prior-art systems, a fault handler would typically remedy the page fault. Advantageously, because the microcontroller executes these tasks locally with respect to the MMU and the physical memory, the latency associated with remedying page faults may be decreased. Consequently, overall system performance may be increased.
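
    A highly simplified software sketch of this fault-handling flow follows: the microcontroller local to the MMU maps the faulting virtual address itself instead of deferring to a separate fault handler. All of the types and helper names below are hypothetical.

        #include <cstdint>
        #include <unordered_map>

        using VirtAddr = uint64_t;
        using PhysAddr = uint64_t;

        struct PageTable {
            static constexpr uint64_t kPageBits = 12;
            std::unordered_map<uint64_t, PhysAddr> entries;   // virtual page number -> physical frame

            bool translate(VirtAddr va, PhysAddr& pa) const {
                auto it = entries.find(va >> kPageBits);
                if (it == entries.end()) return false;        // would raise a page fault
                pa = it->second | (va & ((1ull << kPageBits) - 1));
                return true;
            }
        };

        // Hypothetical routine run by the microcontroller when the MMU reports a fault.
        PhysAddr handlePageFault(PageTable& pt, VirtAddr faultingVa, PhysAddr (*allocFrame)()) {
            PhysAddr pa;
            if (!pt.translate(faultingVa, pa)) {
                // Map the virtual page to a physical frame locally, with no round trip
                // to a fault handler elsewhere in the system.
                PhysAddr frame = allocFrame();
                pt.entries[faultingVa >> PageTable::kPageBits] = frame;
                pa = frame | (faultingVa & ((1ull << PageTable::kPageBits) - 1));
            }
            return pa;   // the faulting access can now be replayed
        }

        static PhysAddr gNextFrame = 0x100000;
        static PhysAddr allocFrame() { PhysAddr f = gNextFrame; gNextFrame += 0x1000; return f; }

        int main() {
            PageTable pt;
            PhysAddr pa = handlePageFault(pt, 0xABCDE123, allocFrame);
            return pa != 0 ? 0 : 1;   // the virtual address now maps to physical memory
        }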

    EFFICIENT MEMORY VIRTUALIZATION IN MULTI-THREADED PROCESSING UNITS
    7.
    Invention Application (Pending, Published)

    Publication No.: US20140122829A1

    Publication Date: 2014-05-01

    Application No.: US13660815

    Filing Date: 2012-10-25

    IPC Classification: G06F12/10

    Abstract: A technique for simultaneously executing multiple tasks, each having an independent virtual address space, involves assigning an address space identifier (ASID) to each task and constructing each virtual memory access request to include both a virtual address and the ASID. During virtual-to-physical address translation, the ASID selects a corresponding page table, which includes the virtual-to-physical address mappings for the ASID and associated task. Entries in a translation look-aside buffer (TLB) include both the virtual address and the ASID to complete each mapping to a physical address. Deep scheduling of tasks sharing a virtual address space may be implemented to improve cache affinity for both TLB and data caches.
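
    The translation path described above can be modeled as follows: one page table per ASID, and TLB entries keyed on both the ASID and the virtual page number so lookups from different tasks never alias. The structure names and container choices are illustrative only.

        #include <cstdint>
        #include <map>
        #include <optional>

        using Asid     = uint32_t;
        using VirtPage = uint64_t;
        using PhysPage = uint64_t;

        struct TlbKey {
            Asid asid;       // address space identifier carried by the access request
            VirtPage vpn;    // virtual page number
            bool operator<(const TlbKey& o) const {
                return asid != o.asid ? asid < o.asid : vpn < o.vpn;
            }
        };

        struct MemorySystem {
            std::map<Asid, std::map<VirtPage, PhysPage>> pageTables;  // one page table per ASID
            std::map<TlbKey, PhysPage> tlb;                           // entries include VA and ASID

            std::optional<PhysPage> translate(Asid asid, VirtPage vpn) {
                if (auto hit = tlb.find({asid, vpn}); hit != tlb.end())
                    return hit->second;                               // TLB hit
                // TLB miss: walk the page table selected by the ASID.
                auto pt = pageTables.find(asid);
                if (pt == pageTables.end()) return std::nullopt;
                auto pte = pt->second.find(vpn);
                if (pte == pt->second.end()) return std::nullopt;     // page fault
                tlb[{asid, vpn}] = pte->second;                       // fill the TLB
                return pte->second;
            }
        };

        int main() {
            MemorySystem mem;
            mem.pageTables[7][0x10] = 0x2000;      // task with ASID 7 maps virtual page 0x10
            auto hit  = mem.translate(7, 0x10);    // miss, page-table walk, TLB fill
            auto miss = mem.translate(8, 0x10);    // same virtual page, different ASID: no aliasing
            return (hit && !miss) ? 0 : 1;
        }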

    TECHNIQUES FOR OPTIMIZING STENCIL BUFFERS
    10.
    Invention Application (Granted)

    Publication No.: US20150339799A1

    Publication Date: 2015-11-26

    Application No.: US14817151

    Filing Date: 2015-08-03

    IPC Classification: G06T1/60 B41F15/34

    Abstract: One embodiment sets forth a method for associating each stencil value included in a stencil buffer with multiple fragments. Components within a graphics processing pipeline use a set of stencil masks to partition the bits of each stencil value. Each stencil mask selects a different subset of bits, and each fragment is strategically associated with both a stencil value and a stencil mask. Before performing stencil actions associated with a fragment, the raster operations unit performs stencil mask operations on the operands. No two fragments are associated with both the same stencil mask and the same stencil value. Consequently, no two fragments are associated with the same stencil bits included in the stencil buffer. Advantageously, by reducing the number of stencil bits associated with each fragment, certain classes of software applications may reduce the wasted memory associated with stencil buffers in which each stencil value is associated with a single fragment.
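
    A small numeric illustration of the bit-partitioning idea follows: one 8-bit stencil value is shared by two fragments through complementary stencil masks, so stencil writes for one fragment leave the other fragment's bits untouched. The two-way split and the specific mask values are an assumed example, not a required configuration.

        #include <cstdint>
        #include <cstdio>

        struct StencilSlot {
            uint8_t mask;    // which bits of the shared stencil value this fragment owns
            uint8_t shift;   // position of those bits within the stencil value
        };

        // Read a fragment's sub-value out of the shared stencil byte.
        uint8_t readStencil(uint8_t stored, StencilSlot s) {
            return (stored & s.mask) >> s.shift;
        }

        // Write a fragment's sub-value back, touching only the bits selected by its
        // stencil mask, so the other fragment sharing the byte is unaffected.
        uint8_t writeStencil(uint8_t stored, StencilSlot s, uint8_t value) {
            return (stored & ~s.mask) | ((value << s.shift) & s.mask);
        }

        int main() {
            uint8_t stencil = 0x00;         // one stencil-buffer value shared by two fragments
            StencilSlot fragA{0x0F, 0};     // fragment A owns the low nibble
            StencilSlot fragB{0xF0, 4};     // fragment B owns the high nibble

            stencil = writeStencil(stencil, fragA, 0x3);
            stencil = writeStencil(stencil, fragB, 0x7);
            std::printf("stored=0x%02X A=%d B=%d\n", (unsigned)stencil,
                        readStencil(stencil, fragA), readStencil(stencil, fragB));
            return 0;
        }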
