Building a run list for a coprocessor based on rules when the coprocessor switches from one context to another context
    11.
    Invention Grant
    Building a run list for a coprocessor based on rules when the coprocessor switches from one context to another context (In force)

    Publication No.: US09298498B2

    Publication Date: 2016-03-29

    Application No.: US12172910

    Filing Date: 2008-07-14

    Abstract: Techniques for minimizing coprocessor “starvation,” and for effectively scheduling processing in a coprocessor for greater efficiency and power. A run list is provided allowing a coprocessor to switch from one task to the next, without waiting for CPU intervention. A method called “surface faulting” allows a coprocessor to fault at the beginning of a large task rather than somewhere in the middle of the task. DMA control instructions, namely a “fence,” a “trap” and an “enable/disable context switching,” can be inserted into a processing stream to cause a coprocessor to perform tasks that enhance coprocessor efficiency and power. These instructions can also be used to build high-level synchronization objects. Finally, a “flip” technique is described that can switch a base reference for a display from one location to another, thereby changing the entire display surface.

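    The run list and DMA control instructions described in the abstract can be pictured roughly as follows. This is a minimal C++ sketch for illustration only; every type, field, and limit below (RunListEntry, DmaOp, the assumed four-entry hardware cap) is a hypothetical stand-in, not the patent's or any driver's actual interface.

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // One entry per context the coprocessor may run. The hardware walks this
        // list on its own when the current context completes or stalls, so no CPU
        // intervention is needed to start the next task.
        struct RunListEntry {
            uint32_t contextId;            // which saved hardware context to restore
            uint64_t dmaBufferGpuAddress;  // where that context's command stream lives
        };

        // DMA control instructions a scheduler can insert into a command stream.
        enum class DmaOp : uint8_t {
            Fence,                 // write a value to memory when the GPU reaches it
            Trap,                  // raise a CPU interrupt when the GPU reaches it
            EnableContextSwitch,   // allow preemption from this point on
            DisableContextSwitch,  // protect a critical section from preemption
        };

        struct DmaCommand {
            DmaOp    op;
            uint64_t address;  // Fence only: where to write
            uint64_t value;    // Fence only: what to write
        };

        // Build a run list from the ready contexts, capped at an assumed hardware limit.
        std::vector<RunListEntry> BuildRunList(const std::vector<RunListEntry>& ready,
                                               std::size_t maxEntries = 4) {
            std::vector<RunListEntry> runList;
            for (const RunListEntry& ctx : ready) {
                if (runList.size() == maxEntries) break;
                runList.push_back(ctx);
            }
            return runList;
        }

        // Append a fence followed by a trap, so memory records the task's completion
        // and the CPU is interrupted to schedule follow-up work.
        void AppendCompletionNotification(std::vector<DmaCommand>& stream,
                                          uint64_t fenceAddress, uint64_t fenceValue) {
            stream.push_back({DmaOp::Fence, fenceAddress, fenceValue});
            stream.push_back({DmaOp::Trap, 0, 0});
        }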

    Multithreaded kernel for graphics processing unit
    12.
    Invention Grant
    Multithreaded kernel for graphics processing unit (In force)

    Publication No.: US08671411B2

    Publication Date: 2014-03-11

    Application No.: US12657278

    Filing Date: 2010-01-15

    IPC Classification: G06F9/46 G06F3/00 G06T1/00

    Abstract: Systems and methods are provided for scheduling the processing of a coprocessor whereby applications can submit tasks to a scheduler, and the scheduler can determine how much processing each application is entitled to as well as an order for processing. In connection with this process, tasks that require processing can be stored in physical memory or in virtual memory that is managed by a memory manager. The invention also provides various techniques of determining whether a particular task is ready for processing. A “run list” may be employed to ensure that the coprocessor does not waste time between tasks or after an interruption. The invention also provides techniques for ensuring the security of a computer system, by not allowing applications to modify portions of memory that are integral to maintaining the proper functioning of system operations.

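    As an illustration of the scheduling idea in the abstract, the C++ sketch below divides a scheduling period among applications in proportion to a weight and hands out ready tasks in round-robin order. The weights, the 10 ms period, and all names are assumptions made for this example, not the patent's actual scheduler.

        #include <cstddef>
        #include <cstdint>
        #include <deque>
        #include <unordered_map>
        #include <vector>

        struct Task {
            uint32_t appId;
            uint64_t dmaBufferGpuAddress;  // command buffer to hand to the coprocessor
        };

        class CoprocessorScheduler {
        public:
            // Each application registers with a weight; its share of a scheduling
            // period is proportional to that weight (a simple stand-in for "how much
            // processing each application is entitled to").
            void RegisterApp(uint32_t appId, uint32_t weight) { weights_[appId] = weight; }

            void Submit(const Task& task) { queues_[task.appId].push_back(task); }

            // Compute one application's slice of a fixed scheduling period.
            uint64_t TimeSliceMicroseconds(uint32_t appId, uint64_t periodUs = 10000) const {
                uint64_t totalWeight = 0;
                for (const auto& entry : weights_) totalWeight += entry.second;
                return totalWeight ? periodUs * weights_.at(appId) / totalWeight : 0;
            }

            // Hand out the next batch of tasks, visiting applications round-robin;
            // the caller could then place the batch into a run list.
            std::vector<Task> NextBatch(std::size_t maxTasks) {
                std::vector<Task> batch;
                bool progressed = true;
                while (batch.size() < maxTasks && progressed) {
                    progressed = false;
                    for (auto& entry : queues_) {
                        std::deque<Task>& queue = entry.second;
                        if (queue.empty() || batch.size() == maxTasks) continue;
                        batch.push_back(queue.front());
                        queue.pop_front();
                        progressed = true;
                    }
                }
                return batch;
            }

        private:
            std::unordered_map<uint32_t, uint32_t> weights_;
            std::unordered_map<uint32_t, std::deque<Task>> queues_;
        };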

    Multithreaded kernel for graphics processing unit

    Publication No.: US07673304B2

    Publication Date: 2010-03-02

    Application No.: US10763777

    Filing Date: 2004-01-22

    IPC Classification: G06F9/46 G06F15/167

    Abstract: Systems and methods are provided for scheduling the processing of a coprocessor whereby applications can submit tasks to a scheduler, and the scheduler can determine how much processing each application is entitled to as well as an order for processing. In connection with this process, tasks that require processing can be stored in physical memory or in virtual memory that is managed by a memory manager. The invention also provides various techniques of determining whether a particular task is ready for processing. A “run list” may be employed to ensure that the coprocessor does not waste time between tasks or after an interruption. The invention also provides techniques for ensuring the security of a computer system, by not allowing applications to modify portions of memory that are integral to maintaining the proper functioning of system operations.
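
    The abstract's "various techniques of determining whether a particular task is ready for processing" can be pictured as a residency check: a task runs only once every allocation it references has been brought into memory by the memory manager. The types and the page-in callback below are hypothetical illustrations, not the patented method.

        #include <cstdint>
        #include <functional>
        #include <vector>

        struct Allocation {
            uint64_t sizeBytes;
            bool     residentInGpuMemory;  // maintained by the memory manager
        };

        struct SubmittedTask {
            uint64_t dmaBufferGpuAddress;
            std::vector<Allocation*> referencedAllocations;  // everything the task touches
        };

        // A task is ready only when all of its referenced allocations are resident;
        // otherwise the scheduler asks the memory manager to page the missing ones in
        // and retries later rather than letting the coprocessor stall mid-task.
        bool IsReadyToRun(const SubmittedTask& task,
                          const std::function<void(Allocation&)>& requestPageIn) {
            bool ready = true;
            for (Allocation* alloc : task.referencedAllocations) {
                if (!alloc->residentInGpuMemory) {
                    requestPageIn(*alloc);  // asynchronous in a real driver
                    ready = false;
                }
            }
            return ready;
        }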

    PARALLEL ENGINE SUPPORT IN DISPLAY DRIVER MODEL
    15.
    Invention Application
    PARALLEL ENGINE SUPPORT IN DISPLAY DRIVER MODEL (In force)

    Publication No.: US20080109810A1

    Publication Date: 2008-05-08

    Application No.: US11557301

    Filing Date: 2006-11-07

    IPC Classification: G06F9/50

    Abstract: Systems and methods that independently control divided and/or isolated processing resources of a Graphical Processing Unit (GPU). Synchronization primitives for processing are shared among such resources to process interaction with the engines and their associated different requirements (e.g. different language). Accordingly, independent threads can be created against particular nodes (e.g., a video engine node, 3D engine node), wherein multiple engines can exist under a single node, and independent control can subsequently be exerted upon the plurality of engines associated with the GPU.

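    One way to picture the node/engine arrangement in the abstract: several engines can sit under a single node, each engine is driven by its own worker thread, and a synchronization primitive shared across the node serializes interaction where the engines must coordinate. The C++ structure below is a hypothetical sketch, not the display driver model's real interfaces.

        #include <functional>
        #include <mutex>
        #include <string>
        #include <thread>
        #include <vector>

        struct Engine {
            std::string name;    // e.g. "3D", "Video decode"
            std::thread worker;  // independent thread driving this engine
        };

        // A node groups one or more engines of the GPU; work submitted against
        // different nodes (e.g. a 3D node and a video node) proceeds independently.
        struct Node {
            std::string         name;
            std::vector<Engine> engines;
            std::mutex          sharedSync;  // synchronization primitive shared across the node
        };

        // Start an independent worker per engine under the node; the engine list must
        // not change afterwards. Each worker locks the node's shared primitive only
        // when it has to coordinate with its peer engines.
        void StartEngines(Node& node, const std::function<void(Node&, Engine&)>& engineLoop) {
            for (Engine& engine : node.engines) {
                engine.worker = std::thread([&node, &engine, engineLoop] {
                    engineLoop(node, engine);
                });
            }
        }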

    Method and system for efficiently transferring data objects within a graphics display system
    17.
    Invention Grant
    Method and system for efficiently transferring data objects within a graphics display system (In force)

    Publication No.: US06992668B2

    Publication Date: 2006-01-31

    Application No.: US10973494

    Filing Date: 2004-10-26

    IPC Classification: G06T15/00

    CPC Classification: G06T15/005

    Abstract: An API is provided to automatically transition data objects or containers between memory types to enable the seamless switching of data. The switching of data containers from one location to another is performed automatically by the API. Thus, polygon or pixel data objects are automatically transitioned between memory types such that the switching is seamless. It appears to a developer as if the data chunks/containers last forever, whereas in reality, the API hides the fact that the data is being transitioned to optimize system performance. The API hides an optimal cache managing algorithm from the developer so that the developer need not be concerned with the optimal tradeoff of system resources, and so that efficient switching of data can take place ‘behind the scenes’, thereby simplifying the developer's task. Data containers are thus efficiently placed in storage to maximize data processing rates and storage space, whether a data container is newly created or switched from one location to another.

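    The abstract describes an API that moves data containers between memory types behind the developer's back. The bare-bones C++ model below shows the shape of such an interface; the MemoryType choices, the Lock/Unlock pair, and every name are hypothetical illustrations rather than the patented API.

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        enum class MemoryType { VideoMemory, SystemMemory };

        // A container for polygon or pixel data. Where the data currently lives is an
        // internal detail; callers only see a handle that appears to last forever.
        class DataContainer {
        public:
            explicit DataContainer(std::size_t bytes)
                : storage_(bytes), placement_(MemoryType::SystemMemory) {}

            // Lock returns a pointer the application can fill or read. Before handing
            // it out, the API may transparently move the data to whichever memory type
            // its cache-management policy currently prefers.
            uint8_t* Lock(MemoryType preferred) {
                if (placement_ != preferred) {
                    // A real system would copy between video and system memory here;
                    // this sketch only records the new placement.
                    placement_ = preferred;
                }
                return storage_.data();
            }

            void Unlock() {}  // placeholder: a real API would flush or update caches here

            MemoryType CurrentPlacement() const { return placement_; }

        private:
            std::vector<uint8_t> storage_;
            MemoryType           placement_;
        };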

    Usage semantics
    18.
    Invention Grant
    Usage semantics (In force)

    Publication No.: US06839062B2

    Publication Date: 2005-01-04

    Application No.: US10373340

    Filing Date: 2003-02-24

    IPC Classification: G06T15/50 G06T15/00

    Abstract: Usage semantics allow for shaders to be authored independently of the actual vertex data and accordingly enable their reuse. Usage semantics define a feature that binds data between distinct components to allow them to work together. In various embodiments, the components include high level language variables that are bound by an application or by vertex data streams, high level language fragments to enable several fragments to be developed separately and compiled at a later time together to form a single shader, assembly language variables that get bound to vertex data streams, and parameters between vertex and pixel shaders. This allows developers to be able to program the shaders in the assembly and high level language with variables that refer to names rather than registers. By allowing this decoupling of registers from the language, developers can work on the language separately from the vertex data and modify and enhance high level language shaders without having to manually manipulate the registers. This also allows the same shaders to work on different sets of mesh data, allowing the shaders to be reused. Generally, semantics can be used as a data binding protocol between distinct areas of the programmable pipeline to allow for a more flexible workflow.

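    The heart of usage semantics is binding by name (for example "POSITION" or "TEXCOORD0") instead of by register index, so a shader and a vertex declaration written separately can still be matched up. The C++ lookup below sketches that idea; the structures and semantic strings are generic illustrations, not the patent's or any shader language's actual declaration format.

        #include <cstdint>
        #include <optional>
        #include <string>
        #include <vector>

        // How one element of a vertex stream is laid out, tagged with its semantic.
        struct VertexElement {
            std::string semantic;     // e.g. "POSITION", "NORMAL", "TEXCOORD0"
            uint32_t    streamIndex;  // which vertex stream it comes from
            uint32_t    byteOffset;   // offset of the element within a vertex
        };

        // A shader input declared against a semantic name rather than a register.
        struct ShaderInput {
            std::string variableName;  // name used in the high-level shader source
            std::string semantic;      // the binding key shared with the vertex data
        };

        // Resolve a shader input to the vertex element carrying the same semantic.
        // Because the match is by semantic, the same shader can be reused with any
        // mesh whose declaration provides the semantics the shader asks for.
        std::optional<VertexElement> Resolve(const ShaderInput& input,
                                             const std::vector<VertexElement>& declaration) {
            for (const VertexElement& element : declaration) {
                if (element.semantic == input.semantic) return element;
            }
            return std::nullopt;  // the mesh does not provide this semantic
        }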

    Method and system for efficiently transferring data objects within a graphics display system
    19.
    Invention Grant
    Method and system for efficiently transferring data objects within a graphics display system (In force)

    Publication No.: US06812923B2

    Publication Date: 2004-11-02

    Application No.: US09796889

    Filing Date: 2001-03-01

    IPC Classification: G06T15/00

    CPC Classification: G06T15/005

    Abstract: An API is provided to automatically transition data objects or containers between memory types to enable the seamless switching of data. The switching of data containers from one location to another is performed automatically by the API. Thus, polygon or pixel data objects are automatically transitioned between memory types such that the switching is seamless. It appears to a developer as if the data chunks/containers last forever, whereas in reality, the API hides the fact that the data is being transitioned to optimize system performance. The API hides an optimal cache managing algorithm from the developer so that the developer need not be concerned with the optimal tradeoff of system resources, and so that efficient switching of data can take place ‘behind the scenes’, thereby simplifying the developer's task. Data containers are thus efficiently placed in storage to maximize data processing rates and storage space, whether a data container is newly created or switched from one location to another.

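    To complement the container sketch under entry 17 above, the fragment below illustrates the kind of decision the abstract says the API hides from the developer: when video memory runs short, pick a victim container to demote to system memory. The least-recently-used rule and all names here are assumptions for illustration, not the patented cache-management algorithm.

        #include <cstdint>
        #include <vector>

        struct ManagedContainer {
            uint64_t id;
            uint64_t sizeBytes;
            uint64_t lastUseTick;    // updated each time the container is used for rendering
            bool     inVideoMemory;
        };

        // Choose which resident container to move out of video memory when a new
        // container needs space there. A plain least-recently-used rule stands in for
        // whatever policy the API actually applies behind the scenes.
        ManagedContainer* PickEvictionVictim(std::vector<ManagedContainer>& containers) {
            ManagedContainer* victim = nullptr;
            for (ManagedContainer& c : containers) {
                if (!c.inVideoMemory) continue;
                if (victim == nullptr || c.lastUseTick < victim->lastUseTick) victim = &c;
            }
            return victim;  // nullptr if nothing is currently resident in video memory
        }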