System and method for enabling interoperability between application programming interfaces
    1.
    Granted invention patent (in force)

    Publication number: US08539516B1

    Publication date: 2013-09-17

    Application number: US12031678

    Filing date: 2008-02-14

    CPC classification: G06F9/541; G06F9/526

    Abstract: One embodiment of the present invention sets forth a method for sharing graphics objects between a compute unified device architecture (CUDA) application programming interface (API) and a graphics API. The CUDA API includes calls used to alias graphics objects allocated by the graphics API and, subsequently, synchronize accesses to the graphics objects. When an application program emits a “register” call that targets a particular graphics object, the CUDA API ensures that the graphics object is in the device memory, and maps the graphics object into the CUDA address space. Subsequently, when the application program emits “map” and “unmap” calls, the CUDA API respectively enables and disables accesses to the graphics object through the CUDA API. Further, the CUDA API uses semaphores to synchronize accesses to the shared graphics object. Finally, when the application program emits an “unregister” call, the CUDA API configures the computing system to disregard interoperability constraints.
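    The register, map, unmap and unregister calls described in the abstract have public counterparts in the CUDA runtime graphics-interoperability API (cudaGraphicsGLRegisterBuffer, cudaGraphicsMapResources, cudaGraphicsResourceGetMappedPointer, cudaGraphicsUnmapResources, cudaGraphicsUnregisterResource). The following CUDA/C++ sketch only illustrates that call sequence against an OpenGL buffer object; it is not the patented implementation, and the buffer handle, kernel and function names are assumptions made for this example.

    #include <cuda_runtime.h>
    #include <cuda_gl_interop.h>

    // Hypothetical kernel: scales vertex positions stored in the shared buffer.
    __global__ void scalePositions(float* positions, size_t count, float factor)
    {
        size_t i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < count)
            positions[i] *= factor;
    }

    // "vbo" is assumed to be an OpenGL buffer created and filled by the graphics API.
    void updateSharedBuffer(unsigned int vbo, size_t floatCount)
    {
        // "register": alias the graphics object so CUDA can address it.
        cudaGraphicsResource_t resource = nullptr;
        cudaGraphicsGLRegisterBuffer(&resource, vbo, cudaGraphicsRegisterFlagsNone);

        // "map": enable access to the graphics object through the CUDA API.
        cudaGraphicsMapResources(1, &resource, 0);

        float* devPtr    = nullptr;
        size_t sizeBytes = 0;
        cudaGraphicsResourceGetMappedPointer(reinterpret_cast<void**>(&devPtr),
                                             &sizeBytes, resource);

        // While mapped, the buffer is visible in the CUDA address space.
        unsigned int blocks = static_cast<unsigned int>((floatCount + 255) / 256);
        scalePositions<<<blocks, 256>>>(devPtr, floatCount, 1.01f);

        // "unmap": disable CUDA access so the graphics API may use the object again.
        cudaGraphicsUnmapResources(1, &resource, 0);

        // "unregister": drop the aliasing and its interoperability constraints.
        cudaGraphicsUnregisterResource(resource);
    }

    Between the map and unmap calls the object is accessible through CUDA; outside that window it belongs to the graphics API, which mirrors the enable/disable access semantics in the abstract.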


System and method for enabling interoperability between application programming interfaces
    2.
    Granted invention patent (in force)

    Publication number: US08402229B1

    Publication date: 2013-03-19

    Application number: US12031682

    Filing date: 2008-02-14

    IPC classification: G06F13/16

    CPC classification: G09G5/001

    Abstract: One embodiment of the present invention sets forth a method for sharing graphics objects between a compute unified device architecture (CUDA) application programming interface (API) and a graphics API. The CUDA API includes calls used to alias graphics objects allocated by the graphics API and, subsequently, synchronize accesses to the graphics objects. When an application program emits a “register” call that targets a particular graphics object, the CUDA API ensures that the graphics object is in the device memory, and maps the graphics object into the CUDA address space. Subsequently, when the application program emits “map” and “unmap” calls, the CUDA API respectively enables and disables accesses to the graphics object through the CUDA API. Further, the CUDA API uses semaphores to synchronize accesses to the shared graphics object. Finally, when the application program emits an “unregister” call, the CUDA API configures the computing system to disregard interoperability constraints.
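    The semaphores mentioned in the abstract are a driver-internal mechanism; from the application's point of view, accesses to a shared object are ordered through the CUDA stream passed to the map and unmap calls. A minimal sketch of that ordering is given below, assuming the resource has already been registered; the function name and the use of cudaMemsetAsync as stand-in compute work are illustrative only.

    #include <cuda_runtime.h>
    #include <cuda_gl_interop.h>

    // "resource" is assumed to have been registered earlier,
    // e.g. with cudaGraphicsGLRegisterBuffer.
    void computeThenRender(cudaGraphicsResource_t resource)
    {
        cudaStream_t stream;
        cudaStreamCreate(&stream);

        // Mapping on "stream" orders the CUDA work after outstanding graphics work.
        cudaGraphicsMapResources(1, &resource, stream);

        void*  devPtr    = nullptr;
        size_t sizeBytes = 0;
        cudaGraphicsResourceGetMappedPointer(&devPtr, &sizeBytes, resource);

        // Stand-in for real compute work on the shared object.
        cudaMemsetAsync(devPtr, 0, sizeBytes, stream);

        // Unmapping hands the object back to the graphics API; later graphics
        // commands will not observe partially written data.
        cudaGraphicsUnmapResources(1, &resource, stream);

        // Wait for the unmap (and the work before it) to complete before issuing
        // host-side graphics calls that depend on the result.
        cudaStreamSynchronize(stream);
        cudaStreamDestroy(stream);
    }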
