TRANSLATION LOOKASIDE BUFFER FOR MULTIPLE CONTEXT COMPUTE ENGINE
    2.
    Invention Application (Status: In force)

    Publication No.: US20130262816A1

    Publication Date: 2013-10-03

    Application No.: US13993800

    Filing Date: 2011-12-30

    IPC Classification: G06F12/10

    Abstract: Some implementations disclosed herein provide techniques and arrangements for a specialized logic engine that includes a translation lookaside buffer to support multiple threads executing on multiple cores. The translation lookaside buffer enables the specialized logic engine to directly access a virtual address of a thread executing on one of the plurality of processing cores. For example, an acceleration compute engine may receive one or more instructions from a thread executed by a processing core. The acceleration compute engine may retrieve, based on an address space identifier associated with the one or more instructions, a physical address associated with the one or more instructions from the translation lookaside buffer to execute the one or more instructions using the physical address.

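    For illustration only: the abstract describes a translation lookaside buffer whose entries are tagged with an address space identifier (ASID), so the accelerator can resolve virtual addresses belonging to threads running on different cores. The C sketch below shows one way such an ASID-tagged lookup could work; the entry layout, table size, and function names (tlb_lookup, tlb_fill) are assumptions for illustration, not the patented implementation.

    /*
     * Minimal sketch (not the patented design): a small fully associative
     * TLB for an accelerator, tagged with an address space identifier so
     * translations for threads from different cores/processes can coexist.
     */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TLB_ENTRIES 16
    #define PAGE_SHIFT  12                    /* assume 4 KiB pages */
    #define PAGE_MASK   ((1ULL << PAGE_SHIFT) - 1)

    typedef struct {
        bool     valid;
        uint16_t asid;                        /* address space identifier of the owning thread */
        uint64_t vpn;                         /* virtual page number */
        uint64_t pfn;                         /* physical frame number */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Look up a virtual address for a given ASID; returns true on a hit
     * and writes the translated physical address to *paddr. */
    static bool tlb_lookup(uint16_t asid, uint64_t vaddr, uint64_t *paddr)
    {
        uint64_t vpn = vaddr >> PAGE_SHIFT;
        for (size_t i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].asid == asid && tlb[i].vpn == vpn) {
                *paddr = (tlb[i].pfn << PAGE_SHIFT) | (vaddr & PAGE_MASK);
                return true;
            }
        }
        return false;                         /* miss: the engine would fall back to a page walk */
    }

    /* Install a translation, e.g. after a miss has been resolved. */
    static void tlb_fill(size_t slot, uint16_t asid, uint64_t vaddr, uint64_t pa)
    {
        tlb[slot] = (tlb_entry_t){ .valid = true, .asid = asid,
                                   .vpn = vaddr >> PAGE_SHIFT,
                                   .pfn = pa >> PAGE_SHIFT };
    }

    int main(void)
    {
        tlb_fill(0, /*asid=*/7, /*vaddr=*/0x1000, /*pa=*/0x40000);
        uint64_t pa;
        if (tlb_lookup(7, 0x1234, &pa))
            printf("ASID 7, VA 0x1234 -> PA 0x%llx\n", (unsigned long long)pa);
        return 0;
    }

    Tagging entries with an ASID means the accelerator's TLB does not have to be flushed every time a different thread or process submits work.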

    Translation lookaside buffer for multiple context compute engine
    3.
    Invention Grant (Status: In force)

    Publication No.: US09152572B2

    Publication Date: 2015-10-06

    Application No.: US13993800

    Filing Date: 2011-12-30

    IPC Classification: G06F12/00 G06F12/10 G06F12/08

    Abstract: Some implementations disclosed herein provide techniques and arrangements for a specialized logic engine that includes a translation lookaside buffer to support multiple threads executing on multiple cores. The translation lookaside buffer enables the specialized logic engine to directly access a virtual address of a thread executing on one of the plurality of processing cores. For example, an acceleration compute engine may receive one or more instructions from a thread executed by a processing core. The acceleration compute engine may retrieve, based on an address space identifier associated with the one or more instructions, a physical address associated with the one or more instructions from the translation lookaside buffer to execute the one or more instructions using the physical address.


    SYNCHRONOUS SOFTWARE INTERFACE FOR AN ACCELERATED COMPUTE ENGINE
    4.
    Invention Application (Status: Pending, published)

    Publication No.: US20130268804A1

    Publication Date: 2013-10-10

    Application No.: US13994371

    Filing Date: 2011-12-30

    IPC Classification: G06F9/54 G06F11/14

    Abstract: Some implementations disclosed herein provide techniques and arrangements for a synchronous software interface for a specialized logic engine. The synchronous software interface may receive, from a first core of a plurality of cores, a control block including a transaction for execution by the specialized logic engine. The synchronous software interface may send the control block to the specialized logic engine and wait to receive a confirmation from the specialized logic engine that the transaction was successfully executed.

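    For illustration only: the abstract describes an interface that packages a transaction into a control block, hands it to the accelerator, and blocks until the engine confirms completion. The C sketch below shows the general shape of such a synchronous submit path; the control block fields and the engine_submit()/status-polling mechanism are stand-ins assumed for this example, not the interface claimed in the application.

    /*
     * Minimal sketch (assumed interface): a core packages a transaction
     * into a control block, hands it to the accelerator, and blocks
     * until the engine reports completion.
     */
    #include <stdint.h>
    #include <stdio.h>

    typedef enum { TXN_PENDING, TXN_OK, TXN_FAILED } txn_status_t;

    typedef struct {
        uint32_t opcode;                      /* which operation the engine should run */
        uint64_t src, dst, len;               /* operands of the transaction */
        volatile txn_status_t status;         /* written back by the engine */
    } control_block_t;

    /* Stand-in for the real engine interface: a real engine would DMA
     * the block and raise a doorbell; here the work "completes" at once. */
    static void engine_submit(control_block_t *cb)
    {
        cb->status = TXN_OK;
    }

    /* Synchronous interface: send the control block and wait for the
     * engine's confirmation before returning to the calling thread. */
    static txn_status_t submit_sync(control_block_t *cb)
    {
        cb->status = TXN_PENDING;
        engine_submit(cb);
        while (cb->status == TXN_PENDING)
            ;                                 /* spin (or sleep) until the engine confirms */
        return cb->status;
    }

    int main(void)
    {
        control_block_t cb = { .opcode = 1, .src = 0x1000, .dst = 0x2000, .len = 4096 };
        printf("transaction %s\n", submit_sync(&cb) == TXN_OK ? "succeeded" : "failed");
        return 0;
    }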

    DYNAMIC PINNING OF VIRTUAL PAGES SHARED BETWEEN DIFFERENT TYPE PROCESSORS OF A HETEROGENEOUS COMPUTING PLATFORM
    5.
    Invention Application (Status: In force)

    Publication No.: US20130007406A1

    Publication Date: 2013-01-03

    Application No.: US13175489

    Filing Date: 2011-07-01

    IPC Classification: G06F12/10

    Abstract: A computer system may support one or more techniques to allow dynamic pinning of the memory pages accessed by a non-CPU device (e.g., a graphics processing unit, GPU). The non-CPU device may support virtual to physical address mapping and may thus be aware of memory pages that are not pinned but may be accessed by the non-CPU device. The non-CPU device may notify or send such information to a run-time component such as a device driver associated with the CPU. In one embodiment, the device driver may dynamically pin the memory pages that may be accessed by the non-CPU device. The device driver may also unpin memory pages that are no longer accessed by the non-CPU device. Such an approach may allow memory pages that are no longer accessed by the non-CPU device to be made available for allocation to other CPUs and/or non-CPU devices.

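    For illustration only: the abstract concerns a device driver that pins pages while a non-CPU device (such as a GPU) accesses them and unpins them afterwards. The C sketch below is a user-space analogy of that pin/unpin lifecycle using mlock()/munlock(); the notification callbacks (on_device_access_begin/end) are hypothetical names invented for this example, and the real mechanism would live in a driver rather than an application.

    /*
     * User-space analogy only: when the device reports that it is touching
     * a range of virtual pages, the runtime pins them; when the device
     * reports it is done, the pages are unpinned so the OS may reclaim or
     * repurpose them.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hypothetical callback: the device signals that it has started
     * accessing [addr, addr + len). */
    static int on_device_access_begin(void *addr, size_t len)
    {
        return mlock(addr, len);              /* pin: keep the pages resident */
    }

    /* Hypothetical callback: the device signals that it no longer accesses
     * the range, so the pages can be released for other use. */
    static int on_device_access_end(void *addr, size_t len)
    {
        return munlock(addr, len);            /* unpin */
    }

    int main(void)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        void *buf = aligned_alloc(page, 4 * page);
        if (!buf)
            return 1;
        memset(buf, 0, 4 * page);

        if (on_device_access_begin(buf, 4 * page) == 0) {
            /* ... device works on the buffer while it stays pinned ... */
            on_device_access_end(buf, 4 * page);
            puts("pages pinned and unpinned dynamically");
        }
        free(buf);
        return 0;
    }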