Isochronous agent data pinning in a multi-level memory system
    57.
    Granted invention patent (in force)

    Publication number: US09542336B2

    Publication date: 2017-01-10

    Application number: US14133097

    Filing date: 2013-12-18

    CPC classification number: G06F12/126

    Abstract: A processing device comprises an instruction execution unit, a memory agent and pinning logic to pin memory pages in a multi-level memory system upon request by the memory agent. The pinning logic includes an agent interface module to receive, from the memory agent, a pin request indicating a first memory page in the multi-level memory system, the multi-level memory system comprising a near memory and a far memory. The pinning logic further includes a memory interface module to retrieve the first memory page from the far memory and write the first memory page to the near memory. In addition, the pinning logic also includes a descriptor table management module to mark the first memory page as pinned in the near memory, wherein marking the first memory page as pinned comprises setting a pinning bit corresponding to the first memory page in a cache descriptor table and to prevent the first memory page from being evicted from the near memory when the first memory page is marked as pinned.
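
    The abstract describes a concrete mechanism: on a pin request from the memory agent, the page is copied from far memory into near memory and a pinning bit is set in a cache descriptor table so the eviction logic will skip that page. The sketch below illustrates that flow under stated assumptions; every identifier (cache_descriptor, pin_page, may_evict, NUM_NEAR_PAGES) is a hypothetical name chosen for illustration and is not taken from the patent.

```c
/* Minimal sketch of the near/far pinning scheme; names and sizes are assumptions. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NUM_NEAR_PAGES 1024
#define PAGE_SIZE      4096

/* One entry per near-memory frame: which far-memory page it caches
 * and whether a memory agent has pinned it. */
struct cache_descriptor {
    uint64_t far_page;   /* far-memory page number currently cached */
    bool     valid;
    bool     pinned;     /* "pinning bit" set on behalf of the memory agent */
};

static struct cache_descriptor table[NUM_NEAR_PAGES];
static uint8_t near_mem[NUM_NEAR_PAGES][PAGE_SIZE];

/* Handle a pin request: copy the page from far memory into a free
 * near-memory frame and mark it pinned in the descriptor table. */
bool pin_page(uint64_t far_page, const uint8_t *far_mem_page)
{
    for (size_t i = 0; i < NUM_NEAR_PAGES; i++) {
        if (!table[i].valid) {
            memcpy(near_mem[i], far_mem_page, PAGE_SIZE); /* far -> near */
            table[i].far_page = far_page;
            table[i].valid    = true;
            table[i].pinned   = true;  /* set pinning bit in descriptor table */
            return true;
        }
    }
    return false; /* no free frame; a real design would evict an unpinned page */
}

/* Eviction policy: a frame marked pinned is never selected for eviction. */
bool may_evict(size_t frame)
{
    return table[frame].valid && !table[frame].pinned;
}
```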

    Accelerator controller hub
    58.
    Granted invention patent

    Publication number: US12229069B2

    Publication date: 2025-02-18

    Application number: US17083200

    Filing date: 2020-10-28

    Abstract: Methods and apparatus for an accelerator controller hub (ACH). The ACH may be a stand-alone component or integrated on-die or on package in an accelerator such as a GPU. The ACH may include a host device link (HDL) interface, one or more Peripheral Component Interconnect Express (PCIe) interfaces, one or more high performance accelerator link (HPAL) interfaces, and a router, operatively coupled to each of the HDL interface, the one or more PCIe interfaces, and the one or more HPAL interfaces. The HDL interface is configured to be coupled to a host CPU via an HDL link and the one or more HPAL interfaces are configured to be coupled to one or more HPALs that are used to access high performance accelerator fabrics (HPAFs) such as NVlink fabrics and CCIX (Cache Coherent Interconnect for Accelerators) fabrics. Platforms including ACHs or accelerators with integrated ACHs support RDMA transfers using RDMA semantics to enable transfers between accelerator memory on initiators and targets without CPU involvement.
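
    As described, the ACH is essentially a router joining an HDL interface to the host CPU, one or more PCIe interfaces, and one or more HPAL interfaces onto accelerator fabrics, so RDMA traffic between accelerator memories can bypass the CPU. The sketch below models only that routing role; the port names, address windows, and address-range routing rule are assumptions made for illustration, not details from the patent.

```c
/* Illustrative model of the ACH routing role described in the abstract.
 * All types, constants, and the routing rule are hypothetical. */
#include <stddef.h>
#include <stdint.h>

enum ach_port {
    PORT_HDL,   /* host device link to the host CPU */
    PORT_PCIE,  /* Peripheral Component Interconnect Express */
    PORT_HPAL,  /* high performance accelerator link to an accelerator fabric */
};

struct ach_route {
    uint64_t      base;
    uint64_t      limit;
    enum ach_port port;
};

/* Hypothetical static routing table: a request is steered to the interface
 * whose address window it falls into, so accelerator-to-accelerator RDMA
 * traffic leaves through an HPAL port without involving the host CPU. */
static const struct ach_route routes[] = {
    { 0x0000000000ULL, 0x00FFFFFFFFULL, PORT_HDL  },  /* host memory window */
    { 0x0100000000ULL, 0x01FFFFFFFFULL, PORT_PCIE },  /* local PCIe devices */
    { 0x0200000000ULL, 0x0FFFFFFFFFULL, PORT_HPAL },  /* remote accelerator memory */
};

enum ach_port ach_route_request(uint64_t addr)
{
    for (size_t i = 0; i < sizeof routes / sizeof routes[0]; i++)
        if (addr >= routes[i].base && addr <= routes[i].limit)
            return routes[i].port;
    return PORT_HDL; /* default: hand unknown addresses to the host */
}
```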
