HIGHLY SCALABLE ACCELERATOR
    1.
    Invention Application

    Publication No.: US20210382836A1

    Publication Date: 2021-12-09

    Application No.: US17410063

    Filing Date: 2021-08-24

    Abstract: Embodiments of apparatuses, methods, and systems for highly scalable accelerators are described. In an embodiment, an apparatus includes an interface to receive a plurality of work requests from a plurality of clients and a plurality of engines to perform the plurality of work requests. The work requests are to be dispatched to the plurality of engines from a plurality of work queues. The work queues are to store a work descriptor per work request. Each work descriptor is to include all information needed to perform a corresponding work request.
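
    The recurring abstract above describes a submission model in which clients post self-contained work descriptors to shared work queues and a dispatcher feeds them to a pool of engines. The C sketch below is only an illustration of that model; the names and field layout (work_descriptor, work_queue, engine_run, the opcode values) are hypothetical and not taken from the patent.

/*
 * Minimal sketch of the work-submission model in the abstract: clients
 * enqueue self-contained work descriptors into a shared work queue, and a
 * dispatcher hands them to whichever engine is next. All names and field
 * layouts are illustrative assumptions, not the patent's structures.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_ENGINES 4
#define QUEUE_DEPTH 8

/* A work descriptor carries everything an engine needs (operation, source,
 * destination, length), so no follow-up lookups are required after dispatch. */
typedef struct {
    uint32_t opcode;      /* e.g. 0 = copy, 1 = fill (hypothetical encoding) */
    uint64_t src_addr;
    uint64_t dst_addr;
    uint32_t length;
    uint32_t client_id;   /* which client submitted the request */
} work_descriptor;

/* A simple ring buffer standing in for one shared work queue. */
typedef struct {
    work_descriptor slots[QUEUE_DEPTH];
    unsigned head, tail;
} work_queue;

static int wq_push(work_queue *q, const work_descriptor *d) {
    if (q->tail - q->head == QUEUE_DEPTH)
        return -1;                        /* queue full */
    q->slots[q->tail % QUEUE_DEPTH] = *d;
    q->tail++;
    return 0;
}

static int wq_pop(work_queue *q, work_descriptor *out) {
    if (q->head == q->tail)
        return -1;                        /* queue empty */
    *out = q->slots[q->head % QUEUE_DEPTH];
    q->head++;
    return 0;
}

/* "Engine" here is just a function that consumes one descriptor. */
static void engine_run(int engine_id, const work_descriptor *d) {
    printf("engine %d: client %u opcode %u, %u bytes %#llx -> %#llx\n",
           engine_id, d->client_id, d->opcode, d->length,
           (unsigned long long)d->src_addr, (unsigned long long)d->dst_addr);
}

int main(void) {
    work_queue q = {0};

    /* Two clients submit requests through the shared interface. */
    work_descriptor a = { .opcode = 0, .src_addr = 0x1000, .dst_addr = 0x2000,
                          .length = 256, .client_id = 1 };
    work_descriptor b = { .opcode = 1, .dst_addr = 0x3000,
                          .length = 64,  .client_id = 2 };
    wq_push(&q, &a);
    wq_push(&q, &b);

    /* Round-robin dispatch from the queue to the engines. */
    work_descriptor d;
    for (int e = 0; wq_pop(&q, &d) == 0; e = (e + 1) % NUM_ENGINES)
        engine_run(e, &d);
    return 0;
}

    Because each descriptor carries all the information needed to perform its request, the dispatcher never has to consult per-client state, which is the property the abstract ties to scaling across many clients and engines.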

    Highly scalable accelerator
    2.
    Invention Grant

    Publication No.: US11106613B2

    Publication Date: 2021-08-31

    Application No.: US15940128

    Filing Date: 2018-03-29

    Abstract: Embodiments of apparatuses, methods, and systems for highly scalable accelerators are described. In an embodiment, an apparatus includes an interface to receive a plurality of work requests from a plurality of clients and a plurality of engines to perform the plurality of work requests. The work requests are to be dispatched to the plurality of engines from a plurality of work queues. The work queues are to store a work descriptor per work request. Each work descriptor is to include all information needed to perform a corresponding work request.

    TECHNOLOGIES FOR OFFLOAD DEVICE FETCHING OF ADDRESS TRANSLATIONS

    Publication No.: US20210149815A1

    Publication Date: 2021-05-20

    Application No.: US17129496

    Filing Date: 2020-12-21

    Abstract: Techniques for offload device address translation fetching are disclosed. In the illustrative embodiment, a processor of a compute device sends a translation fetch descriptor to an offload device before sending a corresponding work descriptor to the offload device. The offload device can request translations for virtual memory addresses and cache the corresponding physical addresses for later use. While the offload device is fetching virtual address translations, the compute device can perform other tasks before sending the corresponding work descriptor, including operations that modify the contents of the memory addresses whose translations are being cached. Even if the offload device does not cache the translations, the fetching can warm up the cache in a translation lookaside buffer. Such an approach can reduce the latency overhead that the offload device may otherwise incur in sending memory address translation requests that would be required to execute the work descriptor.
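
    As a rough illustration of the flow in this abstract, the C sketch below models an offload device that receives a translation-fetch descriptor ahead of the work descriptor, resolves the listed virtual addresses, and caches the resulting physical addresses for when the work arrives. The structure names, the fixed-offset iommu_translate stand-in, and the cache layout are assumptions made for the example only.

/*
 * Sketch of the "prefetch translations, then send work" flow: the host
 * first sends a translation-fetch descriptor listing the virtual addresses
 * the job will touch; the device resolves and caches them so the later work
 * descriptor can execute without waiting on translation requests.
 */
#include <stdint.h>
#include <stdio.h>

#define XLATE_CACHE_SLOTS 16

/* Sent ahead of the real work: just the addresses to pre-translate. */
typedef struct {
    uint64_t virt_addrs[4];
    int count;
} translation_fetch_descriptor;

/* Sent later: the actual job, referencing the same virtual addresses. */
typedef struct {
    uint64_t src_va;
    uint64_t dst_va;
    uint32_t length;
} work_descriptor;

/* Device-local cache of VA -> PA translations (stand-in for an ATC). */
static struct { uint64_t va, pa; int valid; } xlate_cache[XLATE_CACHE_SLOTS];

/* Stand-in for asking the IOMMU/TLB for a translation; a fixed offset
 * keeps the example self-contained. */
static uint64_t iommu_translate(uint64_t va) {
    return va + 0x100000;
}

static void device_handle_fetch(const translation_fetch_descriptor *f) {
    for (int i = 0; i < f->count && i < XLATE_CACHE_SLOTS; i++) {
        xlate_cache[i].va = f->virt_addrs[i];
        xlate_cache[i].pa = iommu_translate(f->virt_addrs[i]);
        xlate_cache[i].valid = 1;
    }
}

/* Use a cached translation if the prefetch covered this address; otherwise
 * fall back to a demand translation and pay the latency here. */
static uint64_t device_lookup(uint64_t va) {
    for (int i = 0; i < XLATE_CACHE_SLOTS; i++)
        if (xlate_cache[i].valid && xlate_cache[i].va == va)
            return xlate_cache[i].pa;
    return iommu_translate(va);
}

static void device_handle_work(const work_descriptor *w) {
    uint64_t src_pa = device_lookup(w->src_va);
    uint64_t dst_pa = device_lookup(w->dst_va);
    printf("copy %u bytes: PA %#llx -> PA %#llx\n", w->length,
           (unsigned long long)src_pa, (unsigned long long)dst_pa);
}

int main(void) {
    /* 1. Host sends the translation-fetch descriptor early... */
    translation_fetch_descriptor f = { .virt_addrs = {0x4000, 0x8000}, .count = 2 };
    device_handle_fetch(&f);

    /* 2. ...does other work, then sends the work descriptor, which now
     *    hits the warmed translation cache. */
    work_descriptor w = { .src_va = 0x4000, .dst_va = 0x8000, .length = 512 };
    device_handle_work(&w);
    return 0;
}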

    HIGHLY SCALABLE ACCELERATOR
    5.
    Invention Publication

    Publication No.: US20230251986A1

    Publication Date: 2023-08-10

    Application No.: US18296875

    Filing Date: 2023-04-06

    CPC classification number: G06F13/364 G06F9/5027 G06F13/24

    Abstract: Embodiments of apparatuses, methods, and systems for highly scalable accelerators are described. In an embodiment, an apparatus includes an interface to receive a plurality of work requests from a plurality of clients and a plurality of engines to perform the plurality of work requests. The work requests are to be dispatched to the plurality of engines from a plurality of work queues. The work queues are to store a work descriptor per work request. Each work descriptor is to include all information needed to perform a corresponding work request.

    Address translation for scalable linked devices

    Publication No.: US10969992B2

    Publication Date: 2021-04-06

    Application No.: US16236473

    Filing Date: 2018-12-29

    Abstract: Systems, methods, and devices can include a processing engine implemented at least partially in hardware, the processing engine to process memory transactions; a memory element to index physical address and virtual address translations; and a memory controller logic implemented at least partially in hardware, the memory controller logic to receive an index from the processing engine, the index corresponding to a physical address and a virtual address; identify a physical address based on the received index; and provide the physical address to the processing engine. The processing engine can use the physical address for memory transactions in response to a streaming workload job request.
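
    The sketch below illustrates, under assumed names, the indexing scheme this abstract outlines: a small table holds virtual-to-physical translations, a streaming job request carries only an index into that table, and the memory-controller side resolves the index to a physical address for the processing engine. The table layout, lookup behavior, and all identifiers are illustrative assumptions.

/*
 * Sketch of indexed address translation: translations are stored once in a
 * table (the "memory element"), jobs refer to them by index, and the memory
 * controller logic resolves index -> physical address for the engine.
 */
#include <stdint.h>
#include <stdio.h>

#define XLATE_TABLE_SIZE 8

/* One indexed entry pairing a virtual address with its physical address. */
typedef struct {
    uint64_t virt_addr;
    uint64_t phys_addr;
    int valid;
} xlate_entry;

static xlate_entry xlate_table[XLATE_TABLE_SIZE];

/* Memory-controller side: return the physical address for an index, or 0
 * if the index is out of range or unpopulated. */
static uint64_t mc_lookup(unsigned index) {
    if (index >= XLATE_TABLE_SIZE || !xlate_table[index].valid)
        return 0;
    return xlate_table[index].phys_addr;
}

/* A streaming-workload job request that names its buffer by translation
 * index rather than by raw address. */
typedef struct {
    unsigned xlate_index;
    uint32_t length;
} stream_job;

static void engine_run(const stream_job *job) {
    uint64_t pa = mc_lookup(job->xlate_index);
    if (pa == 0) {
        printf("index %u: no translation\n", job->xlate_index);
        return;
    }
    printf("stream %u bytes at PA %#llx (index %u)\n",
           job->length, (unsigned long long)pa, job->xlate_index);
}

int main(void) {
    /* Populate one indexed translation, then issue a job against it. */
    xlate_table[3] = (xlate_entry){ .virt_addr = 0x7f0000,
                                    .phys_addr = 0x22000, .valid = 1 };
    stream_job job = { .xlate_index = 3, .length = 4096 };
    engine_run(&job);
    return 0;
}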

    HIGHLY SCALABLE ACCELERATOR
    7.
    Invention Application

    Publication No.: US20250053530A1

    Publication Date: 2025-02-13

    Application No.: US18749130

    Filing Date: 2024-06-20

    Abstract: Embodiments of apparatuses, methods, and systems for highly scalable accelerators are described. In an embodiment, an apparatus includes an interface to receive a plurality of work requests from a plurality of clients and a plurality of engines to perform the plurality of work requests. The work requests are to be dispatched to the plurality of engines from a plurality of work queues. The work queues are to store a work descriptor per work request. Each work descriptor is to include all information needed to perform a corresponding work request.

    Highly scalable accelerator
    9.
    Invention Grant

    Publication No.: US11650947B2

    Publication Date: 2023-05-16

    Application No.: US17410063

    Filing Date: 2021-08-24

    CPC classification number: G06F13/364 G06F9/5027 G06F13/24

    Abstract: Embodiments of apparatuses, methods, and systems for highly scalable accelerators are described. In an embodiment, an apparatus includes an interface to receive a plurality of work requests from a plurality of clients and a plurality of engines to perform the plurality of work requests. The work requests are to be dispatched to the plurality of engines from a plurality of work queues. The work queues are to store a work descriptor per work request. Each work descriptor is to include all information needed to perform a corresponding work request.
