SOFTWARE INTERFACE TO XPU ADDRESS TRANSLATION CACHE

    Publication Number: US20220414020A1

    Publication Date: 2022-12-29

    Application Number: US17899912

    Application Date: 2022-08-31

    Abstract: In an embodiment, a core includes at least one execution circuit. The core may be configured to: send a command for a first address translation cache (ATC) of a first device to perform an operation, the core to send the command to a first device queue of a shared memory, the first device queue associated with the first ATC; and send a register write directly to the first device to inform the first ATC regarding presence of the command in the first device queue. Other embodiments are described and claimed.
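
    The abstract describes a queue-plus-doorbell flow: the core writes an ATC command into a per-device queue held in shared memory, then issues a register write directly to the device so its ATC learns that a command is pending. The C sketch below is only an illustration of that flow; the command layout, queue structure, doorbell register, and all identifiers (atc_cmd, device_queue, submit_atc_command) are assumptions, not the patented interface.

    /*
     * Minimal sketch of the queue-plus-doorbell submission described in the
     * abstract. All structures and names are hypothetical illustrations.
     */
    #include <stdatomic.h>
    #include <stdint.h>

    #define QUEUE_DEPTH 64

    /* Hypothetical ATC command, e.g. an invalidation request. */
    struct atc_cmd {
        uint64_t virt_addr;   /* address whose translation the operation targets */
        uint32_t opcode;      /* hypothetical operation code */
        uint32_t flags;
    };

    /* Per-device command queue resident in shared memory. */
    struct device_queue {
        struct atc_cmd slots[QUEUE_DEPTH];
        _Atomic uint32_t tail;            /* producer index advanced by the core */
    };

    /* Stand-in for the device's doorbell register (an MMIO write in practice). */
    static inline void write_doorbell(volatile uint32_t *doorbell, uint32_t tail)
    {
        *doorbell = tail;   /* the register write informs the ATC of the new command */
    }

    /* Core-side flow: place the command in the shared-memory queue, then notify. */
    void submit_atc_command(struct device_queue *q,
                            volatile uint32_t *doorbell,
                            const struct atc_cmd *cmd)
    {
        uint32_t tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
        q->slots[tail % QUEUE_DEPTH] = *cmd;

        /* Publish the command before the device is told to look for it. */
        atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
        write_doorbell(doorbell, tail + 1);
    }

    int main(void)
    {
        static struct device_queue q;
        static uint32_t fake_doorbell;   /* stands in for a device MMIO register */
        struct atc_cmd cmd = { .virt_addr = 0x1000, .opcode = 1 };

        submit_atc_command(&q, &fake_doorbell, &cmd);
        return 0;
    }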

    SHARED ACCELERATOR MEMORY SYSTEMS AND METHODS

    Publication Number: US20200310993A1

    Publication Date: 2020-10-01

    Application Number: US16370587

    Application Date: 2019-03-29

    Abstract: The present disclosure is directed to systems and methods for sharing memory circuitry between processor memory circuitry and accelerator memory circuitry in each of a plurality of peer-to-peer connected accelerator units. Each of the accelerator units includes physical-to-virtual address translation circuitry and migration circuitry. The physical-to-virtual address translation circuitry in each accelerator unit includes pages for each of at least some of the plurality of accelerator units. The migration circuitry causes the transfer of data between the processor memory circuitry and the accelerator memory circuitry in each of the plurality of accelerator units. The migration circuitry migrates and evicts data to/from accelerator memory circuitry based on statistical information associated with accesses to at least one of: processor memory circuitry or accelerator memory circuitry in one or more peer accelerator units. Thus, the processor memory circuitry and accelerator memory circuitry may be dynamically allocated to advantageously minimize system latency attributable to data access operations.
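
    The abstract describes migration circuitry that moves pages between processor memory and accelerator memory based on statistics about how the pages are accessed. The C sketch below illustrates one possible counter-based policy of that kind; the thresholds, data structures, and names (note_access, rebalance, migrate_to_accelerator) are assumptions for illustration and are not taken from the patent.

    /*
     * Minimal sketch of an access-statistics-driven migration policy.
     * Thresholds, structures, and transfer routines are hypothetical.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_PAGES      1024
    #define HOT_THRESHOLD  16   /* accesses per epoch before a page is promoted */
    #define COLD_THRESHOLD 2    /* accesses per epoch below which a page is evicted */

    enum page_location { IN_PROCESSOR_MEM, IN_ACCELERATOR_MEM };

    struct page_stats {
        uint32_t accesses_this_epoch;   /* accesses seen since the last rebalance */
        enum page_location location;
    };

    static struct page_stats pages[NUM_PAGES];

    /* Placeholder transfer routines standing in for the migration circuitry. */
    static void migrate_to_accelerator(size_t page) { pages[page].location = IN_ACCELERATOR_MEM; }
    static void evict_to_processor(size_t page)     { pages[page].location = IN_PROCESSOR_MEM; }

    /* Record one access; the statistics drive later migration decisions. */
    static void note_access(size_t page)
    {
        pages[page].accesses_this_epoch++;
    }

    /* Periodic scan: promote hot pages, evict cold ones, reset the counters. */
    static void rebalance(void)
    {
        for (size_t i = 0; i < NUM_PAGES; i++) {
            if (pages[i].location == IN_PROCESSOR_MEM &&
                pages[i].accesses_this_epoch >= HOT_THRESHOLD)
                migrate_to_accelerator(i);
            else if (pages[i].location == IN_ACCELERATOR_MEM &&
                     pages[i].accesses_this_epoch < COLD_THRESHOLD)
                evict_to_processor(i);
            pages[i].accesses_this_epoch = 0;
        }
    }

    int main(void)
    {
        for (int i = 0; i < HOT_THRESHOLD; i++)   /* make page 7 look hot */
            note_access(7);
        rebalance();
        printf("page 7 now in %s memory\n",
               pages[7].location == IN_ACCELERATOR_MEM ? "accelerator" : "processor");
        return 0;
    }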
