Look-ahead teleportation for reliable computation in multi-SIMD quantum processor

    Publication No.: US12079634B2

    Publication date: 2024-09-03

    Application No.: US16794124

    Filing date: 2020-02-18

    IPC classes: G06F9/38 G06F8/41 G06N10/00

    CPC classes: G06F9/3887 G06F8/41 G06N10/00

    Abstract: A technique for processing qubits in a quantum computing device is provided. The technique includes: determining that, in a first cycle, a first quantum processing region is to perform a first quantum operation that does not use a qubit stored in that region; identifying a second quantum processing region that is to perform, at a second cycle later than the first cycle, a second quantum operation that uses the qubit; determining that no quantum operations are performed in the second quantum processing region between the first cycle and the second cycle; and moving the qubit from the first quantum processing region to the second quantum processing region.
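
    The abstract describes a look-ahead scheduling check: a qubit is teleported early to a region that will need it later, provided the current operation does not use it and the destination region is idle in between. The Python sketch below is only an illustration of that check under assumed names (Operation, find_teleport_opportunity, region labels "R1"/"R2"); it is not taken from the patent or any real toolchain.

```python
# Illustrative sketch of the look-ahead teleportation check described in the
# abstract. All names here are hypothetical, not the patent's own.
from dataclasses import dataclass


@dataclass(frozen=True)
class Operation:
    cycle: int         # cycle in which the operation executes
    region: str        # quantum processing (SIMD) region it runs in
    qubits: frozenset  # qubits the operation uses


def find_teleport_opportunity(schedule, qubit, current_region, current_cycle):
    """Return (target_region, cycle) if `qubit` can be moved ahead of time.

    Mirrors the three conditions in the abstract:
      1. The operation in `current_region` at `current_cycle` does not use `qubit`.
      2. A later operation in another region uses `qubit`.
      3. That other region performs no operations between the two cycles.
    """
    # Condition 1: the qubit is not needed in its current region this cycle.
    if any(qubit in op.qubits
           for op in schedule
           if op.cycle == current_cycle and op.region == current_region):
        return None

    # Condition 2: earliest future operation in another region that needs the qubit.
    future = sorted((op for op in schedule
                     if op.cycle > current_cycle
                     and op.region != current_region
                     and qubit in op.qubits),
                    key=lambda op: op.cycle)
    if not future:
        return None
    target = future[0]

    # Condition 3: the target region must be idle between the two cycles.
    if any(op.region == target.region and current_cycle < op.cycle < target.cycle
           for op in schedule):
        return None

    return target.region, target.cycle


if __name__ == "__main__":
    schedule = [
        Operation(cycle=1, region="R1", qubits=frozenset({"q0"})),  # R1 busy, but not with q1
        Operation(cycle=3, region="R2", qubits=frozenset({"q1"})),  # R2 needs q1 later
    ]
    # q1 currently sits in R1; it can be teleported to R2 before cycle 3.
    print(find_teleport_opportunity(schedule, "q1", "R1", 1))       # -> ('R2', 3)
```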

    Method and arrangement for handling memory access for a TCF-aware processor

    Publication No.: US12056495B2

    Publication date: 2024-08-06

    Application No.: US17415890

    Filing date: 2019-12-20

    IPC classes: G06F9/38 G06F9/52 G06F9/54

    Abstract: An arrangement for handling shared data memory access for a TCF-aware processor. The arrangement comprises at least a flexible latency handling unit (601) comprising local memory (602) and related control logic, the local memory being provided for storing data related to shared data memory accesses. The arrangement is configured to receive at least one TCF comprising at least one instruction, the at least one instruction being associated with at least one fiber. The flexible latency handling unit is configured to: determine whether the at least one instruction requires shared data memory access; if it does, send a shared data memory access request via the flexible latency handling unit; observe, essentially continuously, whether a reply to the request has been received; suspend continued execution of the instruction until the reply is received; and continue execution of the instruction after the reply is received, so that the delay associated with the shared data memory access is determined dynamically by the access latency actually required.
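
    The abstract describes a unit that issues a shared-memory request, keeps checking for the reply, suspends the instruction only while the reply is outstanding, and resumes as soon as the reply arrives, so the stall equals the access latency actually incurred. The Python sketch below is a small cycle-driven simulation of that idea under assumed names (SharedMemory, FlexibleLatencyUnit); it is not the patent's design and does not model TCFs or multiple fibers.

```python
# Cycle-driven sketch of flexible-latency shared-memory access handling.
# SharedMemory and FlexibleLatencyUnit are hypothetical names for illustration.
import random
from collections import deque


class SharedMemory:
    """Shared data memory whose access latency varies per request."""

    def __init__(self):
        self._pending = deque()  # entries of (ready_cycle, address, value)

    def request(self, current_cycle, address):
        latency = random.randint(2, 10)              # actual latency differs per access
        self._pending.append((current_cycle + latency, address, address * 10))

    def poll(self, current_cycle):
        """Return a reply if one is ready this cycle, otherwise None."""
        if self._pending and self._pending[0][0] <= current_cycle:
            _, address, value = self._pending.popleft()
            return address, value
        return None


class FlexibleLatencyUnit:
    """Suspends an instruction only as long as its memory access actually takes."""

    def __init__(self, memory):
        self.memory = memory
        self.waiting = None        # address the suspended instruction is waiting on

    def execute(self, cycle, instruction):
        needs_memory, address = instruction
        if needs_memory and self.waiting is None:
            self.memory.request(cycle, address)      # send the shared-memory request
            self.waiting = address
        if self.waiting is not None:
            reply = self.memory.poll(cycle)          # observe for a reply each cycle
            if reply is None:
                return False                         # still suspended
            self.waiting = None
            print(f"cycle {cycle}: reply for addr {reply[0]} -> {reply[1]}")
        return True                                  # instruction completed


if __name__ == "__main__":
    unit = FlexibleLatencyUnit(SharedMemory())
    instruction = (True, 7)                          # needs shared memory, address 7
    cycle = 0
    while not unit.execute(cycle, instruction):      # delay set by the actual latency
        cycle += 1
```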