HALO TRANSFER FOR CONVOLUTION WORKLOAD PARTITION

    Publication number: US20230116629A1

    Publication date: 2023-04-13

    Application number: US18046256

    Filing date: 2022-10-13

    Abstract: A DNN accelerator includes multiple compute tiles for sharing a workload of running a convolution. A halo pipeline in a compute tile can facilitate replication of halo data from the compute tile where the halo data is generated to another compute tile. The halo pipeline may receive a memory transaction for writing a data block. The halo pipeline may determine that the data block falls into a halo region in an input tensor of the convolution. The halo pipeline may generate a remote address for storing the data block in a memory of the other compute tile, e.g., based on a local address of the data block in a memory of the compute tile. The halo pipeline may adjust the remote address, e.g., based on a difference in dimensions of a tensor to be used by the compute tile and a tensor to be used by the other compute tile.
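
    The address translation the abstract describes can be sketched as follows. This is a hypothetical illustration, not the patent's actual hardware logic; the halo-region bounds, stride parameters, and function names are all assumptions.

```python
# Hypothetical sketch of halo detection and remote-address generation.
# All names and parameters are illustrative, not from the patent.

def in_halo_region(local_addr, halo_start, halo_end):
    """Check whether a written data block falls inside the halo region."""
    return halo_start <= local_addr < halo_end

def remote_address(local_addr, local_row_stride, remote_row_stride, remote_base):
    """Translate a local address into the neighbour tile's address space,
    adjusting for a difference in row stride (tensor dimensions) between
    the two tiles' tensors."""
    row, col = divmod(local_addr, local_row_stride)
    return remote_base + row * remote_row_stride + col
```

    Under these assumptions, a block at local offset 9 in a tile whose tensor rows are 8 elements wide lands at offset base + 11 in a neighbour whose rows are 10 elements wide.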

    METHODS AND APPARATUS FOR PERFORMING A MACHINE LEARNING OPERATION USING STORAGE ELEMENT POINTERS

    Publication number: US20220108135A1

    Publication date: 2022-04-07

    Application number: US17554970

    Filing date: 2021-12-17

    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for performing a machine learning operation using storage element pointers. An example computer readable medium comprises instructions that, when executed, cause at least one processor to, in response to a determination that a machine learning operation is to be performed, create first and second storage element pointers based on a type of machine learning operation to be performed, remap input tensor data of an input tensor based on the first storage element pointer without movement of the input tensor data in memory, cause execution of the machine learning operation with the remapped input tensor data to create intermediate tensor data, remap the intermediate tensor data based on the second storage element pointer without movement of the intermediate tensor data in memory, and provide the remapped intermediate tensor data as an output tensor.
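
    The key idea, remapping tensor data without moving it in memory, resembles a strided view: the flat buffer stays put and only a pointer table changes. A minimal sketch, with invented names, under the assumption that a storage element pointer behaves like a flat offset:

```python
# Illustrative sketch: remapping a tensor via pointer tables instead of
# copying. The flat buffer never moves; only the pointers change.

def make_pointers(shape, strides):
    """Build storage-element pointers (flat offsets) for a logical layout."""
    rows, cols = shape
    row_stride, col_stride = strides
    return [[r * row_stride + c * col_stride for c in range(cols)]
            for r in range(rows)]

buffer = list(range(6))                     # data stays in place in "memory"
view = make_pointers((2, 3), (3, 1))        # row-major 2x3 view of the buffer
transposed = make_pointers((3, 2), (1, 3))  # transposed view, same buffer
```

    Reading through `transposed` yields the transpose of the 2x3 view even though `buffer` was never rearranged.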

    SYSTEMS, APPARATUS, AND METHODS TO DEBUG ACCELERATOR HARDWARE

    Publication number: US20220012164A1

    Publication date: 2022-01-13

    Application number: US17483521

    Filing date: 2021-09-23

    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to debug a hardware accelerator, such as a neural network accelerator, for executing Artificial Intelligence computational workloads. An example apparatus includes a core with a core input and a core output to execute executable code based on a machine-learning model to generate a data output based on a data input, and debug circuitry coupled to the core. The debug circuitry is configured to detect a breakpoint associated with the machine-learning model and compile executable code based on at least one of the machine-learning model or the breakpoint. In response to the triggering of the breakpoint, the debug circuitry is to stop the execution of the executable code and output data such as the data input, the data output, and the breakpoint for debugging the hardware accelerator.
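
    The breakpoint flow can be pictured as: run the compiled steps, and when the breakpoint triggers, stop and surface the input/output state. A hedged sketch; the step list, layer names, and returned fields are invented for illustration:

```python
# Hypothetical sketch of the debug flow: execute model steps until the
# breakpoint layer triggers, then stop and expose data for debugging.

def run_with_breakpoint(steps, breakpoint_at, data):
    """steps: list of (layer_name, fn) pairs modelling compiled code."""
    for name, fn in steps:
        out = fn(data)
        if name == breakpoint_at:
            # Stop execution; output the data input, data output, breakpoint.
            return {"input": data, "output": out, "breakpoint": name}
        data = out
    return {"input": None, "output": data, "breakpoint": None}
```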

    METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO INCREASE DATA REUSE FOR MULTIPLY AND ACCUMULATE (MAC) OPERATIONS

    Publication number: US20220012058A1

    Publication date: 2022-01-13

    Application number: US17484780

    Filing date: 2021-09-24

    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed that increase data reuse for multiply and accumulate (MAC) operations. An example apparatus includes a MAC circuit to process a first context of a set of a first type of contexts stored in a first buffer and a first context of a set of a second type of contexts stored in a second buffer. The example apparatus also includes control logic circuitry to, in response to determining that there is an additional context of the second type to be processed in the set of the second type of contexts, maintain the first context of the first type in the first buffer. The control logic circuitry is also to, in response to determining that there is an additional context of the first type to be processed in the set of the first type of contexts, maintain the first context of the second type in the second buffer and iterate a pointer of the second buffer from a first position to a next position in the second buffer.
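
    The reuse pattern amounts to holding one context resident in its buffer while a pointer walks every context in the other buffer, so data is fetched once per set rather than once per MAC. A minimal sketch under the assumption that the first-type contexts are activation blocks and the second-type contexts are weight blocks (the patent does not fix these roles):

```python
# Illustrative sketch of the buffered-reuse loop: 'a' stays resident in the
# first buffer while a pointer iterates the second buffer's contexts.
# The MAC operation is modelled as a dot product.

def mac_with_reuse(first_contexts, second_contexts):
    results = []
    for a in first_contexts:          # context of the first type, held in buffer 1
        for w in second_contexts:     # pointer iterates contexts in buffer 2
            results.append(sum(x * y for x, y in zip(a, w)))
    return results
```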

    Methods, apparatus, and articles of manufacture to increase data reuse for multiply and accumulate (MAC) operations

    Publication number: US12169643B2

    Publication date: 2024-12-17

    Application number: US18465560

    Filing date: 2023-09-12

    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed that increase data reuse for multiply and accumulate (MAC) operations. An example apparatus includes a MAC circuit to process a first context of a set of a first type of contexts stored in a first buffer and a first context of a set of a second type of contexts stored in a second buffer. The example apparatus also includes control logic circuitry to, in response to determining that there is an additional context of the second type to be processed in the set of the second type of contexts, maintain the first context of the first type in the first buffer. The control logic circuitry is also to, in response to determining that there is an additional context of the first type to be processed in the set of the first type of contexts, maintain the first context of the second type in the second buffer and iterate a pointer of the second buffer from a first position to a next position in the second buffer.

    METHODS AND APPARATUS FOR SPARSE TENSOR STORAGE FOR NEURAL NETWORK ACCELERATORS

    Publication number: US20240134786A1

    Publication date: 2024-04-25

    Application number: US18539955

    Filing date: 2023-12-14

    CPC classification number: G06F12/0207 G06F12/0292 G06N3/10

    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for sparse tensor storage for neural network accelerators. An example apparatus includes sparsity map generating circuitry to generate a sparsity map corresponding to a tensor, the sparsity map to indicate whether a data point of the tensor is zero, static storage controlling circuitry to divide the tensor into one or more storage elements, and a compressor to perform a first compression of the one or more storage elements to generate one or more compressed storage elements, the first compression to remove zero points of the one or more storage elements based on the sparsity map and perform a second compression of the one or more compressed storage elements, the second compression to store the one or more compressed storage elements contiguously in memory.
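
    The two-stage compression can be sketched end to end: a sparsity bitmap marks which points are nonzero, the first stage drops the zeros from each storage element, and the second stage packs the compressed elements contiguously. This is an assumed, simplified model; the storage-element size and list representations are illustrative:

```python
# Hedged sketch of sparsity-map generation plus two-stage compression.
# A real accelerator would operate on bit-packed maps and byte buffers;
# plain Python lists stand in for both here.

def compress_tensor(tensor, element_size):
    # Divide the flat tensor into storage elements.
    elements = [tensor[i:i + element_size]
                for i in range(0, len(tensor), element_size)]
    # Sparsity map: 1 where a data point is nonzero, 0 where it is zero.
    sparsity_map = [[1 if v != 0 else 0 for v in e] for e in elements]
    # First compression: remove zero points from each storage element.
    compressed = [[v for v in e if v != 0] for e in elements]
    # Second compression: store the compressed elements contiguously.
    packed = [v for e in compressed for v in e]
    return sparsity_map, packed
```

    Decompression would walk the sparsity map, consuming one packed value per set bit and re-inserting a zero per cleared bit.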

    METHODS AND APPARATUS FOR SPARSE TENSOR STORAGE FOR NEURAL NETWORK ACCELERATORS

    Publication number: US20210406164A1

    Publication date: 2021-12-30

    Application number: US17359217

    Filing date: 2021-06-25

    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for sparse tensor storage for neural network accelerators. An example apparatus includes sparsity map generating circuitry to generate a sparsity map corresponding to a tensor, the sparsity map to indicate whether a data point of the tensor is zero, static storage controlling circuitry to divide the tensor into one or more storage elements, and a compressor to perform a first compression of the one or more storage elements to generate one or more compressed storage elements, the first compression to remove zero points of the one or more storage elements based on the sparsity map and perform a second compression of the one or more compressed storage elements, the second compression to store the one or more compressed storage elements contiguously in memory.

    GRAPH ORCHESTRATOR FOR NEURAL NETWORK EXECUTION

    Publication number: US20240354162A1

    Publication date: 2024-10-24

    Application number: US18756006

    Filing date: 2024-06-27

    CPC classification number: G06F9/5027

    Abstract: A barrier may be inserted into a graph representing workloads in an execution of a neural network and placed between a producing workload performed by a producer and a consuming workload performed by a consumer. The consuming workload is to be performed using data generated from the producing workload. A graph orchestrator may modify status information of the barrier in response to receiving a message from the producer. The status information indicates whether one or more producing workloads associated with the barrier are complete. The message indicates that the producing workload is complete. The graph orchestrator may determine whether the one or more producing workloads are complete based on the modified status information. In response to determining that the one or more producing workloads are complete, the graph orchestrator may provide a barrier lift message to the consumer. The barrier lift message causes the consumer to start the consuming workload.
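
    The barrier bookkeeping described above can be modelled as a counter: each producer-complete message decrements it, and the barrier-lift message is sent when it reaches zero. A minimal sketch; the class and method names are illustrative, not from the patent:

```python
# Hypothetical sketch of the orchestrator's barrier status tracking.

class Barrier:
    def __init__(self, num_producers):
        # Status information: producing workloads not yet complete.
        self.remaining = num_producers

    def producer_done(self):
        """Handle a 'producing workload complete' message from a producer.
        Returns True when all producers are done, i.e. when the orchestrator
        should send the barrier-lift message to the consumer."""
        self.remaining -= 1
        return self.remaining == 0
```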

    SYSTEMS, APPARATUS, AND METHODS TO DEBUG ACCELERATOR HARDWARE

    Publication number: US20240118992A1

    Publication date: 2024-04-11

    Application number: US18487490

    Filing date: 2023-10-16

    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to debug a hardware accelerator, such as a neural network accelerator, for executing Artificial Intelligence computational workloads. An example apparatus includes a core with a core input and a core output to execute executable code based on a machine-learning model to generate a data output based on a data input, and debug circuitry coupled to the core. The debug circuitry is configured to detect a breakpoint associated with the machine-learning model and compile executable code based on at least one of the machine-learning model or the breakpoint. In response to the triggering of the breakpoint, the debug circuitry is to stop the execution of the executable code and output data such as the data input, the data output, and the breakpoint for debugging the hardware accelerator.
