Spiking neural network accelerator using external memory

    Publication (Announcement) Number: US11593623B2

    Publication (Announcement) Date: 2023-02-28

    Application Number: US15853282

    Application Date: 2017-12-22

    Abstract: System configurations and techniques for implementation of a neural network in neuromorphic hardware with use of external memory resources are described herein. In an example, a system for processing spiking neural network operations includes: a plurality of neural processor clusters to maintain neurons of the neural network, with the clusters including circuitry to determine respective states of the neurons and internal memory to store the respective states of the neurons; and a plurality of axon processors to process synapse data of synapses in the neural network, with the processors including circuitry to retrieve synapse data of respective synapses from external memory, evaluate the synapse data based on a received spike message, and propagate another spike message to another neuron based on the synapse data. Further details for use and access of the external memory and processing configurations for such neural network operations are also disclosed.
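
    The abstract describes a split in which neuron state stays in on-chip memory inside the neural processor clusters, while per-synapse data is fetched from external memory by axon processors whenever a spike arrives. A minimal Python sketch of that data flow follows; the class names, threshold value, and dictionary-based "external memory" are hypothetical illustrations, not details taken from the patent.

    THRESHOLD = 1.0  # illustrative firing threshold

    class NeuralProcessorCluster:
        """Keeps neuron state (membrane potentials) in fast internal memory."""
        def __init__(self, num_neurons):
            self.potential = [0.0] * num_neurons      # on-chip neuron state

        def integrate(self, neuron_id, weight):
            """Accumulate an incoming weighted spike; return True if the neuron fires."""
            self.potential[neuron_id] += weight
            if self.potential[neuron_id] >= THRESHOLD:
                self.potential[neuron_id] = 0.0       # reset after firing
                return True
            return False

    class AxonProcessor:
        """Fetches synapse data for a spiking neuron from slower external memory."""
        def __init__(self, external_memory):
            # external_memory maps source neuron id -> list of (target id, weight)
            self.external_memory = external_memory

        def process_spike(self, spike_source, cluster):
            out_spikes = []
            for target, weight in self.external_memory.get(spike_source, []):
                if cluster.integrate(target, weight):
                    out_spikes.append(target)         # propagate a new spike message
            return out_spikes

    # Usage: neuron 0 fans out to neurons 1 and 2; only neuron 1 crosses threshold.
    synapses = {0: [(1, 1.2), (2, 0.4)]}
    cluster = NeuralProcessorCluster(num_neurons=3)
    axon = AxonProcessor(synapses)
    print(axon.process_spike(0, cluster))             # -> [1]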

    In-memory spiking neural networks for memory array architectures

    Publication (Announcement) Number: US11354568B2

    Publication (Announcement) Date: 2022-06-07

    Application Number: US15639997

    Application Date: 2017-06-30

    Abstract: Systems, apparatuses and methods may provide for a chip that includes a memory array having a plurality of rows corresponding to neurons in a spiking neural network (SNN) and a row decoder coupled to the memory array, wherein the row decoder activates a row in the memory array in response to a pre-synaptic spike in a neuron associated with the row. Additionally, the chip may include a sense amplifier coupled to the memory array, wherein the sense amplifier determines post-synaptic information corresponding to the activated row. In one example, the chip includes a processor to determine a state of a plurality of neurons in the SNN based at least in part on the post-synaptic information and conduct a memory array update, via the sense amplifier, of one or more synaptic weights in the memory array based on the state of the plurality of neurons.
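
    The row-per-neuron organization can be mimicked in software: activating a row on a pre-synaptic spike corresponds to indexing one row of a weight matrix, the sense-amplifier read-out corresponds to returning that row, and the weight update writes back into the same row. The sketch below is an assumption-laden toy model (NumPy arrays, a simple Hebbian-style update, made-up names), not the hardware mechanism claimed in the patent.

    import numpy as np

    class InMemorySNN:
        def __init__(self, num_neurons, threshold=1.0, learn_rate=0.05):
            # One row per pre-synaptic neuron; column j holds the weight onto neuron j.
            self.weights = np.random.uniform(0.0, 0.5, (num_neurons, num_neurons))
            self.potential = np.zeros(num_neurons)
            self.threshold = threshold
            self.learn_rate = learn_rate

        def pre_synaptic_spike(self, neuron_id):
            row = self.weights[neuron_id]        # "row decoder" activation + "sense amplifier" read-out
            self.potential += row                # processor integrates post-synaptic information
            fired = self.potential >= self.threshold
            self.potential[fired] = 0.0          # reset neurons that crossed threshold
            # Memory array update: strengthen synapses onto neurons that fired
            # together with this pre-synaptic spike (simple Hebbian-style rule).
            self.weights[neuron_id, fired] += self.learn_rate
            return np.flatnonzero(fired)         # ids of post-synaptic neurons that spiked

    snn = InMemorySNN(num_neurons=4)
    print(snn.pre_synaptic_spike(0))             # indices of neurons that fired, if any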

    IN-MEMORY SPIKING NEURAL NETWORKS FOR MEMORY ARRAY ARCHITECTURES

    Publication (Announcement) Number: US20190005376A1

    Publication (Announcement) Date: 2019-01-03

    Application Number: US15639997

    Application Date: 2017-06-30

    Abstract: Systems, apparatuses and methods may provide for a chip that includes a memory array having a plurality of rows corresponding to neurons in a spiking neural network (SNN) and a row decoder coupled to the memory array, wherein the row decoder activates a row in the memory array in response to a pre-synaptic spike in a neuron associated with the row. Additionally, the chip may include a sense amplifier coupled to the memory array, wherein the sense amplifier determines post-synaptic information corresponding to the activated row. In one example, the chip includes a processor to determine a state of a plurality of neurons in the SNN based at least in part on the post-synaptic information and conduct a memory array update, via the sense amplifier, of one or more synaptic weights in the memory array based on the state of the plurality of neurons.

    Apparatus and method for a tensor permutation engine

    Publication (Announcement) Number: US11720362B2

    Publication (Announcement) Date: 2023-08-08

    Application Number: US17131424

    Application Date: 2020-12-22

    Inventor: Berkin Akin

    Abstract: An apparatus and method for a tensor permutation engine. The TPE may include a read address generation unit (AGU) to generate a plurality of read addresses for the plurality of tensor data elements in a first storage and a write AGU to generate a plurality of write addresses for the plurality of tensor data elements in the first storage. The TPE may include a shuffle register bank comprising a register to read tensor data elements from the plurality of read addresses generated by the read AGU, a first register bank to receive the tensor data elements, and a shift register to receive a lowest tensor data element from each bank in the first register bank, each tensor data element in the shift register to be written to a write address from the plurality of write addresses generated by the write AGU.
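
    The permutation itself can be understood as two synchronized address streams over flat storage: the read AGU walks the source layout while the write AGU emits the destination addresses of the permuted layout, with the shuffle and shift registers moving elements between the two streams. The Python sketch below shows this for a 2-D transpose; the function names and row-major layout are assumptions for illustration only.

    def read_agu(rows, cols):
        # Walk the source tensor in row-major order.
        for r in range(rows):
            for c in range(cols):
                yield r * cols + c

    def write_agu(rows, cols):
        # Emit the destination addresses of the transposed element order.
        for r in range(rows):
            for c in range(cols):
                yield c * rows + r

    def permute(storage, rows, cols):
        out = [None] * (rows * cols)
        for src, dst in zip(read_agu(rows, cols), write_agu(rows, cols)):
            out[dst] = storage[src]    # shuffle stage: move element src -> dst
        return out

    # 2x3 tensor stored row-major; the result is its 3x2 transpose, also row-major.
    flat = [1, 2, 3, 4, 5, 6]
    print(permute(flat, 2, 3))         # -> [1, 4, 2, 5, 3, 6]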

    NEUROMORPHIC ACCELERATOR MULTITASKING
    Invention Application

    Publication (Announcement) Number: US20190042930A1

    Publication (Announcement) Date: 2019-02-07

    Application Number: US15937486

    Application Date: 2018-03-27

    Abstract: Systems and techniques for neuromorphic accelerator multitasking are described herein. A neuron address translation unit (NATU) may receive a spike message. Here, the spike message includes a physical neuron identifier (PNID) of a neuron causing the spike. The NATU may then translate the PNID into a network identifier (NID) and a local neuron identifier (LNID). The NATU locates synapse data based on the NID and communicates the synapse data and the LNID to an axon processor.
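
    The translation step is essentially a lookup that splits one flat physical neuron id space into per-network id spaces, so several spiking networks can share the same accelerator. The sketch below uses a hypothetical range-based table to show the PNID to (NID, LNID) mapping and the selection of per-network synapse data; the table layout is an assumption, not the patent's mechanism.

    # Each entry: (network id, first PNID owned by that network, neuron count).
    natu_table = [
        (0, 0,   128),   # network 0 owns PNIDs 0..127
        (1, 128, 64),    # network 1 owns PNIDs 128..191
    ]

    synapse_tables = {0: "synapses_net0", 1: "synapses_net1"}  # placeholder synapse data

    def translate(pnid):
        """Map a physical neuron id to (network id, local neuron id)."""
        for nid, base, count in natu_table:
            if base <= pnid < base + count:
                return nid, pnid - base
        raise LookupError(f"PNID {pnid} not mapped to any network")

    def handle_spike(pnid):
        nid, lnid = translate(pnid)
        # The NATU hands the LNID plus the per-network synapse data to an axon
        # processor; here we just show which table would be selected.
        return synapse_tables[nid], lnid

    print(handle_spike(130))   # -> ('synapses_net1', 2)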
