Event driven and time hopping neural network

    Publication Number: US10922607B2

    Publication Date: 2021-02-16

    Application Number: US15394976

    Filing Date: 2016-12-30

    Abstract: In one embodiment, a processor is to store a membrane potential of a neural unit of a neural network; and calculate, at a particular time-step of the neural network, a change to the membrane potential of the neural unit occurring over multiple time-steps that have elapsed since the last time-step at which the membrane potential was updated, wherein each of the multiple time-steps that have elapsed since the last time-step is associated with at least one input to the neural unit that affects the membrane potential of the neural unit.
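    The abstract describes an event-driven, time-hopping update: the membrane potential is not recomputed at every time-step, but only when an input arrives, at which point the effect of all elapsed time-steps is applied in one shot. The short Python sketch below illustrates that idea under assumed details (an exponential leak and the names TimeHoppingNeuron, decay, and threshold are illustrative, not terms from the patent); it is not the patented implementation.

# Minimal sketch of a time-hopping leaky integrate-and-fire update: the decay
# over all elapsed time-steps is applied at once when an input event arrives.
# The leak model and all names here are assumptions for illustration only.
class TimeHoppingNeuron:
    def __init__(self, decay=0.9, threshold=1.0):
        self.potential = 0.0          # stored membrane potential
        self.last_update_step = 0     # last time-step at which the potential was updated
        self.decay = decay            # assumed per-time-step exponential leak factor
        self.threshold = threshold    # spike threshold

    def receive_input(self, step, weight):
        """Apply the change accumulated since the last update, then add the new
        weighted input. Returns True if the neuron spikes."""
        elapsed = step - self.last_update_step
        # One-shot leak over `elapsed` time-steps instead of `elapsed` separate updates.
        self.potential *= self.decay ** elapsed
        self.potential += weight
        self.last_update_step = step
        if self.potential >= self.threshold:
            self.potential = 0.0      # reset after spiking
            return True
        return False


if __name__ == "__main__":
    neuron = TimeHoppingNeuron()
    # Sparse input events: (time-step, synaptic weight).
    for step, w in [(3, 0.6), (10, 0.5), (11, 0.4)]:
        spiked = neuron.receive_input(step, w)
        print(f"step {step}: potential={neuron.potential:.3f}, spiked={spiked}")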

    10. In-memory analog neural cache
    Invention Grant

    Publication Number: US11502696B2

    Publication Date: 2022-11-15

    Application Number: US16160800

    Filing Date: 2018-10-15

    Abstract: Embodiments are directed to systems and methods of implementing an analog neural network using a pipelined SRAM architecture (“PISA”) circuitry disposed in on-chip processor memory circuitry. The on-chip processor memory circuitry may include processor last level cache (LLC) circuitry. One or more physical parameters, such as a stored charge or voltage, may be used to permit the generation of an in-memory analog output using a SRAM array. The generation of an in-memory analog output using only word-line and bit-line capabilities beneficially increases the computational density of the PISA circuit without increasing power requirements. Thus, the systems and methods described herein beneficially leverage the existing capabilities of on-chip SRAM processor memory circuitry to perform a relatively large number of analog vector/tensor calculations associated with execution of a neural network, such as a recurrent neural network, without burdening the processor circuitry and without significant impact to the processor power requirements.
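    The abstract describes computing vector/tensor products directly in an SRAM array, with stored charge or voltage acting as the weights and bit-lines accumulating the analog result. The short Python sketch below only simulates that behavior numerically; the array shape, the noise term, and the name analog_matvec are illustrative assumptions, not details from the patent.

# Numerical sketch of in-memory analog vector-matrix multiplication: each input
# drives one word-line, each bit-line (column) accumulates input times stored
# weight, and a small Gaussian term stands in for analog non-idealities.
import random


def analog_matvec(weights, inputs, noise_std=0.01):
    """Simulate bit-line accumulation for one vector-matrix product."""
    rows, cols = len(weights), len(weights[0])
    outputs = []
    for c in range(cols):                        # one bit-line per output element
        bitline = sum(inputs[r] * weights[r][c] for r in range(rows))
        bitline += random.gauss(0.0, noise_std)  # assumed analog noise model
        outputs.append(bitline)
    return outputs


if __name__ == "__main__":
    # 4 word-lines x 3 bit-lines of stored weights, one input per word-line.
    W = [[0.2, -0.1, 0.4],
         [0.5,  0.3, -0.2],
         [-0.3, 0.1, 0.6],
         [0.1,  0.4, 0.2]]
    x = [1.0, 0.5, -1.0, 0.25]
    print(analog_matvec(W, x))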
