RECONFIGURABLE INPUT PRECISION IN-MEMORY COMPUTING

    Publication Number: US20210326110A1

    Publication Date: 2021-10-21

    Application Number: US16850395

    Application Date: 2020-04-16

    Abstract: Technology for reconfigurable input precision in-memory computing is disclosed herein. Reconfigurable input precision allows the bit resolution of input data to be changed to meet the requirements of in-memory computing operations. Voltage sources (that may include DACs) provide voltages that represent input data to memory cell nodes. The resolution of the voltage sources may be reconfigured to change the precision of the input data. In one parallel mode, the number of DACs in a DAC node is used to configure the resolution. In one serial mode, the number of cycles over which a DAC provides voltages is used to configure the resolution. The memory system may include relatively low resolution voltage sources, which avoids the need to have complex high resolution voltage sources (e.g., high resolution DACs). Lower resolution voltage sources can take up less area and/or use less power than higher resolution voltage sources.
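
    The two precision modes can be illustrated with a small behavioral model. The sketch below is an analogue only, not code from the patent: it assumes 1-bit voltage sources, an ideal ohmic memory cell, and invented function names, and shows how the same weighted product is obtained either from several DACs driving the cell in parallel or from one DAC reused over several cycles.

        # Hypothetical model of the two input-precision modes described above:
        # resolution set by the number of DACs (parallel mode) or by the number
        # of cycles a single DAC drives the cell (serial mode). The 1-bit DACs
        # and the ideal I = G * V cell are simplifying assumptions.

        def parallel_mode_output(input_bits, cell_conductance, v_ref=1.0):
            """Each 1-bit DAC in the DAC node drives the cell once; the partial
            currents are combined with binary weights, so len(input_bits) DACs
            give len(input_bits) bits of input precision."""
            total = 0.0
            for k, bit in enumerate(input_bits):          # bit k carries weight 2**k
                v = v_ref if bit else 0.0                 # 1-bit DAC output voltage
                total += (2 ** k) * v * cell_conductance  # weighted cell current
            return total

        def serial_mode_output(input_bits, cell_conductance, v_ref=1.0):
            """A single 1-bit DAC is reused for len(input_bits) cycles; the running
            sum is doubled each cycle (MSB first) to apply the binary weighting."""
            total = 0.0
            for bit in reversed(input_bits):              # present MSB first
                total = 2 * total                         # shift previous partial sums
                v = v_ref if bit else 0.0
                total += v * cell_conductance
            return total

        # Both modes produce the same weighted product for the 4-bit input 0b1011.
        bits = [1, 1, 0, 1]                               # LSB first, value 11
        g = 0.5
        assert parallel_mode_output(bits, g) == serial_mode_output(bits, g)  # 5.5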

    ACCELERATING SPARSE MATRIX MULTIPLICATION IN STORAGE CLASS MEMORY-BASED CONVOLUTIONAL NEURAL NETWORK INFERENCE

    Publication Number: US20210110235A1

    Publication Date: 2021-04-15

    Application Number: US16653346

    Application Date: 2019-10-15

    Abstract: Techniques are presented for accelerating in-memory matrix multiplication operations for convolutional neural network (CNN) inference in which the weights of a filter are stored in the memory of a storage class memory device, such as a ReRAM or phase change memory based device. To improve performance for inference operations when filters exhibit sparsity, a zero column index and a zero row index are introduced to account for columns and rows having all zero weight values. These indices can be saved in a register on the memory device, and when performing a column/row oriented matrix multiplication, if the zero row/column index indicates that the column/row contains all zero weights, the access of the corresponding bit/word line is skipped, since the result will be zero regardless of the input.
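
    A minimal software analogue of the zero-index skipping, using invented names and NumPy arrays in place of the actual bit line/word line hardware: the all-zero columns of the stored weight matrix are recorded once, emulating the on-device register, and the corresponding accesses are simply skipped during the accumulation.

        import numpy as np

        def sparse_aware_mvm(weights, inputs):
            """Column-oriented multiply with a zero-column index: columns whose
            weights are all zero are flagged once (emulating the register on the
            memory device) and skipped, since their contribution is zero anyway."""
            zero_col = np.all(weights == 0, axis=0)   # emulated zero-column index
            out = np.zeros(weights.shape[0])
            for j in range(weights.shape[1]):
                if zero_col[j]:
                    continue                          # skip the bit/word line access
                out += weights[:, j] * inputs[j]
            return out

        W = np.array([[1, 0, 2],
                      [0, 0, 3]])                     # column 1 holds only zero weights
        x = np.array([1.0, 5.0, 2.0])
        print(sparse_aware_mvm(W, x))                 # [5. 6.], column 1 never accessed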

    Multi-resistance MRAM
    Granted Invention Patent

    Publication Number: US10886458B2

    Publication Date: 2021-01-05

    Application Number: US16449876

    Application Date: 2019-06-24

    Abstract: Apparatuses, systems, and methods are disclosed for magnetoresistive random access memory. A magnetic tunnel junction for storing data may include a reference layer, a barrier layer, and a free layer. A barrier layer may be disposed between a reference layer and a free layer. A free layer may include a nucleation region and an arm. A nucleation region may be configured to form a magnetic domain wall. An arm may be narrower than a nucleation region and may extend from the nucleation region. An arm may include a plurality of pinning sites formed at predetermined locations along the arm for pinning a domain wall.
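
    As a rough illustration of why pinning sites yield multiple resistance levels, the toy model below (not from the patent; the resistance values, site count, and linear interpolation are all assumptions) treats the junction resistance as stepping between the fully parallel and fully antiparallel values according to which site the domain wall is pinned at.

        # Toy model: pinning the domain wall at one of several predetermined
        # sites along the arm switches a larger or smaller fraction of the free
        # layer, giving a discrete set of resistances between the parallel (R_P)
        # and antiparallel (R_AP) extremes. All numbers here are invented.

        R_P, R_AP = 2_000.0, 5_000.0      # assumed parallel / antiparallel resistances
        NUM_PINNING_SITES = 4             # assumed number of pinning sites on the arm

        def junction_resistance(pinned_site):
            """Linear-interpolation approximation of the multi-level resistance."""
            frac = pinned_site / NUM_PINNING_SITES
            return R_P + frac * (R_AP - R_P)

        print([junction_resistance(s) for s in range(NUM_PINNING_SITES + 1)])
        # [2000.0, 2750.0, 3500.0, 4250.0, 5000.0] -> five distinguishable states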

    Generating random bitstreams with magnetic tunnel junctions

    Publication Number: US10732933B2

    Publication Date: 2020-08-04

    Application Number: US15976384

    Application Date: 2018-05-10

    Inventor: Won Ho Choi

    Abstract: True random number generation (TRNG) circuits are presented which employ magnetic tunnel junction (MTJ) elements that can change magnetization state probabilistically in response to application of electrical pulses. Some implementations include pulse generators which apply perturbation sequences to the MTJ elements. The MTJ elements responsively produce randomized outputs related to changes in magnetization states. Probability compensators are included which monitor for deviations of the measured probabilities of the randomized outputs from a target probability. The probability compensators make adjustments to the perturbation sequences to influence probabilistic changes in the magnetization states of the MTJ elements and bring the measured probabilities to within a predetermined deviation from the target probability.
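
    The compensation loop can be sketched as a simple feedback model. Everything below is a behavioral stand-in with invented constants: the MTJ is reduced to a Bernoulli source whose switching probability grows with the perturbation amplitude, and the compensator nudges that amplitude whenever the ones-density measured over a window drifts outside a tolerance band around the 0.5 target.

        import random

        TARGET_P = 0.5        # target probability of a 1 in the bitstream
        WINDOW = 1024         # bits measured per compensation window
        STEP = 0.01           # perturbation-amplitude adjustment per window
        TOLERANCE = 0.02      # allowed deviation from the target

        def mtj_switch_probability(amplitude):
            # Toy monotone mapping from perturbation amplitude to switching
            # probability; a real MTJ follows a device-dependent curve.
            return min(max(0.35 + 0.3 * amplitude, 0.0), 1.0)

        amplitude = 0.3
        for _ in range(20):
            p = mtj_switch_probability(amplitude)
            ones = sum(random.random() < p for _ in range(WINDOW))
            measured = ones / WINDOW
            if measured > TARGET_P + TOLERANCE:
                amplitude -= STEP     # too many 1s: weaken the perturbation
            elif measured < TARGET_P - TOLERANCE:
                amplitude += STEP     # too few 1s: strengthen the perturbation

        print(f"settled amplitude {amplitude:.2f}, "
              f"p ~ {mtj_switch_probability(amplitude):.2f}")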

    KERNEL TRANSFORMATION TECHNIQUES TO REDUCE POWER CONSUMPTION OF BINARY INPUT, BINARY WEIGHT IN-MEMORY CONVOLUTIONAL NEURAL NETWORK INFERENCE ENGINE

    Publication Number: US20210192325A1

    Publication Date: 2021-06-24

    Application Number: US16722580

    Application Date: 2019-12-20

    Abstract: Techniques are presented for performing in-memory matrix multiplication operations for binary-input, binary-weight convolutional neural network (CNN) inferencing. The weights of a filter are stored in pairs of memory cells of a storage class memory device, such as a ReRAM or phase change memory based device. To reduce current consumption, the binary valued filters are transformed into ternary valued filters by taking sums and differences of binary valued filter pairs. The zero valued weights of the transformed filters are stored as a pair of high resistance state memory cells, reducing current consumption during convolution. The results of the in-memory multiplications are pair-wise combined to compensate for the filter transformations. To compensate for zero valued weights, a zero weight register stores the number of zero weights along each bit line and is used to initialize the counter values that accumulate the multiplication results.
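
    The filter transformation and the pair-wise recombination can be checked with a few lines of NumPy. The sketch below assumes the usual ±1 encoding of binary weights and inputs and uses invented function names: the sum and difference of a filter pair are ternary, half of their entries are zero (and would map to high-resistance cell pairs), and the two original dot products are recovered exactly by adding and subtracting the two in-memory results.

        import numpy as np

        def transform_filter_pair(w1, w2):
            """Map a pair of {-1, +1} filters to ternary sum/difference filters.
            Wherever w1 and w2 agree the difference filter is 0, and wherever
            they disagree the sum filter is 0, so half the stored weights are
            zero-valued (stored as high-resistance cell pairs on the device)."""
            s = (w1 + w2) // 2
            d = (w1 - w2) // 2
            return s, d

        def recombine(dot_s, dot_d):
            """Pair-wise combination that recovers the two original products."""
            return dot_s + dot_d, dot_s - dot_d

        rng = np.random.default_rng(0)
        x  = rng.choice([-1, 1], size=16)
        w1 = rng.choice([-1, 1], size=16)
        w2 = rng.choice([-1, 1], size=16)

        s, d = transform_filter_pair(w1, w2)
        y1, y2 = recombine(s @ x, d @ x)
        assert y1 == w1 @ x and y2 == w2 @ x   # the transformation is exact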

    METHODS TO TOLERATE PROGRAMMING AND RETENTION ERRORS OF CROSSBAR MEMORY ARRAYS

    Publication Number: US20210117499A1

    Publication Date: 2021-04-22

    Application Number: US16655575

    Application Date: 2019-10-17

    Abstract: Systems and methods are described for reducing the impact of defects within a crossbar memory array when performing multiplication operations in which multiple control lines are concurrently selected. A group of memory cells within the crossbar memory array may be controlled by a local word line, which in turn is driven by a local word line gating unit configured to prevent the local word line from being biased to a selected word line voltage during an operation; the local word line may instead be set to a disabling voltage during the operation so that the memory cell currents through the group of memory cells are eliminated. If a defect has caused a short within one of the memory cells of the group, the local word line gating unit may be programmed to hold the local word line at the disabling voltage during multiplication operations.
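
    A behavioral sketch of the gating, with invented names and voltages: each local word line group carries a programmable disable flag (the defect map would be written during test or programming), and during a multiplication the gating unit holds any flagged group at the disabling voltage so its possibly shorted cells contribute no current.

        V_SELECT, V_DISABLE = 0.3, 0.0     # assumed selected / disabling voltages

        class LocalWordLineGate:
            def __init__(self, num_groups):
                self.disabled = [False] * num_groups   # programmed defect map

            def mark_defective(self, group):
                self.disabled[group] = True            # e.g. a shorted memory cell

            def bias(self, group, global_wl_voltage):
                # A disabled group is held at the disabling voltage even when the
                # corresponding global word line is selected for the multiply.
                return V_DISABLE if self.disabled[group] else global_wl_voltage

        gate = LocalWordLineGate(num_groups=4)
        gate.mark_defective(2)                          # short detected in group 2
        print([gate.bias(g, V_SELECT) for g in range(4)])   # [0.3, 0.3, 0.0, 0.3]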
