-
Publication No.: US12249368B2
Publication Date: 2025-03-11
Application No.: US18645184
Application Date: 2024-04-24
Applicant: Silicon Storage Technology, Inc.
Inventor: Hieu Van Tran , Steven Lemke , Vipin Tiwari , Nhan Do , Mark Reiten
IPC: G11C11/54 , G06N3/045 , G11C16/04 , G11C16/10 , G11C16/14 , H01L29/423 , H01L29/788 , H10B41/30
Abstract: A neural network device with synapses having memory cells each having a floating gate and a first gate over first and second portions of a channel region disposed between source and drain regions, and a second gate over the floating gate or the source region. First lines each electrically connect the first gates in one of the memory cell rows, second lines each electrically connect the second gates in one of the memory cell rows, third lines each electrically connect the source regions in one of the memory cell rows, and fourth lines each electrically connect the drain regions in one of the memory cell columns. The synapses receive a first plurality of inputs as electrical voltages on the fourth lines, and provide a first plurality of outputs as electrical currents on the third lines.
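The wiring above amounts to an analog vector-by-matrix multiplication: input voltages driven on the column (fourth) lines produce cell currents that sum on the shared row (third) lines. A minimal behavioral sketch of that readout in Python (the conductance matrix g, the array dimensions, and the helper name are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def vmm_read(g, v_in):
    """Behavioral model of the array readout described above.

    g    : (rows, cols) matrix of effective cell conductances,
           set by the charge stored on each floating gate.
    v_in : input voltages applied on the column (fourth) lines.

    Each cell contributes roughly g[i, j] * v_in[j]; the shared
    row (third) line of row i sums those contributions.
    """
    g = np.asarray(g, dtype=float)
    v_in = np.asarray(v_in, dtype=float)
    return g @ v_in  # summed output currents, one per row line

# Example: a 3x4 array driven with four input voltages.
g = np.array([[1e-6, 2e-6, 0.5e-6, 1e-6],
              [2e-6, 1e-6, 1e-6,   0.0],
              [0.0,  3e-6, 2e-6,   1e-6]])
v_in = np.array([0.5, 1.0, 0.0, 0.8])
print(vmm_read(g, v_in))  # row-line currents (amps)
```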
-
Publication No.: US11875852B2
Publication Date: 2024-01-16
Application No.: US17140924
Application Date: 2021-01-04
Applicant: Silicon Storage Technology, Inc.
Inventor: Hieu Van Tran , Thuan Vu , Stanley Hong , Stephen Trinh , Anh Ly , Nhan Do , Mark Reiten
CPC classification number: G11C16/08 , G11C11/54 , G11C16/24 , G11C2216/04
Abstract: Numerous embodiments of analog neural memory arrays are disclosed. Certain embodiments comprise an adaptive bias decoder for providing additional bias to array input lines to compensate for instances where ground floats above 0V. This is useful, for example, to minimize the voltage drop for a read, program, or erase operation while maintaining accuracy in the operation.
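A sketch of the compensation idea in the abstract, assuming the decoder simply raises the input-line bias by the sensed ground offset so the voltage seen across the selected cells is preserved (the function and variable names are hypothetical, not from the patent):

```python
def adaptive_bias(nominal_bias, sensed_ground):
    """Illustrative compensation: if the array's local ground floats
    above 0 V, raise the input-line bias by the same amount so the
    effective voltage across the selected cells is unchanged."""
    return nominal_bias + max(sensed_ground, 0.0)

# e.g. a 1.0 V read bias with local ground floating at 60 mV
print(adaptive_bias(1.0, 0.06))  # -> 1.06
```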
-
Publication No.: US11683933B2
Publication Date: 2023-06-20
Application No.: US17121555
Application Date: 2020-12-14
Applicant: Silicon Storage Technology, Inc.
Inventor: Hieu Van Tran , Steven Lemke , Vipin Tiwari , Nhan Do , Mark Reiten
IPC: H01L27/11531 , G06N3/08 , G11C16/04 , H01L29/788
CPC classification number: H01L27/11531 , G06N3/08 , G11C16/0425 , H01L29/7883
Abstract: Numerous embodiments for reading a value stored in a selected memory cell in a vector-by-matrix multiplication (VMM) array in an artificial neural network are disclosed. In one embodiment, an input comprises a set of input bits that result in a series of input pulses applied to a terminal of the selected memory cell, further resulting in a series of output signals that are summed to determine the value stored in the selected memory cell. In another embodiment, an input comprises a set of input bits, where each input bit results in a single pulse or no pulse being applied to a terminal of the selected memory cell, further resulting in a series of output signals which are then weighted according to the binary bit location of the input bit, and where the weighted signals are then summed to determine the value stored in the selected memory cell.
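The second scheme can be modeled directly: each input bit fires either one pulse or no pulse, the per-pulse cell output is weighted by that bit's binary place value, and the weighted outputs are summed. A sketch under the assumption of a linear per-pulse response (the names are illustrative):

```python
def read_cell_value(input_bits, cell_output_per_pulse):
    """Model of the bit-serial read scheme: each input bit applies one
    pulse (bit = 1) or no pulse (bit = 0) to the selected cell.  The
    per-pulse output is weighted by that bit's binary place value, and
    the weighted outputs are summed."""
    total = 0.0
    for position, bit in enumerate(input_bits):  # LSB first
        output = cell_output_per_pulse if bit else 0.0
        total += output * (2 ** position)
    return total

# 4-bit input 0b1011 applied to a cell whose per-pulse output is 2 uA
print(read_cell_value([1, 1, 0, 1], 2e-6))  # 2 uA * (1 + 2 + 8)
```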
-
Publication No.: US10910061B2
Publication Date: 2021-02-02
Application No.: US15990395
Application Date: 2018-05-25
Applicant: Silicon Storage Technology, Inc.
Inventor: Hieu Van Tran , Vipin Tiwari , Nhan Do , Mark Reiten
Abstract: Numerous embodiments of programming systems and methods for use with a vector-by-matrix multiplication (VMM) array in an artificial neural network are disclosed. Selected cells thereby can be programmed with extreme precision to hold one of N different values.
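One way to picture the "one of N different values" target is a simple quantizer that maps a continuous weight onto a discrete programmable level; the level count and weight range below are assumptions for illustration, not values from the patent:

```python
def weight_to_level(weight, n_levels, w_min=-1.0, w_max=1.0):
    """Illustrative mapping of a continuous weight onto one of N
    discrete programmable levels (range and level count are assumed)."""
    step = (w_max - w_min) / (n_levels - 1)
    level = round((weight - w_min) / step)
    return max(0, min(n_levels - 1, level))

print(weight_to_level(0.37, 16))  # -> level 10 of 0..15
```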
-
Publication No.: US20200242453A1
Publication Date: 2020-07-30
Application No.: US16382051
Application Date: 2019-04-11
Applicant: Silicon Storage Technology, Inc.
Inventor: Hieu Van Tran , Steven Lemke , Vipin Tiwari , Nhan Do , Mark Reiten
IPC: G06N3/063 , H01L27/115
Abstract: A neural network device with synapses having memory cells each having source and drain regions in a semiconductor substrate with a channel region extending therebetween, a floating gate over an entirety of the channel region, and a first gate over the floating gate. First lines each electrically connect together the first gates in one of the memory cell rows, second lines each electrically connect together the source regions in one of the memory cell rows, and third lines each electrically connect together the drain regions in one of the memory cell columns. The synapses are configured to receive a first plurality of inputs as electrical voltages on the first lines or on the second lines, and to provide a first plurality of outputs as electrical currents on the third lines.
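Here the inputs are applied on the row (first or second) lines and the outputs are collected on the column (third) lines, so the analog summation runs down each column. A minimal sketch of that orientation (array size and names are illustrative):

```python
import numpy as np

def vmm_read_columns(g, v_in_rows):
    """Behavioral model of the readout above: input voltages are applied
    per row, each cell contributes roughly g[i, j] * v_in_rows[i], and
    the shared drain (third) line of column j sums its column."""
    g = np.asarray(g, dtype=float)
    v_in_rows = np.asarray(v_in_rows, dtype=float)
    return g.T @ v_in_rows  # one summed current per column line

print(vmm_read_columns([[1e-6, 2e-6], [3e-6, 0.5e-6]], [1.0, 0.5]))
```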
-
Publication No.: US20200234758A1
Publication Date: 2020-07-23
Application No.: US16382045
Application Date: 2019-04-11
Applicant: Silicon Storage Technology, Inc.
Inventor: Hieu Van Tran , Steven Lemke , Vipin Tiwari , Nhan Do , Mark Reiten
IPC: G11C11/54 , H01L27/11521 , H01L29/423 , H01L29/788 , G06N3/04 , G11C16/10 , G11C16/14 , G11C16/04
Abstract: A neural network device with synapses having memory cells each having a floating gate and a first gate over first and second portions of a channel region disposed between source and drain regions, and a second gate over the floating gate or the source region. First lines each electrically connect the first gates in one of the memory cell rows, second lines each electrically connect the second gates in one of the memory cell rows, third lines each electrically connect the source regions in one of the memory cell columns, and fourth lines each electrically connect the drain regions in one of the memory cell columns. The synapses receive a first plurality of inputs as electrical voltages on the first or second lines, and provide a first plurality of outputs as electrical currents on the third or fourth lines.
-
Publication No.: US20200119028A1
Publication Date: 2020-04-16
Application No.: US16231231
Application Date: 2018-12-21
Applicant: Silicon Storage Technology, Inc.
Inventor: Hieu Van Tran , Steven Lemke , Vipin Tiwari , Nhan Do , Mark Reiten
IPC: H01L27/11531 , H01L29/788 , G11C16/04 , G06N3/08
Abstract: Numerous embodiments of a precision tuning algorithm and apparatus are disclosed for precisely and quickly depositing the correct amount of charge on the floating gate of a non-volatile memory cell within a vector-by-matrix multiplication (VMM) array in an artificial neural network. Selected cells thereby can be programmed with extreme precision to hold one of N different values.
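A generic program-and-verify loop is one common way such charge tuning is approached; the sketch below is an illustrative stand-in, not the patented tuning algorithm, and the read_cell / program_pulse callables are hypothetical:

```python
def tune_cell(target, read_cell, program_pulse, tolerance=1e-9, max_pulses=1000):
    """Generic program-and-verify loop (illustrative only): apply short
    programming pulses and re-read the cell until its output is within
    tolerance of the target level."""
    for _ in range(max_pulses):
        value = read_cell()
        if abs(value - target) <= tolerance:
            return value
        program_pulse()  # nudge the floating-gate charge toward the target
    raise RuntimeError("cell did not converge to the target level")
```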
-
Publication No.: US10580491B2
Publication Date: 2020-03-03
Application No.: US16015020
Application Date: 2018-06-21
Applicant: Silicon Storage Technology, Inc.
Inventor: Vipin Tiwari , Hieu Van Tran , Nhan Do , Mark Reiten
IPC: G11C5/08 , G11C16/04 , H01L27/11521 , H01L29/788 , H01L29/423
Abstract: A memory device includes rows and columns of memory cells, word lines each connected to a memory cell row, bit lines each connected to a memory cell column, a word line driver connected to the word lines, a bit line driver connected to the bit lines, word line switches each disposed on one of the word lines for selectively connecting one memory cell row to the word line driver, and bit line switches each disposed on one of the bit lines for selectively connecting one memory cell column to the bit line driver. A controller controls the word line switches to connect only some of the rows of memory cells to the word line driver at a first point in time, and controls the bit line switches to connect only some of the columns of memory cells to the bit line driver at a second point in time.
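The switch arrangement can be modeled as per-line enable masks that the controller updates over time, connecting only a subset of rows or columns to the shared drivers at any given moment. A small sketch (sizes and names are illustrative):

```python
def switch_mask(num_lines, selected):
    """Illustrative enable pattern: close only the switches for the
    selected lines, leaving the rest disconnected from the shared driver."""
    mask = [False] * num_lines
    for line in selected:
        mask[line] = True
    return mask

# At a first point in time, connect only word lines 2 and 3 to the driver;
# at a second point in time, connect only bit lines 0 and 5.
word_line_enable = switch_mask(8, [2, 3])
bit_line_enable = switch_mask(8, [0, 5])
```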
-
Publication No.: US20170337466A1
Publication Date: 2017-11-23
Application No.: US15594439
Application Date: 2017-05-12
Inventor: Farnood Merrikh Bayat , Xinjie Guo , Dmitri Strukov , Nhan Do , Hieu Van Tran , Vipin Tiwari , Mark Reiten
CPC classification number: G06N3/04 , G06F3/061 , G06F3/0655 , G06F3/0688 , G06N3/0454 , G06N3/063
Abstract: An artificial neural network device that utilizes one or more non-volatile memory arrays as the synapses. The synapses are configured to receive inputs and to generate therefrom outputs. Neurons are configured to receive the outputs. The synapses include a plurality of memory cells, wherein each of the memory cells includes spaced apart source and drain regions formed in a semiconductor substrate with a channel region extending therebetween, a floating gate disposed over and insulated from a first portion of the channel region and a non-floating gate disposed over and insulated from a second portion of the channel region. Each of the plurality of memory cells is configured to store a weight value corresponding to a number of electrons on the floating gate. The plurality of memory cells are configured to multiply the inputs by the stored weight values to generate the outputs.
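A behavioral sketch of the synapse model in the abstract, under the common flash assumption (not stated in the text above) that more electrons on the floating gate raise the cell's threshold and therefore lower the weight it contributes; the mapping and names are illustrative:

```python
def cell_weight(n_electrons, n_max):
    """Assumed mapping: the stored weight falls as the floating-gate
    electron count rises (more charge -> higher threshold -> less current)."""
    return 1.0 - n_electrons / n_max

def synapse_output(inputs, electron_counts, n_max=1000):
    """Each cell multiplies its input by the weight encoded by its
    floating-gate charge; a neuron receives the summed outputs."""
    return sum(x * cell_weight(n, n_max)
               for x, n in zip(inputs, electron_counts))

print(synapse_output([0.2, 0.9, 0.4], [100, 800, 0]))
```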
-
Publication No.: US12057160B2
Publication Date: 2024-08-06
Application No.: US18123921
Application Date: 2023-03-20
Inventor: Farnood Merrikh Bayat , Xinjie Guo , Dmitri Strukov , Nhan Do , Hieu Van Tran , Vipin Tiwari , Mark Reiten
IPC: G11C11/00 , G06F3/06 , G06N3/04 , G06N3/045 , G06N3/063 , G11C11/54 , G11C16/08 , G11C16/12 , G11C16/16 , G11C16/34 , G11C29/38
CPC classification number: G11C11/54 , G06F3/061 , G06F3/0655 , G06F3/0688 , G06N3/04 , G06N3/045 , G06N3/063 , G11C16/08 , G11C16/12 , G11C16/16 , G11C16/3436 , G11C29/38
Abstract: Numerous examples of summing circuits for a neural network are disclosed. In one example, a circuit for summing current received from a plurality of synapses in a neural network comprises a voltage source; a load coupled between the voltage source and an output node; a voltage clamp coupled to the output node for maintaining a voltage at the output node; and a plurality of synapses coupled between the output node and ground; wherein an output current flows through the output node, the output current equal to a sum of currents drawn by the plurality of synapses.
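Behaviorally, the clamp fixes the output-node voltage, so by Kirchhoff's current law the current supplied through the load equals the total current drawn by the synapses tied to that node. A one-function sketch of that relationship (names are illustrative):

```python
def output_node_current(synapse_currents):
    """Model of the summing circuit: with the output node held at a fixed
    voltage by the clamp, the output current equals the sum of the
    currents drawn by the synapses connected to that node."""
    return sum(synapse_currents)

print(output_node_current([1.2e-6, 0.4e-6, 2.1e-6]))  # -> 3.7e-06 A
```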