-
21.
Publication Number: US11663471B2
Publication Date: 2023-05-30
Application Number: US16912846
Filing Date: 2020-06-26
Applicant: SanDisk Technologies LLC
Inventor: Tung Thanh Hoang , Won Ho Choi , Martin Lueker-Boden
CPC classification number: G06N3/08 , G06F12/0207 , G06F12/0238 , G06F13/1668 , G06N3/04 , G06N3/063
Abstract: Non-volatile memory structures for performing compute-in-memory inferencing for neural networks are presented. To improve performance, both in terms of speed and energy consumption, weight matrices are replaced with their singular value decomposition (SVD) and low-rank approximations (LRAs). The decomposition matrices can be stored in a single array, with the resultant LRA matrices requiring fewer weight values to be stored. The reduced sizes of the LRA matrices allow inferencing to be performed more quickly and with less power. In a high-performance, energy-efficient mode, a reduced rank for the SVD matrices stored on a memory die is determined and used to increase performance and reduce the power needed for an inferencing operation.
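The LRA idea in this abstract can be sketched numerically: replace a weight matrix with truncated SVD factors so that fewer values need to be stored and a matrix-vector product becomes two smaller ones. This is a minimal illustration, assuming an arbitrary 128×128 matrix and an illustrative rank of 16, not the patented memory-array implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 128))   # full weight matrix (illustrative)

# Truncated SVD: keep only the top-r singular components.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 16                         # reduced rank (assumed, as in the "reduced rank" mode)
A = U[:, :r] * s[:r]           # 128 x r factor, singular values folded in
B = Vt[:r, :]                  # r x 128 factor

x = rng.standard_normal(128)
y_full = W @ x                 # original inference step
y_lra = A @ (B @ x)            # two smaller matvecs replace one large one

stored_full = W.size           # weights stored without LRA
stored_lra = A.size + B.size   # weights stored with the two LRA factors
print(stored_full, stored_lra)
```

At rank 16 the two factors hold a quarter of the original weight count, which is the source of the speed and power savings the abstract describes.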
-
22.
Publication Number: US11556616B2
Publication Date: 2023-01-17
Application Number: US16655575
Filing Date: 2019-10-17
Applicant: SanDisk Technologies LLC
Inventor: Minghai Qin , Pi-Feng Chiu , Wen Ma , Won Ho Choi
Abstract: Systems and methods for reducing the impact of defects within a crossbar memory array when performing multiplication operations in which multiple control lines are concurrently selected are described. A group of memory cells within the crossbar memory array may be driven by a local word line, which in turn is controlled by a local word line gating unit configured to prevent the local word line from being biased to a selected word line voltage during an operation; the local word line may instead be set to a disabling voltage during the operation, such that the memory cell currents through the group of memory cells are eliminated. If a defect has caused a short within one of the memory cells of the group, the local word line gating unit may be programmed to hold the local word line at the disabling voltage during multiplication operations.
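A small functional simulation (not the patented circuit) shows the effect of the gating: a crossbar multiply accumulates cell currents I = G·V over concurrently selected word lines, and a row whose gating unit is programmed to "disable" is held at the disabling voltage, so its possibly-shorted cells contribute nothing. The conductance values and row choice are illustrative assumptions.

```python
import numpy as np

# Conductances of a 3x2 crossbar; assume row 1 contains a defective (shorted) cell.
G = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
v_in = np.array([0.5, 1.0, 0.25])         # selected word line voltages

enabled = np.array([True, False, True])   # gating unit disables the defective row
v_eff = np.where(enabled, v_in, 0.0)      # disabled row held at the disabling voltage

bit_line_currents = v_eff @ G             # currents accumulated per bit line
print(bit_line_currents)
```

With row 1 disabled, the result is the dot product over the healthy rows only, so a single short no longer corrupts every multiplication that selects that row.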
-
23.
Publication Number: US11556311B2
Publication Date: 2023-01-17
Application Number: US16850395
Filing Date: 2020-04-16
Applicant: SanDisk Technologies LLC
Inventor: Wen Ma , Pi-Feng Chiu , Won Ho Choi , Martin Lueker-Boden
Abstract: Technology for reconfigurable input precision in-memory computing is disclosed herein. Reconfigurable input precision allows the bit resolution of input data to be changed to meet the requirements of in-memory computing operations. Voltage sources (that may include DACs) provide voltages that represent input data to memory cell nodes. The resolution of the voltage sources may be reconfigured to change the precision of the input data. In one parallel mode, the number of DACs in a DAC node is used to configure the resolution. In one serial mode, the number of cycles over which a DAC provides voltages is used to configure the resolution. The memory system may include relatively low resolution voltage sources, which avoids the need to have complex high resolution voltage sources (e.g., high resolution DACs). Lower resolution voltage sources can take up less area and/or use less power than higher resolution voltage sources.
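The serial mode can be sketched in software: a low-resolution (here 1-bit) voltage source drives the array once per input bit, and the per-cycle results are shifted and summed to reconstruct a higher-precision dot product. The conductances, the 4-bit input width, and the MSB-first ordering are illustrative assumptions, not the patent's specific circuit.

```python
import numpy as np

G = np.array([2.0, 1.0, 3.0])       # column of cell conductances (assumed)
x = np.array([9, 5, 12])            # 4-bit inputs, values 0..15
BITS = 4                            # reconfigured input precision

acc = 0.0
for b in range(BITS - 1, -1, -1):   # one cycle per input bit, MSB first
    v = (x >> b) & 1                # 1-bit "DAC" outputs for this cycle
    acc = 2 * acc + float(v @ G)    # shift previous cycles, add this one

print(acc, float(x @ G))            # serial result equals the full-precision dot product
```

Changing `BITS` (and the loop count) is the software analogue of reconfiguring the input precision: more cycles buy more resolution from the same simple voltage source.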
-
24.
Publication Number: US11361829B2
Publication Date: 2022-06-14
Application Number: US16775639
Filing Date: 2020-01-29
Applicant: SanDisk Technologies LLC
Inventor: Federico Nardi , Won Ho Choi
IPC: G11C16/04 , G11C16/26 , H01L27/11565 , H01L27/11582 , H01L27/11519 , H01L27/11556
Abstract: Systems and methods for performing in-storage logic operations using one or more memory cell transistors and a programmable sense amplifier are described. The logic operations may comprise basic Boolean logic operations (e.g., OR and AND operations) or secondary Boolean logic operations (e.g., XOR and IMP operations). The one or more memory cell transistors may be used for storing user data during a first time period and then used for performing a logic operation during a second time period subsequent to the first time period. During the logic operation, a first memory cell transistor of the one or more memory cell transistors may be programmed with a threshold voltage that corresponds with a first input operand value and then a gate voltage bias may be applied to the first memory cell transistor during the logic operation that corresponds with a second input operand value.
-
25.
Publication Number: US11170290B2
Publication Date: 2021-11-09
Application Number: US16368441
Filing Date: 2019-03-28
Applicant: SanDisk Technologies LLC
Inventor: Tung Thanh Hoang , Won Ho Choi , Martin Lueker-Boden
Abstract: Use of a NAND array architecture to realize a binary neural network (BNN) allows for matrix multiplication and accumulation to be performed within the memory array. A unit synapse for storing a weight of a BNN is stored in a pair of series connected memory cells. A binary input is applied as a pattern of voltage values on a pair of word lines connected to the unit synapse to perform the multiplication of the input with the weight by determining whether or not the unit synapse conducts. The results of such multiplications are determined by a sense amplifier, with the results accumulated by a counter. The arrangement can be extended to ternary inputs to realize a ternary-binary network (TBN) by adding a circuit to detect 0 input values and adjust the accumulated count accordingly.
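The multiply-and-accumulate scheme above reduces, in software terms, to XNOR plus a popcount: with +1/−1 inputs and weights, a unit synapse conducts exactly when input and weight agree, and the counter tallies the conducting synapses. The following sketch assumes that coding; the vectors are illustrative.

```python
def bnn_dot(inputs, weights):
    """+1/-1 dot product via XNOR and a popcount-style counter."""
    assert len(inputs) == len(weights)
    # A synapse "conducts" (XNOR = 1) when input and weight agree.
    conducting = sum(1 for x, w in zip(inputs, weights) if x == w)
    # Map the match count back to the signed sum: matches - mismatches.
    return 2 * conducting - len(inputs)

x = [+1, -1, +1, +1]   # binary input pattern on the word line pairs
w = [+1, +1, -1, +1]   # weights stored in unit synapses
print(bnn_dot(x, w), sum(a * b for a, b in zip(x, w)))
```

The ternary extension in the abstract corresponds to skipping 0-valued inputs and adjusting the count, which is a small change to the counter rather than to the synapses.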
-
26.
Publication Number: US11152067B2
Publication Date: 2021-10-19
Application Number: US16253980
Filing Date: 2019-01-22
Applicant: SanDisk Technologies LLC
Inventor: Won Ho Choi , Jongyeon Kim
Abstract: Ternary content addressable memory (TCAM) circuits are provided herein. In one example implementation, a TCAM circuit can include a first spin-orbit torque (SOT) magnetic tunnel junction (MTJ) element having a pinned layer coupled to a first read transistor controlled by a first search line, and having a spin hall effect (SHE) layer coupled in a first configuration across complemented write inputs. The TCAM circuit can include a second SOT MTJ element having a pinned layer coupled to a second read transistor controlled by a second search line, and having a SHE layer coupled in a second configuration across the complemented write inputs. The TCAM circuit can include a bias transistor configured to provide a bias voltage to drain terminals of the first read transistor and the second read transistor, and a voltage keeper element that couples the drain terminals to a match indicator line.
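Functionally (setting aside the SOT-MTJ circuit details), a TCAM entry stores 0, 1, or "don't care" per bit, and the match line stays asserted only if every non-don't-care stored bit equals the corresponding search bit. A minimal behavioral model, with an assumed 'X' encoding for don't care:

```python
def tcam_match(stored, search):
    """True iff every stored bit is 'X' (don't care) or equals the search bit."""
    return all(s == 'X' or s == q for s, q in zip(stored, search))

entry = ['1', '0', 'X', '1']
print(tcam_match(entry, ['1', '0', '0', '1']),   # 'X' matches either value
      tcam_match(entry, ['1', '1', '0', '1']))   # mismatch at the second bit
```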
-
27.
Publication Number: US10886459B2
Publication Date: 2021-01-05
Application Number: US16449895
Filing Date: 2019-06-24
Applicant: SanDisk Technologies LLC
Inventor: Young-Suk Choi , Won Ho Choi
IPC: G11C7/00 , H01L43/02 , H01L43/08 , H01L27/22 , G11C11/16 , G06N3/063 , G06N3/04 , G11C11/00 , G11C11/54 , G11C11/401
Abstract: Apparatuses, systems, and methods are disclosed for magnetoresistive random access memory. A magnetic tunnel junction (MTJ) for storing data may include a reference layer. A free layer of an MTJ may be separated from a reference layer by a barrier layer. A free layer may be configured such that one or more resistance states for an MTJ correspond to one or more positions of a magnetic domain wall within the free layer. A domain stabilization layer may be coupled to a portion of a free layer, and may be configured to prevent migration of a domain wall into the portion of the free layer.
-
28.
Publication Number: US20200311523A1
Publication Date: 2020-10-01
Application Number: US16368441
Filing Date: 2019-03-28
Applicant: SanDisk Technologies LLC
Inventor: Tung Thanh Hoang , Won Ho Choi , Martin Lueker-Boden
Abstract: Use of a NAND array architecture to realize a binary neural network (BNN) allows for matrix multiplication and accumulation to be performed within the memory array. A unit synapse for storing a weight of a BNN is stored in a pair of series connected memory cells. A binary input is applied as a pattern of voltage values on a pair of word lines connected to the unit synapse to perform the multiplication of the input with the weight by determining whether or not the unit synapse conducts. The results of such multiplications are determined by a sense amplifier, with the results accumulated by a counter. The arrangement can be extended to ternary inputs to realize a ternary-binary network (TBN) by adding a circuit to detect 0 input values and adjust the accumulated count accordingly.
-
29.
Publication Number: US20200035305A1
Publication Date: 2020-01-30
Application Number: US16414143
Filing Date: 2019-05-16
Applicant: SanDisk Technologies LLC
Inventor: Won Ho Choi , Pi-Feng Chiu , Wen Ma , Martin Lueker-Boden
Abstract: Use of a non-volatile memory array architecture to realize a neural network allows for matrix multiplication and accumulation to be performed within the memory array. A unit synapse for storing a weight of a neural network is formed by a differential memory cell of two individual memory cells, such as memory cells having a programmable resistance, each connected between a corresponding one of a word line pair and a shared bit line. An input is applied as a pattern of voltage values on word line pairs connected to the unit synapses to perform the multiplication of the input with the weight by determining a voltage level on the shared bit line. The results of such multiplications are determined by a sense amplifier, with the results accumulated by a summation circuit. The approach can be extended from binary weights to multi-bit weight values by use of multiple differential memory cells for a weight.
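The differential unit synapse can be modeled simply: each weight is a pair of programmable resistances, the input selects which half of the word line pair is driven, and the shared bit line accumulates the conductances of the selected halves. The idealized on/off conductances and the +1/−1 coding are assumptions for illustration.

```python
G_HIGH, G_LOW = 1.0, 0.0   # idealized on/off conductances of the cell pair

def synapse_current(x: int, w: int) -> float:
    """Differential cell: the selected half conducts iff the product x*w is +1."""
    return G_HIGH if x == w else G_LOW

def bit_line_sum(xs, ws):
    """Currents from all unit synapses accumulate on the shared bit line."""
    return sum(synapse_current(x, w) for x, w in zip(xs, ws))

xs = [+1, -1, -1]   # input pattern on the word line pairs
ws = [+1, -1, +1]   # weights stored in the differential cells
print(bit_line_sum(xs, ws))   # counts the +1 products
```

The multi-bit extension the abstract mentions would correspond to summing several such differential cells per weight with binary-weighted contributions.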
-
30.
Publication Number: US20200012924A1
Publication Date: 2020-01-09
Application Number: US16180462
Filing Date: 2018-11-05
Applicant: SanDisk Technologies LLC
Inventor: Wen Ma , Minghai Qin , Won Ho Choi , Pi-Feng Chiu , Martin Van Lueker-Boden
Abstract: Enhanced techniques and circuitry are presented herein for artificial neural networks. These artificial neural networks are formed from artificial neurons, which in the implementations herein comprise a memory array having non-volatile memory elements. Neural connections among the artificial neurons are formed by interconnect circuitry coupled to input control lines and output control lines of the memory array to subdivide the memory array into a plurality of layers of the artificial neural network. Control circuitry is configured to transmit a plurality of iterations of an input value on input control lines of a first layer of the artificial neural network for inference operations by at least one or more additional layers. The control circuitry is also configured to apply an averaging function across output values successively presented on output control lines of a last layer of the artificial neural network from each iteration of the input value.
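The iteration-averaging idea can be sketched with a simulated noisy layer: the same input is presented N times, and the outputs successively presented at the last layer are averaged, suppressing analog read noise. The noise model, layer function, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 3))       # stands in for the last layer's weights

def noisy_layer(x):
    """Idealized analog layer: ideal matvec plus simulated read noise."""
    return W @ x + rng.normal(scale=0.1, size=4)

x = np.array([1.0, -0.5, 2.0])
N = 64                                # iterations of the same input value
avg = np.mean([noisy_layer(x) for _ in range(N)], axis=0)   # averaging function

print(np.max(np.abs(avg - W @ x)))    # averaged output approaches the ideal result
```

Averaging N independent reads shrinks the noise standard deviation by a factor of √N, which is why repeating the input and averaging the output lines is worthwhile despite the extra cycles.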