-
Publication No.: US20190057304A1
Publication Date: 2019-02-21
Application No.: US16160800
Application Date: 2018-10-15
Applicant: Amrita Mathuriya , Sasikanth Manipatruni , Victor Lee , Huseyin Sumbul , Gregory Chen , Raghavan Kumar , Phil Knag , Ram Krishnamurthy , Ian Young , Abhishek Sharma
Inventor: Amrita Mathuriya , Sasikanth Manipatruni , Victor Lee , Huseyin Sumbul , Gregory Chen , Raghavan Kumar , Phil Knag , Ram Krishnamurthy , Ian Young , Abhishek Sharma
Abstract: The present disclosure is directed to systems and methods of implementing an analog neural network using pipelined SRAM architecture (“PISA”) circuitry disposed in on-chip processor memory circuitry. The on-chip processor memory circuitry may include processor last-level cache (LLC) circuitry. One or more physical parameters, such as a stored charge or voltage, may be used to permit the generation of an in-memory analog output using an SRAM array. Generating an in-memory analog output using only word-line and bit-line capabilities beneficially increases the computational density of the PISA circuitry without increasing power requirements. Thus, the systems and methods described herein beneficially leverage the existing capabilities of on-chip SRAM processor memory circuitry to perform a relatively large number of analog vector/tensor calculations associated with execution of a neural network, such as a recurrent neural network, without burdening the processor circuitry and without significant impact on processor power requirements.
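As a rough illustration of the analog in-memory approach described above, the following sketch numerically models a single SRAM bit line accumulating charge for a dot product. The constants (`q_unit`, `c_cell`) and the linear charge model are hypothetical illustration values, not taken from the disclosure.

```python
# Illustrative numerical model (not the patented circuit): each cell on a
# column contributes charge proportional to weight * input, and the shared
# bit line's voltage is the total charge divided by the total capacitance.

def bitline_dot_product(weights, inputs, c_cell=1e-15):
    """Model the bit-line voltage for one column as an analog dot product."""
    q_unit = 1e-15  # hypothetical charge per unit weight-input product (C)
    charge = sum(w * x for w, x in zip(weights, inputs)) * q_unit
    c_total = c_cell * len(weights)  # one sampling capacitance per cell
    return charge / c_total         # volts, proportional to sum(w * x)

v = bitline_dot_product([1, 0, 1, 1], [1, 1, 0, 1])  # dot product = 2
```

The point of the model is that the dot product is read out as a single analog voltage using only word-line and bit-line activity, rather than as a sequence of digital reads.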
-
Publication No.: US20190103156A1
Publication Date: 2019-04-04
Application No.: US16146932
Application Date: 2018-09-28
Applicant: Huseyin Ekin SUMBUL , Gregory K. CHEN , Raghavan KUMAR , Phil Ekin KNAG , Abhishek SHARMA , Sasikanth MANIPATRUNI , Amrita MATHURIYA , Ram A. KRISHNAMURTHY , Ian A. YOUNG
Inventor: Huseyin Ekin SUMBUL , Gregory K. CHEN , Raghavan KUMAR , Phil Ekin KNAG , Abhishek SHARMA , Sasikanth MANIPATRUNI , Amrita MATHURIYA , Ram A. KRISHNAMURTHY , Ian A. YOUNG
IPC: G11C11/419
CPC classification number: G11C11/419 , G11C7/1006 , G11C7/12 , G11C11/412 , G11C11/418 , G11C27/024
Abstract: A full-rail digital-read compute-in-memory (CIM) circuit enables a weighted read operation on a single row of a memory array. A weighted read operation captures the value of a weight stored in the single memory-array row without having to rely on weighted row access. Rather, using full-rail access and a weighted sampling capacitance network, the CIM circuit enables the weighted read operation even under process variation, noise, and mismatch.
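The weighted sampling capacitance network can be illustrated with a simple charge-sharing model: each full-rail bit samples onto a binary-weighted capacitor, and sharing the charge yields a voltage proportional to the stored weight. This is an assumption-laden sketch (ideal binary-weighted capacitors, ideal rail levels), not the circuit claimed in the disclosure.

```python
def weighted_read(bits, vdd=1.0):
    """Model a weighted read of a multi-bit weight stored in one row.

    bits[i] is the i-th bit (LSB first), read full-rail as 0 or vdd and
    sampled onto a capacitor of size 2**i * C_unit. Charge sharing across
    the capacitor network produces a voltage proportional to the weight.
    """
    c_total = sum(2**i for i in range(len(bits)))         # in units of C_unit
    charge = sum(b * 2**i for i, b in enumerate(bits)) * vdd
    return charge / c_total                               # fraction of vdd

# 4-bit weight 0b1011 = 11 maps to 11/15 of vdd
v = weighted_read([1, 1, 0, 1])
```

Because each bit is read full-rail before being weighted by a capacitor ratio, the result depends on capacitor matching rather than on analog row-access strength, which is why the scheme tolerates process variation, noise, and mismatch.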
-
Publication No.: US20180189632A1
Publication Date: 2018-07-05
Application No.: US15395758
Application Date: 2016-12-30
Applicant: Raghavan KUMAR , Gregory K. CHEN , Huseyin Ekin SUMBUL , Ram K. KRISHNAMURTHY , Phil KNAG
Inventor: Raghavan KUMAR , Gregory K. CHEN , Huseyin Ekin SUMBUL , Ram K. KRISHNAMURTHY , Phil KNAG
Abstract: Apparatus and method for a scalable, free-running neuromorphic processor. For example, one embodiment of a neuromorphic processing apparatus comprises: a plurality of neurons; an interconnection network to communicatively couple at least a subset of the plurality of neurons; and a spike controller to stochastically generate a trigger signal, the trigger signal to cause a selected neuron to perform a thresholding operation to determine whether to issue a spike signal.
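A minimal behavioral sketch of the stochastic trigger plus thresholding step described above: per time step, the spike controller randomly selects neurons, and only a triggered neuron compares its potential against the threshold. The trigger probability and threshold values are hypothetical parameters for illustration.

```python
import random

def step(potentials, threshold=1.0, p_trigger=0.3, rng=None):
    """One time step of a free-running network: return indices of neurons
    that both received a stochastic trigger and exceeded the threshold."""
    rng = rng or random.Random(0)
    spikes = []
    for i, v in enumerate(potentials):
        triggered = rng.random() < p_trigger  # stochastic trigger signal
        if triggered and v >= threshold:      # thresholding operation
            spikes.append(i)
    return spikes
```

With `p_trigger=1.0` every neuron is evaluated and the step reduces to plain thresholding; lowering `p_trigger` spreads the thresholding work out in time, which is the behavior a free-running, stochastically triggered design exploits.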
-
Publication No.: US20190056885A1
Publication Date: 2019-02-21
Application No.: US16160482
Application Date: 2018-10-15
Applicant: Amrita Mathuriya , Sasikanth Manipatruni , Victor Lee , Huseyin Sumbul , Gregory Chen , Raghavan Kumar , Phil Knag , Ram Krishnamurthy , Ian Young , Abhishek Sharma
Inventor: Amrita Mathuriya , Sasikanth Manipatruni , Victor Lee , Huseyin Sumbul , Gregory Chen , Raghavan Kumar , Phil Knag , Ram Krishnamurthy , Ian Young , Abhishek Sharma
IPC: G06F3/06 , G06F12/0802 , G06F12/1081 , G06N3/04
Abstract: The present disclosure is directed to systems and methods of implementing a neural network using in-memory, bit-serial, mathematical operations performed by pipelined SRAM architecture (bit-serial PISA) circuitry disposed in on-chip processor memory circuitry. The on-chip processor memory circuitry may include processor last-level cache (LLC) circuitry. The bit-serial PISA circuitry is coupled to PISA memory circuitry via a relatively high-bandwidth connection to beneficially facilitate the storage and retrieval of layer weights by the bit-serial PISA circuitry during execution. Direct memory access (DMA) circuitry transfers the neural network model and input data from system memory to the bit-serial PISA memory and also transfers output data from the PISA memory circuitry to system memory circuitry. Thus, the systems and methods described herein beneficially leverage the on-chip processor memory circuitry to perform a relatively large number of vector/tensor calculations without burdening the processor circuitry.
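The bit-serial style of computation can be illustrated in software: inputs are processed one bit-plane at a time and the partial sums are shift-accumulated. This is a behavioral sketch of the general technique only, not the PISA hardware.

```python
def bit_serial_dot(weights, inputs, n_bits=4):
    """Dot product computed bit-serially over the inputs.

    Each pass extracts one bit-plane of the inputs (LSB first), forms the
    weighted sum of that plane, and adds it shifted by the bit position.
    """
    acc = 0
    for bit in range(n_bits):
        plane = [(x >> bit) & 1 for x in inputs]          # current bit-plane
        partial = sum(w * b for w, b in zip(weights, plane))
        acc += partial << bit                             # shift-accumulate
    return acc

# Equals the ordinary dot product: 2*5 + 3*6 = 28
assert bit_serial_dot([2, 3], [5, 6]) == 28
```

Each pass touches only single-bit operands, which is what lets simple in-memory logic on word lines and bit lines carry out the arithmetic, at the cost of `n_bits` passes per operation.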
-
Publication No.: US20190042159A1
Publication Date: 2019-02-07
Application No.: US16146878
Application Date: 2018-09-28
Applicant: Ian YOUNG , Ram KRISHNAMURTHY , Sasikanth MANIPATRUNI , Amrita MATHURIYA , Abhishek SHARMA , Raghavan KUMAR , Phil KNAG , Huseyin SUMBUL , Gregory CHEN
Inventor: Ian YOUNG , Ram KRISHNAMURTHY , Sasikanth MANIPATRUNI , Amrita MATHURIYA , Abhishek SHARMA , Raghavan KUMAR , Phil KNAG , Huseyin SUMBUL , Gregory CHEN
Abstract: Techniques and mechanisms for a memory device to perform in-memory computing based on a logic state that is detected with a voltage-controlled oscillator (VCO). In an embodiment, a VCO circuit of the memory device receives from a memory array a first signal indicating a logic state that is based on one or more currently stored data bits. The VCO converts the logic state from being indicated by a voltage characteristic of the first signal to being indicated by a corresponding frequency characteristic of a cyclical signal. Based on the frequency characteristic, the logic state is identified and communicated for use in an in-memory computation at the memory device. In another embodiment, a result of the in-memory computation is written back to the memory array.
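A toy numerical model of the voltage-to-frequency conversion described above: a linear VCO maps the bit-line voltage to a frequency, and counting cycles in a fixed window recovers the logic state. The VCO gain (`kvco`), free-running frequency (`f0`), counting window, and cycle threshold are all hypothetical illustration values.

```python
def vco_frequency(v, f0=1e6, kvco=2e6):
    """Linear VCO model: output frequency rises with control voltage (Hz)."""
    return f0 + kvco * v

def detect_logic_state(v, window=1e-6, threshold_cycles=2.0):
    """Infer the logic state from the cycle count in a fixed time window."""
    cycles = vco_frequency(v) * window   # cycles observed in the window
    return 1 if cycles >= threshold_cycles else 0
```

Reading the state as a cycle count rather than as an analog voltage level is what makes the detection digital-friendly: the comparison happens in the frequency domain, where a counter suffices.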
-