-
Publication No.: US11812599B2
Publication Date: 2023-11-07
Application No.: US17670248
Application Date: 2022-02-11
Applicant: Intel Corporation
Inventor: Abhishek Sharma , Noriyuki Sato , Sarah Atanasov , Huseyin Ekin Sumbul , Gregory K. Chen , Phil Knag , Ram Krishnamurthy , Hui Jae Yoo , Van H. Le
IPC: G11C8/00 , H10B12/00 , H01L27/12 , G11C11/4096
CPC classification number: H10B12/00 , G11C11/4096 , H01L27/124 , H01L27/1207 , H01L27/1225 , H01L27/1255 , H01L27/1266
Abstract: Examples herein relate to a memory device comprising an eDRAM memory cell, where the eDRAM memory cell can include a write circuit formed at least partially over a storage cell and a read circuit formed at least partially under the storage cell; a compute-near-memory device bonded to the memory device; a processor; and an interface from the memory device to the processor. In some examples, the circuitry included to provide an output of the memory device that emulates the output read rate of an SRAM memory device comprises one or more of: a controller, a multiplexer, or a register. A surface of the memory device can be bonded to a compute-near-memory device or other circuitry. In some examples, a layer with read circuitry can be bonded to a layer with storage cells. Any layers can be bonded together using techniques described herein.
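A minimal behavioral sketch of the read-rate idea, assuming a register that buffers one wide (slow) eDRAM row fetch and a multiplexer that then streams narrow words out every cycle; the class and field names are illustrative, not from the patent:

```python
# Hypothetical behavioral model: an eDRAM array with a slow, wide read port is
# front-ended by a register and a multiplexer so that narrow words can be streamed
# out each cycle, emulating an SRAM-like read rate.

class EdramReadRateEmulator:
    def __init__(self, edram, words_per_row=4):
        self.edram = edram              # backing store: list of rows, each a list of words
        self.words_per_row = words_per_row
        self.register = []              # holds the last wide row fetched from eDRAM
        self.base_row = None

    def read(self, address):
        row, offset = divmod(address, self.words_per_row)
        if row != self.base_row:        # controller: fetch a full (slow) eDRAM row once
            self.register = self.edram[row]
            self.base_row = row
        return self.register[offset]    # multiplexer: select one word per (fast) cycle


edram = [[r * 10 + c for c in range(4)] for r in range(8)]
port = EdramReadRateEmulator(edram)
print([port.read(a) for a in range(8)])  # sequential reads served mostly from the register
```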
-
Publication No.: US11727260B2
Publication Date: 2023-08-15
Application No.: US17484828
Application Date: 2021-09-24
Applicant: Intel Corporation
Inventor: Abhishek Sharma , Jack T. Kavalieros , Ian A. Young , Ram Krishnamurthy , Sasikanth Manipatruni , Uygar Avci , Gregory K. Chen , Amrita Mathuriya , Raghavan Kumar , Phil Knag , Huseyin Ekin Sumbul , Nazila Haratipour , Van H. Le
IPC: G06N3/063 , H01L27/108 , H01L27/11502 , G06N3/04 , G06F17/16 , H01L27/11 , G11C11/54 , G11C7/10 , G11C11/419 , G11C11/409 , G11C11/22 , G06N3/065 , H10B10/00 , H10B12/00 , H10B53/00
CPC classification number: G06N3/065 , G06F17/16 , G06N3/04 , G11C7/1006 , G11C7/1039 , G11C11/54 , H10B10/18 , H10B12/01 , H10B12/033 , H10B12/20 , H10B12/50 , H10B53/00 , G11C11/221 , G11C11/409 , G11C11/419
Abstract: An apparatus is described. The apparatus includes a compute-in-memory (CIM) circuit for implementing a neural network disposed on a semiconductor chip. The CIM circuit includes a mathematical computation circuit coupled to a memory array. The memory array includes an embedded dynamic random access memory (eDRAM) memory array. Another apparatus is described. The apparatus includes a compute-in-memory (CIM) circuit for implementing a neural network disposed on a semiconductor chip. The CIM circuit includes a mathematical computation circuit coupled to a memory array. The mathematical computation circuit includes a switched capacitor circuit. The switched capacitor circuit includes a back-end-of-line (BEOL) capacitor coupled to a thin film transistor within the metal/dielectric layers of the semiconductor chip. Another apparatus is described. The apparatus includes a compute-in-memory (CIM) circuit for implementing a neural network disposed on a semiconductor chip. The CIM circuit includes a mathematical computation circuit coupled to a memory array. The mathematical computation circuit includes an accumulation circuit. The accumulation circuit includes a ferroelectric BEOL capacitor to store a value to be accumulated with other values stored by other ferroelectric BEOL capacitors.
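A toy numeric illustration of switched-capacitor accumulation by charge sharing, assuming equal-valued capacitors; the voltages below are made up for illustration and are not from the patent:

```python
# Illustrative only: charge sharing across equal capacitors. Each capacitor is first
# charged to a voltage proportional to one partial product; shorting them together
# yields a common voltage proportional to the average, i.e. the accumulated sum
# divided by the number of capacitors.

partial_products = [0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0]    # volts on each capacitor
shared_voltage = sum(partial_products) / len(partial_products)  # voltage after charge sharing
accumulated = shared_voltage * len(partial_products)            # digital back-end rescales
print(shared_voltage, accumulated)                              # 0.625, 5.0
```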
-
Publication No.: US11195079B2
Publication Date: 2021-12-07
Application No.: US15821123
Application Date: 2017-11-22
Applicant: INTEL CORPORATION
Inventor: Huseyin E. Sumbul , Gregory K. Chen , Phil Knag , Raghavan Kumar , Ram K. Krishnamurthy
Abstract: In one embodiment, a processor comprises a first neuro-synaptic core comprising first circuitry to configure the first neuro-synaptic core as a neuron core responsive to a first value specified by a configuration parameter; and configure the first neuro-synaptic core as a synapse core responsive to a second value specified by the configuration parameter.
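A minimal software sketch, assuming a single configuration parameter selects neuron or synapse behavior for the same core; the mode names and operations are illustrative only, not the patent's circuitry:

```python
# Hypothetical sketch: one core reconfigured as either a neuron core or a synapse core
# depending on a single configuration parameter, as the abstract describes.

from enum import Enum

class CoreMode(Enum):
    NEURON = 1
    SYNAPSE = 2

class NeuroSynapticCore:
    def __init__(self, mode: CoreMode):
        self.mode = mode

    def process(self, inputs):
        if self.mode is CoreMode.NEURON:
            # neuron behavior: integrate inputs and threshold
            return 1 if sum(inputs) > 2.0 else 0
        # synapse behavior: scale inputs by (illustrative) fixed weights
        weights = [0.5, 0.25, 0.25]
        return [w * x for w, x in zip(weights, inputs)]

print(NeuroSynapticCore(CoreMode.NEURON).process([1.0, 1.5, 0.25]))
print(NeuroSynapticCore(CoreMode.SYNAPSE).process([1.0, 1.5, 0.25]))
```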
-
Publication No.: US11157799B2
Publication Date: 2021-10-26
Application No.: US16299014
Application Date: 2019-03-11
Applicant: Intel Corporation
Inventor: Huseyin E. Sumbul , Gregory K. Chen , Raghavan Kumar , Phil Christopher Knag , Ram Krishnamurthy
Abstract: A neuromorphic computing system is provided which comprises: a synapse core; and a pre-synaptic neuron, a first post-synaptic neuron, and a second post-synaptic neuron coupled to the synapse core, wherein the synapse core is to: receive a request from the pre-synaptic neuron, and generate, in response to the request, a first address of the first post-synaptic neuron and a second address of the second post-synaptic neuron, wherein the first address and the second address are not stored in the synapse core prior to receiving the request.
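A hedged sketch of on-demand address generation; the affine mapping below is an assumption chosen only to illustrate computing target addresses from the request rather than storing them in the synapse core:

```python
# The generation scheme here is an assumption, not the patent's method: target addresses
# of post-synaptic neurons are computed on demand from the requesting pre-synaptic
# neuron's identity instead of being stored in the synapse core.

def generate_targets(pre_neuron_id, fanout=2, num_neurons=256, stride=17):
    # simple affine mapping used purely for illustration
    return [(pre_neuron_id * stride + k) % num_neurons for k in range(fanout)]

first_addr, second_addr = generate_targets(pre_neuron_id=42)
print(first_addr, second_addr)   # addresses produced only in response to the request
```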
-
Publication No.: US10922607B2
Publication Date: 2021-02-16
Application No.: US15394976
Application Date: 2016-12-30
Applicant: Intel Corporation
Inventor: Abhronil Sengupta , Gregory K. Chen , Raghavan Kumar , Huseyin Ekin Sumbul , Phil Knag
Abstract: In one embodiment, a processor is to store a membrane potential of a neural unit of a neural network; and calculate, at a particular time-step of the neural network, a change to the membrane potential of the neural unit occurring over multiple time-steps that have elapsed since the last time-step at which the membrane potential was updated, wherein each of the multiple time-steps that have elapsed since the last time-step is associated with at least one input to the neural unit that affects the membrane potential of the neural unit.
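A minimal sketch of the batched update, assuming a leaky integrate-and-fire style exponential decay; the decay model and constants are assumptions, since the abstract only states that the change over all elapsed time-steps is computed at once rather than every step:

```python
# Sketch: the membrane potential is updated lazily. Decay over the skipped time-steps
# is applied in one shot at each input event, and again up to the current time-step.

def update_membrane(potential, last_update_t, now_t, events, leak=0.9):
    # events: list of (time_step, input_current) received since last_update_t
    for t, current in sorted(events):
        potential *= leak ** (t - last_update_t)   # decay over the skipped steps
        potential += current                        # apply the input at its time-step
        last_update_t = t
    potential *= leak ** (now_t - last_update_t)    # decay up to the current time-step
    return potential, now_t

v, t = update_membrane(potential=1.0, last_update_t=0, now_t=10,
                       events=[(3, 0.5), (7, 0.2)])
print(v, t)
```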
-
Publication No.: US10831446B2
Publication Date: 2020-11-10
Application No.: US16145569
Application Date: 2018-09-28
Applicant: Intel Corporation
Inventor: Gregory K. Chen , Raghavan Kumar , Huseyin Ekin Sumbul , Phil Knag , Ram Krishnamurthy , Sasikanth Manipatruni , Amrita Mathuriya , Abhishek Sharma , Ian A. Young
Abstract: A memory device that includes a plurality of subarrays of memory cells to store static weights and a plurality of digital full-adder circuits between the subarrays of memory cells is provided. The digital full-adder circuits in the memory device eliminate the need to move data from the memory device to a processor to perform machine learning calculations. Rows of full-adder circuits are distributed between the subarrays of memory cells to increase the effective memory bandwidth and reduce the time to perform matrix-vector multiplications in the memory device by performing bit-serial dot-product primitives in the form of accumulating m 1-bit×n-bit multiplications.
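An illustrative software model (not the hardware itself) of the bit-serial dot-product primitive, accumulating 1-bit×n-bit partial products with shift-and-add:

```python
# Bit-serial dot product: each 1-bit slice of the input vector gates an n-bit weight,
# and the partial sums are accumulated with the appropriate power-of-two shift.

def bit_serial_dot(inputs, weights, input_bits=4):
    # inputs: unsigned integers of width input_bits; weights: stored n-bit values
    acc = 0
    for b in range(input_bits):                       # one pass per input bit position
        partial = sum(w for x, w in zip(inputs, weights) if (x >> b) & 1)
        acc += partial << b                           # shift-and-accumulate
    return acc

inputs, weights = [3, 5, 1], [2, 7, 4]
print(bit_serial_dot(inputs, weights), sum(x * w for x, w in zip(inputs, weights)))  # both 45
```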
-
Publication No.: US20190043560A1
Publication Date: 2019-02-07
Application No.: US16146473
Application Date: 2018-09-28
Applicant: Intel Corporation
Inventor: Huseyin Ekin Sumbul , Gregory K. Chen , Raghavan Kumar , Phil Knag , Abhishek Sharma , Sasikanth Manipatruni , Amrita Mathuriya , Ram Krishnamurthy , Ian A. Young
IPC: G11C11/418 , G06F7/544 , G06F9/30 , G11C13/00 , G11C11/419
Abstract: A memory circuit has compute-in-memory circuitry that enables a multiply-accumulate (MAC) operation based on shared charge. Row access circuitry drives multiple rows of a memory array to multiply a first data word with a second data word stored in the memory array. The row access circuitry drives the multiple rows based on the bit pattern of the first data word. Column access circuitry drives a column of the memory array when the rows are driven. Accessed rows discharge the column line in an accumulative fashion. Sensing circuitry can sense voltage on the column line. A processor in the memory circuit computes a MAC value based on the voltage sensed on the column.
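A behavioral sketch of the shared-charge MAC, with idealized precharge and per-cell discharge values assumed for illustration:

```python
# Idealized model (names and constants assumed): rows are driven according to the bits of
# the first data word; each accessed cell that stores a 1 discharges the shared column line
# a little, so the sensed column voltage encodes the accumulated count for that column.

def column_mac(input_word_bits, stored_column_bits, v_precharge=1.0, dv_per_cell=0.05):
    active = sum(a & b for a, b in zip(input_word_bits, stored_column_bits))
    v_column = v_precharge - active * dv_per_cell    # accumulative discharge on the column
    sensed_count = round((v_precharge - v_column) / dv_per_cell)
    return v_column, sensed_count

print(column_mac([1, 0, 1, 1], [1, 1, 1, 0]))   # (0.9, 2): two cells discharged the line
```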
-
Publication No.: US09866476B2
Publication Date: 2018-01-09
Application No.: US14574106
Application Date: 2014-12-17
Applicant: Intel Corporation
Inventor: Mark A. Anders , Gregory K. Chen , Himanshu Kaul
IPC: H04L12/771 , H04L12/801 , H04L12/933 , H04L12/721 , H04L12/773 , H04L12/947
Abstract: A first packet and a first direction associated with the first packet are received. The first packet is forwarded to an output port of a plurality of output ports of the first router based on the first direction associated with the first packet. A second direction associated with the first packet is determined. The second direction is based at least on an address of the first packet. The first packet and the second direction are forwarded through the output port of the first router to a second router.
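A hedged sketch of the look-ahead flow the abstract describes; the X-then-Y direction choice and mesh coordinates are assumptions, not the patent's routing function:

```python
# Each router forwards the packet on the direction it arrived with, and computes the *next*
# direction from the destination address so the downstream router does not have to.

def next_direction(router_xy, dest_xy):
    # illustrative X-then-Y dimension-ordered choice (assumption, not from the patent)
    (x, y), (dx, dy) = router_xy, dest_xy
    if x != dx:
        return "EAST" if dx > x else "WEST"
    if y != dy:
        return "NORTH" if dy > y else "SOUTH"
    return "LOCAL"

def route(router_xy, packet):
    out_port = packet["direction"]                                    # forward on the precomputed direction
    packet["direction"] = next_direction(router_xy, packet["dest"])  # look-ahead for the next hop
    return out_port, packet

print(route((1, 1), {"dest": (3, 2), "direction": "EAST"}))
```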
-
Publication No.: US20170286829A1
Publication Date: 2017-10-05
Application No.: US15088198
Application Date: 2016-04-01
Applicant: Intel Corporation
Inventor: Gregory K. Chen , Raghavan Kumar , Huseyin E. Sumbul
Abstract: Systems and methods for event-driven learning with spike timing dependent plasticity in neuromorphic computers are disclosed. A neuromorphic processor includes a synapse coupled to a pre-synaptic neuron and coupled to a post-synaptic neuron, the synapse including a synapse memory to store a synapse weight and synapse spike timing dependent plasticity (STDP) circuit coupled to the synapse memory. The pre-synaptic neuron includes a pre-synaptic neuron memory to store a pre-synaptic neuron spike history and a pre-synaptic neuron STDP circuit coupled to the pre-synaptic neuron memory, the pre-synaptic neuron STDP circuit to, in response to the pre-synaptic neuron firing, initiate performing long term potentiation. The post-synaptic neuron includes a post-synaptic neuron memory storing a post-synaptic neuron spike history and a post-synaptic neuron STDP circuit coupled to the post-synaptic neuron memory to, in response to the post-synaptic neuron firing, initiate performing long term depression.
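A simplified event-driven sketch following the abstract's flow; the exponential window and constants are assumptions, not the patent's update rule:

```python
# Per the abstract: firing of the pre-synaptic neuron initiates a long-term potentiation
# (LTP) update and firing of the post-synaptic neuron initiates a long-term depression
# (LTD) update, each using the spike history stored in the neuron memories.

import math

def ltp_on_pre_spike(weight, t_pre, post_spike_history, a_plus=0.05, tau=20.0):
    # LTP update initiated by the pre-synaptic neuron's STDP circuit
    for t_post in post_spike_history:
        weight += a_plus * math.exp(-abs(t_pre - t_post) / tau)
    return weight

def ltd_on_post_spike(weight, t_post, pre_spike_history, a_minus=0.04, tau=20.0):
    # LTD update initiated by the post-synaptic neuron's STDP circuit
    for t_pre in pre_spike_history:
        weight -= a_minus * math.exp(-abs(t_post - t_pre) / tau)
    return weight

w = ltp_on_pre_spike(0.5, t_pre=30, post_spike_history=[22, 28])
w = ltd_on_post_spike(w, t_post=35, pre_spike_history=[30])
print(round(w, 4))
```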
-
Publication No.: US09680765B2
Publication Date: 2017-06-13
Application No.: US14574258
Application Date: 2014-12-17
Applicant: Intel Corporation
Inventor: Himanshu Kaul , Gregory K. Chen , Mark A. Anders
IPC: H04L12/58 , H04L12/913 , H04L12/933 , H04L12/935
CPC classification number: H04L47/724 , G06F15/7825 , H04L12/50 , H04L12/6402 , H04L12/66 , H04L49/109 , H04L49/3018 , H04L49/3027
Abstract: An apparatus may comprise a plurality of ports and a plurality of channel reservation banks. A channel reservation bank is to be associated with a port of the plurality of ports. The channel reservation bank is to comprise a plurality of channel reservation slots. The port of the plurality of ports is to comprise a plurality of circuit-switched channels through the port. The configuration of each of the plurality of circuit-switched channels is to be based on information stored in a channel reservation slot of the channel reservation bank associated with the port.
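A hypothetical data-structure sketch of per-port channel reservation banks and slots; the field names and slot count are assumptions for illustration:

```python
# Each port owns a channel reservation bank; each reservation slot holds the information
# used to configure one circuit-switched channel through that port.

from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class ReservationSlot:
    input_port: Optional[int] = None     # where the reserved channel enters
    output_port: Optional[int] = None    # where it leaves
    in_use: bool = False

@dataclass
class ChannelReservationBank:
    slots: List[ReservationSlot] = field(
        default_factory=lambda: [ReservationSlot() for _ in range(4)])

    def reserve(self, input_port, output_port):
        for slot in self.slots:
            if not slot.in_use:
                slot.input_port, slot.output_port, slot.in_use = input_port, output_port, True
                return slot
        return None  # no free channel reservation slot

bank = ChannelReservationBank()
print(bank.reserve(input_port=0, output_port=3))
```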
-