-
Publication Number: US20210043567A1
Publication Date: 2021-02-11
Application Number: US16534104
Filing Date: 2019-08-07
Applicant: Intel Corporation
Inventor: Mark ANDERS , Himanshu KAUL , Ram KRISHNAMURTHY , Kevin Lai LIN , Mauro KOBRINSKY
IPC: H01L23/528 , G06F17/50 , G11C5/06
Abstract: Embodiments disclosed herein include a semiconductor device with interconnects with non-uniform heights. In an embodiment, the semiconductor device comprises a semiconductor substrate, and a back end of line (BEOL) stack over the semiconductor substrate. In an embodiment, the BEOL stack comprises first interconnects and second interconnects in an interconnect layer of the BEOL stack. In an embodiment, the first interconnects have a first height and the second interconnects have a second height that is different than the first height.
-
Publication Number: US20190303750A1
Publication Date: 2019-10-03
Application Number: US16443548
Filing Date: 2019-06-17
Applicant: Intel Corporation
Inventor: Raghavan KUMAR , Gregory K. CHEN , Huseyin Ekin SUMBUL , Phil KNAG , Ram KRISHNAMURTHY
Abstract: Examples described herein relate to a neural network whose weights from a matrix are selected from a set of weights stored in a memory on-chip with a processing engine for generating multiply and carry operations. The number of weights in the set of weights stored in the memory can be less than a number of weights in the matrix thereby reducing an amount of memory used to store weights in a matrix. The weights in the memory can be generated in training using gradients from back propagation. Weights in the memory can be selected using a tabulation hash calculation on entries in a table.
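The weight-selection scheme described above can be sketched in a few lines. This is an illustrative model only, with hypothetical sizes and random tables standing in for whatever the patent's hardware actually uses: a large weight matrix is never stored; each coordinate is mapped by a tabulation hash onto a much smaller on-chip weight set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a large weight matrix backed by only 64 stored weights.
NUM_STORED = 64          # size of the on-chip weight set
TABLE_ENTRIES = 256      # one lookup table per byte of the flattened index

# Tabulation hashing: one random table per index byte, combined with XOR.
tables = rng.integers(0, NUM_STORED, size=(4, TABLE_ENTRIES))
stored_weights = rng.standard_normal(NUM_STORED).astype(np.float32)

def weight_at(row, col):
    """Map a matrix coordinate to one of the stored weights via tabulation hash."""
    idx = (row << 16) | col            # flatten the coordinate into a 32-bit key
    h = 0
    for b in range(4):                 # XOR together the per-byte table lookups
        h ^= tables[b, (idx >> (8 * b)) & 0xFF]
    return stored_weights[h % NUM_STORED]

# Entries of the full matrix are rehashed on demand rather than stored.
w = np.array([[weight_at(r, c) for c in range(8)] for r in range(8)])
```

Because the hash is deterministic, the same coordinate always resolves to the same stored weight, so the virtual matrix is stable across reads while occupying only `NUM_STORED` entries of memory.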
-
Publication Number: US20230334006A1
Publication Date: 2023-10-19
Application Number: US18212079
Filing Date: 2023-06-20
Applicant: Intel Corporation
Inventor: Huseyin Ekin SUMBUL , Gregory K. CHEN , Phil KNAG , Raghavan KUMAR , Ram KRISHNAMURTHY
CPC classification number: G06F15/8046 , G06F17/153 , G06N3/063
Abstract: A compute near memory (CNM) convolution accelerator enables a convolutional neural network (CNN) to use dedicated acceleration to achieve efficient in-place convolution operations with less impact on memory and energy consumption. A 2D convolution operation is reformulated as 1D row-wise convolution. The 1D row-wise convolution enables the CNM convolution accelerator to process input activations row-by-row, while using the weights one-by-one. Lightweight access circuits provide the ability to stream both weights and input rows as vectors to MAC units, which in turn enables modules of the CNM convolution accelerator to implement convolution for both [1×1] and chosen [n×n] sized filters.
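The reformulation of 2D convolution as accumulated 1D row-wise convolutions can be checked numerically. The sketch below is a software model of the dataflow, not the patented circuit: each output row accumulates one 1D convolution per weight row, mirroring how input rows and weight rows would be streamed one at a time.

```python
import numpy as np

def conv2d_rowwise(x, w):
    """2D 'valid' convolution expressed as accumulated 1D row-wise convolutions."""
    H, W = x.shape
    n, _ = w.shape
    out = np.zeros((H - n + 1, W - n + 1))
    for i in range(out.shape[0]):          # each output row...
        for r in range(n):                 # ...accumulates n 1D row convolutions
            # np.correlate computes the sliding dot product along one row
            out[i] += np.correlate(x[i + r], w[r], mode="valid")
    return out

x = np.arange(25, dtype=float).reshape(5, 5)
w = np.ones((3, 3))

# Reference: direct 2D sliding-window convolution over the same inputs.
ref = np.array([[np.sum(x[i:i + 3, j:j + 3] * w) for j in range(3)]
                for i in range(3)])
assert np.allclose(conv2d_rowwise(x, w), ref)
```

The row-wise form touches each input row `n` times and each weight row once per output row, which is what lets lightweight access circuits stream both operands as vectors to the MAC units.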
-
Publication Number: US20230297819A1
Publication Date: 2023-09-21
Application Number: US18201291
Filing Date: 2023-05-24
Applicant: Intel Corporation
Inventor: Ram KRISHNAMURTHY , Gregory K. CHEN , Raghavan KUMAR , Phil KNAG , Huseyin Ekin SUMBUL , Deepak Vinayak KADETOTAD
CPC classification number: G06N3/063 , G06F17/16 , G06N3/04 , G06F7/5443
Abstract: An apparatus is described. The apparatus includes a circuit to process a binary neural network. The circuit includes an array of processing cores, wherein processing cores of the array are to process different respective areas of a weight matrix of the binary neural network. The processing cores each include add circuitry to add only those weights of layer i of the binary neural network that are to be effectively multiplied by a non-zero nodal output of layer i−1 of the binary neural network.
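The add-only datapath described above can be modeled as follows. This is a minimal sketch with hypothetical layer sizes: because weights are binary and the previous layer's outputs are 0 or 1, each neuron's dot product reduces to summing only the weights paired with non-zero activations, so no multiplier is needed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Binary weights in {-1, +1}; activations from layer i-1 in {0, 1}.
W = rng.choice([-1, 1], size=(4, 8))   # hypothetical 4-neuron layer, 8 inputs
a = rng.choice([0, 1], size=8)         # nodal outputs of layer i-1

def binary_layer(W, a):
    """Sum only the weights whose paired activation is non-zero (adders only)."""
    out = np.zeros(W.shape[0], dtype=int)
    for j, active in enumerate(a):
        if active:                     # skip columns with zero nodal output
            out += W[:, j]             # pure addition of the selected weights
    return out

# The add-only result matches the full matrix-vector product.
assert np.array_equal(binary_layer(W, a), W @ a)
```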
-
Publication Number: US20200034148A1
Publication Date: 2020-01-30
Application Number: US16586975
Filing Date: 2019-09-28
Applicant: Intel Corporation
Inventor: Huseyin Ekin SUMBUL , Gregory K. CHEN , Phil KNAG , Raghavan KUMAR , Ram KRISHNAMURTHY
Abstract: A compute near memory (CNM) convolution accelerator enables a convolutional neural network (CNN) to use dedicated acceleration to achieve efficient in-place convolution operations with less impact on memory and energy consumption. A 2D convolution operation is reformulated as 1D row-wise convolution. The 1D row-wise convolution enables the CNM convolution accelerator to process input activations row-by-row, while using the weights one-by-one. Lightweight access circuits provide the ability to stream both weights and input rows as vectors to MAC units, which in turn enables modules of the CNM convolution accelerator to implement convolution for both [1×1] and chosen [n×n] sized filters.
-
Publication Number: US20190065151A1
Publication Date: 2019-02-28
Application Number: US16145569
Filing Date: 2018-09-28
Applicant: Intel Corporation
Inventor: Gregory K. CHEN , Raghavan KUMAR , Huseyin Ekin SUMBUL , Phil KNAG , Ram KRISHNAMURTHY , Sasikanth MANIPATRUNI , Amrita MATHURIYA , Abhishek SHARMA , Ian A. YOUNG
Abstract: A memory device that includes a plurality of subarrays of memory cells to store static weights and a plurality of digital full-adder circuits between subarrays of memory cells is provided. The digital full-adder circuits in the memory device eliminate the need to move data from a memory device to a processor to perform machine learning calculations. Rows of full-adder circuits are distributed between subarrays of memory cells to increase the effective memory bandwidth and reduce the time to perform matrix-vector multiplications in the memory device by performing bit-serial dot-product primitives in the form of accumulating m 1-bit × n-bit multiplications.
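The bit-serial dot-product primitive mentioned above can be sketched in software. This is an illustrative model with hypothetical bit widths, not the in-memory circuit: each bit plane of the input vector masks the weights, so a full dot product decomposes into shifted sums of 1-bit × n-bit products, exactly the kind of work a row of full adders can accumulate.

```python
import numpy as np

def bitserial_dot(x, w, xbits=4):
    """Dot product computed bit-serially over the bit planes of x."""
    acc = 0
    for b in range(xbits):
        plane = (x >> b) & 1           # 1-bit slice of every input element
        # m parallel 1-bit x n-bit multiplications reduce to a masked add
        acc += (plane * w).sum() << b  # weight the partial sum by bit position
    return int(acc)

x = np.array([3, 5, 7, 2])             # 4-bit unsigned activations
w = np.array([1, 2, 3, 4])
assert bitserial_dot(x, w) == int(x @ w)
```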
-
Publication Number: US20190042949A1
Publication Date: 2019-02-07
Application Number: US16147143
Filing Date: 2018-09-28
Applicant: Intel Corporation
Inventor: Ian A. YOUNG , Ram KRISHNAMURTHY , Sasikanth MANIPATRUNI , Gregory K. CHEN , Amrita MATHURIYA , Abhishek SHARMA , Raghavan KUMAR , Phil KNAG , Huseyin Ekin SUMBUL
Abstract: A semiconductor chip is described. The semiconductor chip includes a compute-in-memory (CIM) circuit to implement a neural network in hardware. The semiconductor chip also includes at least one output that presents samples of voltages generated at a node of the CIM circuit in response to a range of neural network input values applied to the CIM circuit to optimize the CIM circuit for the neural network.
-
Publication Number: US20230401434A1
Publication Date: 2023-12-14
Application Number: US18237887
Filing Date: 2023-08-24
Applicant: Intel Corporation
Inventor: Ram KRISHNAMURTHY , Gregory K. CHEN , Raghavan KUMAR , Phil KNAG , Huseyin Ekin SUMBUL
CPC classification number: G06N3/063 , G06F7/523 , G06F9/3893 , G06F9/30098 , G06F7/5095 , G06F7/5443
Abstract: An apparatus is described. The apparatus includes a long short term memory (LSTM) circuit having a multiply-accumulate (MAC) circuit. The MAC circuit has circuitry to rely on a stored product term, rather than explicitly perform a multiplication operation to determine the product term, if an accumulation of differences between consecutive preceding input values has not reached a threshold.
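The multiplication-skipping behavior described above can be modeled roughly as follows. This is a sketch of the idea only, with hypothetical weight and threshold values, and it tracks the drift of the input since the last real multiplication as a stand-in for the accumulated differences the abstract describes: while the drift stays under the threshold, the stored product is reused instead of recomputed.

```python
class DeltaMAC:
    """Multiply-accumulate that reuses a stored product while the input's
    accumulated change stays under a threshold (illustrative model only)."""

    def __init__(self, weight, threshold):
        self.weight = weight
        self.threshold = threshold
        self.last_x = 0.0        # input value used for the stored product
        self.product = 0.0       # stored product term
        self.multiplies = 0      # count of real multiplications performed

    def mac(self, x, acc):
        # Only refresh the product when the input has drifted far enough
        # from the value that produced the stored product.
        if abs(x - self.last_x) >= self.threshold:
            self.product = self.weight * x
            self.last_x = x
            self.multiplies += 1
        # Otherwise accumulate using the stored product term.
        return acc + self.product

m = DeltaMAC(weight=0.5, threshold=0.1)
acc = 0.0
for x in [1.00, 1.02, 1.03, 1.50]:    # small drifts, then a large jump
    acc = m.mac(x, acc)
```

For slowly varying LSTM inputs this trades a small accumulation error for skipping most multiplications; in the trace above only the first and last inputs trigger a real multiply.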
-
Publication Number: US20220165735A1
Publication Date: 2022-05-26
Application Number: US17670248
Filing Date: 2022-02-11
Applicant: Intel Corporation
Inventor: Abhishek SHARMA , Noriyuki SATO , Sarah ATANASOV , Huseyin Ekin SUMBUL , Gregory K. CHEN , Phil KNAG , Ram KRISHNAMURTHY , Hui Jae YOO , Van H. LE
IPC: H01L27/108 , H01L27/12 , G11C11/4096
Abstract: Examples herein relate to a memory device comprising an eDRAM memory cell, the eDRAM memory cell can include a write circuit formed at least partially over a storage cell and a read circuit formed at least partially under the storage cell; a compute near memory device bonded to the memory device; a processor; and an interface from the memory device to the processor. In some examples, circuitry is included to provide an output of the memory device to emulate output read rate of an SRAM memory device comprises one or more of: a controller, a multiplexer, or a register. Bonding of a surface of the memory device can be made to a compute near memory device or other circuitry. In some examples, a layer with read circuitry can be bonded to a layer with storage cells. Any layers can be bonded together using techniques described herein.
-
Publication Number: US20210043500A1
Publication Date: 2021-02-11
Application Number: US16534063
Filing Date: 2019-08-07
Applicant: Intel Corporation
Inventor: Kevin Lai LIN , Mauro KOBRINSKY , Mark ANDERS , Himanshu KAUL , Ram KRISHNAMURTHY
IPC: H01L21/768 , H01L23/528
Abstract: Embodiments disclosed herein include interconnect layers that include non-uniform interconnect heights and methods of forming such devices. In an embodiment, an interconnect layer comprises an interlayer dielectric (ILD), a first interconnect disposed in the ILD, wherein the first interconnect has a first height, and a second interconnect disposed in the ILD, wherein the second interconnect has a second height that is different than the first height.
-