-
21.
Publication Number: US20180189645A1
Publication Date: 2018-07-05
Application Number: US15394897
Application Date: 2016-12-30
Applicant: Intel Corporation
Inventor: Gregory K. Chen , Raghavan Kumar , Huseyin Ekin Sumbul , Phil Knag , Ram K. Krishnamurthy
CPC classification number: G06N3/0635 , G06N3/0445 , G06N3/049 , G06N3/063
Abstract: In one embodiment, a method comprises receiving a selection of a neural network topology type; identifying a synapse memory mapping scheme for the selected neural network topology type from a plurality of synapse memory mapping schemes that are each associated with a respective neural network topology type; and mapping a plurality of synapse weights to locations in a memory based on the identified synapse memory mapping scheme.
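A minimal sketch of the mapping step described above, assuming hypothetical scheme names ("fully_connected", "convolutional"), address formulas, and a map_weights helper; the patent abstract does not specify these details.

```python
# Hypothetical sketch: select a synapse memory mapping scheme by neural
# network topology type, then place weights at the addresses it computes.
# Scheme names and address formulas are illustrative assumptions.

def fully_connected_map(pre, post, n_post):
    # Dense layout: row-major by (presynaptic, postsynaptic) index.
    return pre * n_post + post

def convolutional_map(pre, post, n_post):
    # Shared-kernel layout: many synapses alias the same stored weight.
    return (pre * n_post + post) % 64  # assume a 64-entry kernel store

MAPPING_SCHEMES = {
    "fully_connected": fully_connected_map,
    "convolutional": convolutional_map,
}

def map_weights(topology_type, weights, n_post):
    """Map each synapse weight to a memory location chosen by the
    scheme registered for the selected topology type."""
    scheme = MAPPING_SCHEMES[topology_type]
    memory = {}
    for (pre, post), w in weights.items():
        memory[scheme(pre, post, n_post)] = w
    return memory

# Example: map four synapses of a 2x2 fully connected layer.
weights = {(0, 0): 0.5, (0, 1): -0.25, (1, 0): 0.75, (1, 1): 0.1}
print(map_weights("fully_connected", weights, n_post=2))
```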
-
22.
Publication Number: US20180089557A1
Publication Date: 2018-03-29
Application Number: US15276111
Application Date: 2016-09-26
Applicant: Intel Corporation
Inventor: Raghavan Kumar , Gregory K. Chen , Huseyin Ekin Sumbul , Phil Knag
Abstract: An integrated circuit (IC), as a computation block of a neuromorphic system, includes a time step controller to activate a time step update signal for performing a time-multiplexed selection of a group of neuromorphic states to update. The IC includes a first circuitry to, responsive to detecting the time step update signal for a selected group of neuromorphic states: generate an outgoing data signal in response to determining that a first membrane potential of the selected group of neuromorphic states exceeds a threshold value, wherein the outgoing data signal includes an identifier that identifies the selected group of neuromorphic states and a memory address (wherein the memory address corresponds to a location in a memory block associated with the integrated circuit), and update a state of the selected group of neuromorphic states in response to generation of the outgoing data signal.
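The update loop can be illustrated behaviorally. The sketch below assumes a reset-to-zero state update and invented field names (identifier, memory_address, membrane_potential); the abstract specifies only that an outgoing event carries an identifier and a memory address.

```python
# Illustrative model of the time-multiplexed update: each time step
# update signal selects one group of neuromorphic states; if its
# membrane potential exceeds the threshold, an outgoing data signal
# carrying the group identifier and memory address is generated, and
# the group's state is then updated (here: reset to zero, an assumption).

from dataclasses import dataclass

@dataclass
class NeuronGroup:
    identifier: int
    memory_address: int
    membrane_potential: float

THRESHOLD = 1.0

def time_step_update(groups, step):
    # Time-multiplexed selection: one group per time step update signal.
    group = groups[step % len(groups)]
    if group.membrane_potential > THRESHOLD:
        event = (group.identifier, group.memory_address)  # outgoing data signal
        group.membrane_potential = 0.0  # state update after the event
        return event
    return None

groups = [NeuronGroup(0, 0x100, 1.2), NeuronGroup(1, 0x140, 0.4)]
for step in range(2):
    print(step, time_step_update(groups, step))
```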
-
23.
Publication Number: US11751404B2
Publication Date: 2023-09-05
Application Number: US16141025
Application Date: 2018-09-25
Applicant: Intel Corporation
Inventor: Abhishek Sharma , Gregory Chen , Phil Knag , Ram Krishnamurthy , Raghavan Kumar , Sasikanth Manipatruni , Amrita Mathuriya , Huseyin Sumbul , Ian A. Young
CPC classification number: H10B63/30 , H01L29/66795 , H01L29/785 , H10N70/021 , H10N70/826 , H10N70/882 , H10N70/8833
Abstract: Embodiments herein describe techniques for a semiconductor device including an RRAM memory cell. The RRAM memory cell includes a FinFET transistor and an RRAM storage cell. The FinFET transistor includes a fin structure on a substrate, where the fin structure includes a channel region, a source region, and a drain region. An epitaxial layer is around the source region or the drain region. An RRAM storage stack is wrapped around a surface of the epitaxial layer. The RRAM storage stack includes a resistive switching material layer in contact with and wrapped around the surface of the epitaxial layer, and a contact electrode in contact with and wrapped around a surface of the resistive switching material layer. The epitaxial layer, the resistive switching material layer, and the contact electrode form an RRAM storage cell. Other embodiments may be described and/or claimed.
-
24.
Publication Number: US11699681B2
Publication Date: 2023-07-11
Application Number: US16727779
Application Date: 2019-12-26
Applicant: Intel Corporation
Inventor: Abhishek Sharma , Hui Jae Yoo , Van H. Le , Huseyin Ekin Sumbul , Phil Knag , Gregory K. Chen , Ram Krishnamurthy
IPC: H01L25/065 , G11C11/407
CPC classification number: H01L25/0657 , G11C11/407 , H01L2224/32145 , H01L2224/32225
Abstract: An apparatus is formed. The apparatus includes a stack of semiconductor chips. The stack of semiconductor chips includes a logic chip and a memory stack, wherein the logic chip includes at least one of a GPU and a CPU. The apparatus also includes a semiconductor chip substrate. The stack of semiconductor chips is mounted on the semiconductor chip substrate. At least one other logic chip is mounted on the semiconductor chip substrate. The semiconductor chip substrate includes wiring to interconnect the stack of semiconductor chips to the at least one other logic chip.
-
25.
Publication Number: US11625584B2
Publication Date: 2023-04-11
Application Number: US16443548
Application Date: 2019-06-17
Applicant: Intel Corporation
Inventor: Raghavan Kumar , Gregory K. Chen , Huseyin Ekin Sumbul , Phil Knag , Ram Krishnamurthy
Abstract: Examples described herein relate to a neural network whose matrix weights are selected from a set of weights stored in a memory on-chip with a processing engine that generates multiply and carry operations. The number of weights in the stored set can be less than the number of weights in the matrix, thereby reducing the amount of memory used to store the matrix weights. The weights in the memory can be generated during training using gradients from back propagation. Weights in the memory can be selected using a tabulation hash calculation on entries in a table.
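A minimal sketch of selecting a weight by tabulation hashing, assuming 8-bit key chunks, a four-entry weight set, and a (row, col) key encoding; these parameters are illustrative and not taken from the patent.

```python
# Sketch: pick a matrix weight from a small stored weight set via a
# tabulation hash of the matrix coordinate, so many matrix entries
# share the same stored weight. Table sizes and the key encoding are
# assumptions for illustration.

import random

random.seed(0)
NUM_CHUNKS, CHUNK_BITS = 4, 8
# One random table per 8-bit chunk of the key (classic tabulation hash).
TABLES = [[random.getrandbits(32) for _ in range(1 << CHUNK_BITS)]
          for _ in range(NUM_CHUNKS)]

def tabulation_hash(key):
    # XOR together one table lookup per key chunk.
    h = 0
    for i in range(NUM_CHUNKS):
        chunk = (key >> (i * CHUNK_BITS)) & ((1 << CHUNK_BITS) - 1)
        h ^= TABLES[i][chunk]
    return h

def select_weight(row, col, weight_set):
    # Hash the coordinate into the small on-chip weight set.
    key = (row << 16) | col  # assumes matrix dimensions fit in 16 bits
    return weight_set[tabulation_hash(key) % len(weight_set)]

weight_set = [-0.5, -0.1, 0.1, 0.5]  # far fewer weights than matrix entries
print([[select_weight(r, c, weight_set) for c in range(4)] for r in range(2)])
```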
-
26.
Publication Number: US11347477B2
Publication Date: 2022-05-31
Application Number: US16586648
Application Date: 2019-09-27
Applicant: Intel Corporation
Inventor: Huseyin Ekin Sumbul , Gregory K. Chen , Phil Knag , Raghavan Kumar , Ram Krishnamurthy
Abstract: A memory circuit includes a number (X) of multiply-accumulate (MAC) circuits that are dynamically configurable. The MAC circuits can either compute an output based on computations of X elements of the input vector with the weight vector, or compute the output based on computations of a single element of the input vector with the weight vector, with each element having a one-bit or multi-bit length. A first memory can hold the input vector having a width of X elements, and a second memory can store the weight vector. The MAC circuits include a MAC array on-chip with the first memory.
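The two configurations can be modeled behaviorally. The sketch below assumes X = 4 and the function names mac_wide/mac_single; the actual element widths and control signals are not specified in the abstract.

```python
# Behavioral sketch of the two MAC configurations: X MAC circuits either
# combine X input-vector elements with the weight vector into one output,
# or each operate on a single broadcast input element. X is an assumption.

X = 4

def mac_wide(inputs, weights):
    # Mode 1: one output from X elements of the input vector.
    assert len(inputs) == len(weights) == X
    return sum(i * w for i, w in zip(inputs, weights))

def mac_single(element, weights):
    # Mode 2: each MAC computes with a single input element.
    return [element * w for w in weights]

inputs, weights = [1, 2, 3, 4], [5, 6, 7, 8]
print(mac_wide(inputs, weights))       # 1*5 + 2*6 + 3*7 + 4*8 = 70
print(mac_single(inputs[0], weights))  # [5, 6, 7, 8]
```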
-
27.
Publication Number: US11151046B2
Publication Date: 2021-10-19
Application Number: US16921685
Application Date: 2020-07-06
Applicant: Intel Corporation
Inventor: Amrita Mathuriya , Sasikanth Manipatruni , Victor Lee , Huseyin Sumbul , Gregory Chen , Raghavan Kumar , Phil Knag , Ram Krishnamurthy , Ian Young , Abhishek Sharma
Abstract: The present disclosure is directed to systems and methods of implementing a neural network using in-memory mathematical operations performed by pipelined SRAM architecture (PISA) circuitry disposed in on-chip processor memory circuitry. A high-level compiler may be provided to compile data representative of a multi-layer neural network model and one or more neural network data inputs from a first high-level programming language to an intermediate domain-specific language (DSL). A low-level compiler may be provided to compile the representative data from the intermediate DSL to multiple instruction sets in accordance with an instruction set architecture (ISA), such that each of the multiple instruction sets corresponds to a single respective layer of the multi-layer neural network model. Each of the multiple instruction sets may be assigned to a respective SRAM array of the PISA circuitry for in-memory execution. Thus, the systems and methods described herein beneficially leverage the on-chip processor memory circuitry to perform a relatively large number of in-memory vector/tensor calculations in furtherance of neural network processing without burdening the processor circuitry.
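A schematic sketch of the two-stage compilation flow, with invented stand-ins for the model format, DSL operations, and ISA mnemonics; the real intermediate language and instruction set are not described in the abstract.

```python
# Schematic sketch of the flow: model -> intermediate DSL -> one ISA
# instruction set per layer, each assigned to its own SRAM array for
# in-memory execution. All names below are hypothetical stand-ins.

def high_level_compile(model):
    # Stage 1: lower each layer of the model to an intermediate DSL op.
    return [{"op": "matmul", "shape": layer} for layer in model["layers"]]

def low_level_compile(dsl_ops):
    # Stage 2: emit one instruction set per layer, per the abstract.
    return [[f"LOAD A{i}", f"MAC {op['shape'][0]}x{op['shape'][1]}", f"STORE A{i}"]
            for i, op in enumerate(dsl_ops)]

model = {"layers": [(784, 128), (128, 10)]}
instruction_sets = low_level_compile(high_level_compile(model))
for array_id, inst in enumerate(instruction_sets):
    # Each instruction set is assigned to a respective SRAM array.
    print(f"SRAM array {array_id}: {inst}")
```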
-
28.
Publication Number: US10860682B2
Publication Date: 2020-12-08
Application Number: US16839013
Application Date: 2020-04-02
Applicant: Intel Corporation
Inventor: Phil Knag , Gregory K. Chen , Raghavan Kumar , Huseyin Ekin Sumbul , Abhishek Sharma , Sasikanth Manipatruni , Amrita Mathuriya , Ram Krishnamurthy , Ian A. Young
IPC: G06F17/16 , G11C11/419 , G11C11/418 , G11C7/12 , G11C8/08 , G06G7/16 , G06G7/22 , G11C11/56 , G06F9/30 , G11C7/10 , G06N3/063
Abstract: A binary CIM circuit enables all memory cells in a memory array to be effectively accessible simultaneously for computation using fixed pulse widths on the wordlines and equal capacitance on the bitlines. The fixed pulse widths and equal capacitance ensure that a minimum voltage drop in the bitline represents one least significant bit (LSB) so that the bitline voltage swing remains safely within the maximum allowable range. The binary CIM circuit maximizes the effective memory bandwidth of a memory array for a given maximum voltage range of bitline voltage.
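The bitline arithmetic can be modeled numerically. The precharge level and per-LSB voltage below are assumed values; the point is that with fixed pulse widths and equal capacitance, the total bitline swing is a linear count of matching bits.

```python
# Numerical model of the binary CIM readout: every activated wordline
# contributes one fixed-width pulse, and with equal bitline capacitance
# each stored '1' drops the bitline by exactly one LSB, so the swing
# counts matching bits across all rows at once. Voltages are assumptions.

V_PRECHARGE = 1.0   # volts, assumed precharge level
V_LSB = 0.02        # assumed voltage drop per least significant bit

def bitline_voltage(column_bits, wordline_mask):
    # Each cell storing 1 on an activated wordline pulls the bitline
    # down by one LSB; all rows are sensed simultaneously.
    ones = sum(b & w for b, w in zip(column_bits, wordline_mask))
    return V_PRECHARGE - ones * V_LSB, ones

column = [1, 0, 1, 1, 0, 1, 1, 0]   # one bit per memory row
mask   = [1, 1, 1, 1, 1, 1, 1, 1]   # activate all wordlines at once
voltage, popcount = bitline_voltage(column, mask)
print(f"popcount={popcount}, bitline={voltage:.2f} V")
```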
-
29.
Publication Number: US20200334161A1
Publication Date: 2020-10-22
Application Number: US16921685
Application Date: 2020-07-06
Applicant: Intel Corporation
Inventor: Amrita Mathuriya , Sasikanth Manipatruni , Victor Lee , Huseyin Sumbul , Gregory Chen , Raghavan Kumar , Phil Knag , Ram Krishnamurthy , Ian Young , Abhishek Sharma
Abstract: The present disclosure is directed to systems and methods of implementing a neural network using in-memory mathematical operations performed by pipelined SRAM architecture (PISA) circuitry disposed in on-chip processor memory circuitry. A high-level compiler may be provided to compile data representative of a multi-layer neural network model and one or more neural network data inputs from a first high-level programming language to an intermediate domain-specific language (DSL). A low-level compiler may be provided to compile the representative data from the intermediate DSL to multiple instruction sets in accordance with an instruction set architecture (ISA), such that each of the multiple instruction sets corresponds to a single respective layer of the multi-layer neural network model. Each of the multiple instruction sets may be assigned to a respective SRAM array of the PISA circuitry for in-memory execution. Thus, the systems and methods described herein beneficially leverage the on-chip processor memory circuitry to perform a relatively large number of in-memory vector/tensor calculations in furtherance of neural network processing without burdening the processor circuitry.
-
30.
Publication Number: US10565138B2
Publication Date: 2020-02-18
Application Number: US16146534
Application Date: 2018-09-28
Applicant: Intel Corporation
Inventor: Jack Kavalieros , Ram Krishnamurthy , Sasikanth Manipatruni , Gregory Chen , Van Le , Amrita Mathuriya , Abhishek Sharma , Raghavan Kumar , Phil Knag , Huseyin Sumbul , Ian Young
IPC: G11C8/00 , G06F13/16 , H01L25/18 , H03K19/21 , G11C11/408 , H01L23/522 , G11C11/419
Abstract: Techniques and mechanisms for providing data to be used in an in-memory computation at a memory device. In an embodiment a memory device comprises a first memory array and circuitry, coupled to the first memory array, to perform a data computation based on data stored at the first memory array. Prior to the computation, the first memory array receives the data from a second memory array of the memory device. The second memory array extends horizontally in parallel with, but is offset vertically from, the first memory array. In another embodiment, a single integrated circuit die includes both the first memory array and the second memory array.
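A behavioral sketch of the staging step, assuming a simple sum as the in-memory computation and invented class and method names; the patent describes a physical arrangement of the two arrays, not this software model.

```python
# Behavioral sketch: data moves from a second (backing) memory array
# into the first (compute-coupled) array before the in-memory
# computation runs. Array contents and the reduction used as the
# "computation" are illustrative assumptions.

class MemoryDevice:
    def __init__(self, backing_data):
        self.second_array = list(backing_data)       # source array
        self.first_array = [0] * len(backing_data)   # compute-coupled array

    def stage(self):
        # Transfer data into the first array prior to computation.
        self.first_array[:] = self.second_array

    def compute(self):
        # Circuitry coupled to the first array operates on its contents.
        return sum(self.first_array)

device = MemoryDevice([3, 1, 4, 1, 5, 9])
device.stage()
print(device.compute())  # 23
```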