-
1.
Publication Number: US11030108B2
Publication Date: 2021-06-08
Application Number: US16540163
Application Date: 2019-08-14
Applicant: Intel Corporation
Inventor: Berkin Akin , Rajat Agarwal , Jong Soo Park , Christopher J. Hughes , Chiachen Chou
IPC: G06F12/08 , G06F12/0888 , G06F12/04 , G06F12/0811 , G06F12/0831 , G06F12/0886
Abstract: In an embodiment, a processor includes a sparse access buffer having a plurality of entries each to store for a memory access instruction to a particular address, address information and count information; and a memory controller to issue read requests to a memory, the memory controller including a locality controller to receive a memory access instruction having a no-locality hint and override the no-locality hint based at least in part on the count information stored in an entry of the sparse access buffer. Other embodiments are described and claimed.
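The abstract's locality controller can be modeled in software as a small table of per-address access counts that decides whether to honor a no-locality hint. This is a minimal sketch under illustrative assumptions: the buffer capacity, reuse threshold, and eviction policy below are not specified by the patent.

```python
REUSE_THRESHOLD = 2   # assumed: reuse count at which the hint is overridden
MAX_ENTRIES = 4       # assumed: sparse access buffer capacity

class SparseAccessBuffer:
    def __init__(self):
        self.entries = {}  # address information -> count information

    def access(self, address, no_locality_hint):
        """Return the effective hint after consulting the buffer."""
        count = self.entries.get(address, 0) + 1
        if address not in self.entries and len(self.entries) >= MAX_ENTRIES:
            self.entries.pop(next(iter(self.entries)))  # naive FIFO eviction
        self.entries[address] = count
        if no_locality_hint and count >= REUSE_THRESHOLD:
            return False  # override: the address shows reuse, cache it normally
        return no_locality_hint

buf = SparseAccessBuffer()
print(buf.access(0x100, True))  # first touch: hint honored -> True
print(buf.access(0x100, True))  # repeated touch: hint overridden -> False
```

The point of the override is that a hint is a prediction: if the count information proves an address is actually reused, fetching it without caching would waste bandwidth, so the controller ignores the hint.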
-
2.
Publication Number: US10068636B2
Publication Date: 2018-09-04
Application Number: US15394860
Application Date: 2016-12-30
Applicant: Intel Corporation
Inventor: Berkin Akin , Shigeki Tomishima
IPC: G11C7/22 , G11C11/4093 , G11C11/4091 , G11C11/4094
CPC classification number: G11C11/4093 , G11C7/1039 , G11C7/22 , G11C11/4076 , G11C11/4091 , G11C11/4096 , G11C2207/2209 , G11C2207/2245
Abstract: The present disclosure relates to a dynamic random access memory (DRAM) array, which comprises a plurality of bit lines connectable, respectively, to at least two row buffers of the DRAM array. The two row buffers are respectively connectable to data input/output (I/O) lines and are configured to electrically connect the two row buffers to the bit lines and data I/O lines in a mutually exclusive manner.
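The mutually exclusive connection scheme can be illustrated with a toy model: at any moment one row buffer is wired to the bit lines (sensing a row) while the other is wired to the data I/O lines (serving reads), and the roles swap. The row contents and sizes here are illustrative assumptions.

```python
class DualBufferDram:
    def __init__(self, rows):
        self.rows = rows                # simulated DRAM mat contents
        self.buffers = [None, None]     # the two row buffers
        self.io_side = 0                # index of the buffer on the I/O lines

    def activate(self, row):
        """Sense `row` into the buffer NOT on the I/O lines (mutual exclusion)."""
        bitline_side = 1 - self.io_side
        self.buffers[bitline_side] = list(self.rows[row])

    def switch(self):
        """Swap roles: the freshly sensed buffer moves onto the I/O lines."""
        self.io_side = 1 - self.io_side

    def read(self, col):
        return self.buffers[self.io_side][col]

dram = DualBufferDram(rows=[[1, 2], [3, 4]])
dram.activate(0)        # row 0 sensed while the I/O-side buffer is free
dram.switch()
print(dram.read(1))     # -> 2; another row can now be activated in parallel
```

The benefit is overlap: a new row activation can proceed on the bit-line-side buffer while the I/O-side buffer is still streaming out the previous row.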
-
3.
Publication Number: US11853758B2
Publication Date: 2023-12-26
Application Number: US16585521
Application Date: 2019-09-27
Applicant: Intel Corporation
Inventor: Berkin Akin , Alaa R. Alameldeen
IPC: G06F9/30
CPC classification number: G06F9/3004 , G06F9/30087
Abstract: Techniques for decoupled access-execute near-memory processing include examples of first or second circuitry of a near-memory processor receiving instructions that cause the first circuitry to implement system memory access operations to access one or more data chunks and the second circuitry to implement compute operations using the one or more data chunks.
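The decoupled access-execute split in the abstract can be sketched as two stages joined by a queue: one stage only issues memory operations and enqueues data chunks, the other only computes on dequeued chunks. The chunk size and the reduction performed are illustrative assumptions.

```python
from collections import deque

def access_stage(memory, chunk_size, queue):
    """Access circuitry: implement memory access operations, enqueue data chunks."""
    for i in range(0, len(memory), chunk_size):
        queue.append(memory[i:i + chunk_size])

def execute_stage(queue):
    """Execute circuitry: compute on chunks without touching memory itself."""
    total = 0
    while queue:
        total += sum(queue.popleft())
    return total

memory = list(range(8))   # stand-in for system memory contents
q = deque()               # decoupling queue between the two circuits
access_stage(memory, chunk_size=4, queue=q)
print(execute_stage(q))   # -> 28, i.e. sum(range(8))
```

Decoupling lets the access side run ahead of the compute side, hiding memory latency, which is the usual motivation for this split in near-memory processors.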
-
4.
Publication Number: US11366998B2
Publication Date: 2022-06-21
Application Number: US15937486
Application Date: 2018-03-27
Applicant: Intel Corporation
Inventor: Seth Pugsley , Berkin Akin
Abstract: Systems and techniques for neuromorphic accelerator multitasking are described herein. A neuron address translation unit (NATU) may receive a spike message. Here, the spike message includes a physical neuron identifier (PNID) of a neuron causing the spike. The NATU may then translate the PNID into a network identifier (NID) and a local neuron identifier (LNID). The NATU locates synapse data based on the NID and communicates the synapse data and the LNID to an axon processor.
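The PNID-to-(NID, LNID) translation can be modeled with a base-address table, one entry per network, much like segment-based address translation. The table contents below are assumptions for the example, not data from the patent.

```python
# (NID, first PNID owned by that network) -- illustrative table contents
NETWORK_BASES = [(0, 0), (1, 100), (2, 250)]

def natu_translate(pnid):
    """Translate a physical neuron ID into (network ID, local neuron ID)."""
    nid, base = max((n, b) for n, b in NETWORK_BASES if b <= pnid)
    return nid, pnid - base

print(natu_translate(130))   # -> (1, 30): neuron 30 of network 1
```

Splitting the physical ID this way is what enables multitasking: each network sees only local neuron IDs, so several networks can share one accelerator without colliding in the physical ID space.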
-
5.
Publication Number: US10409727B2
Publication Date: 2019-09-10
Application Number: US15475249
Application Date: 2017-03-31
Applicant: Intel Corporation
Inventor: Berkin Akin , Rajat Agarwal , Jong Soo Park , Christopher J. Hughes , Chiachen Chou
IPC: G06F12/08 , G06F12/0888 , G06F12/0811 , G06F12/04 , G06F12/0831 , G06F12/0886
Abstract: In an embodiment, a processor includes a sparse access buffer having a plurality of entries each to store for a memory access instruction to a particular address, address information and count information; and a memory controller to issue read requests to a memory, the memory controller including a locality controller to receive a memory access instruction having a no-locality hint and override the no-locality hint based at least in part on the count information stored in an entry of the sparse access buffer. Other embodiments are described and claimed.
-
6.
Publication Number: US10908906B2
Publication Date: 2021-02-02
Application Number: US16024530
Application Date: 2018-06-29
Applicant: Intel Corporation
Inventor: Berkin Akin
Abstract: An apparatus and method for a tensor permutation engine. The TPE may include a read address generation unit (AGU) to generate a plurality of read addresses for the plurality of tensor data elements in a first storage and a write AGU to generate a plurality of write addresses for the plurality of tensor data elements in the first storage. The TPE may include a shuffle register bank comprising a register to read tensor data elements from the plurality of read addresses generated by the read AGU, a first register bank to receive the tensor data elements, and a shift register to receive a lowest tensor data element from each bank in the first register bank, each tensor data element in the shift register to be written to a write address from the plurality of write addresses generated by the write AGU.
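The read and write AGUs can be sketched in software as two address generators driving one copy loop; a 2D transpose is used here as an example permutation. The stride pattern is an illustrative choice, since the patented engine supports general tensor permutations via its shuffle register bank.

```python
def read_agu(rows, cols):
    """Generate read addresses in row-major order."""
    return [r * cols + c for r in range(rows) for c in range(cols)]

def write_agu(rows, cols):
    """Generate write addresses so element (r, c) lands at (c, r)."""
    return [c * rows + r for r in range(rows) for c in range(cols)]

def permute(storage, rows, cols):
    """Move each element from its read address to its write address."""
    out = [0] * (rows * cols)
    for ra, wa in zip(read_agu(rows, cols), write_agu(rows, cols)):
        out[wa] = storage[ra]
    return out

print(permute([1, 2, 3, 4, 5, 6], rows=2, cols=3))  # -> [1, 4, 2, 5, 3, 6]
```

A dedicated engine pays off because address patterns like this defeat caches and vector loads; generating both streams in hardware keeps the datapath fully utilized.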
-
7.
Publication Number: US20190042920A1
Publication Date: 2019-02-07
Application Number: US15853282
Application Date: 2017-12-22
Applicant: Intel Corporation
Inventor: Berkin Akin , Seth Pugsley
IPC: G06N3/04 , G06F12/0868
Abstract: System configurations and techniques for implementation of a neural network in neuromorphic hardware with use of external memory resources are described herein. In an example, a system for processing spiking neural network operations includes: a plurality of neural processor clusters to maintain neurons of the neural network, with the clusters including circuitry to determine respective states of the neurons and internal memory to store the respective states of the neurons; and a plurality of axon processors to process synapse data of synapses in the neural network, with the processors including circuitry to retrieve synapse data of respective synapses from external memory, evaluate the synapse data based on a received spike message, and propagate another spike message to another neuron based on the synapse data. Further details for use and access of the external memory and processing configurations for such neural network operations are also disclosed.
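The split the abstract describes, neuron state in per-cluster internal memory and synapse data fetched from external memory by axon processors, can be sketched as follows. The connectivity, weights, and firing threshold are illustrative assumptions.

```python
EXTERNAL_MEMORY = {  # synapse lists keyed by source neuron (assumed data)
    "n0": [("n1", 0.6), ("n2", 0.5)],
}
THRESHOLD = 1.0  # assumed firing threshold

neuron_state = {"n1": 0.7, "n2": 0.2}  # membrane potentials (internal memory)

def axon_process(spike_source):
    """Fetch synapse data from external memory, evaluate it against the
    received spike, and propagate spikes from neurons that cross threshold."""
    fired = []
    for target, weight in EXTERNAL_MEMORY.get(spike_source, []):
        neuron_state[target] += weight
        if neuron_state[target] >= THRESHOLD:
            neuron_state[target] = 0.0   # reset after firing
            fired.append(target)
    return fired

print(axon_process("n0"))  # -> ['n1']: only n1 crosses the threshold
```

Keeping synapse lists external is what lets the network scale: synapse data dwarfs neuron state, so only the small, frequently updated state needs to live on-chip.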
-
8.
Publication Number: US20190042915A1
Publication Date: 2019-02-07
Application Number: US15941621
Application Date: 2018-03-30
Applicant: Intel Corporation
Inventor: Berkin Akin , Seth Pugsley
Abstract: Systems and techniques for procedural neural network synaptic connection modes are described herein. A synapse list header may be loaded based on a received spike indication. A spike target generator may then execute a generator function identified in the synapse list header to produce a spike message. Here, the generator function accepts a current synapse value as input to produce the spike message. The spike message may then be communicated to a neuron.
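The procedural mode can be sketched as a header that names a generator function instead of storing an explicit target list; the generator computes spike targets from the current synapse value. The fixed-stride fan-out generator and table contents below are illustrative assumptions.

```python
def stride_generator(synapse_value, base=10, stride=3, fanout=4):
    """Procedurally generate (target_neuron, weight) spike messages."""
    return [(base + i * stride, synapse_value) for i in range(fanout)]

GENERATORS = {"stride": stride_generator}   # generator table (assumed)

# A synapse list header: names the generator and carries the synapse value.
header = {"generator": "stride", "synapse_value": 0.25}

spikes = GENERATORS[header["generator"]](header["synapse_value"])
print(spikes)  # -> [(10, 0.25), (13, 0.25), (16, 0.25), (19, 0.25)]
```

The appeal of procedural connectivity is storage: a regular fan-out pattern collapses from a per-target list into one header plus a function identifier.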
-
9.
Publication Number: US20180190339A1
Publication Date: 2018-07-05
Application Number: US15394860
Application Date: 2016-12-30
Applicant: Intel Corporation
Inventor: Berkin Akin , Shigeki Tomishima
IPC: G11C11/4093 , G11C11/4091 , G11C11/4094
CPC classification number: G11C11/4093 , G11C7/22 , G11C11/4091 , G11C11/4094
Abstract: The present disclosure relates to a dynamic random access memory (DRAM) array, which comprises a plurality of bit lines connectable, respectively, to at least two row buffers of the DRAM array. The two row buffers are respectively connectable to data input/output (I/O) lines and are configured to electrically connect the two row buffers to the bit lines and data I/O lines in a mutually exclusive manner.
-
10.
Publication Number: US11681528B2
Publication Date: 2023-06-20
Application Number: US17131424
Application Date: 2020-12-22
Applicant: Intel Corporation
Inventor: Berkin Akin
CPC classification number: G06F9/3013 , G06F9/30032 , G06F9/30036 , G06F9/30043 , G06F9/30134 , G06F9/345
Abstract: An apparatus and method for a tensor permutation engine. The TPE may include a read address generation unit (AGU) to generate a plurality of read addresses for the plurality of tensor data elements in a first storage and a write AGU to generate a plurality of write addresses for the plurality of tensor data elements in the first storage. The TPE may include a shuffle register bank comprising a register to read tensor data elements from the plurality of read addresses generated by the read AGU, a first register bank to receive the tensor data elements, and a shift register to receive a lowest tensor data element from each bank in the first register bank, each tensor data element in the shift register to be written to a write address from the plurality of write addresses generated by the write AGU.