Invention Grant
- Patent Title: Low synch dedicated accelerator with in-memory computation capability
- Application No.: US16160482
- Application Date: 2018-10-15
- Publication No.: US11416165B2
- Publication Date: 2022-08-16
- Inventor: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma
- Applicant: INTEL CORPORATION
- Applicant Address: Santa Clara, CA, US
- Assignee: INTEL CORPORATION
- Current Assignee: INTEL CORPORATION
- Current Assignee Address: Santa Clara, CA, US
- Agency: Trop, Pruner & Hu, P.C.
- Main IPC: G06F12/00
- IPC: G06F12/00 ; G06F3/06 ; G06F12/1081 ; G06N3/04 ; G06F12/0802 ; G06N3/063 ; G06F12/0875 ; G06F12/0897

Abstract:
The present disclosure is directed to systems and methods of implementing a neural network using in-memory, bit-serial, mathematical operations performed by a pipelined SRAM architecture (bit-serial PISA) circuitry disposed in on-chip processor memory circuitry. The on-chip processor memory circuitry may include processor last level cache (LLC) circuitry. The bit-serial PISA circuitry is coupled to PISA memory circuitry via a relatively high-bandwidth connection to beneficially facilitate the storage and retrieval of layer weights by the bit-serial PISA circuitry during execution. Direct memory access (DMA) circuitry transfers the neural network model and input data from system memory to the bit-serial PISA memory and also transfers output data from the PISA memory circuitry to system memory circuitry. Thus, the systems and methods described herein beneficially leverage the on-chip processor memory circuitry to perform a relatively large number of vector/tensor calculations without burdening the processor circuitry.
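The abstract describes in-memory, bit-serial mathematical operations: inputs are consumed one bit plane at a time, and partial sums are shifted and accumulated according to each bit's significance. The following is an illustrative sketch of that style of computation, not the patented circuitry; the function name and unsigned 8-bit assumption are for demonstration only.

```python
# Illustrative sketch (not the patented PISA circuit): a bit-serial
# multiply-accumulate over unsigned integer vectors, processing one
# input bit plane per step as bit-serial hardware would.

def bit_serial_dot(weights, inputs, bits=8):
    """Dot product computed one input bit plane at a time."""
    acc = 0
    for b in range(bits):
        # Extract bit plane b from every input element.
        plane = [(x >> b) & 1 for x in inputs]
        # A 1-bit select of each weight (AND-like), then a reduction.
        partial = sum(w * p for w, p in zip(weights, plane))
        # Shift the partial sum by the bit's significance and accumulate.
        acc += partial << b
    return acc

# Agrees with the conventional dot product:
w = [3, 1, 4, 1]
x = [5, 9, 2, 6]
assert bit_serial_dot(w, x) == sum(a * b for a, b in zip(w, x))  # 38
```

In hardware, the per-bit reduction maps naturally onto SRAM arrays, which is why the abstract pairs bit-serial operation with in-memory compute: each cycle touches only single-bit operands, trading latency for very wide parallelism and low per-step synchronization.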
Public/Granted literature
- US20190056885A1 LOW SYNCH DEDICATED ACCELERATOR WITH IN-MEMORY COMPUTATION CAPABILITY Public/Granted day: 2019-02-21