-
Publication No.: US20230186067A1
Publication Date: 2023-06-15
Application No.: US18167366
Filing Date: 2023-02-10
Inventor: Surinder Pal SINGH , Thomas BOESCH , Giuseppe DESOLI
IPC: G06N3/063 , G06T7/62 , G06T7/11 , G06F16/901 , G06F9/38 , G06N3/08 , G06T15/08 , G06V10/82 , G06F18/22 , G06N3/045
CPC classification number: G06N3/063 , G06T7/62 , G06T7/11 , G06F16/9024 , G06F9/3877 , G06N3/08 , G06T15/08 , G06V10/82 , G06F18/22 , G06N3/045
Abstract: A device includes on-board memory, an applications processor, a digital signal processor (DSP) cluster, a configurable accelerator framework (CAF), and a communication bus architecture. The communication bus communicatively couples the applications processor, the DSP cluster, and the CAF to the on-board memory. The CAF includes a reconfigurable stream switch and data volume sculpting circuitry, which has an input and an output coupled to the reconfigurable stream switch. The data volume sculpting circuitry receives a series of frames, each formed as a two-dimensional (2D) data structure, and determines a first dimension and a second dimension of each frame in the series. Based on the first and second dimensions, the data volume sculpting circuitry determines, for each frame, the position and size of a region-of-interest to be extracted from that frame, and extracts from each frame the data that falls within the region-of-interest.
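The frame-by-frame region-of-interest extraction described in the abstract can be sketched in software. This is an illustrative model only: the centered-ROI rule and the function names are assumptions for the sketch, not details taken from the patent.

```python
# Hypothetical sketch of region-of-interest (ROI) extraction from a series
# of 2D frames. For each frame, a position and size are derived from the
# frame's two dimensions; here the ROI is simply centered (an assumption).

def extract_roi(frame, roi_h, roi_w):
    """Return the centered roi_h x roi_w sub-region of a 2D frame."""
    h, w = len(frame), len(frame[0])            # first and second dimensions
    top, left = (h - roi_h) // 2, (w - roi_w) // 2
    return [row[left:left + roi_w] for row in frame[top:top + roi_h]]

def sculpt_stream(frames, roi_h, roi_w):
    """Apply ROI extraction to every frame in the series."""
    return [extract_roi(f, roi_h, roi_w) for f in frames]

frame = [[r * 4 + c for c in range(4)] for r in range(4)]
roi = extract_roi(frame, 2, 2)   # the 2x2 center of the 4x4 frame
```

In hardware the same selection would be done on the fly as data streams through the switch, rather than on a buffered frame as here.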
-
Publication No.: US20230135185A1
Publication Date: 2023-05-04
Application No.: US18055245
Filing Date: 2022-11-14
Inventor: Surinder Pal SINGH , Thomas BOESCH , Giuseppe DESOLI
Abstract: A convolutional neural network includes a pooling unit. The pooling unit performs pooling operations between convolution layers of the convolutional neural network. The pooling unit includes hardware blocks that promote computational and area efficiency in the convolutional neural network.
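The pooling operation performed between convolution layers can be illustrated with a minimal software model. The 2x2 window, stride of 2, and max-pooling choice below are common defaults assumed for the sketch; the patent's hardware blocks are not specified here.

```python
# Reference sketch of 2x2 max pooling on a 2D feature map, the kind of
# operation a pooling unit applies between convolution layers. Pure
# Python, stride 2, no padding (illustrative assumptions).

def max_pool_2x2(fmap):
    """Downsample a 2D feature map by taking the max of each 2x2 window."""
    h, w = len(fmap), len(fmap[0])
    return [
        [max(fmap[r][c], fmap[r][c + 1], fmap[r + 1][c], fmap[r + 1][c + 1])
         for c in range(0, w - 1, 2)]
        for r in range(0, h - 1, 2)
    ]

fmap = [[1, 3, 2, 0],
        [4, 2, 1, 5],
        [0, 6, 7, 1],
        [2, 8, 3, 9]]
pooled = max_pool_2x2(fmap)   # [[4, 5], [8, 9]]
```

A dedicated hardware block computes each window comparison with a handful of comparators instead of the general-purpose loop shown here, which is where the area and computation savings come from.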
-
Publication No.: US20210264250A1
Publication Date: 2021-08-26
Application No.: US16799671
Filing Date: 2020-02-24
Inventor: Surinder Pal SINGH , Thomas BOESCH , Giuseppe DESOLI
Abstract: A convolutional neural network includes a pooling unit. The pooling unit performs pooling operations between convolution layers of the convolutional neural network. The pooling unit includes hardware blocks that promote computational and area efficiency in the convolutional neural network.
-
Publication No.: US20190266784A1
Publication Date: 2019-08-29
Application No.: US16280963
Filing Date: 2019-02-20
Inventor: Surinder Pal SINGH , Thomas BOESCH , Giuseppe DESOLI
Abstract: Embodiments of a device include on-board memory, an applications processor, a digital signal processor (DSP) cluster, a configurable accelerator framework (CAF), and at least one communication bus architecture. The communication bus communicatively couples the applications processor, the DSP cluster, and the CAF to the on-board memory. The CAF includes a reconfigurable stream switch and a data volume sculpting unit, which has an input and an output coupled to the reconfigurable stream switch. The data volume sculpting unit has a counter, a comparator, and a controller. The data volume sculpting unit is arranged to receive a stream of feature map data that forms a three-dimensional (3D) feature map. The 3D feature map is formed as a plurality of two-dimensional (2D) data planes. The data volume sculpting unit is also arranged to identify a 3D volume within the 3D feature map that is dimensionally smaller than the 3D feature map and isolate data from the 3D feature map that is within the 3D volume for processing in a deep learning algorithm.
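The 3D-volume isolation described above, where a smaller volume is cut out of a feature map formed as a stack of 2D planes, can be sketched as a slicing operation. The origin/size parameterization is a hypothetical interface for illustration only.

```python
# Illustrative sketch of isolating a dimensionally smaller 3D volume from
# a 3D feature map stored as a list of 2D data planes. The (origin, size)
# parameters are assumptions, not the patent's register layout.

def isolate_volume(feature_map, origin, size):
    """Slice a (depth, height, width) sub-volume starting at origin."""
    d0, r0, c0 = origin
    dd, dh, dw = size
    return [
        [row[c0:c0 + dw] for row in plane[r0:r0 + dh]]
        for plane in feature_map[d0:d0 + dd]
    ]

# A 3x3x3 feature map where each value encodes its coordinates.
fm = [[[100 * d + 10 * r + c for c in range(3)] for r in range(3)]
      for d in range(3)]
vol = isolate_volume(fm, origin=(1, 1, 1), size=(2, 2, 2))
```

In the streaming hardware, the counter and comparator the abstract mentions would track the current position in the incoming plane and gate data through only while it lies inside the target volume.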
-
Publication No.: US20240330399A1
Publication Date: 2024-10-03
Application No.: US18194108
Filing Date: 2023-03-31
Applicant: STMicroelectronics International N.V.
Inventor: Carmine CAPPETTA , Surinder Pal SINGH , Giuseppe DESOLI , Thomas BOESCH
IPC: G06F17/15
CPC classification number: G06F17/15
Abstract: A neural network includes an internal storage unit. The internal storage unit stores feature data received from a memory external to the neural network. The internal storage unit reads the feature data to a hardware accelerator of the neural network. The internal storage unit adapts a storage pattern of the feature data and a read pattern of the feature data to enhance the efficiency of the hardware accelerator.
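The idea of adapting a storage pattern and a read pattern so that data leaves the buffer in the order the accelerator consumes it can be illustrated with a toy reordering. The row-major/column-major pairing below is one possible adaptation chosen for the sketch; the actual patterns are not specified by the abstract.

```python
# Hypothetical illustration of pattern adaptation in an internal storage
# unit: feature data arrives and is stored in one order (row-major here),
# and is read out in a different order (column-major here) matched to how
# a hardware accelerator consumes it. Both patterns are assumptions.

def store_row_major(feature):
    """Flatten a 2D feature map into a linear buffer, row by row."""
    return [v for row in feature for v in row]

def read_column_major(buffer, height, width):
    """Read the linear buffer back column by column."""
    return [buffer[r * width + c] for c in range(width) for r in range(height)]

feature = [[1, 2, 3],
           [4, 5, 6]]
buf = store_row_major(feature)          # [1, 2, 3, 4, 5, 6]
stream = read_column_major(buf, 2, 3)   # [1, 4, 2, 5, 3, 6]
```

Matching the read order to the accelerator's consumption order avoids stalls and redundant external-memory fetches, which is the efficiency gain the abstract points at.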
-
Publication No.: US20210081773A1
Publication Date: 2021-03-18
Application No.: US17023144
Filing Date: 2020-09-16
Inventor: Nitin CHAWLA , Giuseppe DESOLI , Manuj AYODHYAWASI , Thomas BOESCH , Surinder Pal SINGH
IPC: G06N3/063 , G06F1/08 , G06F1/324 , G06F9/50 , G06N3/08 , G06F1/3228 , G06F1/3296
Abstract: Systems and devices are provided to increase computational and/or power efficiency for one or more neural networks via a computationally driven, closed-loop dynamic clock control. A clock frequency control word is generated based on information indicative of a current frame execution rate of a processing task of the neural network and on a reference clock signal. A clock generator generates the clock signal of the neural network based on the clock frequency control word. A reference frequency may be used to generate the clock frequency control word, and the reference frequency may be based on information indicative of the sparsity of data of a training frame.
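The closed loop described above, where a measured frame execution rate drives a clock frequency control word, can be modeled with a simple bang-bang controller. The step size, word width, and proportional clock model are illustrative assumptions, not the patent's control law.

```python
# Simplified, hypothetical model of closed-loop dynamic clock control: the
# measured frame rate is compared against a target rate and the clock
# frequency control word is nudged up or down so the generated clock
# tracks the workload. Gains and word widths are assumptions.

def update_control_word(ctrl_word, measured_rate, target_rate,
                        step=1, max_word=255):
    """Return the next bounded clock-frequency control word."""
    if measured_rate < target_rate:        # falling behind: speed up
        ctrl_word = min(max_word, ctrl_word + step)
    elif measured_rate > target_rate:      # ahead of target: save power
        ctrl_word = max(0, ctrl_word - step)
    return ctrl_word

def clock_from_word(ctrl_word, ref_hz=1_000_000, max_word=255):
    """Model a clock generator scaling a reference clock by the word."""
    return ref_hz * ctrl_word // max_word

word = 128
for measured in (25, 25, 30, 30, 30):      # measured frames/s, target 30
    word = update_control_word(word, measured, target_rate=30)
```

Because the loop reacts to the observed execution rate rather than a fixed schedule, sparse frames that execute quickly let the clock (and power) drop without missing the frame-rate target.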
-
Publication No.: US20190266485A1
Publication Date: 2019-08-29
Application No.: US16280960
Filing Date: 2019-02-20
Inventor: Surinder Pal SINGH , Giuseppe DESOLI , Thomas BOESCH
Abstract: Embodiments of a device include an integrated circuit, a reconfigurable stream switch formed in the integrated circuit, and an arithmetic unit coupled to the reconfigurable stream switch. The arithmetic unit has a plurality of inputs and at least one output, and the arithmetic unit is solely dedicated to performance of a plurality of parallel operations. Each one of the plurality of parallel operations carries out a portion of the formula: output=AX+BY+C.
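The formula the arithmetic unit computes, output = AX + BY + C, is easy to model in software, with each parallel operation handling one lane. The lane-wise organization below is an assumption for illustration.

```python
# Minimal software model of an arithmetic unit dedicated to parallel
# operations that each compute a portion of output = AX + BY + C.
# One function models a single lane; the other fans the same A, B, C
# across parallel input lanes (an assumed organization).

def axbyc(a, x, b, y, c):
    """One lane: compute A*X + B*Y + C."""
    return a * x + b * y + c

def axbyc_lanes(a, xs, b, ys, c):
    """Apply the same coefficients across parallel lanes of X and Y."""
    return [axbyc(a, x, b, y, c) for x, y in zip(xs, ys)]

out = axbyc_lanes(2, [1, 2, 3], 3, [4, 5, 6], 1)   # [15, 20, 25]
```

Dedicating silicon solely to this one fused multiply-add shape, rather than a general ALU, is what lets the unit run many such lanes in parallel cheaply.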
-
Publication No.: US20240330660A1
Publication Date: 2024-10-03
Application No.: US18426128
Filing Date: 2024-01-29
Applicant: STMicroelectronics International N.V.
Inventor: Carmine CAPPETTA , Surinder Pal SINGH , Giuseppe DESOLI , Thomas BOESCH , Michele ROSSI
IPC: G06N3/0464 , G06N3/063
CPC classification number: G06N3/0464 , G06N3/063
Abstract: A neural network includes an internal storage unit. The internal storage unit stores feature data received from a memory external to the neural network. The internal storage unit reads the feature data to a hardware accelerator of the neural network. The internal storage unit adapts a storage pattern of the feature data and a read pattern of the feature data to enhance the efficiency of the hardware accelerator.
-
Publication No.: US20140062460A1
Publication Date: 2014-03-06
Application No.: US14078118
Filing Date: 2013-11-12
Applicant: STMicroelectronics International N.V.
Inventor: Surinder Pal SINGH , Kaushik SAHA
IPC: G01R19/22
CPC classification number: H04L67/025 , G01R19/22 , G01R21/133 , H04L41/32
Abstract: A system for power measurement in an electronic device includes a sensing unit, an analog-to-digital converter (ADC) and a controller. The sensing unit senses voltage across a power source and modulates a carrier signal based on the sensed voltage. The ADC converts a combination of the modulated carrier signal and audio signals received by the electronic device to generate a digitized combined signal and provides the digitized combined signal to the controller. The controller separates digitized modulated carrier signal and digitized audio signals. The digitized modulated carrier signal is demodulated to generate an output signal that provides a measure of the power consumed by the electronic device.
-
Publication No.: US20250053807A1
Publication Date: 2025-02-13
Application No.: US18779807
Filing Date: 2024-07-22
Inventor: Danilo Pietro PAU , Surinder Pal SINGH , Fabrizio Maria Aymone
Abstract: The present disclosure relates to a method of training a neural network using a circuit comprising a memory and a processing device. An exemplary method comprises: performing a first forward inference pass through the neural network based on input features to generate first activations, generating an error based on a target value, and storing the error to the memory; and performing, for each layer of the neural network: a modulated forward inference pass; before, during, or after the modulated forward inference pass, a second forward inference pass based on the input features to regenerate one or more of the first activations; and updating one or more weights in the neural network based on the modulated activations and the one or more regenerated first activations.
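The two-pass scheme in the abstract can be loosely sketched for a single linear layer: a first pass yields activations and an error, a modulated pass yields modulated activations, and the update uses the gap between the modulated and regenerated activations. The modulation rule used here (adding a fixed random projection of the error to the input) is an assumption borrowed from forward-only training methods in general, not a detail taken from the patent.

```python
# Loose, hypothetical sketch of error-modulated forward-pass training for
# one linear layer. The random-projection modulation and the update rule
# are illustrative assumptions; only the pass structure follows the text.

import random

random.seed(0)

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

n_in, n_out, lr = 3, 2, 0.1
W = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
F = [[random.uniform(-0.5, 0.5) for _ in range(n_out)] for _ in range(n_in)]

x = [1.0, 0.5, -0.2]
target = [1.0, 0.0]

# First forward pass: activations and error (error stored, per the text).
a1 = matvec(W, x)
err = [a - t for a, t in zip(a1, target)]

# Modulated forward pass: input perturbed by a projection of the error.
x_mod = [xi + pi for xi, pi in zip(x, matvec(F, err))]
a_mod = matvec(W, x_mod)

# Second forward pass regenerates the first activations; the update uses
# the gap between modulated and regenerated activations.
a_regen = matvec(W, x)
for i in range(n_out):
    for j in range(n_in):
        W[i][j] -= lr * (a_mod[i] - a_regen[i]) * x_mod[j]
```

The practical appeal of such forward-only schemes on a memory-limited circuit is that no backward pass or stored activation stack is needed; only the error and the current layer's data live in memory at once.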
-