-
Publication No.: US10909418B2
Publication Date: 2021-02-02
Application No.: US16884232
Filing Date: 2020-05-27
Inventor: Sehwan Lee , Leesup Kim , Hyeonuk Kim , Jaehyeong Sim , Yeongjae Choi
Abstract: A processor-implemented neural network method includes: obtaining, from a memory, data of an input feature map and kernels having a binary-weight, wherein the kernels are to be processed in a layer of a neural network; decomposing each of the kernels into a first type sub-kernel reconstructed with weights of a same sign, and a second type sub-kernel for correcting a difference between a respective kernel, among the kernels, and the first type sub-kernel; performing a convolution operation by using the input feature map and the first type sub-kernels and the second type sub-kernels decomposed from each of the kernels; and obtaining an output feature map by combining results of the convolution operation.
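The decomposition relies on the linearity of convolution: if the first type (same-sign) sub-kernel and the second type (correction) sub-kernel sum to the original kernel, their two convolution results sum to the original result. The NumPy sketch below only illustrates that identity; the function names and the choice of an all-ones base sub-kernel (so the correction is nonzero only where the binary weight is -1) are assumptions for illustration, not the claimed implementation.

    import numpy as np

    def conv2d_valid(x, k):
        # Plain 'valid' 2-D sliding-window sum of products, sufficient for the sketch.
        kh, kw = k.shape
        out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
        return out

    def decompose_kernel(kernel):
        # First type sub-kernel: weights of a same sign (here, all +1).
        base = np.ones_like(kernel)
        # Second type sub-kernel: corrects the difference between the kernel
        # and the first type sub-kernel (nonzero only where the weight is -1).
        correction = kernel - base
        return base, correction

    x = np.arange(25, dtype=float).reshape(5, 5)           # toy input feature map
    kernel = np.array([[ 1, -1,  1],
                       [-1,  1, -1],
                       [ 1,  1, -1]], dtype=float)         # binary-weight kernel
    base, corr = decompose_kernel(kernel)
    combined = conv2d_valid(x, base) + conv2d_valid(x, corr)
    assert np.allclose(combined, conv2d_valid(x, kernel))  # results combine exactly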
-
Publication No.: US10699160B2
Publication Date: 2020-06-30
Application No.: US16110664
Filing Date: 2018-08-23
Inventor: Sehwan Lee , Leesup Kim , Hyeonuk Kim , Jaehyeong Sim , Yeongjae Choi
Abstract: A processor-implemented neural network method includes: obtaining, from a memory, data of an input feature map and kernels having a binary-weight, wherein the kernels are to be processed in a layer of a neural network; decomposing each of the kernels into a first type sub-kernel reconstructed with weights of a same sign, and a second type sub-kernel for correcting a difference between a respective kernel, among the kernels, and the first type sub-kernel; performing a convolution operation by using the input feature map and the first type sub-kernels and the second type sub-kernels decomposed from each of the kernels; and obtaining an output feature map by combining results of the convolution operation.
-
Publication No.: US20200026979A1
Publication Date: 2020-01-23
Application No.: US16552850
Filing Date: 2019-08-27
Applicant: Samsung Electronics Co., Ltd.
Inventor: Ilia Ovsiannikov , Ali Shafiee Ardestani , Joseph H. Hassoun , Lei Wang , Sehwan Lee , JoonHo Song , Jun-Woo Jang , Yibing Michelle Wang , Yuecheng Li
Abstract: A neural processor. In some embodiments, the processor includes a first tile, a second tile, a memory, and a bus. The bus may be connected to the memory, the first tile, and the second tile. The first tile may include: a first weight register, a second weight register, an activations buffer, a first multiplier, and a second multiplier. The activations buffer may be configured to include: a first queue connected to the first multiplier and a second queue connected to the second multiplier. The first queue may include a first register and a second register adjacent to the first register, the first register being an output register of the first queue. The first tile may be configured: in a first state: to multiply, in the first multiplier, a first weight by an activation from the output register of the first queue, and in a second state: to multiply, in the first multiplier, the first weight by an activation from the second register of the first queue.
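The two states describe where the multiplier takes its activation from: the queue's output register, or the register adjacent to it. A common motivation for such a bypass is skipping a zero activation without a stall; the short behavioral sketch below models that interpretation in software. The class name, the zero-skipping policy, and the single weight and queue are assumptions for illustration only, not the claimed hardware.

    from collections import deque

    class TileSketch:
        # Behavioral model of one multiplier fed by one activations queue.
        def __init__(self, weight):
            self.weight = weight          # first weight register
            self.queue = deque()          # first queue of the activations buffer

        def push_activation(self, a):
            self.queue.append(a)

        def multiply(self):
            front = self.queue[0]         # output register of the first queue
            if front != 0 or len(self.queue) < 2:
                self.queue.popleft()      # first state: use the output register
                return self.weight * front
            # Second state: the output register holds a zero, so take the
            # activation from the adjacent (second) register instead.
            self.queue.popleft()          # drop the zero
            return self.weight * self.queue.popleft()

    tile = TileSketch(weight=3)
    for a in [2, 0, 5, 4]:
        tile.push_activation(a)
    print([tile.multiply() for _ in range(3)])   # [6, 15, 12] under this policy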
-
Publication No.: US12271809B2
Publication Date: 2025-04-08
Application No.: US17848007
Filing Date: 2022-06-23
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Namjoon Kim , Sehwan Lee , Junwoo Jang
Abstract: A neural network apparatus includes a plurality of node buffers connected to a node lane and configured to store input node data by a predetermined bit size; a plurality of weight buffers connected to a weight lane and configured to store weights; and one or more processors configured to: generate first and second split data by splitting the input node data by the predetermined bit size, store the first and second split data in the node buffers, output the first split data to an operation circuit for a neural network operation on an index-by-index basis, shift the second split data, and output the second split data to the operation circuit on the index-by-index basis.
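The shift step is what lets two narrow slices reproduce the full-width arithmetic: if an input node value is split into lower and upper bit fields, the upper field's contribution must be shifted back up by the split width before the partial results are combined. The sketch below illustrates only that arithmetic identity; the bit size, function names, and the software multiply-accumulate loop are assumptions for illustration, not the apparatus itself.

    BITS = 4                                    # assumed predetermined bit size

    def split(value, bits=BITS):
        low = value & ((1 << bits) - 1)         # first split data: lower bits
        high = value >> bits                    # second split data: upper bits
        return low, high

    def mac_with_split(nodes, weights, bits=BITS):
        acc = 0
        for n, w in zip(nodes, weights):        # index-by-index over the buffers
            low, high = split(n, bits)
            acc += w * low                      # operate on the first split data
            acc += w * (high << bits)           # second split data, shifted back up
        return acc

    nodes = [0b10110101, 0b01101100, 0b11110001]      # 8-bit input node data
    weights = [3, -2, 5]
    assert mac_with_split(nodes, weights) == sum(w * n for n, w in zip(nodes, weights))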
-
Publication No.: US11875251B2
Publication Date: 2024-01-16
Application No.: US16244644
Filing Date: 2019-01-10
Applicant: Samsung Electronics Co., Ltd.
Inventor: Junhaeng Lee , Hyunsun Park , Sehwan Lee , Seungwon Lee
IPC: G06N3/08 , G06N3/0495
CPC classification number: G06N3/08 , G06N3/0495
Abstract: A neural network method and apparatus is provided. A processor-implemented neural network method includes determining, based on a determined number of classes of input data, a precision for a neural network layer outputting an operation result, and processing parameters of the layer according to the determined precision.
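As a rough illustration of tying output-layer precision to the class count, the sketch below picks a bit width from the number of classes and then quantizes that layer's parameters to it. The specific mapping (log2 of the class count plus a fixed margin) and the symmetric uniform quantizer are assumptions for illustration, not the rule determined by the claimed method.

    import math
    import numpy as np

    def choose_precision(num_classes, margin_bits=4):
        # Assumed heuristic: more classes -> finer output resolution needed.
        return max(2, math.ceil(math.log2(num_classes)) + margin_bits)

    def quantize(weights, bits):
        # Symmetric uniform quantization of a weight tensor to `bits` bits.
        qmax = 2 ** (bits - 1) - 1
        scale = np.max(np.abs(weights)) / qmax
        return np.round(weights / scale) * scale

    layer_weights = np.random.randn(128, 10).astype(np.float32)   # toy output layer
    bits = choose_precision(num_classes=10)                       # 8 bits here
    quantized = quantize(layer_weights, bits)
    print(bits, float(np.max(np.abs(quantized - layer_weights))))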
-
Publication No.: US11783161B2
Publication Date: 2023-10-10
Application No.: US16446610
Filing Date: 2019-06-19
Applicant: Samsung Electronics Co., Ltd.
Inventor: Ilia Ovsiannikov , Ali Shafiee Ardestani , Joseph H. Hassoun , Lei Wang , Sehwan Lee , JoonHo Song , Jun-Woo Jang , Yibing Michelle Wang , Yuecheng Li
CPC classification number: G06N3/04 , G06F17/153 , G06F17/16 , G06N3/08 , G06T9/002 , G06F9/3001
Abstract: A neural processor. In some embodiments, the processor includes a first tile, a second tile, a memory, and a bus. The bus may be connected to the memory, the first tile, and the second tile. The first tile may include: a first weight register, a second weight register, an activations buffer, a first multiplier, and a second multiplier. The activations buffer may be configured to include: a first queue connected to the first multiplier and a second queue connected to the second multiplier. The first queue may include a first register and a second register adjacent to the first register, the first register being an output register of the first queue. The first tile may be configured: in a first state: to multiply, in the first multiplier, a first weight by an activation from the output register of the first queue, and in a second state: to multiply, in the first multiplier, the first weight by an activation from the second register of the first queue.
-
Publication No.: US11775802B2
Publication Date: 2023-10-03
Application No.: US16552945
Filing Date: 2019-08-27
Applicant: Samsung Electronics Co., Ltd.
Inventor: Ilia Ovsiannikov , Ali Shafiee Ardestani , Joseph H. Hassoun , Lei Wang , Sehwan Lee , JoonHo Song , Jun-Woo Jang , Yibing Michelle Wang , Yuecheng Li
CPC classification number: G06N3/04 , G06F17/153 , G06F17/16 , G06N3/08 , G06T9/002 , G06F9/3001
Abstract: A neural processor. In some embodiments, the processor includes a first tile, a second tile, a memory, and a bus. The bus may be connected to the memory, the first tile, and the second tile. The first tile may include: a first weight register, a second weight register, an activations buffer, a first multiplier, and a second multiplier. The activations buffer may be configured to include: a first queue connected to the first multiplier and a second queue connected to the second multiplier. The first queue may include a first register and a second register adjacent to the first register, the first register being an output register of the first queue. The first tile may be configured: in a first state: to multiply, in the first multiplier, a first weight by an activation from the output register of the first queue, and in a second state: to multiply, in the first multiplier, the first weight by an activation from the second register of the first queue.
-
Publication No.: US11775801B2
Publication Date: 2023-10-03
Application No.: US16552850
Filing Date: 2019-08-27
Applicant: Samsung Electronics Co., Ltd.
Inventor: Ilia Ovsiannikov , Ali Shafiee Ardestani , Joseph H. Hassoun , Lei Wang , Sehwan Lee , JoonHo Song , Jun-Woo Jang , Yibing Michelle Wang , Yuecheng Li
CPC classification number: G06N3/04 , G06F17/153 , G06F17/16 , G06N3/08 , G06T9/002 , G06F9/3001
Abstract: A neural processor. In some embodiments, the processor includes a first tile, a second tile, a memory, and a bus. The bus may be connected to the memory, the first tile, and the second tile. The first tile may include: a first weight register, a second weight register, an activations buffer, a first multiplier, and a second multiplier. The activations buffer may be configured to include: a first queue connected to the first multiplier and a second queue connected to the second multiplier. The first queue may include a first register and a second register adjacent to the first register, the first register being an output register of the first queue. The first tile may be configured: in a first state: to multiply, in the first multiplier, a first weight by an activation from the output register of the first queue, and in a second state: to multiply, in the first multiplier, the first weight by an activation from the second register of the first queue.
-
Publication No.: US11423251B2
Publication Date: 2022-08-23
Application No.: US16733314
Filing Date: 2020-01-03
Applicant: Samsung Electronics Co., Ltd.
Inventor: Dinesh Kumar Yadav , Ankur Deshwal , Saptarsi Das , Junwoo Jang , Sehwan Lee
Abstract: A method of performing convolution in a neural network with a variable dilation rate is provided. The method includes receiving a size of a first kernel and a dilation rate, determining a size of one or more disintegrated kernels based on the size of the first kernel, a baseline architecture of a memory, and the dilation rate, and determining an address of one or more blocks of an input image based on the dilation rate and one or more parameters associated with a size of the input image and the memory. Thereafter, the one or more blocks of the input image and the one or more disintegrated kernels are fetched from the memory, and an output image is obtained based on convolution of each of the one or more disintegrated kernels with the one or more blocks of the input image.
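The equivalence such a scheme relies on is that a convolution with dilation rate d over the full image equals ordinary (dilation-1) convolutions over the d*d phase-shifted sub-blocks of the image, interleaved back into the output. The NumPy sketch below checks only that equivalence; the function names and the naive reference loop are illustrative, and it does not model the address computation or the memory's baseline architecture.

    import numpy as np

    def dilated_conv2d(x, k, d):
        # Direct 'valid' 2-D convolution with dilation rate d (reference).
        kh, kw = k.shape
        out = np.zeros((x.shape[0] - d * (kh - 1), x.shape[1] - d * (kw - 1)))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = sum(x[i + u * d, j + v * d] * k[u, v]
                                for u in range(kh) for v in range(kw))
        return out

    def dilated_conv2d_by_blocks(x, k, d):
        # Same result from ordinary convolutions over phase-shifted blocks.
        kh, kw = k.shape
        out = np.zeros((x.shape[0] - d * (kh - 1), x.shape[1] - d * (kw - 1)))
        for p in range(d):
            for q in range(d):
                block = x[p::d, q::d]                        # one fetched block
                out[p::d, q::d] = dilated_conv2d(block, k, 1)
        return out

    x = np.random.rand(11, 11)
    k = np.random.rand(3, 3)
    assert np.allclose(dilated_conv2d(x, k, 2), dilated_conv2d_by_blocks(x, k, 2))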
-
Publication No.: US10885433B2
Publication Date: 2021-01-05
Application No.: US16107717
Filing Date: 2018-08-21
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Joonho Song , Sehwan Lee , Junwoo Jang
Abstract: A neural network apparatus configured to perform a deconvolution operation includes a memory configured to store a first kernel; and a processor configured to: obtain, from the memory, the first kernel; calculate a second kernel by adjusting an arrangement of matrix elements comprised in the first kernel; generate sub-kernels by dividing the second kernel; perform a convolution operation between an input feature map and the sub-kernels using a convolution operator; and generate an output feature map, as a deconvolution of the input feature map, by merging results of the convolution operation.
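The sub-kernel step exploits the fact that a strided transposed convolution (deconvolution) can be computed as several ordinary convolutions, one per output phase, whose results are then interleaved (merged) into the output feature map. The 1-D NumPy sketch below checks only that identity; the 1-D setting, the function names, and the omission of the element-rearrangement step are simplifications for illustration, not the claimed apparatus.

    import numpy as np

    def transposed_conv1d(x, k, s):
        # Reference deconvolution: scatter-accumulate each input times the kernel.
        L, K = len(x), len(k)
        y = np.zeros(s * (L - 1) + K)
        for m in range(L):
            y[s * m : s * m + K] += x[m] * k
        return y

    def transposed_conv1d_by_subkernels(x, k, s):
        # Same result: divide the kernel into s sub-kernels by phase, run an
        # ordinary (full) convolution per sub-kernel, and merge by interleaving.
        L, K = len(x), len(k)
        y = np.zeros(s * (L - 1) + K)
        for p in range(s):
            sub = k[p::s]                        # one sub-kernel of the divided kernel
            y[p::s] = np.convolve(x, sub)        # ordinary convolution operator
        return y

    x = np.random.rand(6)                        # toy input feature map (1-D)
    k = np.random.rand(4)                        # first kernel (1-D)
    assert np.allclose(transposed_conv1d(x, k, 2),
                       transposed_conv1d_by_subkernels(x, k, 2))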
-