-
Publication No.: US20250077182A1
Publication Date: 2025-03-06
Application No.: US18953922
Filing Date: 2024-11-20
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Jinook SONG , Daekyeung KIM , Junseok PARK , Joonho SONG , Sehwan LEE , Junwoo JANG , Yunkyo CHO
Abstract: An arithmetic apparatus includes a first operand holding circuit configured to output a first operand according to a clock signal, generate an indicator signal based on bit values of high-order bit data including a most significant bit of the first operand, and gate the clock signal based on the indicator signal, the clock signal being applied to a flip-flop latching the high-order bit data of the first operand; a second operand holding circuit configured to output a second operand according to the clock signal; and an arithmetic circuit configured to perform data gating on the high-order bit data of the first operand based on the indicator signal and output an operation result by performing an operation using a modified first operand resulting from the data gating and the second operand.
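The gating idea in this abstract can be modeled in software: when the high-order bits of the first operand (including the most significant bit) are all zero, an indicator signal is raised and those bits are gated, so the multiply effectively runs on a narrower operand while producing the same result. A minimal sketch for unsigned operands; the widths and function names are illustrative, not taken from the patent:

```python
def gate_high_bits(operand: int, width: int = 16, high: int = 8):
    """Return (modified_operand, indicator). The indicator is set when the
    high-order `high` bits of a `width`-bit unsigned operand, including the
    MSB, are all zero, so the flip-flops latching them can stay clock-gated."""
    mask = ((1 << high) - 1) << (width - high)   # selects the high-order bits
    indicator = (operand & mask) == 0            # indicator signal
    modified = operand & ~mask                   # data-gated operand
    return (modified if indicator else operand), indicator

def gated_multiply(a: int, b: int, width: int = 16) -> int:
    """Multiply using the (possibly gated) first operand. When the indicator
    is set the numeric result is unchanged; in hardware, the point is that
    the high-order datapath would not toggle, saving power."""
    a_mod, _ = gate_high_bits(a, width)
    return a_mod * b
```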
-
Publication No.: US20230252298A1
Publication Date: 2023-08-10
Application No.: US18304574
Filing Date: 2023-04-21
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Joonho SONG , Sehwan LEE , Junwoo JANG
CPC classification number: G06N3/08 , G06F17/153 , G06N3/045 , G06N3/04
Abstract: A neural network apparatus configured to perform a deconvolution operation includes a memory configured to store a first kernel; and a processor configured to: obtain, from the memory, the first kernel; calculate a second kernel by adjusting an arrangement of matrix elements comprised in the first kernel; generate sub-kernels by dividing the second kernel; perform a convolution operation between an input feature map and the sub-kernels using a convolution operator; and generate an output feature map, as a deconvolution of the input feature map, by merging results of the convolution operation.
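The kernel transformation described here can be sketched in a few lines: rearrange the first kernel's elements (a 180-degree rotation is one common choice when turning deconvolution into convolutions) to form the second kernel, then split it into stride × stride sub-kernels, one per output phase. A sketch assuming NumPy and a square kernel; the rotation choice and names are illustrative, not taken from the patent:

```python
import numpy as np

def make_sub_kernels(kernel: np.ndarray, stride: int = 2):
    """Adjust the arrangement of the first kernel's matrix elements
    (180-degree rotation here) to form the second kernel, then divide it
    into stride*stride sub-kernels by sampling every stride-th element
    for each output phase (i, j)."""
    second = kernel[::-1, ::-1]               # second kernel: rotated elements
    return [second[i::stride, j::stride]      # sub-kernel for phase (i, j)
            for i in range(stride) for j in range(stride)]
```

Convolving the input feature map with each sub-kernel and interleaving the per-phase results then yields the deconvolution output without inserting zeros into the input.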
-
Publication No.: US20210365194A1
Publication Date: 2021-11-25
Application No.: US17394447
Filing Date: 2021-08-05
Applicant: Samsung Electronics Co., Ltd.
Inventor: Joonho SONG
Abstract: A method of allocating a memory for driving a neural network including obtaining first capacity information of a space to store an input feature map of a first layer from among the layers of the neural network, and second capacity information of a space to store an output feature map of the first layer, and allocating a first storage space to store the input feature map in the memory based on an initial address value of the memory and the first capacity information and a second storage space to store the output feature map in the memory based on a last address value of the memory and the second capacity information.
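The allocation scheme in the abstract, the input feature map placed from the memory's initial address and the output feature map placed back from its last address, can be sketched directly (sizes and names are illustrative):

```python
def allocate_feature_maps(mem_size: int, in_capacity: int, out_capacity: int):
    """Return (start_address, size) for the input and the output feature map.
    The first storage space grows from the initial address; the second is
    placed so that it ends at the last address of the memory."""
    if in_capacity + out_capacity > mem_size:
        raise MemoryError("input and output feature maps do not fit")
    first_space = (0, in_capacity)                          # from initial address
    second_space = (mem_size - out_capacity, out_capacity)  # back from last address
    return first_space, second_space
```

A side effect of two-ended placement is that the free region remains a single contiguous block in the middle of the memory.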
-
Publication No.: US20210174179A1
Publication Date: 2021-06-10
Application No.: US16989391
Filing Date: 2020-08-10
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Jinook SONG , Daekyeung KIM , Junseok PARK , Joonho SONG , Sehwan LEE , Junwoo JANG , Yunkyo CHO
Abstract: An arithmetic apparatus includes a first operand holding circuit configured to output a first operand according to a clock signal, generate an indicator signal based on bit values of high-order bit data including a most significant bit of the first operand, and gate the clock signal based on the indicator signal, the clock signal being applied to a flip-flop latching the high-order bit data of the first operand; a second operand holding circuit configured to output a second operand according to the clock signal; and an arithmetic circuit configured to perform data gating on the high-order bit data of the first operand based on the indicator signal and output an operation result by performing an operation using a modified first operand resulting from the data gating and the second operand.
-
Publication No.: US20190171930A1
Publication Date: 2019-06-06
Application No.: US16158660
Filing Date: 2018-10-12
Applicant: Samsung Electronics Co., Ltd.
Inventor: Sehwan LEE , Namjoon KIM , Joonho SONG , Junwoo JANG
Abstract: Provided are a method and apparatus for processing a convolution operation in a neural network, the method includes determining operands from input feature maps and kernels, on which a convolution operation is to be performed, dispatching operand pairs combined from the determined operands to multipliers in a convolution operator, generating outputs by performing addition and accumulation operations with respect to results of multiplication operations, and obtaining pixel values of output feature maps corresponding to a result of the convolution operation based on the generated outputs.
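The operand-pair / multiply / accumulate flow in this abstract maps onto the textbook convolution loop, with each (input pixel, kernel weight) pair playing the role of a dispatched operand pair. A pure-Python sketch for a single channel; the hardware dispatches many pairs to multipliers in parallel, while this serializes them:

```python
def conv2d(ifm, kernel):
    """Determine operand pairs from the input feature map and kernel,
    multiply each pair, and accumulate the products into output pixels
    (valid convolution, stride 1, single channel)."""
    H, W = len(ifm), len(ifm[0])
    K = len(kernel)
    oh, ow = H - K + 1, W - K + 1
    out = [[0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            acc = 0
            for i in range(K):
                for j in range(K):
                    acc += ifm[y + i][x + j] * kernel[i][j]  # one operand pair
            out[y][x] = acc  # accumulated output pixel value
    return out
```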
-
Publication No.: US20190138898A1
Publication Date: 2019-05-09
Application No.: US16107717
Filing Date: 2018-08-21
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Joonho SONG , Sehwan LEE , Junwoo JANG
Abstract: A neural network apparatus configured to perform a deconvolution operation includes a memory configured to store a first kernel; and a processor configured to: obtain, from the memory, the first kernel; calculate a second kernel by adjusting an arrangement of matrix elements comprised in the first kernel; generate sub-kernels by dividing the second kernel; perform a convolution operation between an input feature map and the sub-kernels using a convolution operator; and generate an output feature map, as a deconvolution of the input feature map, by merging results of the convolution operation.
-
Publication No.: US20220310194A1
Publication Date: 2022-09-29
Application No.: US17840722
Filing Date: 2022-06-15
Applicant: Samsung Electronics Co., Ltd.
Inventor: Shinhaeng KANG , Joonho SONG , Seungwon LEE
IPC: G11C29/00 , G06F11/20 , G06F12/0815 , G11C29/38 , H01L25/065 , H01L25/18 , G01R31/3193
Abstract: A three-dimensional stacked memory device includes a buffer die having a plurality of core die memories stacked thereon. The buffer die is configured as a buffer to occupy a first space in the buffer die. The first memory module, disposed in a second space unoccupied by the buffer, is configured to operate as a cache of the core die memories. The controller is configured to detect a fault in a memory area corresponding to a cache line in the core die memories based on a result of a comparison between data stored in the cache line and data stored in the memory area corresponding to the cache line in the core die memories. The second memory module, disposed in a third space unoccupied by the buffer and the first memory module, is configured to replace the memory area when the fault is detected in the memory area.
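The detect-and-replace behavior described here can be mimicked with a toy software model: the controller compares a cached copy against the core-die copy, and on mismatch remaps the faulty area to a spare cell in the second memory module. The class, sizes, and remap table below are illustrative; the patent describes hardware modules on the buffer die, not software:

```python
class StackedMemoryModel:
    """Toy model of the fault-handling path: core-die cells, spare cells
    (second memory module), and a remap table maintained by the controller."""
    def __init__(self, core_size: int, spare_size: int):
        self.core = [0] * core_size    # core die memories
        self.spare = [0] * spare_size  # second memory module (replacement)
        self.remap = {}                # faulty address -> spare index

    def _cell(self, addr: int):
        return ("spare", self.remap[addr]) if addr in self.remap else ("core", addr)

    def write(self, addr: int, value: int):
        kind, i = self._cell(addr)
        (self.spare if kind == "spare" else self.core)[i] = value

    def read(self, addr: int) -> int:
        kind, i = self._cell(addr)
        return (self.spare if kind == "spare" else self.core)[i]

    def check_and_repair(self, addr: int, cached_value: int):
        """Detect a fault by comparing the cache-line copy with the stored
        copy; on mismatch, replace the memory area with a spare cell."""
        if self.read(addr) != cached_value:
            idx = len(self.remap)
            self.remap[addr] = idx
            self.spare[idx] = cached_value
```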
-
Publication No.: US20220179714A1
Publication Date: 2022-06-09
Application No.: US17467890
Filing Date: 2021-09-07
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Hanwoong JUNG , Joonho SONG , Seungwon LEE
Abstract: A method and apparatus for scheduling a neural network operation. The method includes receiving data on a layer of a neural network, generating partitions to be assigned to cores by dividing the data, generating tiles by dividing the partitions, and scheduling an operation order of the tiles based on whether the data are shared between the cores.
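The partition-then-tile flow can be sketched as follows. The round-robin order emitted here is one illustrative policy; the patent schedules the tile order based on whether data are shared between the cores:

```python
def schedule(data, num_cores, tile_size):
    """Divide the layer data into per-core partitions, split each partition
    into tiles, and emit (core, step, tile) triples in round-robin order."""
    part = -(-len(data) // num_cores)  # ceiling division: partition size
    partitions = [data[i:i + part] for i in range(0, len(data), part)]
    tiles = [[p[i:i + tile_size] for i in range(0, len(p), tile_size)]
             for p in partitions]
    order = []
    for step in range(max(len(t) for t in tiles)):
        for core, t in enumerate(tiles):
            if step < len(t):
                order.append((core, step, t[step]))
    return order
```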
-
Publication No.: US20210117791A1
Publication Date: 2021-04-22
Application No.: US17112041
Filing Date: 2020-12-04
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Joonho SONG , Sehwan LEE , Junwoo JANG
Abstract: A neural network apparatus configured to perform a deconvolution operation includes a memory configured to store a first kernel; and a processor configured to: obtain, from the memory, the first kernel; calculate a second kernel by adjusting an arrangement of matrix elements comprised in the first kernel; generate sub-kernels by dividing the second kernel; perform a convolution operation between an input feature map and the sub-kernels using a convolution operator; and generate an output feature map, as a deconvolution of the input feature map, by merging results of the convolution operation.
-
Publication No.: US20200210296A1
Publication Date: 2020-07-02
Application No.: US16456094
Filing Date: 2019-06-28
Applicant: Samsung Electronics Co., Ltd.
Inventor: Shinhaeng KANG , Joonho SONG , Seungwon LEE
IPC: G06F11/20 , G06F12/0815 , G11C29/38 , H01L25/065 , H01L25/18
Abstract: A three-dimensional stacked memory device includes a buffer die having a plurality of core die memories stacked thereon. The buffer die is configured as a buffer to occupy a first space in the buffer die. The first memory module, disposed in a second space unoccupied by the buffer, is configured to operate as a cache of the core die memories. The controller is configured to detect a fault in a memory area corresponding to a cache line in the core die memories based on a result of a comparison between data stored in the cache line and data stored in the memory area corresponding to the cache line in the core die memories. The second memory module, disposed in a third space unoccupied by the buffer and the first memory module, is configured to replace the memory area when the fault is detected in the memory area.