-
Publication No.: US11449720B2
Publication Date: 2022-09-20
Application No.: US16870412
Filing Date: 2020-05-08
Inventor: Ju-Yeob Kim , Byung Jo Kim , Seong Min Kim , Jin Kyu Kim , Ki Hyuk Park , Mi Young Lee , Joo Hyun Lee , Young-deuk Jeon , Min-Hyung Cho
IPC: G06K9/62
Abstract: Provided is an image recognition device. The image recognition device includes a frame data change detector that sequentially receives a plurality of frame data and detects a difference between two consecutive frame data, an ensemble section controller that sets an ensemble section in the plurality of frame data, based on the detected difference, an image recognizer that sequentially identifies classes respectively corresponding to a plurality of section frame data by applying different neural network classifiers to the plurality of section frame data in the ensemble section, and a recognition result classifier that sequentially identifies ensemble classes respectively corresponding to the plurality of section frame data by combining the classes in the ensemble section.
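The flow in this abstract can be sketched in a few lines: a frame-difference check opens or resets an "ensemble section", a different classifier is applied to each frame in the section, and the per-frame outputs are combined into an ensemble class. The threshold, the mean-absolute-difference metric, the round-robin classifier rotation, and logit summation are all illustrative assumptions, not details fixed by the patent.

```python
import numpy as np

def recognize_stream(frames, classifiers, diff_threshold=5.0):
    """Sketch of the ensemble-section scheme: while consecutive frames stay
    similar, rotate through different classifiers and combine their outputs."""
    results, ensemble_logits, idx, prev = [], None, 0, None
    for frame in frames:
        # Frame data change detector: mean absolute difference to previous frame
        if prev is not None and np.abs(frame - prev).mean() > diff_threshold:
            ensemble_logits, idx = None, 0   # large change: start a new section
        prev = frame
        logits = classifiers[idx % len(classifiers)](frame)  # image recognizer
        idx += 1
        # Recognition result classifier: running combination within the section
        ensemble_logits = logits if ensemble_logits is None else ensemble_logits + logits
        results.append(int(np.argmax(ensemble_logits)))      # ensemble class
    return results
```

Because each frame in a section is scored by a different classifier, the running sum behaves like a model ensemble without running every classifier on every frame.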
-
Publication No.: US11488003B2
Publication Date: 2022-11-01
Application No.: US16409437
Filing Date: 2019-05-10
Inventor: Ju-Yeob Kim , Byung Jo Kim , Seong Min Kim , Jin Kyu Kim , Mi Young Lee , Joo Hyun Lee
Abstract: An artificial neural network apparatus and an operating method including a plurality of layer processors for performing operations on input data are disclosed. The artificial neural network apparatus may include: a flag layer processor for outputting a flag according to a comparison result between a pooling output value of a current frame and a pooling output value of a previous frame; and a controller for stopping operation of a layer processor which performs operations after the flag layer processor among the plurality of layer processors when the flag is outputted from the flag layer processor, wherein the flag layer processor is a layer processor that performs a pooling operation first among the plurality of layer processors.
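A minimal sketch of the control flow described above: layers run in order, the first pooling layer's output is compared against the previous frame's, and a match raises a flag that stops the downstream layers. Scalar layer outputs and the tolerance `eps` are simplifying assumptions for illustration.

```python
def run_layers(layers, flag_layer_idx, x, prev_pool, eps=1e-3):
    """Sketch of the early-stop scheme: after the flag layer processor (the
    first pooling layer), compare its output with the previous frame's pooling
    output; if they match, the controller stops the remaining layers."""
    pool_out = None
    for i, layer in enumerate(layers):
        x = layer(x)
        if i == flag_layer_idx:
            pool_out = x
            if prev_pool is not None and abs(x - prev_pool) < eps:
                return None, pool_out   # flag raised: later layers are skipped
    return x, pool_out
```

On static video content the pooling outputs of consecutive frames match often, so most frames exit at the flag layer and the later (typically heavier) layers never run.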
-
Publication No.: US10319115B2
Publication Date: 2019-06-11
Application No.: US15698499
Filing Date: 2017-09-07
Inventor: Seong Mo Park , Sung Eun Kim , Ju-Yeob Kim , Jin Kyu Kim , Kwang Il Oh , Joo Hyun Lee
Abstract: Provided is an image compression device including an object extracting unit configured to perform convolution neural network (CNN) training and identify an object from an image received externally, a parameter adjusting unit configured to adjust a quantization parameter of a region in which the identified object is included in the image on the basis of the identified object, and an image compression unit configured to compress the image on the basis of the adjusted quantization parameter.
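The parameter-adjusting step can be illustrated as a per-block quantization-parameter (QP) map: blocks covered by a CNN-detected object box get a lower QP (finer quality) than the background. The block grid, the box format, and the specific QP values are assumptions for the sketch; the patent does not tie the idea to particular numbers.

```python
import numpy as np

def qp_map(shape, boxes, base_qp=32, roi_qp=22):
    """Sketch of the parameter adjusting unit: regions containing a detected
    object are compressed with a lower quantization parameter."""
    qp = np.full(shape, base_qp, dtype=np.int32)   # one QP per block
    for (r0, c0, r1, c1) in boxes:                 # boxes from the CNN object extractor
        qp[r0:r1, c0:c1] = roi_qp
    return qp
```

The image compression unit would then encode each block with its entry from this map, spending bits on the object and saving them on the background.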
-
Publication No.: US09735996B2
Publication Date: 2017-08-15
Application No.: US15348771
Filing Date: 2016-11-10
Inventor: Jin Kyu Kim
CPC classification number: H04L27/265 , G06F9/3001 , G06F17/141 , G06F17/142
Abstract: Provided is a fully parallel fast Fourier transformer of N-point, where N is a natural number, including a bit-reversal arranging block configured to rearrange an order of N input complex number samples, a plurality of first processors configured to perform, in a plurality of group units, a 16-point FFT on the rearranged complex number samples, a twiddle factor multiplier configured to multiply outputs of the plurality of first processors by twiddle factors, a first group rearranging block configured to rearrange outputs of the twiddle factor multiplier in the plurality of group units, a plurality of second processors configured to perform, in the plurality of group units, a 16-point FFT on the complex number samples grouped by the first group rearranging block, and a second group rearranging block configured to rearrange outputs of the plurality of second processors for output under the same arrangement criterion as the first group rearranging block.
-
Publication No.: US11455539B2
Publication Date: 2022-09-27
Application No.: US16541275
Filing Date: 2019-08-15
Inventor: Mi Young Lee , Byung Jo Kim , Seong Min Kim , Ju-Yeob Kim , Jin Kyu Kim , Joo Hyun Lee
Abstract: An embodiment of the present invention provides a quantization method for weights of a plurality of batch normalization layers, including: receiving a plurality of previously learned first weights of the plurality of batch normalization layers; obtaining first distribution information of the plurality of first weights; performing a first quantization on the plurality of first weights using the first distribution information to obtain a plurality of second weights; obtaining second distribution information of the plurality of second weights; and performing a second quantization on the plurality of second weights using the second distribution information to obtain a plurality of final weights, thereby reducing an error that may occur when quantizing the weights of the batch normalization layers.
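The two-pass sequence in this abstract can be sketched as: derive a scale from the original weight distribution, quantize, re-derive the scale from the quantized values, and quantize again. Uniform symmetric quantization with a max-magnitude scale is an assumption of the sketch; the patent does not fix the quantizer.

```python
import numpy as np

def two_step_quantize(w, bits=8):
    """Sketch of the two-pass idea: quantize with distribution info of the
    original weights, then again with distribution info of the quantized ones."""
    qmax = 2 ** (bits - 1) - 1

    def quantize(v):
        scale = np.abs(v).max() / qmax        # distribution info: max magnitude
        return np.round(v / scale) * scale

    w2 = quantize(np.asarray(w, dtype=float))  # first quantization -> second weights
    return quantize(w2)                        # second quantization -> final weights
```

Re-deriving the scale from the already-quantized weights lets the second pass correct for the distribution shift the first pass introduced.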
-
Publication No.: US11003985B2
Publication Date: 2021-05-11
Application No.: US15806111
Filing Date: 2017-11-07
Inventor: Jin Kyu Kim , Byung Jo Kim , Seong Min Kim , Ju-Yeob Kim , Mi Young Lee , Joo Hyun Lee
Abstract: Provided is a convolutional neural network system including a data selector configured to output an input value corresponding to a position of a sparse weight from among input values of input data on a basis of a sparse index indicating the position of a nonzero value in a sparse weight kernel, and a multiply-accumulate (MAC) computator configured to perform a convolution computation on the input value output from the data selector by using the sparse weight kernel.
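The data selector / MAC pairing can be illustrated in one dimension: the sparse index names the nonzero positions inside the kernel, the selector gathers only the matching inputs, and the MAC accumulates their products. The 1-D shape and array-based gather are simplifications for the sketch.

```python
import numpy as np

def sparse_conv1d(x, sparse_idx, sparse_vals, out_len):
    """Sketch of sparse convolution: only inputs at nonzero-weight positions
    (given by the sparse index) are fetched and multiply-accumulated."""
    y = np.zeros(out_len)
    for pos in range(out_len):
        selected = x[pos + sparse_idx]           # data selector: gather by sparse index
        y[pos] = np.dot(selected, sparse_vals)   # MAC computator
    return y
```

With a kernel like [1, 0, 2], the zero tap is never fetched or multiplied, which is where the compute and bandwidth savings come from.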
-
Publication No.: US11217299B2
Publication Date: 2022-01-04
Application No.: US16997445
Filing Date: 2020-08-19
Inventor: Young-deuk Jeon , Seong Min Kim , Jin Kyu Kim , Joo Hyun Lee , Min-Hyung Cho , Jin Ho Han
IPC: G11C11/40 , G11C11/4076 , G11C11/4096 , G11C11/4099
Abstract: Disclosed are a device and a method for calibrating a reference voltage. The reference voltage calibrating device includes a data signal communication unit that transmits/receives a data signal, a data strobe signal receiving unit that receives a first data strobe signal and a second data strobe signal, a voltage level of the second data strobe signal being opposite to a voltage level of the first data strobe signal, and a reference voltage generating unit that sets a reference voltage for determining a data value of the data signal, based on the first data strobe signal and the second data strobe signal, and the reference voltage generating unit adjusts the reference voltage based on the first data strobe signal and the second data strobe signal.
-
Publication No.: US11494630B2
Publication Date: 2022-11-08
Application No.: US16742808
Filing Date: 2020-01-14
Inventor: Young-deuk Jeon , Byung Jo Kim , Ju-Yeob Kim , Jin Kyu Kim , Ki Hyuk Park , Mi Young Lee , Joo Hyun Lee , Min-Hyung Cho
Abstract: The neuromorphic arithmetic device comprises an input monitoring circuit that outputs a monitoring result by monitoring whether first bits of at least one first digit of a plurality of feature data and a plurality of weight data are all zeros, a partial sum data generator that, in response to the monitoring result, skips the arithmetic operation generating the first partial sum data corresponding to the first bits while performing the arithmetic operation of generating the plurality of partial sum data based on the plurality of feature data and the plurality of weight data, and a shift adder that generates the first partial sum data with a zero value and produces result data based on the zero-valued first partial sum data and the second partial sum data, i.e., the partial sum data other than the first partial sum data among the plurality of partial sum data.
-
Publication No.: US11204876B2
Publication Date: 2021-12-21
Application No.: US16953242
Filing Date: 2020-11-19
Inventor: Byung Jo Kim , Joo Hyun Lee , Seong Min Kim , Ju-Yeob Kim , Jin Kyu Kim , Mi Young Lee
IPC: G06F12/08 , G06F12/0862 , G06N3/063 , G06F13/16 , G06F12/02
Abstract: A method for controlling a memory from which data is transferred to a neural network processor, and an apparatus thereof, are provided, the method including: generating prefetch information of data by using a blob descriptor and a reference prediction table after history information is input; reading the data in the memory based on the prefetch information and temporarily archiving the read data in a prefetch buffer; and accessing next data in the memory based on the prefetch information and temporarily archiving the next data in the prefetch buffer after the data is transferred to the neural network processor from the prefetch buffer.