Abstract:
A method for processing floating point operations in a multi-processor system including a plurality of single processor cores is provided. In this method, upon receiving a group setting for performing an operation, the plurality of single processor cores are grouped into at least one group according to the group setting, and a single processor core set as a master in the group loads instructions for performing the operation from an external memory and performs parallel operations by utilizing the floating point units (FPUs) of all single processor cores in the group according to the instructions.
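As a rough illustration of the grouping idea, the following Python sketch models a group of cores whose master fans operand pairs out to every core's floating point unit in parallel; the Core and Group classes and the multiply operation are illustrative assumptions, not the patented design.

```python
# A minimal sketch, assuming hypothetical Core and Group classes.
from concurrent.futures import ThreadPoolExecutor

class Core:
    def __init__(self, core_id):
        self.core_id = core_id

    def fpu_op(self, a, b):
        # Stand-in for a floating point unit operation.
        return a * b

class Group:
    def __init__(self, cores, master_index=0):
        self.cores = cores
        self.master = cores[master_index]   # the core set as master in the group

    def run_parallel(self, pairs):
        # Conceptually the master has loaded the instructions; here the group
        # simply fans the operand pairs out to every core's FPU in parallel.
        with ThreadPoolExecutor(max_workers=len(self.cores)) as pool:
            futures = [pool.submit(core.fpu_op, a, b)
                       for core, (a, b) in zip(self.cores, pairs)]
            return [f.result() for f in futures]

cores = [Core(i) for i in range(4)]   # plurality of single processor cores
group = Group(cores)                  # grouped according to a group setting
print(group.run_parallel([(1.0, 2.0), (3.0, 4.0), (5.0, 6.0), (7.0, 8.0)]))
```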
Abstract:
A first communication device is provided. The first communication device modulates data to generate a first data symbol. The first communication device generates a first signal by using the first data symbol and a first signal waveform allocated from among a plurality of mutually orthogonal signal waveforms. The first communication device outputs the first signal to a serial line connected to a second communication device.
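The sketch below shows, under stated assumptions, how a data symbol could be modulated onto one waveform drawn from a mutually orthogonal set; the Walsh-Hadamard sequences and the QPSK-style symbol mapping are stand-ins chosen only for concreteness, not the waveform set of the abstract.

```python
# A minimal sketch of modulating a data symbol onto one of several orthogonal waveforms.
import numpy as np

def walsh_matrix(n):
    # Hadamard construction: rows are mutually orthogonal +/-1 sequences.
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def modulate(bit_pair):
    # Toy QPSK-style mapping of two bits to one complex data symbol.
    mapping = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}
    return mapping[bit_pair] / np.sqrt(2)

waveforms = walsh_matrix(8)             # plurality of mutually orthogonal waveforms
first_waveform = waveforms[3]           # the waveform allocated to this device
symbol = modulate((0, 1))               # first data symbol
first_signal = symbol * first_waveform  # first signal, output to the serial line

# Orthogonality lets a receiver separate signals by correlation.
print(np.vdot(waveforms[3], first_signal))  # non-zero projection on the allocated waveform
print(np.vdot(waveforms[5], first_signal))  # ~0 for a different waveform
```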
Abstract:
Disclosed herein is an Artificial Intelligence (AI) processor. The AI processor includes multiple NVM AI cores for respectively performing basic unit operations required for a deep-learning operation based on data stored in NVM; SRAM for storing at least some of the results of the basic unit operations; and an AI core for performing an accumulation operation on the results of the basic unit operations.
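A toy numerical sketch of the split between per-core basic unit operations and the final accumulation; treating the basic unit operation as a partial dot product and a Python list as the SRAM buffer are assumptions made only to keep the example concrete.

```python
# A minimal sketch, assuming the basic unit operation is a partial dot product.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 16))   # stands in for data stored in NVM
activations = rng.standard_normal(16)

def nvm_core_op(core_id):
    # Each NVM AI core performs its own basic unit operation on its slice of the data.
    return weights[core_id] @ activations

# SRAM holding at least some of the per-core results.
sram_buffer = [nvm_core_op(i) for i in range(4)]

# The AI core performs the accumulation operation on the buffered results.
accumulated = sum(sram_buffer)
print(accumulated)
```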
Abstract:
A method for controlling a memory from which data is transferred to a neural network processor, and an apparatus thereof, are provided, the method including: generating prefetch information of data by using a blob descriptor and a reference prediction table after history information is input; reading the data in the memory based on the prefetch information and temporarily archiving the read data in a prefetch buffer; and accessing next data in the memory based on the prefetch information and temporarily archiving the next data in the prefetch buffer after the data is transferred to the neural network processor from the prefetch buffer.
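The following sketch walks through that flow with a small prefetch controller; the shapes of the blob descriptor (an ordered list of block addresses) and the reference prediction table (a map from the last accessed address to the predicted next one) are illustrative assumptions, as is the deque used as the prefetch buffer.

```python
# A minimal sketch of the prefetch flow under the stated assumptions.
from collections import deque

class PrefetchController:
    def __init__(self, memory, blob_descriptor, prediction_table, depth=2):
        self.memory = memory               # address -> data block
        self.blob = blob_descriptor        # ordered block addresses of the blob
        self.table = prediction_table      # last address -> predicted next address
        self.buffer = deque(maxlen=depth)  # prefetch buffer

    def prefetch(self, last_addr):
        # Generate prefetch information from the descriptor and prediction table,
        # then read the predicted block into the buffer ahead of the consumer.
        next_addr = self.table.get(last_addr, self.blob[0])
        self.buffer.append((next_addr, self.memory[next_addr]))
        return next_addr

    def transfer_to_npu(self):
        # Hand the oldest buffered block to the neural network processor.
        return self.buffer.popleft() if self.buffer else None

memory = {a: f"block-{a}" for a in (0x00, 0x40, 0x80)}
ctrl = PrefetchController(memory, [0x00, 0x40, 0x80], {0x00: 0x40, 0x40: 0x80})
addr = ctrl.prefetch(0x00)     # history says 0x40 comes next
print(ctrl.transfer_to_npu())  # buffered data goes to the neural network processor
addr = ctrl.prefetch(addr)     # access the next predicted block while the NPU works
print(ctrl.transfer_to_npu())
```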
Abstract:
An embodiment of the present invention provides a quantization method for weights of a plurality of batch normalization layers, including: receiving a plurality of previously learned first weights of the plurality of batch normalization layers; obtaining first distribution information of the plurality of first weights; performing a first quantization on the plurality of first weights using the first distribution information to obtain a plurality of second weights; obtaining second distribution information of the plurality of second weights; and performing a second quantization on the plurality of second weights using the second distribution information to obtain a plurality of final weights, thereby reducing an error that may occur when quantizing the weights of the batch normalization layers.
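A toy two-step version of this flow is sketched below; using min/max statistics as the "distribution information" and 8-bit uniform quantization are assumptions made only to keep the example concrete, not the method claimed in the abstract.

```python
# A minimal two-step quantization sketch under the stated assumptions.
import numpy as np

def distribution_info(w):
    # Stand-in for obtaining distribution information of the weights.
    return w.min(), w.max()

def quantize(w, info, bits=8):
    lo, hi = info
    scale = (hi - lo) / (2 ** bits - 1)
    q = np.round((w - lo) / scale)   # integer code
    return q * scale + lo            # de-quantized weights

rng = np.random.default_rng(0)
first_weights = rng.normal(0.0, 1.0, 256)        # previously learned BN weights

first_info = distribution_info(first_weights)    # first distribution information
second_weights = quantize(first_weights, first_info)

second_info = distribution_info(second_weights)  # second distribution information
final_weights = quantize(second_weights, second_info)

print(np.abs(first_weights - final_weights).mean())  # residual quantization error
```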
Abstract:
An apparatus and method for improving voice recognition are disclosed herein. The apparatus includes a standard voice transmission unit, a Mel-frequency cepstrum coefficient (MFCC) generation unit, and an MFCC compensation unit. The standard voice transmission unit generates a standard voice. The MFCC generation unit generates voice feature data (MFCCs) from the utterance of the standard voice before voice recognition. The MFCC compensation unit stores a gain value generated based on the standard voice and, during voice recognition, uses the gain value to compensate for the distortion of the voice feature data generated from a user's utterance.
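A minimal numerical sketch of the compensation idea follows; the MFCC vectors are random stand-ins, and modelling the channel distortion and gain as per-coefficient multiplications is an assumption chosen only to illustrate "compensate distortion using a stored gain".

```python
# A minimal sketch of calibrating a gain from a standard voice and reusing it.
import numpy as np

rng = np.random.default_rng(0)
reference_mfcc = rng.standard_normal(13)        # MFCCs of the clean standard voice
channel = 0.8 + 0.05 * rng.standard_normal(13)  # unknown per-coefficient distortion

# Calibration: play the standard voice through the channel and derive a gain.
observed_standard = reference_mfcc * channel
gain = reference_mfcc / observed_standard       # stored by the compensation unit

# Recognition: compensate the user's distorted MFCCs with the stored gain.
user_mfcc = rng.standard_normal(13)
observed_user = user_mfcc * channel
compensated = observed_user * gain

print(np.abs(compensated - user_mfcc).max())    # ~0: distortion removed
```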
Abstract:
A method and apparatus for multi-level stepwise quantization for a neural network are provided. The apparatus sets a reference level by selecting a value from among the values of the parameters of the neural network, in a direction from a high value equal to or greater than a predetermined value toward lower values, and performs learning based on the reference level. The setting of a reference level and the performing of learning are iteratively performed until the result of the reference-level learning reaches a predetermined value and there is no variable parameter that is updated during learning among the parameters.
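The sketch below is a highly simplified reading of that loop: pick a reference level from the largest remaining parameter magnitude downward, snap that parameter to a discrete level, freeze it, and keep training the still-variable parameters. The tiny least-squares task and the power-of-two level set are assumptions for illustration only.

```python
# A minimal stepwise-quantization sketch under the stated assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 8))
w_true = rng.standard_normal(8)
y = X @ w_true

w = rng.standard_normal(8)       # learnable parameters
frozen = np.zeros(8, dtype=bool) # parameters already fixed to a reference level

for _ in range(8):               # one pass per reference level, high to low
    free_idx = np.flatnonzero(~frozen)
    if free_idx.size == 0:
        break                    # no variable parameter left to update
    # Reference level: the largest magnitude among still-variable parameters,
    # snapped to the nearest power-of-two level (assumed level set).
    ref_idx = free_idx[np.argmax(np.abs(w[free_idx]))]
    level = 2.0 ** np.round(np.log2(np.abs(w[ref_idx]) + 1e-12))
    w[ref_idx] = np.sign(w[ref_idx]) * level
    frozen[ref_idx] = True

    # Learning based on the reference level: gradient steps on free parameters only.
    for _ in range(200):
        grad = X.T @ (X @ w - y) / len(y)
        w[~frozen] -= 0.1 * grad[~frozen]

print(np.mean((X @ w - y) ** 2))  # loss after stepwise quantization
```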
Abstract:
Provided is an image compression device including an object extracting unit configured to perform convolutional neural network (CNN) training and identify an object from an externally received image, a parameter adjusting unit configured to adjust, on the basis of the identified object, a quantization parameter of a region of the image in which the identified object is included, and an image compression unit configured to compress the image on the basis of the adjusted quantization parameter.
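A minimal sketch of region-aware quantization follows: a lower quantization parameter inside the detected object's bounding box preserves more detail there. The bounding box is hard-coded in place of a real CNN detector, and the uniform per-pixel quantization stands in for an actual codec; both are assumptions.

```python
# A minimal sketch of adjusting the quantization parameter in an object region.
import numpy as np

image = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)

# Assume the object extracting unit returned this bounding box (x, y, w, h).
box = (16, 16, 24, 24)

# Parameter adjusting unit: coarse QP everywhere, finer QP inside the object region.
qp_map = np.full(image.shape, 16.0)
x, y, w, h = box
qp_map[y:y + h, x:x + w] = 4.0

# Image compression unit (toy): uniform quantization driven by the QP map.
compressed = np.round(image / qp_map)
reconstructed = compressed * qp_map

err_in = np.abs(reconstructed - image)[y:y + h, x:x + w].mean()
err_out = np.abs(reconstructed - image).mean()
print(f"object-region error {err_in:.2f} vs overall error {err_out:.2f}")
```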
Abstract:
Disclosed herein are a method and an apparatus for expanding a speech recognition database used for speech recognition. The method of expanding a speech recognition database includes generating a pronunciation text from a corpus; confirming whether or not a non-registered word that is not registered in advance in a pronunciation dictionary is present among the words included in the pronunciation text; generating lexical model information on the corresponding non-registered word with reference to a built-up acoustic model in the case in which the non-registered word is present as a confirmation result; and adding the generated lexical model information to a built-up lexical model. According to exemplary embodiments of the present invention, various speeches may be recognized in a stand-alone speech recognizer in which the infrastructure is insufficient.
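The flow can be sketched as below; the letter-by-letter "pronunciation" generator is a placeholder for deriving lexical entries from the built-up acoustic model, and the dictionaries are illustrative assumptions.

```python
# A minimal sketch of detecting non-registered words and expanding the lexical model.
pronunciation_dict = {"hello": "HH AH L OW", "world": "W ER L D"}
lexical_model = dict(pronunciation_dict)

def generate_lexical_entry(word):
    # Placeholder for generating lexical model information from the acoustic model.
    return " ".join(ch.upper() for ch in word)

corpus = "hello neural world decoder"
pronunciation_text = corpus.split()        # pronunciation text generated from the corpus

for word in pronunciation_text:
    if word not in pronunciation_dict:     # non-registered word check
        lexical_model[word] = generate_lexical_entry(word)

print(lexical_model)                       # expanded lexical model
```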
Abstract:
Disclosed herein are a noise cancellation apparatus and method, which select in advance the parameters to be used for noise cancellation in a reference voice signal section by generating a reference voice signal in advance before a voice signal is generated, thus improving noise cancellation effects. The noise cancellation apparatus includes a parameter initialization unit for determining an initial value of a parameter to be used for noise cancellation, based on reference signals filtered for respective frequencies; a parameter estimation unit for receiving the initial value of the parameter and estimating the parameter in response to signals that are input after being filtered for respective frequencies; a gain estimation unit for calculating gains for respective frequencies based on the parameter from the parameter estimation unit; and a gain application unit for cancelling noise by applying the gains to the signals that are input after being filtered for respective frequencies.
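A toy per-frequency suppressor in the spirit of that pipeline is sketched below: a noise parameter is initialized from a reference (noise-only) section, refined from incoming frames, turned into a gain per frequency, and applied. The Wiener-style gain rule, the smoothing constant, and the synthetic signals are assumptions for illustration.

```python
# A minimal per-frequency noise suppression sketch under the stated assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_freq = 64

# Parameter initialization unit: noise power per frequency from the reference section.
reference_frames = 0.1 * rng.standard_normal((10, n_freq))
noise_power = np.mean(reference_frames ** 2, axis=0)

def denoise_frame(frame, noise_power, alpha=0.95):
    power = frame ** 2
    # Parameter estimation unit: recursively refine the per-frequency noise estimate.
    noise_power = alpha * noise_power + (1 - alpha) * power
    # Gain estimation unit: Wiener-like gain for each frequency.
    gain = np.maximum(1.0 - noise_power / np.maximum(power, 1e-12), 0.1)
    # Gain application unit: apply the gains to the frequency-filtered input.
    return gain * frame, noise_power

noisy_frame = np.sin(np.linspace(0, np.pi, n_freq)) + 0.1 * rng.standard_normal(n_freq)
clean_frame, noise_power = denoise_frame(noisy_frame, noise_power)
print(np.round(clean_frame[:8], 3))
```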