Abstract:
An apparatus and a method for constructing a multilingual acoustic model, and a computer readable recording medium are provided. The method for constructing a multilingual acoustic model includes dividing an input feature into a common language portion and a distinctive language portion, acquiring a tandem feature by training the divided common language portion and distinctive language portion using a neural network to estimate and remove correlation between phonemes, dividing parameters of an initial acoustic model constructed using the tandem feature into common language parameters and distinctive language parameters, adapting the common language parameters using data of a training language, adapting the distinctive language parameters using data of a target language, and constructing an acoustic model for the target language using the adapted common language parameters and the adapted distinctive language parameters.
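Below is a minimal Python sketch of the common/distinctive parameter split described in this abstract, assuming a toy Gaussian-mean acoustic model and a simple mean-shift stand-in for the adaptation step; the class and method names, and the use of random data in place of tandem features, are illustrative assumptions rather than the patented procedure.

import numpy as np

class MultilingualAcousticModel:
    def __init__(self, common_params, distinctive_params):
        self.common = np.asarray(common_params, dtype=float)            # shared across languages
        self.distinctive = np.asarray(distinctive_params, dtype=float)  # language specific

    def adapt(self, params, data, lr=0.1):
        # Move parameters toward the mean of the adaptation data
        # (a stand-in for a real MAP/MLLR-style adaptation step).
        return params + lr * (np.mean(data, axis=0) - params)

    def build_target_model(self, training_lang_data, target_lang_data):
        # Common parameters are adapted with training-language data,
        # distinctive parameters with target-language data.
        common = self.adapt(self.common, training_lang_data)
        distinctive = self.adapt(self.distinctive, target_lang_data)
        return common, distinctive

# Toy usage with random vectors standing in for tandem features.
rng = np.random.default_rng(0)
model = MultilingualAcousticModel(rng.normal(size=8), rng.normal(size=8))
common, distinctive = model.build_target_model(rng.normal(size=(100, 8)),
                                               rng.normal(size=(50, 8)))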
Abstract:
A speech recognition method and a speech recognition apparatus which pre-download a speech recognition model predicted to be used and use the speech recognition model in speech recognition are provided. The speech recognition method, performed by the speech recognition apparatus, includes determining a speech recognition model based on user information, downloading the speech recognition model, performing speech recognition based on the speech recognition model, and outputting a result of performing the speech recognition.
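The following Python sketch illustrates the pre-download flow in this abstract; the model-selection rule, cache, and recognition function are hypothetical placeholders, not the claimed implementation.

def choose_model(user_info):
    # Pick a model the user is predicted to need, e.g. by preferred language.
    return f"asr-{user_info.get('language', 'en')}"

def download_model(model_id, cache):
    # Download ahead of time so recognition does not wait on the network.
    if model_id not in cache:
        cache[model_id] = f"<model weights for {model_id}>"  # stand-in for a real fetch
    return cache[model_id]

def recognize(speech, model):
    # Stand-in for decoding the speech with the downloaded model.
    return f"transcript of {speech!r} using {model}"

cache = {}
model_id = choose_model({"language": "ko"})
model = download_model(model_id, cache)   # done before speech arrives
print(recognize("audio-frames", model))   # output the recognition result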
Abstract:
A method of translating a first language-based speech signal into a second language is provided. The method includes receiving the first language-based speech signal, converting the first language-based speech signal into a first language-based text including non-verbal information, by performing voice recognition on the first language-based speech signal, and translating the first language-based text into the second language, based on the non-verbal information.
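A short Python sketch of carrying non-verbal information from recognition into translation follows; the emphasis-tagging scheme and the stand-in recognizer and translator are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class RecognizedText:
    text: str
    non_verbal: dict = field(default_factory=dict)  # e.g. {"emphasis": [1]}

def speech_to_text(speech_signal):
    # Stand-in recognizer that also emits non-verbal cues detected in the audio.
    return RecognizedText(text="I really mean it", non_verbal={"emphasis": [1]})

def translate(recognized, target_lang="ko"):
    # A real system would condition the translation on the non-verbal cues;
    # here we simply attach them so emphasis can be rendered in the target text.
    return {"target_lang": target_lang,
            "text": recognized.text,
            "emphasized_words": [recognized.text.split()[i]
                                 for i in recognized.non_verbal.get("emphasis", [])]}

print(translate(speech_to_text(b"...")))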
Abstract:
Provided herein is a voice recognition server and a control method thereof, the method including determining an index value for each of a plurality of training texts; setting a group for each of the plurality of training texts based on the index values of the plurality of training texts, and matching a function corresponding to each group and storing the matched results; in response to receiving a user's uttered voice from a user terminal apparatus, determining an index value from the received uttered voice; and searching for a group corresponding to the index value determined from the received uttered voice, and performing the function corresponding to the uttered voice, thereby providing voice recognition results suited to the user's intentions for a variety of uttered voices.
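The Python sketch below illustrates grouping training texts by an index value and dispatching a matched function when an utterance maps to the same group; the hash-based index and exact-match rule are toy assumptions, not the claimed indexing scheme.

def index_value(text, num_groups=4):
    # Any deterministic mapping from text to a small index works for this sketch.
    return sum(ord(c) for c in text) % num_groups

def build_groups(training_texts, functions):
    groups = {}
    for text, fn in zip(training_texts, functions):
        groups.setdefault(index_value(text), []).append((text, fn))
    return groups

def handle_utterance(utterance_text, groups):
    # Look up the group for the utterance's index and run the best-matching function.
    for text, fn in groups.get(index_value(utterance_text), []):
        if text == utterance_text:  # toy matching rule within the group
            return fn(utterance_text)
    return None

groups = build_groups(["turn on the light", "play some music"],
                      [lambda u: f"lighting on ({u})", lambda u: f"music playing ({u})"])
print(handle_utterance("turn on the light", groups))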
Abstract:
A speech signal processing method of a user terminal includes: receiving a speech signal, detecting a personalized information section including personal information in the speech signal, performing data processing on the personalized information section of the speech signal by using a personalized model generated based on the personal information, and receiving, from a server, a result of the data processing performed by the server on a general information section of the speech signal that is different from the personalized information section of the speech signal.
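Below is a Python sketch of splitting an input into a personalized section handled on the device and a general section handled by a server; the keyword-based detector, the dictionary-style personalized model, and the server stand-in are illustrative assumptions.

def detect_personal_section(signal, personal_keywords=("mom", "home address")):
    # Mark the portion of the (here, textual) signal that contains personal info.
    return [w for w in signal.split() if w in personal_keywords]

def process_locally(personal_words, personalized_model):
    # The personalized model (built from the user's own data) stays on-device.
    return [personalized_model.get(w, w) for w in personal_words]

def process_on_server(general_words):
    # Stand-in for the server-side result returned for the general section.
    return " ".join(general_words).upper()

signal = "call mom and check the weather"
personal = detect_personal_section(signal)
general = [w for w in signal.split() if w not in personal]
result = process_locally(personal, {"mom": "Jane Doe"}) + [process_on_server(general)]
print(result)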
Abstract:
A method of recognizing a speech and an electronic device thereof are provided. The method includes: segmenting a speech signal into a plurality of sections at preset time intervals; performing a phoneme recognition with respect to one of the plurality of sections of the speech signal by using a first acoustic model; extracting a candidate word of the one of the plurality of sections of the speech signal by using the phoneme recognition result; and performing a speech recognition with respect to the one of the plurality of sections of the speech signal by using the candidate word.
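A compact Python sketch of this two-pass idea follows: recognize phonemes per section with a first model, look up candidate words, then decode each section against only those candidates. The character-level "phonemes", toy lexicon, and overlap score are illustrative stand-ins for real acoustic models and scoring.

def segment(signal, section_len=4):
    return [signal[i:i + section_len] for i in range(0, len(signal), section_len)]

def phoneme_recognition(section):
    # First acoustic model: map each frame to its most likely phoneme (toy rule).
    return "".join(sorted(set(section)))

def candidate_words(phonemes, lexicon):
    # Keep lexicon words whose phonemes all appear in the recognized set.
    return [w for w, p in lexicon.items() if set(p) <= set(phonemes)]

def recognize_section(section, candidates):
    # Second pass: pick the candidate that best matches the section (toy score).
    return max(candidates, key=lambda w: len(set(w) & set(section)), default=None)

lexicon = {"cat": "kat", "act": "akt", "dog": "dog"}
for sec in segment("tackgodo"):
    cands = candidate_words(phoneme_recognition(sec), lexicon)
    print(sec, cands, recognize_section(sec, cands))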
Abstract:
A method of updating a grammar model used during speech recognition includes obtaining a corpus including at least one word, obtaining the at least one word from the corpus, splitting the at least one obtained word into at least one segment, generating a hint for recombining the at least one segment into the at least one word, and updating the grammar model by using at least one segment comprising the hint.
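The Python sketch below illustrates splitting words into segments plus a recombination hint and registering the hinted segments in a toy "grammar model" (here just a set of allowed tokens); the fixed segment length and the trailing "+" marker are an assumed hint format, not the one claimed.

def split_with_hints(word, segment_len=3):
    segments = [word[i:i + segment_len] for i in range(0, len(word), segment_len)]
    # Mark every non-final segment so the decoder knows to rejoin them later.
    return [s + "+" for s in segments[:-1]] + [segments[-1]]

def update_grammar_model(grammar, corpus):
    for word in corpus.split():
        grammar.update(split_with_hints(word))
    return grammar

def recombine(tokens):
    # Use the hints to restore the original words from recognized segments.
    word, words = "", []
    for t in tokens:
        if t.endswith("+"):
            word += t[:-1]
        else:
            words.append(word + t)
            word = ""
    return words

segments = split_with_hints("gangnamdaero")
grammar = update_grammar_model(set(), "gangnamdaero")
print(segments, recombine(segments), sorted(grammar))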
Abstract:
A method of converting a feature vector includes extracting a feature sequence from an audio signal including an utterance of a user; extracting a feature vector from the feature sequence; acquiring a conversion matrix for reducing a dimension of the feature vector, based on a probability value acquired from different covariance values; and converting the feature vector by using the conversion matrix.
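As a rough illustration of obtaining a dimension-reducing conversion matrix from class covariance statistics, the Python sketch below uses a plain Fisher-LDA projection as a stand-in for the probability-based criterion described above; the random MFCC-like features and class labels are assumed toy data.

import numpy as np

def conversion_matrix(features, labels, out_dim=2):
    classes = np.unique(labels)
    mean_all = features.mean(axis=0)
    d = features.shape[1]
    s_w = np.zeros((d, d))  # within-class scatter
    s_b = np.zeros((d, d))  # between-class scatter
    for c in classes:
        x = features[labels == c]
        s_w += np.cov(x, rowvar=False) * (len(x) - 1)
        diff = (x.mean(axis=0) - mean_all)[:, None]
        s_b += len(x) * diff @ diff.T
    # Directions that maximize between-class over within-class spread.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(s_w) @ s_b)
    order = np.argsort(eigvals.real)[::-1][:out_dim]
    return eigvecs.real[:, order]

rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 13))      # e.g. MFCC-like feature vectors
labels = rng.integers(0, 3, size=200)   # e.g. phoneme classes
w = conversion_matrix(feats, labels, out_dim=4)
reduced = feats @ w                     # converted, lower-dimensional features
print(reduced.shape)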
Abstract:
Provided is a method of updating speech recognition data including a language model used for speech recognition, the method including: obtaining language data including at least one word; detecting, from among the at least one word, a word that does not exist in the language model; obtaining at least one phoneme sequence regarding the detected word; obtaining components constituting the at least one phoneme sequence by dividing the at least one phoneme sequence into predetermined unit components; determining information regarding probabilities that the respective components constituting each of the at least one phoneme sequence appear during speech recognition; and updating the language model based on the determined probability information.
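The Python sketch below walks through this out-of-vocabulary handling flow: find words missing from the language model, get a phoneme sequence for each, split it into fixed-size components, estimate component probabilities, and add them to the model. The letter-based grapheme-to-phoneme rule and the relative-frequency probability estimates are illustrative assumptions.

from collections import Counter

def phoneme_sequence(word):
    return list(word)  # stand-in for a real grapheme-to-phoneme converter

def split_components(phonemes, unit=2):
    return ["".join(phonemes[i:i + unit]) for i in range(0, len(phonemes), unit)]

def update_language_model(lm, corpus_words):
    oov = [w for w in corpus_words if w not in lm]  # words missing from the model
    counts = Counter()
    for word in oov:
        counts.update(split_components(phoneme_sequence(word)))
    total = sum(counts.values()) or 1
    for comp, n in counts.items():
        lm[comp] = n / total  # probability that this component appears
    return lm

lm = {"hello": 0.5, "world": 0.5}
print(update_language_model(lm, ["hello", "daehakro", "gangnam"]))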