Abstract:
A natural language generation method and apparatus are provided. The natural language generation apparatus converts an input sentence to a first vector using a first neural network model-based encoder, determines whether a control word is to be provided based on a criterion, and converts the first vector to an output sentence using a neural network model-based decoder, based on whether the control word is to be provided.
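For illustration only, the control-word flow can be pictured with the minimal Python sketch below; the stub functions encode, decode, and should_provide_control_word and the length-based criterion are assumptions made for the sketch, not the disclosed neural network models.
def encode(sentence):
    # Stand-in for the first neural network model-based encoder:
    # map the input sentence to a "first vector" of token features.
    return [float(len(token)) for token in sentence.split()]

def should_provide_control_word(first_vector, threshold=4.0):
    # Stand-in criterion for deciding whether a control word is to be provided.
    return sum(first_vector) / max(len(first_vector), 1) > threshold

def decode(first_vector, control_word=None):
    # Stand-in for the neural network model-based decoder; the control word,
    # when provided, conditions the generated output sentence.
    prefix = f"[{control_word}] " if control_word else ""
    return prefix + f"output sentence from {len(first_vector)} features"

def generate(sentence, control_word="FORMAL"):
    first_vector = encode(sentence)
    if should_provide_control_word(first_vector):
        return decode(first_vector, control_word=control_word)
    return decode(first_vector)

print(generate("please rephrase this request politely"))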
Abstract:
A neural network method and apparatus are provided, the method including providing a voice signal to a main neural network and a sub-neural network, obtaining a scaling factor by implementing the sub-neural network, which is configured to generate the scaling factor by interpreting the provided voice signal, determining, based on the scaling factor, a size of a future context to be considered by the main neural network, which is configured to perform speech recognition, and obtaining a result of recognizing the voice signal by implementing the main neural network with the determined size of the future context.
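As an illustration only, the following Python sketch shows this control flow; sub_network_scaling_factor, main_network_recognize, and the energy-based factor are assumed stand-ins rather than the actual networks.
def sub_network_scaling_factor(voice_signal):
    # Stand-in sub-neural network: interpret the signal and emit a
    # scaling factor between 0 and 1.
    energy = sum(abs(x) for x in voice_signal) / max(len(voice_signal), 1)
    return min(energy, 1.0)

def main_network_recognize(voice_signal, future_context):
    # Stand-in main speech recognition network that looks
    # future_context frames ahead of the current frame.
    return f"recognized {len(voice_signal)} frames with lookahead {future_context}"

def recognize(voice_signal, max_future_context=8):
    scale = sub_network_scaling_factor(voice_signal)
    future_context = int(round(scale * max_future_context))  # size of future context
    return main_network_recognize(voice_signal, future_context)

print(recognize([0.1, 0.4, 0.9, 0.2]))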
Abstract:
Provided is an automated interpretation method, apparatus, and system. The automated interpretation method includes encoding a voice signal in a first language to generate a first feature vector, decoding the first feature vector to generate a first language sentence in the first language, encoding the first language sentence to generate a second feature vector with respect to a second language, decoding the second feature vector to generate a second language sentence in the second language, controlling a generating of a candidate sentence list based on any one or any combination of the first feature vector, the first language sentence, the second feature vector, and the second language sentence, and selecting, from the candidate sentence list, a final second language sentence as a translation of the voice signal.
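A compact Python sketch of the two-stage encode/decode pipeline is given below for illustration; every stub (speech_encoder, speech_decoder, text_encoder, text_decoder, build_candidates, select_final) is an assumed placeholder, not the disclosed system.
def speech_encoder(voice_signal):
    return ("first_feature_vector", len(voice_signal))

def speech_decoder(first_feature_vector):
    return "first-language sentence"

def text_encoder(first_language_sentence):
    return ("second_feature_vector", len(first_language_sentence))

def text_decoder(second_feature_vector):
    return "second-language sentence"

def build_candidates(*stage_outputs):
    # Candidate generation may be controlled by any one or any
    # combination of the four stage outputs.
    return [f"candidate from {out!r}" for out in stage_outputs]

def select_final(candidates):
    # Stand-in rescoring: pick the best-scoring candidate.
    return max(candidates, key=len)

def interpret(voice_signal):
    v1 = speech_encoder(voice_signal)  # encode first-language speech
    s1 = speech_decoder(v1)            # decode to first-language sentence
    v2 = text_encoder(s1)              # encode with respect to the second language
    s2 = text_decoder(v2)              # decode to second-language sentence
    return select_final(build_candidates(v1, s1, v2, s2))

print(interpret([0.2, 0.5, 0.1]))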
Abstract:
A method and apparatus for training a language model include generating a first training feature vector sequence and a second training feature vector sequence from training data. The method further includes performing forward estimation of a neural network based on the first training feature vector sequence, and performing backward estimation of the neural network based on the second training feature vector sequence. The method further includes training the language model based on a result of the forward estimation and a result of the backward estimation.
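The bidirectional training loop can be pictured with the short Python sketch below; featurize, the squared-error stand-in for estimation, and the fixed weights are assumptions made only to keep the example self-contained.
def featurize(token):
    # Stand-in feature extraction for one token.
    return float(len(token))

def estimate(weights, feature_sequence):
    # Stand-in for one pass of the neural network language model,
    # returning a loss for the given direction.
    score = sum(w * f for w, f in zip(weights, feature_sequence))
    return (score - 1.0) ** 2

def train_step(weights, tokens):
    forward_feats = [featurize(t) for t in tokens]             # first sequence
    backward_feats = [featurize(t) for t in reversed(tokens)]  # second sequence
    forward_loss = estimate(weights, forward_feats)            # forward estimation
    backward_loss = estimate(weights, backward_feats)          # backward estimation
    # The language model is trained on both results; the parameter
    # update itself is omitted from this sketch.
    return forward_loss + backward_loss

print(train_step([0.1, 0.2, 0.3], ["the", "cat", "sat"]))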
Abstract:
A method and apparatus for speech recognition, a method and apparatus for generating a speech recognition engine, and a speech recognition engine are provided. The method of speech recognition involves receiving a speech input, transmitting the speech input to a speech recognition engine, and receiving a speech recognition result from the speech recognition engine, in which the speech recognition engine obtains a phoneme sequence from the speech input and provides the speech recognition result based on a phonetic distance of the phoneme sequence.
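For illustration, a small Python sketch follows in which the phonetic distance is stood in for by plain edit distance over phoneme symbols; the phoneme front end and the toy lexicon are assumptions of the sketch.
def phoneme_sequence(speech_input):
    # Stand-in acoustic front end: speech input -> phoneme sequence.
    return speech_input.split()

def phonetic_distance(a, b):
    # Plain edit distance as a stand-in for a phonetic distance measure.
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (pa != pb))
    return dp[-1]

def recognize(speech_input, lexicon):
    phones = phoneme_sequence(speech_input)
    # Return the lexicon entry with the smallest phonetic distance.
    return min(lexicon, key=lambda word: phonetic_distance(phones, lexicon[word]))

lexicon = {"hello": ["HH", "AH", "L", "OW"], "world": ["W", "ER", "L", "D"]}
print(recognize("HH AA L OW", lexicon))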
Abstract:
An information search apparatus includes an event receiver configured to receive an event from an application interface, wherein the application interface is presented on a display of the information search apparatus; a query extractor configured to, in response to receiving the event, extract a query from content displayed within the application interface; an information searcher configured to search for information from an information source using the query; and an information layer display configured to display an information layer within the application interface such that the information layer overlies existing objects on the application interface, wherein the information layer includes an information item which indicates information found by the search for information.
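A minimal Python sketch of the event-to-overlay flow follows; the event format, the extract_query heuristic, and dictionary_source are assumptions used only to make the sketch concrete.
def extract_query(displayed_content, event):
    # Query extractor: take the word nearest the event position within
    # the content displayed in the application interface.
    words = displayed_content.split()
    return words[min(event["word_index"], len(words) - 1)]

def dictionary_source(query):
    # Information searcher against a stand-in information source.
    return [f"definition of {query}"]

def on_event(event, displayed_content, search):
    query = extract_query(displayed_content, event)
    items = search(query)
    # Information layer: overlaid on the existing interface objects,
    # listing the information items found for the query.
    return {"overlay": True, "items": items}

print(on_event({"word_index": 2}, "tap any word to look it up", dictionary_source))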
Abstract:
A speech recognition apparatus includes a probability calculator configured to calculate phoneme probabilities of an audio signal using an acoustic model; a candidate set extractor configured to extract a candidate set from a recognition target list; and a result returner configured to return a recognition result of the audio signal based on the calculated phoneme probabilities and the extracted candidate set.
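The interplay of the three components can be sketched in Python as below; the toy phoneme set, the prefix-based candidate extraction, and the per-word phoneme entries are assumptions of the sketch.
def phoneme_probabilities(audio_frames):
    # Probability calculator: stand-in acoustic model giving per-frame
    # probabilities over a toy phoneme set.
    return [{"AH": 0.6, "EH": 0.3, "S": 0.1} for _ in audio_frames]

def extract_candidates(target_list, prefix):
    # Candidate set extractor: narrow the recognition target list.
    chosen = {w: p for w, p in target_list.items() if w.startswith(prefix)}
    return chosen or target_list

def recognize(audio_frames, target_list, prefix="a"):
    probs = phoneme_probabilities(audio_frames)
    candidates = extract_candidates(target_list, prefix)
    def score(phonemes):
        p = 1.0
        for frame, phoneme in zip(probs, phonemes):
            p *= frame.get(phoneme, 1e-6)
        return p
    # Result returner: best candidate under the phoneme probabilities.
    return max(candidates, key=lambda w: score(candidates[w]))

targets = {"ask": ["AH", "S"], "echo": ["EH", "K"], "add": ["AH", "D"]}
print(recognize([0, 1], targets))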
Abstract:
An electronic device and a method of the electronic device are provided, where the electronic device maintains a context that does not reflect a request for a secret conversation, in response to the request for the secret conversation being received from a first user, and generates a response signal to a voice signal of a second user based on the maintained context, in response to an end of the secret conversation with the first user.
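For illustration, the context handling can be sketched in Python as follows; the DialogueAgent class, its handling of secret turns, and the reply strings are assumptions, not the disclosed device.
class DialogueAgent:
    def __init__(self):
        self.context = []       # conversation context used for responses
        self.in_secret = False  # set while a secret conversation is requested

    def request_secret_conversation(self):
        # Maintain the existing context; it does not reflect the request.
        self.in_secret = True

    def end_secret_conversation(self):
        self.in_secret = False

    def respond(self, speaker, voice_signal_text):
        if self.in_secret:
            # Secret turns do not update the maintained context.
            return "(reply without updating context)"
        self.context.append((speaker, voice_signal_text))
        return f"response generated from {len(self.context)} context turns"

agent = DialogueAgent()
agent.respond("second user", "hello")
agent.request_secret_conversation()
agent.respond("first user", "keep this off the record")
agent.end_secret_conversation()
print(agent.respond("second user", "where were we?"))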
Abstract:
A user adaptive speech recognition method and apparatus are provided. A speech recognition method includes extracting an identity vector representing an individual characteristic of a user from speech data, implementing a sub-neural network by inputting a sub-input vector including at least the identity vector to the sub-neural network, determining a scaling factor based on a result of the implementing of the sub-neural network, implementing a main neural network, configured to perform a recognition operation, by applying the determined scaling factor to the main neural network and inputting the speech data to the main neural network to which the determined scaling factor is applied, and indicating a recognition result of the implementation of the main neural network.
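As an illustration, a short Python sketch of the adaptation path follows; identity_vector, sub_network, main_network, and the simple statistics used as the identity vector are assumptions made for the sketch.
def identity_vector(speech_data):
    # Stand-in extraction of an identity vector (individual characteristic).
    mean = sum(speech_data) / len(speech_data)
    spread = max(speech_data) - min(speech_data)
    return [mean, spread]

def sub_network(sub_input_vector):
    # Stand-in sub-neural network producing scaling factors.
    return [1.0 + 0.1 * v for v in sub_input_vector]

def main_network(speech_data, scaling_factors):
    # Stand-in main recognition network whose activations are scaled
    # by the speaker-adaptive factors before recognition.
    activation = sum(speech_data) * scaling_factors[0] * scaling_factors[1]
    return "word-A" if activation > 0 else "word-B"

def recognize(speech_data):
    sub_input_vector = identity_vector(speech_data)    # includes the identity vector
    scaling_factors = sub_network(sub_input_vector)
    return main_network(speech_data, scaling_factors)  # indicated recognition result

print(recognize([0.2, -0.1, 0.5, 0.3]))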
Abstract:
Provided are a method and an apparatus for speech recognition, and a method and an apparatus for training a transformation parameter. A speech recognition apparatus includes an acoustic score calculator configured to use an acoustic model to calculate an acoustic score of a speech input, an acoustic score transformer configured to transform the calculated acoustic score into an acoustic score corresponding to standard pronunciation by using a transformation parameter, and a decoder configured to decode the transformed acoustic score to output a recognition result of the speech input.
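A minimal Python sketch of the score path follows; the toy phone set, the per-phone mixing weights used as the transformation parameter, and the frame-wise argmax decoder are assumptions of the sketch.
def acoustic_scores(speech_frames):
    # Acoustic score calculator: stand-in per-frame scores over two phones.
    return [{"d": 0.7, "t": 0.3} for _ in speech_frames]

def transform_scores(scores, transformation_parameter):
    # Acoustic score transformer: map speaker-specific scores to scores
    # corresponding to standard pronunciation.
    transformed = []
    for frame in scores:
        transformed.append({
            std: sum(weight * frame[src] for src, weight in weights.items())
            for std, weights in transformation_parameter.items()
        })
    return transformed

def decode(transformed_scores):
    # Decoder: stand-in frame-wise best-phone output as the recognition result.
    return [max(frame, key=frame.get) for frame in transformed_scores]

transformation_parameter = {"t": {"t": 0.8, "d": 0.2}, "d": {"d": 0.8, "t": 0.2}}
print(decode(transform_scores(acoustic_scores([1, 2]), transformation_parameter)))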