Abstract:
A method and apparatus for training a language model include generating a first training feature vector sequence and a second training feature vector sequence from training data. The method performs forward estimation of a neural network based on the first training feature vector sequence and backward estimation of the neural network based on the second training feature vector sequence. The method then trains the language model based on a result of the forward estimation and a result of the backward estimation.
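As a rough illustration of the forward/backward estimation scheme described above, the following Python sketch (using PyTorch) trains a toy bidirectional language model. The GRU modules, the dimensions, and the summed cross-entropy loss are illustrative assumptions, not the claimed implementation.

import torch
import torch.nn as nn

class BidirectionalLM(nn.Module):
    """Toy model: a forward RNN reads the first (left-to-right) feature
    sequence and a backward RNN reads the second (reversed) sequence."""
    def __init__(self, vocab_size, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.fwd_rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.bwd_rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.fwd_out = nn.Linear(hidden, vocab_size)
        self.bwd_out = nn.Linear(hidden, vocab_size)

    def forward(self, fwd_seq, bwd_seq):
        fwd_h, _ = self.fwd_rnn(self.embed(fwd_seq))  # forward estimation
        bwd_h, _ = self.bwd_rnn(self.embed(bwd_seq))  # backward estimation
        return self.fwd_out(fwd_h), self.bwd_out(bwd_h)

def train_step(model, optimizer, tokens):
    # first training feature vector sequence: tokens in original order;
    # second training feature vector sequence: tokens in reversed order
    fwd_in, fwd_tgt = tokens[:, :-1], tokens[:, 1:]
    rev = torch.flip(tokens, dims=[1])
    bwd_in, bwd_tgt = rev[:, :-1], rev[:, 1:]
    fwd_logits, bwd_logits = model(fwd_in, bwd_in)
    # the language model is trained on both estimation results
    loss = (nn.functional.cross_entropy(
                fwd_logits.reshape(-1, fwd_logits.size(-1)), fwd_tgt.reshape(-1))
            + nn.functional.cross_entropy(
                bwd_logits.reshape(-1, bwd_logits.size(-1)), bwd_tgt.reshape(-1)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()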
Abstract:
Methods and apparatuses for determining a domain of a sentence are disclosed. The apparatus may generate, using an autoencoder, an embedded feature from an input feature representing an input sentence, and determine a domain of the input sentence based on a location of the embedded feature in an embedding space in which embedded features are distributed.
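The following Python sketch shows one plausible reading of this abstract: an autoencoder encodes a sentence feature into an embedding space, and the domain is chosen from the location of that embedding, here by the nearest domain centroid. The network sizes, the centroid-based decision rule, and the helper names (SentenceAutoencoder, determine_domain) are assumptions for illustration only.

import torch
import torch.nn as nn

class SentenceAutoencoder(nn.Module):
    """Toy autoencoder: the encoder maps an input feature (e.g. a
    bag-of-words vector for a sentence) to a low-dimensional embedded
    feature; the decoder is used only for reconstruction training."""
    def __init__(self, in_dim, emb_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))
        self.decoder = nn.Sequential(
            nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def determine_domain(model, input_feature, domain_centroids):
    """Pick the domain whose centroid (mean of embedded features for
    known sentences of that domain) lies closest to the new embedding."""
    with torch.no_grad():
        z, _ = model(input_feature)
    names = list(domain_centroids)
    dists = torch.stack([torch.norm(z - domain_centroids[n]) for n in names])
    return names[int(torch.argmin(dists))]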
Abstract:
A training method and apparatus for speech recognition are disclosed. An example of the training method includes determining whether a current iteration for training a neural network is an experience replay iteration that uses an experience replay set, selecting a sample from at least one of the experience replay set and a training set based on a result of the determining, and training the neural network based on the selected sample.
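A minimal Python sketch of such a training loop is given below. The replay probability, the replay-set capacity, and the model_update callback are hypothetical details introduced only to make the sampling logic concrete.

import random

def train_with_experience_replay(model_update, training_set,
                                 num_iterations, replay_prob=0.3,
                                 replay_capacity=1000):
    """Each iteration either replays a stored sample or draws a fresh one
    from the training set, then updates the network on the selected sample."""
    experience_replay_set = []
    for _ in range(num_iterations):
        # determine whether this iteration is an experience replay iteration
        is_replay = (len(experience_replay_set) > 0
                     and random.random() < replay_prob)
        if is_replay:
            sample = random.choice(experience_replay_set)  # from replay set
        else:
            sample = random.choice(training_set)           # from training set
            if len(experience_replay_set) < replay_capacity:
                experience_replay_set.append(sample)       # keep for later replay
        model_update(sample)  # train the neural network on the selected sample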