Generating Diverse and Natural Text-To-Speech Samples

    Publication No.: US20220246132A1

    Publication Date: 2022-08-04

    Application No.: US17163007

    Filing Date: 2021-01-29

    Applicant: Google LLC

    Abstract: A method of generating diverse and natural text-to-speech (TTS) samples includes receiving a text and generating a speech sample based on the text using a TTS model. A training process trains the TTS model to generate the speech sample by receiving training samples. Each training sample includes a spectrogram and a training text corresponding to the spectrogram. For each training sample, the training process identifies speech units associated with the training text. For each speech unit, the training process generates a speech embedding, aligns the speech embedding with a portion of the spectrogram, extracts a latent feature from the aligned portion of the spectrogram, and assigns a quantized embedding to the latent feature. The training process generates the speech sample by decoding a concatenation of the speech embeddings and the quantized embeddings for the speech units associated with the training text corresponding to the spectrogram.
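
    The quantization step is the core of this pipeline, so a minimal sketch may help: each latent feature extracted from an aligned spectrogram slice is snapped to its nearest codebook vector, and the decoder consumes the concatenation of the speech-unit embeddings with their quantized embeddings. The `quantize` helper, the codebook, and all dimensions are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the per-speech-unit quantization step: each latent
# feature is assigned to its nearest codebook vector, and the decoder input
# is the concatenation of speech embeddings with the quantized embeddings.
import torch

def quantize(latents: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Assign each latent feature (U, D) to the nearest codebook entry (K, D)."""
    dists = torch.cdist(latents, codebook)   # (U, K) pairwise L2 distances
    indices = dists.argmin(dim=1)            # nearest code per speech unit
    return codebook[indices]                 # (U, D) quantized embeddings

# Toy dimensions: U speech units, D-dim embeddings, K codebook entries.
U, D, K = 6, 16, 32
speech_embeddings = torch.randn(U, D)        # per-unit embeddings from the text
latents = torch.randn(U, D)                  # latents from aligned spectrogram slices
codebook = torch.randn(K, D)

quantized = quantize(latents, codebook)
decoder_input = torch.cat([speech_embeddings, quantized], dim=-1)  # (U, 2*D)
```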

    95. Neural machine translation systems (Invention Grant)

    Publication No.: US11113480B2

    Publication Date: 2021-09-07

    Application No.: US16336870

    Filing Date: 2017-09-25

    Applicant: GOOGLE LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural machine translation. One of the systems includes an encoder neural network comprising: an input forward long short-term memory (LSTM) layer configured to process each input token in the input sequence in a forward order to generate a respective forward representation of each input token, an input backward LSTM layer configured to process each input token in a backward order to generate a respective backward representation of each input token, and a plurality of hidden LSTM layers configured to process a respective combined representation of each of the input tokens in the forward order to generate a respective encoded representation of each of the input tokens; and a decoder subsystem configured to receive the respective encoded representations and to process the encoded representations to generate an output sequence.
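
    A minimal PyTorch sketch of the encoder layout the abstract describes may clarify the data flow: a forward LSTM, a backward LSTM over the reversed sequence, and a stack of unidirectional hidden LSTM layers over the concatenated per-token representations. Layer sizes, the use of concatenation as the "combined representation", and the `Encoder` class itself are assumptions for illustration.

```python
# Sketch of the described encoder: forward LSTM, backward LSTM, and a stack
# of hidden LSTM layers over the combined (concatenated) representations.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size: int, dim: int, num_hidden_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.fwd = nn.LSTM(dim, dim, batch_first=True)   # forward-order pass
        self.bwd = nn.LSTM(dim, dim, batch_first=True)   # backward-order pass
        self.hidden = nn.LSTM(2 * dim, dim,
                              num_layers=num_hidden_layers, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens)                           # (B, T, dim)
        f, _ = self.fwd(x)                               # forward representations
        b, _ = self.bwd(torch.flip(x, dims=[1]))         # process in reverse order
        b = torch.flip(b, dims=[1])                      # re-align to forward order
        combined = torch.cat([f, b], dim=-1)             # per-token combined repr.
        encoded, _ = self.hidden(combined)               # (B, T, dim) encodings
        return encoded

enc = Encoder(vocab_size=100, dim=32)
encoded = enc(torch.randint(0, 100, (1, 7)))             # batch of one 7-token input
```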

    MINIMUM WORD ERROR RATE TRAINING FOR ATTENTION-BASED SEQUENCE-TO-SEQUENCE MODELS

    Publication No.: US20200043483A1

    Publication Date: 2020-02-06

    Application No.: US16529252

    Filing Date: 2019-08-01

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable storage media, for speech recognition using attention-based sequence-to-sequence models. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A sequence of feature vectors indicative of the acoustic characteristics of the utterance is generated. The sequence of feature vectors is processed using a speech recognition model that has been trained using a loss function that uses N-best lists of decoded hypotheses, the speech recognition model including an encoder, an attention module, and a decoder. The encoder and decoder each include one or more recurrent neural network layers. A sequence of output vectors representing distributions over a predetermined set of linguistic units is obtained. A transcription for the utterance is obtained based on the sequence of output vectors. Data indicating the transcription of the utterance is provided.
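
    The loss the abstract refers to is a minimum word error rate (MWER) objective over N-best lists; a hedged sketch follows. Renormalizing the hypothesis scores over the list and subtracting the mean error (a common variance-reduction step in MWER training) are assumptions here, not quoted from the patent.

```python
# Sketch of an MWER-style loss over an N-best list: hypothesis scores are
# renormalized over the list, and the loss is the expected (mean-centered)
# number of word errors.
import torch

def mwer_loss(log_probs: torch.Tensor, word_errors: torch.Tensor) -> torch.Tensor:
    """log_probs: (N,) model log-scores of the N-best hypotheses.
    word_errors: (N,) edit distances of each hypothesis to the reference."""
    probs = torch.softmax(log_probs, dim=0)        # renormalize over the N-best list
    relative = word_errors - word_errors.mean()    # center errors for stability
    return torch.sum(probs * relative)             # expected relative word errors

# Toy example: 4 hypotheses with scores and word-error counts.
scores = torch.tensor([-1.2, -0.7, -2.0, -1.5], requires_grad=True)
loss = mwer_loss(scores, torch.tensor([2.0, 0.0, 3.0, 1.0]))
loss.backward()  # gradients flow back into the hypothesis scores
```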

    98. NEURAL MACHINE TRANSLATION SYSTEMS (Invention Application)

    Publication No.: US20200034435A1

    Publication Date: 2020-01-30

    Application No.: US16336870

    Filing Date: 2017-09-25

    Applicant: GOOGLE LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural machine translation. One of the systems includes an encoder neural network comprising: an input forward long short-term memory (LSTM) layer configured to process each input token in the input sequence in a forward order to generate a respective forward representation of each input token, an input backward LSTM layer configured to process each input token in a backward order to generate a respective backward representation of each input token, and a plurality of hidden LSTM layers configured to process a respective combined representation of each of the input tokens in the forward order to generate a respective encoded representation of each of the input tokens; and a decoder subsystem configured to receive the respective encoded representations and to process the encoded representations to generate an output sequence.
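
    To complement the encoder sketch above, here is a minimal single step of a decoder subsystem that attends over the encoded representations and predicts the next output token. The dot-product attention and the `DecoderStep` module are illustrative assumptions; the abstract only states that the decoder consumes the encoded representations to generate an output sequence.

```python
# One illustrative decoder step: attend over the encoder outputs, update the
# recurrent state, and emit next-token logits.
import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    def __init__(self, dim: int, vocab_size: int):
        super().__init__()
        self.cell = nn.LSTMCell(2 * dim, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, prev_embed, state, encoded):
        h, c = state
        scores = torch.einsum('bd,btd->bt', h, encoded)   # dot-product attention
        weights = torch.softmax(scores, dim=1)
        context = torch.einsum('bt,btd->bd', weights, encoded)
        h, c = self.cell(torch.cat([prev_embed, context], dim=-1), (h, c))
        return self.out(h), (h, c)                        # next-token logits, state

B, T, dim, vocab = 1, 7, 32, 100
step = DecoderStep(dim, vocab)
state = (torch.zeros(B, dim), torch.zeros(B, dim))
logits, state = step(torch.zeros(B, dim), state, torch.randn(B, T, dim))
```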
