-
1.
Publication Number: US20250078830A1
Publication Date: 2025-03-06
Application Number: US18826743
Filing Date: 2024-09-06
Applicant: Google LLC
Inventor: Junwen Bai, Bo Li, Qiujia Li, Tara N. Sainath, Trevor Strohman
IPC: G10L15/197, G10L15/00, G10L15/02, G10L15/06, G10L15/30
Abstract: A method includes receiving a sequence of acoustic frames characterizing a spoken utterance in a particular native language. The method also includes generating a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames by a causal encoder that includes an initial stack of multi-head attention layers. The method also includes generating a second higher order feature representation for a corresponding first higher order feature representation by a non-causal encoder that includes a final stack of multi-head attention layers. The method also includes receiving, as input at each corresponding language-dependent adapter (LDA) module, a language ID vector identifying the particular native language to activate corresponding language-dependent weights specific to the particular native language. The method also includes generating a first probability distribution over possible speech recognition hypotheses by a decoder.
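The architecture in this abstract can be pictured with a short sketch: a causal encoder (initial stack of multi-head attention layers) feeds a non-causal encoder (final stack), language-dependent adapter (LDA) modules activate per-language weights from a language ID vector, and a decoder produces a probability distribution over speech recognition hypotheses. The code below is only an illustrative sketch, not the patented implementation: the class names, layer counts, dimensions, the use of `nn.TransformerEncoderLayer` in place of the patent's attention stacks, and the simple linear stand-in for the decoder are all assumptions.

```python
# Hedged sketch of the cascaded causal/non-causal encoder with language-dependent
# adapters. All names, sizes, and layer choices are illustrative assumptions.
import torch
import torch.nn as nn


class LanguageDependentAdapter(nn.Module):
    """One small bottleneck adapter per language; the language ID selects which
    language-specific weights are active for each utterance in the batch."""

    def __init__(self, dim: int, num_languages: int, bottleneck: int = 64):
        super().__init__()
        self.adapters = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU(), nn.Linear(bottleneck, dim))
            for _ in range(num_languages)
        )

    def forward(self, x: torch.Tensor, lang_id: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim); lang_id: (batch,) language indices from the ID vector.
        out = torch.stack([self.adapters[int(l)](x[b]) for b, l in enumerate(lang_id)])
        return x + out  # residual adapter


class CascadedASRModel(nn.Module):
    def __init__(self, feat_dim=80, dim=256, num_languages=4, vocab=1000):
        super().__init__()
        self.proj = nn.Linear(feat_dim, dim)
        layer = lambda: nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        # Initial stack of multi-head attention layers (made causal via a mask).
        self.causal_encoder = nn.ModuleList(layer() for _ in range(2))
        # Final stack of multi-head attention layers (non-causal: full context).
        self.noncausal_encoder = nn.ModuleList(layer() for _ in range(2))
        self.adapters = nn.ModuleList(
            LanguageDependentAdapter(dim, num_languages) for _ in range(4)
        )
        self.decoder = nn.Linear(dim, vocab)  # stand-in for the actual speech decoder

    def forward(self, frames: torch.Tensor, lang_id: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, feat_dim) acoustic frames of the spoken utterance.
        x = self.proj(frames)
        t = x.size(1)
        causal_mask = torch.triu(torch.ones(t, t), diagonal=1).bool()
        adapters = iter(self.adapters)
        for layer_ in self.causal_encoder:       # first higher-order feature representations
            x = layer_(x, src_mask=causal_mask)
            x = next(adapters)(x, lang_id)
        for layer_ in self.noncausal_encoder:    # second higher-order feature representations
            x = layer_(x)
            x = next(adapters)(x, lang_id)
        return self.decoder(x).log_softmax(-1)   # distribution over possible hypotheses


model = CascadedASRModel()
frames = torch.randn(2, 50, 80)                  # two utterances, 50 acoustic frames each
lang_id = torch.tensor([0, 2])                   # language indices derived from the ID vector
log_probs = model(frames, lang_id)               # (2, 50, vocab)
```

In this sketch the adapters are interleaved after every attention layer; the abstract does not specify where the LDA modules sit, so that placement is also an assumption.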
-
2.
Publication Number: US12249317B2
Publication Date: 2025-03-11
Application Number: US17929934
Filing Date: 2022-09-06
Applicant: Google LLC
Inventor: Bo Li, Junwen Bai, Yu Zhang, Ankur Bapna, Nikhil Siddhartha, Khe Chai Sim, Tara N. Sainath
IPC: G10L15/16, G10L15/02, G10L15/06, G10L15/187, G10L15/19
Abstract: A method includes receiving audio features and generating a latent speech representation based on the audio features. The method also includes generating a target quantized vector token and a target token index for a corresponding latent speech representation. The method also includes generating a contrastive context vector for a corresponding unmasked or masked latent speech representation and deriving a contrastive self-supervised loss based on the corresponding contrastive context vector and the corresponding target quantized vector token. The method also includes generating a high-level context vector based on the contrastive context vector and, for each high-level context vector, learning to predict the target token index at the corresponding time step using a cross-entropy loss based on the target token index. The method also includes predicting speech recognition hypotheses for the utterance and training a multilingual automatic speech recognition (ASR) model using an unsupervised loss and a supervised loss.
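The training objective in this abstract combines a contrastive self-supervised loss over quantized targets, a cross-entropy loss that predicts target token indices from high-level context vectors, and a supervised ASR loss. The sketch below shows one hedged way these terms could be assembled; it is not the patent's implementation. The InfoNCE-style contrastive formulation, CTC as the supervised loss, the helper names, the codebook size, and all tensor shapes are assumptions for illustration.

```python
# Hedged sketch of combining the contrastive, masked-prediction, and supervised
# losses described in the abstract. All shapes and choices are illustrative.
import torch
import torch.nn.functional as F


def contrastive_loss(context: torch.Tensor, targets: torch.Tensor, temperature: float = 0.1):
    """InfoNCE-style loss: for each time step, the positive is the target quantized
    vector token at that step; the other steps in the utterance act as distractors."""
    context = F.normalize(context, dim=-1)        # (T, D) contrastive context vectors
    targets = F.normalize(targets, dim=-1)        # (T, D) target quantized vector tokens
    logits = context @ targets.T / temperature    # similarity of each step to every target
    labels = torch.arange(context.size(0))        # positive is the aligned target
    return F.cross_entropy(logits, labels)


def masked_prediction_loss(high_level: torch.Tensor, token_index: torch.Tensor, codebook: int = 512):
    """Cross-entropy between a projection of the high-level context vectors and the
    target token index at each time step (the projection head is untrained here and
    only shows the loss wiring)."""
    head = torch.nn.Linear(high_level.size(-1), codebook)
    return F.cross_entropy(head(high_level), token_index)


# Toy tensors standing in for the model's outputs on one utterance.
T, D, V = 40, 256, 100
contrastive_ctx = torch.randn(T, D)               # contrastive context vectors
quantized_targets = torch.randn(T, D)             # target quantized vector tokens
high_level_ctx = torch.randn(T, D)                # high-level context vectors
target_index = torch.randint(0, 512, (T,))        # target token indices
log_probs = torch.randn(T, 1, V).log_softmax(-1)  # decoder outputs for CTC: (T, batch=1, vocab)
labels = torch.randint(1, V, (1, 10))             # supervised transcript token IDs

unsupervised = contrastive_loss(contrastive_ctx, quantized_targets) + masked_prediction_loss(
    high_level_ctx, target_index
)
supervised = F.ctc_loss(log_probs, labels, torch.tensor([T]), torch.tensor([10]))
total = unsupervised + supervised                 # joint objective for the multilingual ASR model
```

In practice the unsupervised and supervised terms would likely be weighted, but the abstract does not give the weights, so they are omitted here.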
-