-
Publication number: US20220310097A1
Publication date: 2022-09-29
Application number: US17644377
Filing date: 2021-12-15
Applicant: Google LLC
Inventor: Jaeyoung Kim , Han Lu , Anshuman Tripathi , Qian Zhang , Hasim Sak
Abstract: A streaming speech recognition model includes an audio encoder configured to receive a sequence of acoustic frames and generate a higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The streaming speech recognition model also includes a label encoder configured to receive a sequence of non-blank symbols output by a final softmax layer and generate a dense representation. The streaming speech recognition model also includes a joint network configured to receive the higher order feature representation generated by the audio encoder and the dense representation generated by the label encoder and generate a probability distribution over possible speech recognition hypotheses. Here, the streaming speech recognition model is trained using self-alignment to reduce prediction delay by encouraging an alignment path that is one frame left from a reference forced-alignment frame.
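The joint-network step described above can be sketched in a few lines. This is a minimal illustrative model, not the patented implementation: the function names, dimensions, and the tanh combination are assumptions chosen to show how a transducer's joint network fuses the audio encoder's higher order feature representations with the label encoder's dense representations into a per-(frame, label) distribution.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def joint_network(audio_feats, label_feats, w_out):
    """Combine audio-encoder and label-encoder outputs into a probability
    distribution over output symbols for every (time step, label position)
    pair, in the style of a transducer model.

    audio_feats: (T, D) higher order feature representations
    label_feats: (U, D) dense representations of non-blank symbols
    w_out:       (D, V) projection to the output vocabulary
    """
    # Broadcast-add the two streams: (T, 1, D) + (1, U, D) -> (T, U, D)
    combined = np.tanh(audio_feats[:, None, :] + label_feats[None, :, :])
    logits = combined @ w_out            # (T, U, V)
    return softmax(logits, axis=-1)      # each row sums to 1 over the vocabulary
```

During self-alignment training, a delay penalty would then be applied to the alignment lattice built from these distributions, favoring paths one frame to the left of the reference forced alignment.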
-
Publication number: US11741947B2
Publication date: 2023-08-29
Application number: US17210465
Filing date: 2021-03-23
Applicant: Google LLC
Inventor: Anshuman Tripathi , Hasim Sak , Han Lu , Qian Zhang , Jaeyoung Kim
CPC classification number: G10L15/16 , G06N3/04 , G06N3/088 , G10L15/063 , G10L15/197 , G10L15/22 , G10L15/30
Abstract: A transformer-transducer model for unifying streaming and non-streaming speech recognition includes an audio encoder, a label encoder, and a joint network. The audio encoder receives a sequence of acoustic frames, and generates, at each of a plurality of time steps, a higher order feature representation for a corresponding acoustic frame. The label encoder receives a sequence of non-blank symbols output by a final softmax layer, and generates, at each of the plurality of time steps, a dense representation. The joint network receives the higher order feature representation and the dense representation at each of the plurality of time steps, and generates a probability distribution over possible speech recognition hypotheses. The audio encoder of the model further includes a neural network having an initial stack of transformer layers trained with zero look ahead audio context, and a final stack of transformer layers trained with a variable look ahead audio context.
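The zero versus variable look-ahead distinction above comes down to the self-attention mask each transformer stack uses. The following sketch is an assumption-laden illustration (the function name and mask convention are not from the patent): with `right_context=0` a frame attends only to itself and the past, matching the streaming-mode initial stack, while a positive value grants the final stack its variable look-ahead.

```python
import numpy as np

def lookahead_mask(num_frames, right_context):
    """Self-attention mask where each frame may attend to all past frames
    and to at most `right_context` future frames.

    right_context=0 -> zero look-ahead (streaming) mask;
    right_context>0 -> variable look-ahead mask for the final stack.
    mask[i, j] is True when frame i may attend to frame j.
    """
    idx = np.arange(num_frames)
    return idx[None, :] <= (idx[:, None] + right_context)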
-
Publication number: US12254869B2
Publication date: 2025-03-18
Application number: US18357225
Filing date: 2023-07-24
Applicant: Google LLC
Inventor: Anshuman Tripathi , Hasim Sak , Han Lu , Qian Zhang , Jaeyoung Kim
Abstract: A transformer-transducer model for unifying streaming and non-streaming speech recognition includes an audio encoder, a label encoder, and a joint network. The audio encoder receives a sequence of acoustic frames, and generates, at each of a plurality of time steps, a higher order feature representation for a corresponding acoustic frame. The label encoder receives a sequence of non-blank symbols output by a final softmax layer, and generates, at each of the plurality of time steps, a dense representation. The joint network receives the higher order feature representation and the dense representation at each of the plurality of time steps, and generates a probability distribution over possible speech recognition hypotheses. The audio encoder of the model further includes a neural network having an initial stack of transformer layers trained with zero look ahead audio context, and a final stack of transformer layers trained with a variable look ahead audio context.
-
Publication number: US12057124B2
Publication date: 2024-08-06
Application number: US17644377
Filing date: 2021-12-15
Applicant: Google LLC
Inventor: Jaeyoung Kim , Han Lu , Anshuman Tripathi , Qian Zhang , Hasim Sak
Abstract: A streaming speech recognition model includes an audio encoder configured to receive a sequence of acoustic frames and generate a higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The streaming speech recognition model also includes a label encoder configured to receive a sequence of non-blank symbols output by a final softmax layer and generate a dense representation. The streaming speech recognition model also includes a joint network configured to receive the higher order feature representation generated by the audio encoder and the dense representation generated by the label encoder and generate a probability distribution over possible speech recognition hypotheses. Here, the streaming speech recognition model is trained using self-alignment to reduce prediction delay by encouraging an alignment path that is one frame left from a reference forced-alignment frame.
-
Publication number: US20240242712A1
Publication date: 2024-07-18
Application number: US18619684
Filing date: 2024-03-28
Applicant: Google LLC
Inventor: Jaeyoung Kim , Soheil Khorram , Hasim Sak , Anshuman Tripathi , Han Lu , Qian Zhang
CPC classification number: G10L15/16 , G06N3/088 , G10L15/1815
Abstract: A method includes receiving a plurality of unlabeled audio samples corresponding to spoken utterances not paired with corresponding transcriptions. At a target branch of a contrastive Siamese network, the method also includes generating a sequence of encoder outputs for the plurality of unlabeled audio samples and modifying time characteristics of the encoder outputs to generate a sequence of target branch outputs. At an augmentation branch of a contrastive Siamese network, the method also includes performing augmentation on the unlabeled audio samples, generating a sequence of augmented encoder outputs for the augmented unlabeled audio samples, and generating predictions of the sequence of target branch outputs generated at the target branch. The method also includes determining an unsupervised loss term based on target branch outputs and predictions of the sequence of target branch outputs. The method also includes updating parameters of the audio encoder based on the unsupervised loss term.
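The unsupervised loss term above compares the target-branch outputs against the augmentation branch's predictions of them. One plausible instantiation, shown purely as a sketch (the per-frame cosine-distance form and the function name are assumptions, not the claimed method), is:

```python
import numpy as np

def unsupervised_loss(target_outputs, predicted_outputs):
    """Mean per-frame distance between the target-branch outputs and the
    augmentation branch's predictions of them, here taken as
    1 - cosine similarity. Inputs are (frames, features) arrays."""
    t = target_outputs / np.linalg.norm(target_outputs, axis=-1, keepdims=True)
    p = predicted_outputs / np.linalg.norm(predicted_outputs, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(t * p, axis=-1)))
```

Gradients of this term with respect to the audio encoder's parameters drive the update step described in the abstract.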
-
Publication number: US20240177706A1
Publication date: 2024-05-30
Application number: US18515212
Filing date: 2023-11-20
Applicant: Google LLC
Inventor: Anshuman Tripathi , Soheil Khorram , Hasim Sak , Han Lu , Jaeyoung Kim , Qian Zhang
IPC: G10L15/06 , G10L15/065 , G10L15/10
CPC classification number: G10L15/063 , G10L15/065 , G10L15/10 , G10L2015/0635
Abstract: A method for training a sequence transduction model includes receiving a sequence of unlabeled input features extracted from unlabeled input samples. Using a teacher branch of an unsupervised subnetwork, the method includes processing the sequence of input features to predict probability distributions over possible teacher branch output labels, sampling one or more sequences of teacher branch output labels, and determining a sequence of pseudo output labels based on the one or more sequences of teacher branch output labels. Using a student branch that includes a student encoder of the unsupervised subnetwork, the method includes processing the sequence of input features to predict probability distributions over possible student branch output labels, determining a negative log likelihood term based on the predicted probability distributions over possible student branch output labels and the sequence of pseudo output labels, and updating parameters of the student encoder.
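The teacher-sampling and student-scoring steps can be sketched as follows. This is a simplified, hypothetical rendering: the per-step majority vote over sampled sequences is one plausible way to derive pseudo labels from teacher samples, and the function name and fixed-length label sequences are assumptions for illustration.

```python
import numpy as np

def pseudo_labels_and_nll(teacher_probs, student_probs, num_samples=8, seed=0):
    """Sample label sequences from the teacher's per-step distributions,
    take the per-step majority vote as the pseudo output label sequence,
    and score the student branch with a negative log likelihood term.

    teacher_probs, student_probs: (steps, labels) probability arrays.
    """
    rng = np.random.default_rng(seed)
    T, V = teacher_probs.shape
    # Draw `num_samples` label sequences from the teacher branch.
    samples = np.stack([
        [rng.choice(V, p=teacher_probs[t]) for t in range(T)]
        for _ in range(num_samples)
    ])                                            # (num_samples, T)
    # Per-step majority vote over the sampled sequences.
    pseudo = np.array([np.bincount(samples[:, t], minlength=V).argmax()
                       for t in range(T)])        # (T,)
    # Negative log likelihood of the pseudo labels under the student.
    nll = -np.log(student_probs[np.arange(T), pseudo]).sum()
    return pseudo, float(nll)
```

The gradient of `nll` with respect to the student encoder's parameters would drive the update step the abstract describes.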
-
Publication number: US20230096805A1
Publication date: 2023-03-30
Application number: US17644337
Filing date: 2021-12-14
Applicant: Google LLC
Inventor: Jaeyoung Kim , Soheil Khorram , Hasim Sak , Anshuman Tripathi , Han Lu , Qian Zhang
Abstract: A method includes receiving a plurality of unlabeled audio samples corresponding to spoken utterances not paired with corresponding transcriptions. At a target branch of a contrastive Siamese network, the method also includes generating a sequence of encoder outputs for the plurality of unlabeled audio samples and modifying time characteristics of the encoder outputs to generate a sequence of target branch outputs. At an augmentation branch of a contrastive Siamese network, the method also includes performing augmentation on the unlabeled audio samples, generating a sequence of augmented encoder outputs for the augmented unlabeled audio samples, and generating predictions of the sequence of target branch outputs generated at the target branch. The method also includes determining an unsupervised loss term based on target branch outputs and predictions of the sequence of target branch outputs. The method also includes updating parameters of the audio encoder based on the unsupervised loss term.
-
Publication number: US20240371379A1
Publication date: 2024-11-07
Application number: US18775561
Filing date: 2024-07-17
Applicant: Google LLC
Inventor: Jaeyoung Kim , Han Lu , Anshuman Tripathi , Qian Zhang , Hasim Sak
Abstract: A streaming speech recognition model includes an audio encoder configured to receive a sequence of acoustic frames and generate a higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The streaming speech recognition model also includes a label encoder configured to receive a sequence of non-blank symbols output by a final softmax layer and generate a dense representation. The streaming speech recognition model also includes a joint network configured to receive the higher order feature representation generated by the audio encoder and the dense representation generated by the label encoder and generate a probability distribution over possible speech recognition hypotheses. Here, the streaming speech recognition model is trained using self-alignment to reduce prediction delay by encouraging an alignment path that is one frame left from a reference forced-alignment frame.
-
Publication number: US11961515B2
Publication date: 2024-04-16
Application number: US17644337
Filing date: 2021-12-14
Applicant: Google LLC
Inventor: Jaeyoung Kim , Soheil Khorram , Hasim Sak , Anshuman Tripathi , Han Lu , Qian Zhang
CPC classification number: G10L15/16 , G06N3/088 , G10L15/1815
Abstract: A method includes receiving a plurality of unlabeled audio samples corresponding to spoken utterances not paired with corresponding transcriptions. At a target branch of a contrastive Siamese network, the method also includes generating a sequence of encoder outputs for the plurality of unlabeled audio samples and modifying time characteristics of the encoder outputs to generate a sequence of target branch outputs. At an augmentation branch of a contrastive Siamese network, the method also includes performing augmentation on the unlabeled audio samples, generating a sequence of augmented encoder outputs for the augmented unlabeled audio samples, and generating predictions of the sequence of target branch outputs generated at the target branch. The method also includes determining an unsupervised loss term based on target branch outputs and predictions of the sequence of target branch outputs. The method also includes updating parameters of the audio encoder based on the unsupervised loss term.
-
Publication number: US20230368779A1
Publication date: 2023-11-16
Application number: US18357225
Filing date: 2023-07-24
Applicant: Google LLC
Inventor: Anshuman Tripathi , Hasim Sak , Han Lu , Qian Zhang , Jaeyoung Kim
CPC classification number: G10L15/16 , G06N3/088 , G10L15/063 , G10L15/22 , G10L15/30 , G06N3/04 , G10L15/197
Abstract: A transformer-transducer model for unifying streaming and non-streaming speech recognition includes an audio encoder, a label encoder, and a joint network. The audio encoder receives a sequence of acoustic frames, and generates, at each of a plurality of time steps, a higher order feature representation for a corresponding acoustic frame. The label encoder receives a sequence of non-blank symbols output by a final softmax layer, and generates, at each of the plurality of time steps, a dense representation. The joint network receives the higher order feature representation and the dense representation at each of the plurality of time steps, and generates a probability distribution over possible speech recognition hypotheses. The audio encoder of the model further includes a neural network having an initial stack of transformer layers trained with zero look ahead audio context, and a final stack of transformer layers trained with a variable look ahead audio context.