Transformer Transducer: One Model Unifying Streaming And Non-Streaming Speech Recognition

    Publication Number: US20220108689A1

    Publication Date: 2022-04-07

    Application Number: US17210465

    Filing Date: 2021-03-23

    Applicant: Google LLC

    Abstract: A transformer-transducer model for unifying streaming and non-streaming speech recognition includes an audio encoder, a label encoder, and a joint network. The audio encoder receives a sequence of acoustic frames and generates, at each of a plurality of time steps, a higher-order feature representation for a corresponding acoustic frame. The label encoder receives a sequence of non-blank symbols output by a final softmax layer and generates, at each of the plurality of time steps, a dense representation. The joint network receives the higher-order feature representation and the dense representation at each of the plurality of time steps and generates a probability distribution over possible speech recognition hypotheses. The audio encoder of the model further includes a neural network having an initial stack of transformer layers trained with zero look-ahead audio context, and a final stack of transformer layers trained with a variable look-ahead audio context.
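
    The abstract describes a standard transformer-transducer layout. Below is a minimal PyTorch sketch of the joint network combining the audio encoder's higher-order features with the label encoder's dense representation; all dimensions, layer choices, and names are illustrative assumptions, not the patent's implementation.

```python
import torch
import torch.nn as nn

class JointNetwork(nn.Module):
    """Sketch: combines audio-encoder and label-encoder outputs into a
    distribution over output symbols (blank + vocabulary)."""
    def __init__(self, enc_dim, pred_dim, joint_dim, vocab_size):
        super().__init__()
        self.enc_proj = nn.Linear(enc_dim, joint_dim)
        self.pred_proj = nn.Linear(pred_dim, joint_dim)
        self.out = nn.Linear(joint_dim, vocab_size + 1)  # +1 for the blank symbol

    def forward(self, audio_feat, label_feat):
        # audio_feat: (B, T, enc_dim)  higher-order feature per acoustic frame
        # label_feat: (B, U, pred_dim) dense representation per non-blank symbol
        joint = torch.tanh(
            self.enc_proj(audio_feat).unsqueeze(2)     # (B, T, 1, joint_dim)
            + self.pred_proj(label_feat).unsqueeze(1)  # (B, 1, U, joint_dim)
        )
        return self.out(joint).log_softmax(dim=-1)     # (B, T, U, V+1)

# Tiny smoke test with made-up dimensions.
joint = JointNetwork(enc_dim=256, pred_dim=128, joint_dim=320, vocab_size=100)
log_probs = joint(torch.randn(2, 50, 256), torch.randn(2, 10, 128))
print(log_probs.shape)  # torch.Size([2, 50, 10, 101])
```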

    Word-Level End-to-End Neural Speaker Diarization with AuxNet

    Publication Number: US20250118292A1

    Publication Date: 2025-04-10

    Application Number: US18891045

    Filing Date: 2024-09-20

    Applicant: Google LLC

    Abstract: A method includes obtaining labeled training data including a plurality of spoken terms spoken during a conversation, each paired with a corresponding transcription and speaker label. For each respective spoken term, the method includes generating a corresponding sequence of intermediate audio encodings from a corresponding sequence of acoustic frames, generating a corresponding sequence of final audio encodings from the corresponding sequence of intermediate audio encodings, generating a corresponding speech recognition result, and generating a respective speaker token representing a predicted identity of a speaker for each corresponding speech recognition result. The method also includes training a joint speech recognition and speaker diarization model jointly based on a first loss derived from the generated speech recognition results and the corresponding transcriptions, and a second loss derived from the generated speaker tokens and the corresponding speaker labels.
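
    A hedged sketch of the two-term joint training objective the abstract describes; the tensors, shapes, and the 0.5 weight are all hypothetical stand-ins, not values from the patent.

```python
import torch
import torch.nn.functional as F

# Hypothetical model outputs for one batch; names and shapes are illustrative.
asr_logits = torch.randn(4, 20, 100, requires_grad=True)    # (batch, terms, vocab)
speaker_logits = torch.randn(4, 20, 8, requires_grad=True)  # (batch, terms, speakers)
transcript_ids = torch.randint(0, 100, (4, 20))             # reference transcriptions
speaker_ids = torch.randint(0, 8, (4, 20))                  # reference speaker labels

# First loss: speech recognition results vs. corresponding transcriptions.
asr_loss = F.cross_entropy(asr_logits.transpose(1, 2), transcript_ids)
# Second loss: predicted speaker tokens vs. corresponding speaker labels.
spk_loss = F.cross_entropy(speaker_logits.transpose(1, 2), speaker_ids)

# Joint objective: the two losses are combined (the 0.5 weight is an
# assumption; the abstract only says training is based on both losses).
(asr_loss + 0.5 * spk_loss).backward()
```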

    End-to-end multi-talker overlapping speech recognition

    Publication Number: US12266347B2

    Publication Date: 2025-04-01

    Application Number: US18055553

    Filing Date: 2022-11-15

    Applicant: Google LLC

    Abstract: A method for training a speech recognition model with a loss function includes receiving an audio signal including a first segment corresponding to audio spoken by a first speaker, a second segment corresponding to audio spoken by a second speaker, and an overlapping region where the first segment overlaps the second segment. The overlapping region includes a known start time and a known end time. The method also includes generating a respective masked audio embedding for each of the first and second speakers. The method also includes applying a masking loss after the known end time to the respective masked audio embedding for the first speaker when the first speaker was speaking prior to the known start time, or applying the masking loss prior to the known start time when the first speaker was speaking after the known end time.
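
    A sketch of the masking-loss idea for one speaker, under the assumption that the loss simply drives that speaker's masked embedding toward zero on frames where the speaker is known to be silent; the function and variable names are made up.

```python
import torch

def masking_loss(masked_emb, active_start, active_end):
    """Penalize non-zero masked-embedding energy on frames where the
    speaker is known to be silent (a sketch of the idea, not the
    patent's exact formulation).

    masked_emb: (T, D) masked audio embedding for one speaker
    active_start/active_end: frame indices bounding the speaker's activity
    """
    T = masked_emb.size(0)
    silent = torch.ones(T, dtype=torch.bool)
    silent[active_start:active_end] = False     # frames where the speaker talks
    if silent.any():
        return masked_emb[silent].pow(2).mean() # drive silent frames toward zero
    return masked_emb.new_zeros(())

# The first speaker talks before the overlap's known start time, so the
# loss applies to frames after the known end time (frame 60 here):
emb = torch.randn(100, 64, requires_grad=True)
loss = masking_loss(emb, active_start=0, active_end=60)
loss.backward()
```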

    One model unifying streaming and non-streaming speech recognition

    Publication Number: US12254869B2

    Publication Date: 2025-03-18

    Application Number: US18357225

    Filing Date: 2023-07-24

    Applicant: Google LLC

    Abstract: A transformer-transducer model for unifying streaming and non-streaming speech recognition includes an audio encoder, a label encoder, and a joint network. The audio encoder receives a sequence of acoustic frames and generates, at each of a plurality of time steps, a higher-order feature representation for a corresponding acoustic frame. The label encoder receives a sequence of non-blank symbols output by a final softmax layer and generates, at each of the plurality of time steps, a dense representation. The joint network receives the higher-order feature representation and the dense representation at each of the plurality of time steps and generates a probability distribution over possible speech recognition hypotheses. The audio encoder of the model further includes a neural network having an initial stack of transformer layers trained with zero look-ahead audio context, and a final stack of transformer layers trained with a variable look-ahead audio context.
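
    This patent shares its title and abstract with publication US20220108689A1 listed above. To complement the joint-network sketch there, here is a minimal sketch of the attention masks behind the abstract's distinguishing detail: an initial transformer stack with zero look-ahead and a final stack with variable look-ahead. The mask construction is an assumption; the patent's layers may also bound left context.

```python
import torch

def context_mask(num_frames, look_ahead):
    """Boolean self-attention mask: frame t may attend to frames
    s <= t + look_ahead. look_ahead=0 gives the streaming (causal)
    behavior of the initial stack; a larger value emulates the
    variable look-ahead of the final stack. Sketch only."""
    idx = torch.arange(num_frames)
    return idx.unsqueeze(0) <= idx.unsqueeze(1) + look_ahead  # (T, T), True = visible

zero_ctx = context_mask(6, look_ahead=0)  # initial stack: no future frames
var_ctx = context_mask(6, look_ahead=2)   # final stack: 2 frames of future context
print(zero_ctx.int())
print(var_ctx.int())
```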

    Reducing streaming ASR model delay with self alignment

    Publication Number: US12057124B2

    Publication Date: 2024-08-06

    Application Number: US17644377

    Filing Date: 2021-12-15

    Applicant: Google LLC

    CPC classification number: G10L15/26 G10L15/16

    Abstract: A streaming speech recognition model includes an audio encoder configured to receive a sequence of acoustic frames and generate a higher-order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The streaming speech recognition model also includes a label encoder configured to receive a sequence of non-blank symbols output by a final softmax layer and generate a dense representation. The streaming speech recognition model also includes a joint network configured to receive the higher-order feature representation generated by the audio encoder and the dense representation generated by the label encoder and generate a probability distribution over possible speech recognition hypotheses. Here, the streaming speech recognition model is trained using self-alignment to reduce prediction delay by encouraging an alignment path that is one frame to the left of the reference forced-alignment frame.
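
    A sketch of the self-alignment term, assuming the model's best alignment path is available as (frame, label) pairs; the lattice shapes and the averaging are assumptions, not the patent's exact formulation.

```python
import torch

def self_alignment_loss(log_probs, alignment):
    """Encourage emitting each label one frame earlier than the model's
    own forced alignment. log_probs: (T, U, V) transducer output lattice
    for one utterance; alignment: list of (frame t, label y) pairs from
    the current best path. A sketch of the idea only."""
    loss = log_probs.new_zeros(())
    for u, (t, y) in enumerate(alignment):
        t_left = max(t - 1, 0)             # one frame to the left
        loss = loss - log_probs[t_left, u, y]
    return loss / len(alignment)

log_probs = torch.randn(50, 10, 101, requires_grad=True).log_softmax(-1)
alignment = [(5 * u + 3, u % 101) for u in range(10)]  # made-up best path
loss = self_alignment_loss(log_probs, alignment)
loss.backward()
```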

    Contrastive Siamese Network for Semi-supervised Speech Recognition

    Publication Number: US20240242712A1

    Publication Date: 2024-07-18

    Application Number: US18619684

    Filing Date: 2024-03-28

    Applicant: Google LLC

    CPC classification number: G10L15/16 G06N3/088 G10L15/1815

    Abstract: A method includes receiving a plurality of unlabeled audio samples corresponding to spoken utterances not paired with corresponding transcriptions. At a target branch of a contrastive Siamese network, the method also includes generating a sequence of encoder outputs for the plurality of unlabeled audio samples and modifying time characteristics of the encoder outputs to generate a sequence of target branch outputs. At an augmentation branch of the contrastive Siamese network, the method also includes performing augmentation on the unlabeled audio samples, generating a sequence of augmented encoder outputs for the augmented unlabeled audio samples, and generating predictions of the sequence of target branch outputs generated at the target branch. The method also includes determining an unsupervised loss term based on the target branch outputs and the predictions of the sequence of target branch outputs. The method also includes updating parameters of the audio encoder based on the unsupervised loss term.
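
    A compact sketch of the two branches and the unsupervised loss; the encoder, the predictor, the crude time-mask augmentation, and the cosine distance are all stand-ins for components the abstract does not pin down.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 256))
predictor = nn.Linear(256, 256)  # augmentation branch predicts target outputs

features = torch.randn(4, 120, 80)  # unlabeled log-mel frames (made up)

# Target branch: clean features, gradients stopped on its outputs.
with torch.no_grad():
    target_out = encoder(features)           # (B, T, 256)

# Augmentation branch: a simple time-mask stands in for the augmentation;
# its encoder outputs pass through a predictor of the target outputs.
augmented = features.clone()
augmented[:, 40:60, :] = 0.0
predictions = predictor(encoder(augmented))  # (B, T, 256)

# Unsupervised loss: distance between predictions and target branch outputs
# (the concrete distance is an assumption; the abstract does not fix it).
loss = 1.0 - F.cosine_similarity(predictions, target_out, dim=-1).mean()
loss.backward()
```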

    Monte Carlo Self-Training for Speech Recognition

    Publication Number: US20240177706A1

    Publication Date: 2024-05-30

    Application Number: US18515212

    Filing Date: 2023-11-20

    Applicant: Google LLC

    CPC classification number: G10L15/063 G10L15/065 G10L15/10 G10L2015/0635

    Abstract: A method for training a sequence transduction model includes receiving a sequence of unlabeled input features extracted from unlabeled input samples. Using a teacher branch of an unsupervised subnetwork, the method includes processing the sequence of input features to predict probability distributions over possible teacher branch output labels, sampling one or more sequences of teacher branch output labels, and determining a sequence of pseudo output labels based on the one or more sequences of teacher branch output labels. Using a student branch that includes a student encoder of the unsupervised subnetwork, the method includes processing the sequence of input features to predict probability distributions over possible student branch output labels, determining a negative log likelihood term based on the predicted probability distributions over possible student branch output labels and the sequence of pseudo output labels, and updating parameters of the student encoder.
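
    A minimal sketch of the Monte Carlo teacher-student step for one utterance, using per-frame distributions for simplicity; the eight samples and the majority-vote reduction to pseudo labels are assumptions, since the abstract leaves the reduction unspecified.

```python
import torch
import torch.nn.functional as F

# Hypothetical per-frame teacher/student logits (names and shapes illustrative).
teacher_logits = torch.randn(30, 50)                  # (frames, labels)
student_logits = torch.randn(30, 50, requires_grad=True)

# Teacher branch: draw several Monte Carlo label sequences from the
# predicted distributions, then reduce them to one pseudo-label sequence.
probs = teacher_logits.softmax(dim=-1)
samples = torch.multinomial(probs, num_samples=8, replacement=True)  # (frames, 8)
pseudo_labels = samples.mode(dim=-1).values           # (frames,) majority vote

# Student branch: negative log likelihood of the pseudo labels,
# used to update the student encoder's parameters.
nll = F.cross_entropy(student_logits, pseudo_labels)
nll.backward()
```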

    Contrastive Siamese Network for Semi-supervised Speech Recognition

    Publication Number: US20230096805A1

    Publication Date: 2023-03-30

    Application Number: US17644337

    Filing Date: 2021-12-14

    Applicant: Google LLC

    Abstract: A method includes receiving a plurality of unlabeled audio samples corresponding to spoken utterances not paired with corresponding transcriptions. At a target branch of a contrastive Siamese network, the method also includes generating a sequence of encoder outputs for the plurality of unlabeled audio samples and modifying time characteristics of the encoder outputs to generate a sequence of target branch outputs. At an augmentation branch of the contrastive Siamese network, the method also includes performing augmentation on the unlabeled audio samples, generating a sequence of augmented encoder outputs for the augmented unlabeled audio samples, and generating predictions of the sequence of target branch outputs generated at the target branch. The method also includes determining an unsupervised loss term based on the target branch outputs and the predictions of the sequence of target branch outputs. The method also includes updating parameters of the audio encoder based on the unsupervised loss term.
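
    This earlier filing shares its abstract with publication US20240242712A1 listed above. As a complement to the two-branch sketch there, here is a stand-in for the "modifying time characteristics of the encoder outputs" step on the target branch; average pooling is an assumed concrete choice, not taken from the abstract.

```python
import torch
import torch.nn.functional as F

def modify_time(encoder_outputs, factor=2):
    """Stand-in for 'modifying time characteristics of the encoder
    outputs': average-pool along time so the target branch outputs
    match a coarser frame rate. The pooling choice is an assumption."""
    # encoder_outputs: (B, T, D) -> (B, T // factor, D)
    x = encoder_outputs.transpose(1, 2)  # (B, D, T) for 1-D pooling
    x = F.avg_pool1d(x, kernel_size=factor, stride=factor)
    return x.transpose(1, 2)

enc_out = torch.randn(4, 120, 256)      # made-up encoder outputs
target_branch_out = modify_time(enc_out, factor=2)
print(target_branch_out.shape)          # torch.Size([4, 60, 256])
```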

    Speaker-Turn-Based Online Speaker Diarization with Constrained Spectral Clustering

    Publication Number: US20230089308A1

    Publication Date: 2023-03-23

    Application Number: US17644261

    Filing Date: 2021-12-14

    Applicant: Google LLC

    Abstract: A method includes receiving an input audio signal that corresponds to utterances spoken by multiple speakers. The method also includes processing the input audio signal to generate a transcription of the utterances and a sequence of speaker turn tokens, each indicating a location of a respective speaker turn. The method also includes segmenting the input audio signal into a plurality of speaker segments based on the sequence of speaker turn tokens. The method also includes extracting a speaker-discriminative embedding from each speaker segment and performing spectral clustering on the speaker-discriminative embeddings to cluster the plurality of speaker segments into k classes. The method also includes assigning a respective speaker label to the speaker segments clustered into each respective class, the speaker label for each class being different from the speaker labels assigned to the segments clustered into every other class of the k classes.
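
    A sketch of the embedding-clustering step using plain (unconstrained) spectral clustering from scikit-learn; the constraints derived from speaker-turn tokens are omitted, and the synthetic embeddings and cosine affinity are made-up stand-ins.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Synthetic speaker-discriminative embeddings, one per speaker segment:
# two groups of segments drawn around two random speaker centroids.
rng = np.random.default_rng(0)
center_a = rng.normal(size=64)
center_b = rng.normal(size=64)
embeddings = np.vstack([
    center_a + 0.05 * rng.normal(size=(5, 64)),  # segments from one speaker
    center_b + 0.05 * rng.normal(size=(5, 64)),  # segments from another speaker
])

# Cosine affinity between segments, then spectral clustering into k classes.
norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
affinity = np.clip(norm @ norm.T, 0.0, 1.0)      # non-negative affinity matrix
labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(affinity)

# Each cluster id becomes a distinct speaker label for its segments.
print(labels)  # e.g. [0 0 0 0 0 1 1 1 1 1] (cluster ids up to permutation)
```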
