UNSUPERVISED FEDERATED LEARNING OF MACHINE LEARNING MODEL LAYERS

    Publication number: US20220270590A1

    Publication date: 2022-08-25

    Application number: US16973605

    Application date: 2020-07-20

    Applicant: Google LLC

    Abstract: Implementations disclosed herein are directed to unsupervised federated training of global machine learning (“ML”) model layers that, after the federated training, can be combined with additional layer(s), thereby resulting in a combined ML model. Processor(s) can: detect audio data that captures a spoken utterance of a user of a client device; process, using a local ML model, the audio data to generate predicted output(s); generate, using unsupervised learning locally at the client device, a gradient based on the predicted output(s); transmit the gradient to a remote system; update weight(s) of the global ML model layers based on the gradient; subsequent to updating the weight(s), train, using supervised learning remotely at the remote system, a combined ML model that includes the updated global ML model layers and additional layer(s); transmit the combined ML model to the client device; and use the combined ML model to make prediction(s) at the client device.
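The server-side step of this scheme, averaging client-generated gradients and applying them to the shared layer weights, can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the function name, the fixed learning rate, and the plain SGD update are assumptions.

```python
# Hypothetical sketch of the remote-system update: clients send locally
# computed (unsupervised) gradients, and the server averages them to
# update the global ML model layer weights. Learning rate is illustrative.
import numpy as np

def federated_update(global_weights, client_gradients, learning_rate=0.1):
    """Average per-client gradients and apply one SGD step."""
    avg_gradient = np.mean(np.stack(client_gradients), axis=0)
    return global_weights - learning_rate * avg_gradient

weights = np.array([0.5, -0.2, 1.0])
grads = [np.array([0.1, 0.0, -0.2]),   # gradient from client A
         np.array([0.3, 0.2, 0.0])]    # gradient from client B
updated = federated_update(weights, grads)
print(updated)
```

After such updates, the patent describes freezing-in the trained layers, attaching additional layers, and fine-tuning the combined model with supervised learning before sending it back to client devices.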

    MULTI-DIALECT AND MULTILINGUAL SPEECH RECOGNITION

    Publication number: US20220130374A1

    Publication date: 2022-04-28

    Application number: US17572238

    Application date: 2022-01-10

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable media, for speech recognition using multi-dialect and multilingual models. In some implementations, audio data indicating audio characteristics of an utterance is received. Input features determined based on the audio data are provided to a speech recognition model that has been trained to output scores indicating the likelihood of linguistic units for each of multiple different languages or dialects. The speech recognition model can be one that has been trained using cluster adaptive training. Output that the speech recognition model generated in response to receiving the input features determined based on the audio data is received. A transcription of the utterance generated based on the output of the speech recognition model is provided.
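The cluster adaptive training idea referenced above can be sketched in miniature: a layer's weight matrix is expressed as an interpolation of several cluster "basis" matrices, with one interpolation vector learned per language or dialect. The basis matrices and mixing weights below are invented for illustration.

```python
# Illustrative sketch (not the patent's implementation) of cluster
# adaptive training: a dialect-specific weight matrix is a weighted
# combination of shared cluster bases.
import numpy as np

def cluster_adapted_weights(bases, interpolation):
    """Combine cluster basis matrices with per-dialect mixing weights."""
    return sum(w * basis for w, basis in zip(interpolation, bases))

bases = [np.eye(2), np.ones((2, 2))]                 # two shared cluster bases
en_us = cluster_adapted_weights(bases, [0.7, 0.3])   # hypothetical dialect mix
print(en_us)
```

In this framing, only the small interpolation vectors differ across dialects, which is what lets one model serve many languages or dialects.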

    Joint unsupervised and supervised training for multilingual ASR

    Publication number: US12249317B2

    Publication date: 2025-03-11

    Application number: US17929934

    Application date: 2022-09-06

    Applicant: Google LLC

    Abstract: A method includes receiving audio features and generating a latent speech representation based on the audio features. The method also includes generating a target quantized vector token and a target token index for a corresponding latent speech representation. The method also includes generating a contrastive context vector for a corresponding unmasked or masked latent speech representation and deriving a contrastive self-supervised loss based on the corresponding contrastive context vector and the corresponding target quantized vector token. The method also includes generating a high-level context vector based on the contrastive context vector and, for each high-level context vector, learning to predict the target token index at the corresponding time step using a cross-entropy loss based on the target token index. The method also includes predicting speech recognition hypotheses for the utterance and training a multilingual automatic speech recognition (ASR) model using an unsupervised loss and a supervised loss.
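The abstract names three loss terms that are combined into one training objective: a contrastive self-supervised loss, a cross-entropy loss over target token indices, and a supervised ASR loss. A minimal sketch of that combination, with made-up weighting coefficients and toy loss values, might look like:

```python
# Hedged sketch of the joint objective: the unsupervised part pairs the
# contrastive loss with the token-index cross-entropy, and the supervised
# ASR loss is added on top. Coefficients alpha/beta are assumptions.
def joint_loss(contrastive, cross_entropy, supervised, alpha=1.0, beta=1.0):
    """Unsupervised loss = contrastive + token-prediction cross-entropy;
    the total adds the supervised ASR loss."""
    unsupervised = contrastive + alpha * cross_entropy
    return unsupervised + beta * supervised

total = joint_loss(contrastive=0.8, cross_entropy=0.5, supervised=1.2)
print(total)  # -> 2.5
```

In the real model these scalars would be computed from the contrastive context vectors, quantized targets, and transcripts rather than supplied directly.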

    Ephemeral learning of machine learning model(s)

    Publication number: US12126845B2

    Publication date: 2024-10-22

    Application number: US17533779

    Application date: 2021-11-23

    Applicant: GOOGLE LLC

    CPC classification number: H04N21/233 G06F18/214 G06N20/00 H04N21/232

    Abstract: Implementations disclosed herein are directed to ephemeral learning of machine learning (“ML”) model(s) based on gradient(s) generated at a remote system (e.g., remote server(s)). Processor(s) of the remote system can receive stream(s) of audio data capturing spoken utterance(s) from a client device of a user. A fulfillment pipeline can process the stream(s) of audio data to cause certain fulfillment(s) of the spoken utterance(s) to be performed. Meanwhile, a training pipeline can process the stream(s) of audio data to generate gradient(s) using unsupervised learning techniques. Subsequent to the processing by the fulfillment pipeline and/or the training pipeline, the stream(s) of audio data are discarded by the remote system. Accordingly, the ML model(s) can be trained at the remote system without storing or logging of the stream(s) of audio data by non-transient memory thereof, thereby providing more efficient training mechanisms for training the ML model(s) and also increasing security of user data.
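The control flow described above, the same audio stream feeding a fulfillment pipeline and a training pipeline before being discarded, can be sketched as below. All function names are hypothetical placeholders, not APIs from the patent.

```python
# Hedged sketch of ephemeral learning: fulfill the request, derive an
# unsupervised training gradient, and keep the audio only for the
# lifetime of this call (nothing is logged or persisted).
def handle_utterance(audio_stream, fulfill, compute_gradient):
    response = fulfill(audio_stream)           # e.g. run ASR, execute intent
    gradient = compute_gradient(audio_stream)  # unsupervised learning signal
    del audio_stream                           # audio discarded afterwards
    return response, gradient

resp, grad = handle_utterance(
    audio_stream=[0.1, -0.2, 0.05],        # toy stand-in for audio frames
    fulfill=lambda a: "ok",                # toy fulfillment pipeline
    compute_gradient=lambda a: sum(a),     # toy training pipeline
)
print(resp, grad)
```

The point of the structure is that only the gradient, not the audio, survives the call, which is what allows server-side training without retaining user data.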

    ON-DEVICE PERSONALIZATION OF SPEECH SYNTHESIS FOR TRAINING OF SPEECH RECOGNITION MODEL(S)

    Publication number: US20230068897A1

    Publication date: 2023-03-02

    Application number: US17983671

    Application date: 2022-11-09

    Applicant: GOOGLE LLC

    Abstract: Processor(s) of a client device can: identify a textual segment stored locally at the client device; process the textual segment, using an on-device TTS generator model, to generate synthesized speech audio data that includes synthesized speech of the textual segment; process the synthesized speech, using an on-device ASR model, to generate predicted ASR output; and generate a gradient based on comparing the predicted ASR output to ground truth output corresponding to the textual segment. Processor(s) of the client device can also: process the synthesized speech audio data using an on-device TTS generator model to make a prediction; and generate a gradient based on the prediction. In these implementations, the generated gradient(s) can be used to update weight(s) of the respective on-device model(s) and/or transmitted to a remote system for use in remote updating of respective global model(s). The updated weight(s) and/or the updated model(s) can be transmitted to client device(s).

    On-device personalization of speech synthesis for training of speech model(s)

    Publication number: US11545133B2

    Publication date: 2023-01-03

    Application number: US17082518

    Application date: 2020-10-28

    Applicant: Google LLC

    Abstract: Processor(s) of a client device can: identify a textual segment stored locally at the client device; process the textual segment, using an on-device TTS generator model, to generate synthesized speech audio data that includes synthesized speech of the textual segment; process the synthesized speech, using an on-device ASR model, to generate predicted ASR output; and generate a gradient based on comparing the predicted ASR output to ground truth output corresponding to the textual segment. Processor(s) of the client device can also: process the synthesized speech audio data using an on-device TTS generator model to make a prediction; and generate a gradient based on the prediction. In these implementations, the generated gradient(s) can be used to update weight(s) of the respective on-device model(s) and/or transmitted to a remote system for use in remote updating of respective global model(s). The updated weight(s) and/or the updated model(s) can be transmitted to client device(s).

    ON-DEVICE SPEECH SYNTHESIS OF TEXTUAL SEGMENTS FOR TRAINING OF ON-DEVICE SPEECH RECOGNITION MODEL

    Publication number: US20240290317A1

    Publication date: 2024-08-29

    Application number: US18656197

    Application date: 2024-05-06

    Applicant: GOOGLE LLC

    CPC classification number: G10L13/047 G10L15/063 G10L2015/0635

    Abstract: Processor(s) of a client device can: identify a textual segment stored locally at the client device; process the textual segment, using a speech synthesis model stored locally at the client device, to generate synthesized speech audio data that includes synthesized speech of the identified textual segment; process the synthesized speech, using an on-device speech recognition model that is stored locally at the client device, to generate predicted output; and generate a gradient based on comparing the predicted output to ground truth output that corresponds to the textual segment. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
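The training loop shared by this family of patents, synthesize a locally stored text segment, recognize the synthesized audio, and use the mismatch against the known text as the training signal, can be sketched as follows. The toy "models" and the 0/1 gradient are placeholders; the real system uses on-device TTS and ASR models and backpropagated gradients.

```python
# Minimal sketch of the on-device self-training step: the textual segment
# itself serves as ground truth for the ASR output on synthesized speech.
def training_step(text, synthesize, recognize, gradient_fn):
    audio = synthesize(text)             # on-device TTS of the local segment
    predicted = recognize(audio)         # on-device ASR on synthesized speech
    return gradient_fn(predicted, text)  # compare prediction to ground truth

grad = training_step(
    text="call mom",
    synthesize=lambda t: f"<audio:{t}>",           # toy TTS stand-in
    recognize=lambda a: a[7:-1],                   # toy ASR: recovers the text
    gradient_fn=lambda pred, truth: 0.0 if pred == truth else 1.0,
)
print(grad)  # -> 0.0
```

Because the ground truth is the local text itself, no human labeling is needed, which is what makes the scheme suitable for on-device personalization.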

    Sound model localization within an environment

    Publication number: US12073319B2

    Publication date: 2024-08-27

    Application number: US16940294

    Application date: 2020-07-27

    Applicant: GOOGLE LLC

    CPC classification number: G06N3/08 G06N3/047 G10L25/51

    Abstract: Systems and techniques are provided for sound model localization within an environment. Sound recordings of sounds in the environment may be received from devices in the environment. Preliminary labels for the sound recordings may be determined using pre-trained sound models. The preliminary labels may have associated probabilities. Sound clips with preliminary labels may be generated based on sound recordings that have preliminary labels whose probability is over a high-recall threshold for the pre-trained sound model that determined the preliminary label. The sound clips with preliminary labels may be sent to a user device. Labeled sound clips may be received from the user device. The labeled sound clips may be based on the sound clips with preliminary labels. Training data sets may be generated for the pre-trained sound models using the labeled sound clips. The pre-trained sound models may be trained using the training data sets to generate localized sound models.
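The clip-selection step described above, keeping only recordings whose preliminary-label probability clears the per-model high-recall threshold, is simple to sketch. The labels, probabilities, and threshold values below are invented for illustration.

```python
# Illustrative sketch of high-recall filtering: a clip is forwarded to the
# user for labeling only if its preliminary label's probability meets the
# threshold of the pre-trained model that produced the label.
def select_clips(recordings, thresholds):
    """recordings: (clip_id, label, probability) triples."""
    return [(clip, label) for clip, label, p in recordings
            if p >= thresholds[label]]

recordings = [("c1", "dog_bark", 0.92),
              ("c2", "dog_bark", 0.40),
              ("c3", "glass_break", 0.75)]
thresholds = {"dog_bark": 0.6, "glass_break": 0.7}  # high-recall thresholds
print(select_clips(recordings, thresholds))
# -> [('c1', 'dog_bark'), ('c3', 'glass_break')]
```

A high-recall threshold deliberately admits some false positives; the subsequent user-labeling step corrects them before the localized retraining data sets are built.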
