UNSUPERVISED FEDERATED LEARNING OF MACHINE LEARNING MODEL LAYERS

    Publication Number: US20220270590A1

    Publication Date: 2022-08-25

    Application Number: US16973605

    Application Date: 2020-07-20

    Applicant: Google LLC

    Abstract: Implementations disclosed herein are directed to unsupervised federated training of global machine learning (“ML”) model layers that, after the federated training, can be combined with additional layer(s), thereby resulting in a combined ML model. Processor(s) can: detect audio data that captures a spoken utterance of a user of a client device; process, using a local ML model, the audio data to generate predicted output(s); generate, using unsupervised learning locally at the client device, a gradient based on the predicted output(s); transmit the gradient to a remote system; update weight(s) of the global ML model layers based on the gradient; subsequent to updating the weight(s), train, using supervised learning remotely at the remote system, a combined ML model that includes the updated global ML model layers and additional layer(s); transmit the combined ML model to the client device; and use the combined ML model to make prediction(s) at the client device.
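
    As a rough illustration of the client/server round described in this abstract, the sketch below uses a linear reconstruction objective as the unsupervised learning step. The function names, the specific loss, and the single-layer shapes are assumptions for illustration, not details taken from the patent.

```python
import numpy as np

def client_round(audio_features, w_enc, w_dec):
    """On-device: run the local copy of the global layers on captured audio,
    use the input itself as the unsupervised target, and return gradients."""
    hidden = audio_features @ w_enc                 # global ML model layer (encoder)
    recon = hidden @ w_dec                          # predicted output(s)
    err = recon - audio_features                    # unsupervised reconstruction error
    grad_dec = hidden.T @ err                       # d(0.5*||err||^2)/d w_dec
    grad_enc = audio_features.T @ (err @ w_dec.T)   # d(0.5*||err||^2)/d w_enc
    return grad_enc, grad_dec                       # only gradients leave the device

def server_update(weights, grads, lr=0.01):
    """Remote system: apply (e.g., averaged) client gradients to the global layers."""
    return [w - lr * g for w, g in zip(weights, grads)]

# One simulated round with random stand-ins for audio features.
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 40))                       # frames x feature dims
w_enc = rng.normal(size=(40, 8)) * 0.1
w_dec = rng.normal(size=(8, 40)) * 0.1
g_enc, g_dec = client_round(x, w_enc, w_dec)
w_enc, w_dec = server_update([w_enc, w_dec], [g_enc, g_dec])
# The updated global layers would then be combined with additional layer(s) and
# fine-tuned with supervised learning at the remote system before being sent back.
```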

    Using corrections, of automated assistant functions, for training of on-device machine learning models

    Publication Number: US12272360B2

    Publication Date: 2025-04-08

    Application Number: US18657405

    Application Date: 2024-05-07

    Applicant: GOOGLE LLC

    Abstract: Processor(s) of a client device can: receive sensor data that captures environmental attributes of an environment of the client device; process the sensor data using a machine learning model to generate a predicted output that dictates whether one or more currently dormant automated assistant functions are activated; make a decision as to whether to trigger the one or more currently dormant automated assistant functions; subsequent to making the decision, determine that the decision was incorrect; and, in response to determining that the decision was incorrect, generate a gradient based on comparing the predicted output to ground truth output. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device machine learning model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a corresponding global machine learning model.
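
    A minimal sketch of the correction-driven gradient described above, assuming a single sigmoid output that dictates whether a dormant assistant function is triggered; the threshold, the binary cross-entropy loss, and all names are illustrative rather than taken from the patent.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_and_decide(sensor_features, weights, threshold=0.5):
    """Predicted output dictating whether the dormant function should activate."""
    prob = sigmoid(sensor_features @ weights)
    return prob, bool(prob >= threshold)            # the decision

def correction_gradient(sensor_features, prob, decision_was_correct):
    """Only if the decision is later determined to be incorrect does a ground
    truth (the opposite label) exist, and only then is a gradient generated."""
    if decision_was_correct:
        return None
    ground_truth = 0.0 if prob >= 0.5 else 1.0      # correction implies the other label
    return (prob - ground_truth) * sensor_features  # BCE gradient for a sigmoid output

rng = np.random.default_rng(1)
x, w = rng.normal(size=8), rng.normal(size=8) * 0.1
prob, triggered = predict_and_decide(x, w)
grad = correction_gradient(x, prob, decision_was_correct=False)
# grad can update the on-device model directly and/or be transmitted to a remote
# system for federated updating of the corresponding global model's weights.
```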

    ON-DEVICE SPEECH SYNTHESIS OF TEXTUAL SEGMENTS FOR TRAINING OF ON-DEVICE SPEECH RECOGNITION MODEL

    Publication Number: US20240290317A1

    Publication Date: 2024-08-29

    Application Number: US18656197

    Application Date: 2024-05-06

    Applicant: GOOGLE LLC

    CPC classification number: G10L13/047 G10L15/063 G10L2015/0635

    Abstract: Processor(s) of a client device can: identify a textual segment stored locally at the client device; process the textual segment, using a speech synthesis model stored locally at the client device, to generate synthesized speech audio data that includes synthesized speech of the identified textual segment; process the synthesized speech, using an on-device speech recognition model that is stored locally at the client device, to generate predicted output; and generate a gradient based on comparing the predicted output to ground truth output that corresponds to the textual segment. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
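
    The abstract above describes a self-labelled loop (text → synthesized speech → recognition → gradient against the original text). The sketch below follows that loop with stand-in tts_model and asr_model callables and an assumed character-level softmax/cross-entropy loss; none of these specifics come from the patent.

```python
import numpy as np

VOCAB = "abcdefghijklmnopqrstuvwxyz "

def one_hot_targets(text):
    """Ground truth output derived from the locally stored textual segment itself."""
    idx = [VOCAB.index(c) for c in text.lower() if c in VOCAB]
    targets = np.zeros((len(idx), len(VOCAB)))
    targets[np.arange(len(idx)), idx] = 1.0
    return targets

def on_device_training_step(textual_segment, tts_model, asr_model):
    # 1) Synthesize speech for the textual segment (no human-provided labels needed).
    synthesized_audio = tts_model(textual_segment)
    # 2) Run the on-device speech recognition model to get per-step predictions.
    predicted_probs = asr_model(synthesized_audio)        # shape: (steps, |VOCAB|)
    # 3) Compare predictions to the textual segment used to synthesize the speech.
    targets = one_hot_targets(textual_segment)
    steps = min(len(targets), len(predicted_probs))
    return predicted_probs[:steps] - targets[:steps]      # softmax + cross-entropy gradient

# Toy stand-ins so the sketch runs end to end.
fake_tts = lambda text: np.random.default_rng(0).normal(size=(len(text), 40))
fake_asr = lambda audio: np.full((len(audio), len(VOCAB)), 1.0 / len(VOCAB))
gradient = on_device_training_step("hello assistant", fake_tts, fake_asr)
# The gradient updates the on-device recognizer's weights and/or is transmitted
# for remote updating of the global speech recognition model's weights.
```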

    SYSTEM(S) AND METHOD(S) TO REDUCE A TRANSFERABLE SIZE OF LANGUAGE MODEL(S) TO ENABLE DECENTRALIZED LEARNING THEREOF

    Publication Number: US20240265269A1

    Publication Date: 2024-08-08

    Application Number: US18125613

    Application Date: 2023-03-23

    Applicant: GOOGLE LLC

    CPC classification number: G06N3/098 G06F40/40 G06N3/044

    Abstract: Implementations disclosed herein are directed to techniques for enabling decentralized learning of global language models (LMs). Remote processor(s) of a remote system can obtain a global LM that includes a global embedding matrix, generate a global embedding mask for the global embedding matrix using a masking technique, apply the global embedding mask to the global embedding matrix to generate a sparsified global LM that includes a masked global embedding matrix that is a masked version of the global embedding matrix, transmit the sparsified global LM to computing device(s) that are participating in a given round of decentralized learning for the global language model, receive corresponding updates from the computing device(s), and cause the global LM to be updated based on the corresponding updates. By generating the global embedding mask and applying it to the global embedding matrix, the transferable size of the global LM is reduced, thereby enabling decentralized learning thereof.
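
    A small sketch of the sparsification step described above, assuming a simple magnitude-based masking technique; the actual masking technique, the matrix shapes, and the keep fraction are illustrative assumptions.

```python
import numpy as np

def generate_embedding_mask(embedding_matrix, keep_fraction=0.25):
    """Global embedding mask: keep only the highest-magnitude entries (assumed technique)."""
    k = max(1, int(embedding_matrix.size * keep_fraction))
    threshold = np.sort(np.abs(embedding_matrix), axis=None)[-k]
    return np.abs(embedding_matrix) >= threshold

def sparsify_global_lm(embedding_matrix, mask):
    """Masked global embedding matrix - the version transmitted for this round."""
    return embedding_matrix * mask

rng = np.random.default_rng(2)
global_embeddings = rng.normal(size=(5000, 64))      # vocab size x embedding dim
mask = generate_embedding_mask(global_embeddings)
masked_embeddings = sparsify_global_lm(global_embeddings, mask)
# Only the surviving entries (plus the mask) need to be transmitted to the
# participating computing device(s); their updates for the unmasked positions are
# later aggregated back into the global LM's embedding matrix.
```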

    EPHEMERAL LEARNING OF MACHINE LEARNING MODEL(S)

    Publication Number: US20230156248A1

    Publication Date: 2023-05-18

    Application Number: US17533779

    Application Date: 2021-11-23

    Applicant: GOOGLE LLC

    CPC classification number: H04N21/233 G06N20/00 G06K9/6256 H04N21/232

    Abstract: Implementations disclosed herein are directed to ephemeral learning of machine learning (“ML”) model(s) based on gradient(s) generated at a remote system (e.g., remote server(s)). Processor(s) of the remote system can receive stream(s) of audio data capturing spoken utterance(s) from a client device of a user. A fulfillment pipeline can process the stream(s) of audio data to cause certain fulfillment(s) of the spoken utterance(s) to be performed. Meanwhile, a training pipeline can process the stream(s) of audio data to generate gradient(s) using unsupervised learning techniques. Subsequent to the processing by the fulfillment pipeline and/or the training pipeline, the stream(s) of audio data are discarded by the remote system. Accordingly, the ML model(s) can be trained at the remote system without storing or logging the stream(s) of audio data in non-transient memory thereof, thereby providing more efficient training mechanisms for training the ML model(s) and also increasing security of user data.
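
    A sketch of the ephemeral flow described above: the audio buffer is only ever held transiently, both pipelines consume it, and only gradient(s) persist. The pipeline callables below are placeholders, not the patent's actual components.

```python
def handle_utterance_stream(audio_stream, fulfillment_pipeline, training_pipeline):
    """Receive a stream of audio, fulfill it, learn from it, then discard it."""
    audio_buffer = bytes(audio_stream)          # held in transient memory only

    # Fulfillment pipeline: recognize and act on the spoken utterance for the user.
    fulfillment_pipeline(audio_buffer)

    # Training pipeline: derive gradient(s) with an unsupervised objective; only
    # the gradient(s) persist, never the audio itself.
    gradients = training_pipeline(audio_buffer)

    # Discard the audio - nothing is written to non-transient storage or logs.
    del audio_buffer
    return gradients

# Placeholder pipelines just to show the call pattern.
grads = handle_utterance_stream(b"\x00" * 16000,
                                fulfillment_pipeline=lambda audio: None,
                                training_pipeline=lambda audio: [0.0])
```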

    MULTI-STREAM RECURRENT NEURAL NETWORK TRANSDUCER(S)

    Publication Number: US20220405549A1

    Publication Date: 2022-12-22

    Application Number: US17619643

    Application Date: 2020-12-15

    Applicant: GOOGLE LLC

    Abstract: Techniques are disclosed that enable generating jointly probable output by processing input using a multi-stream recurrent neural network transducer (MS RNN-T) model. Various implementations include generating a first output sequence and a second output sequence by processing a single input sequence using the MS RNN-T, where the first output sequence is jointly probable with the second output sequence. Additional or alternative techniques are disclosed that enable generating output by processing multiple input sequences using the MS RNN-T. Various implementations include processing a first input sequence and a second input sequence using the MS RNN-T to generate output. In some implementations, the MS RNN-T can be used to process two or more input sequences to generate two or more jointly probable output sequences.
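
    A toy sketch of the "jointly probable outputs" idea: one input frame is encoded once, two heads produce per-stream logits, and a pairwise term couples them into a single distribution over label pairs. The additive combination, the pairwise coupling matrix, and the greedy read-out are assumptions for illustration, not the MS RNN-T architecture itself.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def joint_step(frame, w_enc, w_head_a, w_head_b, w_pair):
    """One step: encode the frame once, then score every (label_a, label_b) pair."""
    enc = np.tanh(w_enc @ frame)                    # shared encoding of the input frame
    logits_a = w_head_a @ enc                       # stream A logits, shape (Va,)
    logits_b = w_head_b @ enc                       # stream B logits, shape (Vb,)
    # The pairwise term couples the streams, so the pair distribution does not
    # factor into two independent per-stream distributions.
    joint_logits = logits_a[:, None] + logits_b[None, :] + w_pair
    return softmax(joint_logits)                    # one distribution over label pairs

rng = np.random.default_rng(3)
Va, Vb, d_in, d_enc = 5, 4, 16, 8
w_enc = rng.normal(size=(d_enc, d_in)) * 0.1
w_a, w_b = rng.normal(size=(Va, d_enc)), rng.normal(size=(Vb, d_enc))
w_pair = rng.normal(size=(Va, Vb)) * 0.1
for frame in rng.normal(size=(3, d_in)):            # a 3-frame input sequence
    pair_probs = joint_step(frame, w_enc, w_a, w_b, w_pair)
    a, b = np.unravel_index(pair_probs.argmax(), pair_probs.shape)
    # (a, b) is the jointly most probable pair of output labels for this step.
```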

    MIXED CLIENT-SERVER FEDERATED LEARNING OF MACHINE LEARNING MODEL(S)

    Publication Number: US20220293093A1

    Publication Date: 2022-09-15

    Application Number: US17197954

    Application Date: 2021-03-10

    Applicant: Google LLC

    Abstract: Implementations disclosed herein are directed to federated learning of machine learning (“ML”) model(s) based on gradient(s) generated at corresponding client devices and a remote system. Processor(s) of the corresponding client devices can process client data generated locally at the corresponding client devices using corresponding on-device ML model(s) to generate corresponding predicted outputs, generate corresponding client gradients based on the corresponding predicted outputs, and transmit the corresponding client gradients to the remote system. Processor(s) of the remote system can process remote data obtained from remote database(s) using global ML model(s) to generate additional corresponding predicted outputs, and generate corresponding remote gradients based on the additional corresponding predicted outputs. Further, the remote system can utilize the corresponding client gradients and the corresponding remote gradients to update the global ML model(s) or weights thereof. The updated global ML model(s) and/or the updated weights thereof can be transmitted back to the corresponding client devices.
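
    A sketch of the mixed update described above: gradients arriving from client devices and gradients computed at the remote system on its own data are combined into one update of the global weights. The simple weighted average and the learning rate are assumptions for illustration.

```python
import numpy as np

def mixed_federated_update(global_weights, client_gradients, remote_gradients,
                           client_weight=0.5, lr=0.01):
    """Average each group of gradients, mix them, and apply to the global weights."""
    client_avg = np.mean(client_gradients, axis=0)   # from on-device predictions on client data
    remote_avg = np.mean(remote_gradients, axis=0)   # from global model predictions on remote data
    mixed = client_weight * client_avg + (1.0 - client_weight) * remote_avg
    return global_weights - lr * mixed               # updated weights go back to the clients

rng = np.random.default_rng(4)
w = rng.normal(size=32)
client_grads = rng.normal(size=(10, 32))             # corresponding client gradients
remote_grads = rng.normal(size=(4, 32))              # corresponding remote gradients
w = mixed_federated_update(w, client_grads, remote_grads)
```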

    On-device speech synthesis of textual segments for training of on-device speech recognition model

    Publication Number: US11127392B2

    Publication Date: 2021-09-21

    Application Number: US16959546

    Application Date: 2019-10-02

    Applicant: Google LLC

    Abstract: Processor(s) of a client device can: identify a textual segment stored locally at the client device; process the textual segment, using a speech synthesis model stored locally at the client device, to generate synthesized speech audio data that includes synthesized speech of the identified textual segment; process the synthesized speech, using an on-device speech recognition model that is stored locally at the client device, to generate predicted output; and generate a gradient based on comparing the predicted output to ground truth output that corresponds to the textual segment. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.

    Using corrections, of predicted textual segments of spoken utterances, for training of on-device speech recognition model

    Publication Number: US12183321B2

    Publication Date: 2024-12-31

    Application Number: US18377122

    Application Date: 2023-10-05

    Applicant: GOOGLE LLC

    Abstract: Processor(s) of a client device can: receive audio data that captures a spoken utterance of a user of the client device; process, using an on-device speech recognition model, the audio data to generate a predicted textual segment that is a prediction of the spoken utterance; cause at least part of the predicted textual segment to be rendered (e.g., visually and/or audibly); receive further user interface input that is a correction of the predicted textual segment to an alternate textual segment; and generate a gradient based on comparing at least part of the predicted output to ground truth output that corresponds to the alternate textual segment. The gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model and/or is transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
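
    A short sketch of turning a user correction into a gradient, assuming the recognizer exposes per-step output distributions and using an illustrative character-level cross-entropy against the corrected (alternate) textual segment; the vocabulary and loss are assumptions, not the patent's.

```python
import numpy as np

VOCAB = "abcdefghijklmnopqrstuvwxyz "

def correction_gradient(predicted_probs, alternate_textual_segment):
    """predicted_probs: (steps, |VOCAB|) distributions behind the rendered prediction;
    the user's corrected text supplies the ground truth output."""
    idx = [VOCAB.index(c) for c in alternate_textual_segment.lower() if c in VOCAB]
    steps = min(len(idx), len(predicted_probs))
    targets = np.zeros_like(predicted_probs[:steps])
    targets[np.arange(steps), idx[:steps]] = 1.0
    return predicted_probs[:steps] - targets          # softmax + cross-entropy gradient

# E.g. the recognizer rendered "right the email" and the user corrected it to "write the email".
probs = np.full((14, len(VOCAB)), 1.0 / len(VOCAB))
grad = correction_gradient(probs, "write the email")
# grad updates the on-device speech recognition model's weights and/or is sent to
# a remote system for updating the global speech recognition model's weights.
```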

    ON-THE-FLY PARAMETER COMPRESSION AND DECOMPRESSION TO FACILITATE FORWARD AND/OR BACK PROPAGATION AT CLIENTS DURING FEDERATED LEARNING

    Publication Number: US20240371362A1

    Publication Date: 2024-11-07

    Application Number: US18652587

    Application Date: 2024-05-01

    Applicant: GOOGLE LLC

    Abstract: Implementations are directed to efficient federated learning of machine learning (ML) model(s) through on-the-fly decompression and compression of model parameters of the ML model(s) when facilitating forward propagation and/or back propagation at client device(s). For example, implementations can transmit, from a remote system to a client device, a compressed on-device ML model that includes some compressed parameters. Further, the client device can, in performing forward propagation and/or back propagation using the on-device ML model, decompress those compressed parameters on-the-fly as the parameters are needed for the propagation. The propagation then utilizes the parameters that were decompressed on the fly. Further, after the decompressed parameters are utilized, they can be deallocated from memory (while their compressed counterparts optionally remain in memory) to enable allocation of memory for further decompressed parameters that will be needed next and/or needed for other ongoing process(es).
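
    A sketch of the on-the-fly decompression during forward propagation, using zlib over float32 buffers purely for illustration (the abstract does not specify the compression scheme); only one layer's decompressed weights are resident at a time while their compressed counterparts remain available.

```python
import zlib
import numpy as np

def compress_params(w):
    """Compressed representation of one layer's parameters (illustrative scheme)."""
    return zlib.compress(w.astype(np.float32).tobytes()), w.shape

def decompress_params(blob, shape):
    return np.frombuffer(zlib.decompress(blob), dtype=np.float32).reshape(shape)

def forward(x, compressed_layers):
    """Forward propagation through layers whose parameters arrive compressed."""
    h = x
    for blob, shape in compressed_layers:
        w = decompress_params(blob, shape)   # decompress on the fly, just before use
        h = np.tanh(h @ w)                   # propagation uses the decompressed parameters
        del w                                # deallocate; the compressed copy can stay resident
    return h

rng = np.random.default_rng(5)
layers = [compress_params(rng.normal(size=(32, 32))) for _ in range(3)]
out = forward(rng.normal(size=(1, 32)), layers)
```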
