-
Publication No.: US20200160836A1
Publication Date: 2020-05-21
Application No.: US16684483
Filing Date: 2019-11-14
Applicant: GOOGLE LLC
Inventor: Zhifeng Chen , Bo Li , Eugene Weinstein , Yonghui Wu , Pedro J. Moreno Mengibar , Ron J. Weiss , Khe Chai Sim , Tara N. Sainath , Patrick An Phu Nguyen
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable media, for speech recognition using multi-dialect and multilingual models. In some implementations, audio data indicating audio characteristics of an utterance is received. Input features determined based on the audio data are provided to a speech recognition model that has been trained to output scores indicating the likelihood of linguistic units for each of multiple different languages or dialects. The speech recognition model can be one that has been trained using cluster adaptive training. Output that the speech recognition model generated in response to receiving the input features determined based on the audio data is received. A transcription of the utterance generated based on the output of the speech recognition model is provided.
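The cluster adaptive training mentioned in this abstract can be illustrated with a toy sketch: a layer's weights are an interpolation of a few shared weight bases, mixed by small per-language coefficient vectors. All names, dimensions, and numbers below are illustrative assumptions, not taken from the patent's actual architecture.

```python
# Toy sketch of cluster adaptive training (CAT) for a multilingual model:
# one set of K weight bases is shared across languages, and each language
# contributes only a K-dimensional mixing vector.

def cat_layer_weights(bases, coeffs):
    """Combine K weight bases (each a length-D list) with K coefficients."""
    dim = len(bases[0])
    return [sum(c * b[i] for c, b in zip(coeffs, bases)) for i in range(dim)]

def forward(features, bases, lang_coeffs, language):
    """Score an utterance with weights adapted to the chosen language."""
    weights = cat_layer_weights(bases, lang_coeffs[language])
    return sum(w * f for w, f in zip(weights, features))

# Two bases shared by all languages; each language mixes them differently.
bases = [[1.0, 0.0], [0.0, 1.0]]
lang_coeffs = {"en-US": [0.8, 0.2], "en-GB": [0.3, 0.7]}

features = [2.0, 1.0]
us_score = forward(features, bases, lang_coeffs, "en-US")  # ~1.8
gb_score = forward(features, bases, lang_coeffs, "en-GB")  # ~1.3
```

The same shared bases serve every language or dialect; only the small coefficient vector is language-specific, which is what makes this style of adaptation attractive for multi-dialect models.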
-
Publication No.: US12254865B2
Publication Date: 2025-03-18
Application No.: US18418246
Filing Date: 2024-01-20
Applicant: Google LLC
Inventor: Zhifeng Chen , Bo Li , Eugene Weinstein , Yonghui Wu , Pedro J. Moreno Mengibar , Ron J. Weiss , Khe Chai Sim , Tara N. Sainath , Patrick An Phu Nguyen
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable media, for speech recognition using multi-dialect and multilingual models. In some implementations, audio data indicating audio characteristics of an utterance is received. Input features determined based on the audio data are provided to a speech recognition model that has been trained to output scores indicating the likelihood of linguistic units for each of multiple different languages or dialects. The speech recognition model can be one that has been trained using cluster adaptive training. Output that the speech recognition model generated in response to receiving the input features determined based on the audio data is received. A transcription of the utterance generated based on the output of the speech recognition model is provided.
-
Publication No.: US20250016387A1
Publication Date: 2025-01-09
Application No.: US18890050
Filing Date: 2024-09-19
Applicant: GOOGLE LLC
Inventor: Françoise Beaufays , Khe Chai Sim , Trevor Strohman , Oren Litvin
IPC: H04N21/233 , G06F18/214 , G06N20/00 , H04N21/232
Abstract: Implementations disclosed herein are directed to ephemeral learning of machine learning (“ML”) model(s) based on gradient(s) generated at a remote system (e.g., remote server(s)). Processor(s) of the remote system can receive stream(s) of audio data capturing spoken utterance(s) from a client device of a user. A fulfillment pipeline can process the stream(s) of audio data to cause certain fulfillment(s) of the spoken utterance(s) to be performed. Meanwhile, a training pipeline can process the stream(s) of audio data to generate gradient(s) using unsupervised learning techniques. Subsequent to the processing by the fulfillment pipeline and/or the training pipeline, the stream(s) of audio data are discarded by the remote system. Accordingly, the ML model(s) can be trained at the remote system without the stream(s) of audio data being stored or logged in non-transient memory thereof, thereby providing more efficient training mechanisms for training the ML model(s) and also increasing security of user data.
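The ephemeral-learning flow in this abstract can be sketched with toy stand-ins: the remote system runs a fulfillment step and an unsupervised training step over each audio stream, keeps only the resulting model update, and discards the audio. The model, loss function, and numbers below are illustrative assumptions, not the patent's actual pipelines.

```python
# Toy sketch of ephemeral learning at a remote system: audio is processed
# by both pipelines, a gradient is kept, and the audio itself is discarded.

def fulfill(audio):
    """Stand-in fulfillment pipeline: act on the utterance, return a result."""
    return f"fulfilled:{len(audio)} samples"

def unsupervised_gradient(weight, audio):
    """Stand-in self-supervised loss (w - mean(audio))^2; return d/dw."""
    target = sum(audio) / len(audio)
    return 2.0 * (weight - target)

def handle_stream(weight, audio, lr=0.1):
    result = fulfill(audio)                       # fulfillment pipeline
    grad = unsupervised_gradient(weight, audio)   # training pipeline
    weight -= lr * grad                           # model updated from gradient
    del audio                                     # audio discarded, never logged
    return weight, result

new_weight, result = handle_stream(0.0, [0.2, 0.4, 0.6])
```

Only the gradient (and the updated weight) survives the request; nothing derived storage-side retains the raw audio, which is the point of the ephemeral setup.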
-
Publication No.: US20240296834A1
Publication Date: 2024-09-05
Application No.: US18659940
Filing Date: 2024-05-09
Applicant: GOOGLE LLC
Inventor: Françoise Beaufays , Khe Chai Sim , Johan Schalkwyk
IPC: G10L15/06 , G10L15/187 , G10L15/22 , G10L15/30
CPC classification number: G10L15/063 , G10L15/187 , G10L15/22 , G10L15/30 , G10L2015/0635
Abstract: Implementations disclosed herein are directed to unsupervised federated training of global machine learning (“ML”) model layers that, after the federated training, can be combined with additional layer(s), thereby resulting in a combined ML model. Processor(s) can: detect audio data that captures a spoken utterance of a user of a client device; process, using a local ML model, the audio data to generate predicted output(s); generate, using unsupervised learning locally at the client device, a gradient based on the predicted output(s); transmit the gradient to a remote system; update weight(s) of the global ML model layers based on the gradient; subsequent to updating the weight(s), train, using supervised learning remotely at the remote system, a combined ML model that includes the updated global ML model layers and additional layer(s); transmit the combined ML model to the client device; and use the combined ML model to make prediction(s) at the client device.
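The split described in this abstract can be sketched with toy numbers: clients compute unsupervised gradients for shared "global" layers, the server folds those gradients into the global weights, and additional layers are then stacked on top to form the combined model. Every name and value below is an illustrative assumption.

```python
# Toy sketch of unsupervised federated training of global layers, followed
# by combination with an additional task-specific layer at the server.

def client_gradient(global_w, audio):
    """Unsupervised gradient computed locally; raw audio never leaves."""
    pred = global_w * sum(audio)       # toy predicted output
    return 2.0 * pred * sum(audio)     # gradient of toy loss pred^2

def server_update(global_w, gradients, lr=0.01):
    """Average client gradients and update the shared layer weights."""
    avg = sum(gradients) / len(gradients)
    return global_w - lr * avg

def combined_model(global_w, head_w, x):
    """Combined model: updated shared layer plus an additional head layer."""
    return head_w * (global_w * x)

grads = [client_gradient(1.0, [0.5, 0.5]), client_gradient(1.0, [1.0, 0.0])]
new_w = server_update(1.0, grads)
out = combined_model(new_w, 2.0, 3.0)
```

The head layer (`head_w` here, an assumed name) would be trained with supervised learning at the remote system after the federated phase, then the combined model is shipped back to the client for inference.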
-
Publication No.: US20240161732A1
Publication Date: 2024-05-16
Application No.: US18418246
Filing Date: 2024-01-20
Applicant: Google LLC
Inventor: Zhifeng Chen , Bo Li , Eugene Weinstein , Yonghui Wu , Pedro J. Moreno Mengibar , Ron J. Weiss , Khe Chai Sim , Tara N. Sainath , Patrick An Phu Nguyen
CPC classification number: G10L15/005 , G10L15/07 , G10L15/16 , G10L2015/0631
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable media, for speech recognition using multi-dialect and multilingual models. In some implementations, audio data indicating audio characteristics of an utterance is received. Input features determined based on the audio data are provided to a speech recognition model that has been trained to output scores indicating the likelihood of linguistic units for each of multiple different languages or dialects. The speech recognition model can be one that has been trained using cluster adaptive training. Output that the speech recognition model generated in response to receiving the input features determined based on the audio data is received. A transcription of the utterance generated based on the output of the speech recognition model is provided.
-
Publication No.: US11978432B2
Publication Date: 2024-05-07
Application No.: US18204324
Filing Date: 2023-05-31
Applicant: GOOGLE LLC
Inventor: Françoise Beaufays , Johan Schalkwyk , Khe Chai Sim
IPC: G10L13/047 , G10L15/06
CPC classification number: G10L13/047 , G10L15/063 , G10L2015/0635
Abstract: Processor(s) of a client device can: identify a textual segment stored locally at the client device; process the textual segment, using a speech synthesis model stored locally at the client device, to generate synthesized speech audio data that includes synthesized speech of the identified textual segment; process the synthesized speech, using an on-device speech recognition model that is stored locally at the client device, to generate predicted output; and generate a gradient based on comparing the predicted output to ground truth output that corresponds to the textual segment. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
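The on-device loop in this abstract can be sketched at the character level: a locally stored textual segment is synthesized to "audio", the on-device recognizer decodes it, and the mismatch with the original text yields a gradient used to update the recognizer. The models below are toy stand-ins, purely illustrative.

```python
# Toy sketch of self-training from synthesized speech: the text is its own
# ground truth, so no human transcription is needed.

def synthesize(text):
    """Stand-in TTS: one 'audio sample' per character (its code point)."""
    return [float(ord(c)) for c in text]

def recognize(audio, bias):
    """Stand-in ASR with a single learnable scalar bias."""
    return "".join(chr(round(a + bias)) for a in audio)

def gradient(predicted, ground_truth):
    """Total per-character error against the known textual segment."""
    return sum(ord(p) - ord(t) for p, t in zip(predicted, ground_truth))

bias = 1.0                        # a mis-calibrated on-device model
text = "hello"                    # textual segment stored on the device
pred = recognize(synthesize(text), bias)
g = gradient(pred, text)          # each of 5 characters is off by +1
bias -= 0.2 * g / len(text)       # local update; g could also be uploaded
```

As the abstract notes, the gradient can either update the on-device model directly (as here) or be transmitted to a remote system to update a global model's weights.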
-
Publication No.: US20240029720A1
Publication Date: 2024-01-25
Application No.: US18340175
Filing Date: 2023-06-23
Applicant: Google LLC
Inventor: David Qiu , Tsendsuren Munkhdalai , Yangzhang He , Khe Chai Sim
CPC classification number: G10L15/16 , G10L15/02 , G10L15/22 , G10L15/063 , G10L15/19
Abstract: An automatic speech recognition (ASR) system that includes an ASR model, a neural associative memory (NAM) biasing model, and a confidence estimation model (CEM). The ASR model includes an audio encoder configured to encode a sequence of audio frames characterizing a spoken utterance into a sequence of higher-order feature representations, and a decoder configured to receive the sequence of higher-order feature representations and output a final speech recognition result. The NAM biasing model is configured to receive biasing contextual information and modify the sequence of higher-order feature representations based on the biasing contextual information to generate, as output, biasing context vectors. The CEM is configured to compute a confidence of the final speech recognition result output by the decoder. The CEM is connected to the biasing context vectors generated by the NAM biasing model.
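The data flow in this abstract can be sketched with toy math: an encoder turns audio frames into higher-order features, a biasing module modifies those features using contextual information and emits biasing context vectors, and a confidence estimator consumes those context vectors alongside the decoded result. Everything below is an illustrative stand-in, not the patent's actual NAM or CEM.

```python
# Toy sketch of the ASR / NAM-biasing / confidence-estimation wiring.

def encode(frames):
    """Stand-in audio encoder: one higher-order feature per frame."""
    return [2.0 * f for f in frames]

def nam_bias(features, context_strengths):
    """Stand-in biasing module: shift features toward contextual entries and
    return both the biased features and the biasing context vectors."""
    context_vecs = [f * s for f, s in zip(features, context_strengths)]
    biased = [f + c for f, c in zip(features, context_vecs)]
    return biased, context_vecs

def decode(features):
    """Stand-in decoder: the final 'recognition result' is the feature sum."""
    return sum(features)

def confidence(result, context_vecs):
    """Stand-in CEM: reads the biasing context vectors, not just the result."""
    bias_mag = sum(abs(c) for c in context_vecs)
    return 1.0 / (1.0 + bias_mag / (abs(result) + 1.0))

feats = encode([0.5, 1.0])
biased, ctx = nam_bias(feats, [0.1, 0.1])
result = decode(biased)
score = confidence(result, ctx)
```

The detail worth noticing is the last connection: the confidence estimator sees the biasing context vectors themselves, so its estimate can account for how strongly contextual biasing influenced the recognition result.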
-
Publication No.: US20230306955A1
Publication Date: 2023-09-28
Application No.: US18204324
Filing Date: 2023-05-31
Applicant: GOOGLE LLC
Inventor: Françoise Beaufays , Johan Schalkwyk , Khe Chai Sim
IPC: G10L13/047 , G10L15/06
CPC classification number: G10L13/047 , G10L15/063 , G10L2015/0635
Abstract: Processor(s) of a client device can: identify a textual segment stored locally at the client device; process the textual segment, using a speech synthesis model stored locally at the client device, to generate synthesized speech audio data that includes synthesized speech of the identified textual segment; process the synthesized speech, using an on-device speech recognition model that is stored locally at the client device, to generate predicted output; and generate a gradient based on comparing the predicted output to ground truth output that corresponds to the textual segment. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
-
Publication No.: US11705106B2
Publication Date: 2023-07-18
Application No.: US17479285
Filing Date: 2021-09-20
Applicant: Google LLC
Inventor: Françoise Beaufays , Johan Schalkwyk , Khe Chai Sim
IPC: G10L13/047 , G10L15/06
CPC classification number: G10L13/047 , G10L15/063 , G10L2015/0635
Abstract: Processor(s) of a client device can: identify a textual segment stored locally at the client device; process the textual segment, using a speech synthesis model stored locally at the client device, to generate synthesized speech audio data that includes synthesized speech of the identified textual segment; process the synthesized speech, using an on-device speech recognition model that is stored locally at the client device, to generate predicted output; and generate a gradient based on comparing the predicted output to ground truth output that corresponds to the textual segment. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
-
Publication No.: US20230177382A1
Publication Date: 2023-06-08
Application No.: US17541091
Filing Date: 2021-12-02
Applicant: GOOGLE LLC
Inventor: Françoise Beaufays , Giovanni Motta , Khe Chai Sim
CPC classification number: G06N20/00 , G06K9/6262 , H04L67/10
Abstract: Implementations disclosed herein are directed to efficient federated learning of machine learning (ML) model(s) at a remote system (e.g., remote server(s)) based on update(s) generated at client device(s). Processor(s) of the client device(s) can receive client data, process, using on-device ML model(s), the client data to generate predicted output(s), generate, using unsupervised learning, gradient(s) based on the predicted output(s), generate, based on the gradient(s), the update(s) for disparate portions of the on-device ML model(s) and/or global ML model(s) that are remote-based counterparts of the on-device ML model(s). Further, processor(s) of the remote system can receive, from the client device(s), the update(s) for the disparate portions of the on-device ML model(s), and cause the global ML model(s) to be updated based on the update(s) for the disparate portions of the on-device ML model(s) received from disparate client device(s). Thus, resources consumed at the client device(s) and/or network resources can be reduced.
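The partial-update idea in this abstract can be sketched with a toy model: each client computes an unsupervised gradient locally but uploads an update for only one portion of the model, and the server patches the corresponding portion of the global model. The portion names, loss, and numbers below are illustrative assumptions.

```python
# Toy sketch of federated updates for disparate portions of a model: each
# client uploads only one portion's update, reducing network traffic.

def client_update(local_model, x, portion, lr=0.1):
    """Unsupervised gradient on one portion; only that portion is uploaded."""
    pred = sum(w * x for w in local_model.values())   # toy predicted output
    grad = 2.0 * pred * x                             # gradient of toy loss pred^2
    return {portion: local_model[portion] - lr * grad}

def server_apply(global_model, updates):
    """Merge disparate-portion updates from many clients into the global model."""
    for update in updates:
        global_model.update(update)
    return global_model

global_model = {"encoder": 0.5, "decoder": 0.5}
u1 = client_update(dict(global_model), 1.0, "encoder")  # uploads encoder only
u2 = client_update(dict(global_model), 1.0, "decoder")  # uploads decoder only
merged = server_apply(global_model, [u1, u2])
```

Because each client ships only a fraction of the model's parameters, both the client-side computation retained for upload and the network payload shrink, which is the resource saving the abstract claims.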
-