-
Publication Number: US10157040B2
Publication Date: 2018-12-18
Application Number: US14988408
Filing Date: 2016-01-05
Applicant: Google LLC
Inventor: Brandon M. Ballinger, Johan Schalkwyk, Michael H. Cohen, William J. Byrne, Gudmundur Hafsteinsson, Michael J. LeBeau
IPC: G06F17/28, G06F3/16, G10L15/183, G10L15/26, G10L15/30, G10L15/18, G06F3/0488, G06F17/27, G10L15/00, G10L15/22, G10L15/197
Abstract: A computer-implemented input-method editor process includes receiving a request from a user for an application-independent input method editor having written and spoken input capabilities, identifying that the user is about to provide spoken input to the application-independent input method editor, and receiving a spoken input from the user. The spoken input corresponds to input to an application and is converted to text that represents the spoken input. The text is provided as input to the application.
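The flow in the abstract above can be sketched minimally as follows. This is an illustrative toy, not the patented implementation: the event shape, `app_buffer`, and the `speech_to_text` stand-in (which merely uppercases a string here) are all assumptions.

```python
def input_method_editor(event, app_buffer, speech_to_text=str.upper):
    """Application-independent IME sketch: spoken input is converted to
    text and appended to whichever application's buffer has focus.
    speech_to_text is a hypothetical stand-in for a real recognizer."""
    if event["type"] == "spoken":
        app_buffer.append(speech_to_text(event["audio"]))
    else:  # written input passes through unchanged
        app_buffer.append(event["text"])
    return app_buffer
```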
-
Publication Number: US12249345B2
Publication Date: 2025-03-11
Application Number: US18074739
Filing Date: 2022-12-05
Applicant: GOOGLE LLC
Inventor: Johan Schalkwyk, Blaise Aguera-Arcas, Diego Melendo Casado, Oren Litvin
Abstract: Implementations disclosed herein are directed to utilizing ephemeral learning techniques and/or federated learning techniques to update audio-based machine learning (ML) model(s) based on processing streams of audio data generated via radio station(s) across the world. This enables the audio-based ML model(s) to learn representations and/or understand languages across the world, including tail languages for which little or no audio data is available. In various implementations, one or more deduping techniques may be utilized to ensure the same stream of audio data is not overutilized in updating the audio-based ML model(s). In various implementations, a given client device may determine whether to employ an ephemeral learning technique or a federated learning technique based on, for instance, a connection status with a remote system. Generally, the streams of audio data are received at client devices, but the ephemeral learning techniques may be implemented at the client device and/or at the remote system.
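Two elements of this abstract lend themselves to a brief sketch: the connection-based choice between learning techniques and the dedup check on audio streams. The policy and the byte-exact hashing are assumptions for illustration; a real system would more likely fingerprint audio content than hash raw bytes.

```python
import hashlib

_seen_stream_digests = set()

def choose_learning_technique(stable_connection_to_remote):
    """Connection-based choice the abstract mentions (assumed policy):
    federated learning when the remote system is reachable, ephemeral
    learning otherwise."""
    return "federated" if stable_connection_to_remote else "ephemeral"

def is_duplicate_stream(audio_bytes):
    """Toy dedup step: hash the raw stream so the same audio is not
    reused for more than one model update."""
    digest = hashlib.sha256(audio_bytes).hexdigest()
    if digest in _seen_stream_digests:
        return True
    _seen_stream_digests.add(digest)
    return False
```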
-
Publication Number: US12205578B2
Publication Date: 2025-01-21
Application Number: US17788183
Filing Date: 2021-01-07
Applicant: GOOGLE LLC
Inventor: Fadi Biadsy, Johan Schalkwyk, Jason Pelecanos
Abstract: Implementations disclosed herein are directed to techniques for selectively enabling and/or disabling non-transient storage of one or more instances of assistant interaction data for turn(s) of a dialog between a user and an automated assistant. Implementations are additionally or alternatively directed to techniques for retroactive wiping of non-transiently stored assistant interaction data from previous assistant interaction(s).
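The selective-storage and retroactive-wipe behavior described above can be sketched as a small class. The class name, attribute names, and API are assumptions, not anything from the patent:

```python
class AssistantInteractionLog:
    """Toy sketch: dialog turns are stored non-transiently only while
    storage is enabled, and a retroactive wipe discards turns that were
    already kept from previous interactions."""

    def __init__(self):
        self.storage_enabled = True
        self._turns = []

    def record_turn(self, turn):
        if self.storage_enabled:  # selectively enabled/disabled storage
            self._turns.append(turn)

    def wipe(self):
        """Retroactively remove previously stored interaction data."""
        self._turns.clear()

    def stored_turns(self):
        return list(self._turns)
```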
-
Publication Number: US12169522B2
Publication Date: 2024-12-17
Application Number: US18177747
Filing Date: 2023-03-02
Applicant: Google LLC
Inventor: Johan Schalkwyk, Francoise Beaufays
IPC: G06F16/783, G06F16/738, G06F40/169, G06F40/30
Abstract: A method includes receiving a content feed that includes audio data corresponding to speech utterances and processing the content feed to generate a semantically-rich, structured document. The structured document includes a transcription of the speech utterances and includes a plurality of words each aligned with a corresponding audio segment of the audio data that indicates a time when the word was recognized in the audio data. During playback of the content feed, the method also includes receiving a query from a user requesting information contained in the content feed and processing, by a large language model, the query and the structured document to generate a response to the query. The response conveys the requested information contained in the content feed. The method also includes providing, for output from a user device associated with the user, the response to the query.
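The word-to-audio-segment alignment at the core of this abstract can be illustrated with a minimal data structure. The pair format and function names are assumptions, and the sketch deliberately omits the large language model the patent routes queries through:

```python
def build_structured_document(aligned_words):
    """Minimal stand-in for the structured document: aligned_words is a
    list of (word, start_time_seconds) pairs, i.e. each recognized word
    tied to the audio segment in which it was recognized. If a word
    repeats, the later occurrence wins in this toy alignment."""
    return {
        "transcript": " ".join(word for word, _ in aligned_words),
        "alignment": dict(aligned_words),
    }

def time_word_was_said(document, word):
    """Answer a 'when was X said?' lookup directly from the alignment."""
    return document["alignment"].get(word)
```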
-
Publication Number: US12094472B2
Publication Date: 2024-09-17
Application Number: US18345077
Filing Date: 2023-06-30
Applicant: GOOGLE LLC
Inventor: Alexander H. Gruenstein, Petar Aleksic, Johan Schalkwyk, Pedro J. Moreno Mengibar
CPC classification number: G10L15/30, G10L15/26, G10L15/32, G10L2015/088, G10L15/183, G10L2015/223
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting hotwords using a server. One of the methods includes receiving an audio signal encoding one or more utterances including a first utterance; determining whether at least a portion of the first utterance satisfies a first threshold of being at least a portion of a key phrase; in response to determining that at least the portion of the first utterance satisfies the first threshold of being at least a portion of a key phrase, sending the audio signal to a server system that determines whether the first utterance satisfies a second threshold of being the key phrase, the second threshold being more restrictive than the first threshold; and receiving tagged text data representing the one or more utterances encoded in the audio signal when the server system determines that the first utterance satisfies the second threshold.
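The two-stage, two-threshold check in this abstract reduces to a short function. The threshold values and the idea that both stages emit a single score are assumptions; in the patent each stage runs its own speech model:

```python
ON_DEVICE_THRESHOLD = 0.4  # permissive first threshold (assumed value)
SERVER_THRESHOLD = 0.8     # more restrictive second threshold (assumed value)

def detect_key_phrase(device_score, server_score):
    """Mirror of the claimed flow: the client applies a permissive
    threshold and only then sends audio to the server, which applies a
    stricter one. Scores stand in for model outputs."""
    if device_score < ON_DEVICE_THRESHOLD:
        return False  # rejected on-device; audio never leaves the client
    return server_score >= SERVER_THRESHOLD
```

Note the design point the claim encodes: the cheap on-device check gates what is uploaded, while the stricter server check controls the final decision and the return of tagged text.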
-
Publication Number: US20240296834A1
Publication Date: 2024-09-05
Application Number: US18659940
Filing Date: 2024-05-09
Applicant: GOOGLE LLC
Inventor: Françoise Beaufays, Khe Chai Sim, Johan Schalkwyk
IPC: G10L15/06, G10L15/187, G10L15/22, G10L15/30
CPC classification number: G10L15/063, G10L15/187, G10L15/22, G10L15/30, G10L2015/0635
Abstract: Implementations disclosed herein are directed to unsupervised federated training of global machine learning (“ML”) model layers that, after the federated training, can be combined with additional layer(s), thereby resulting in a combined ML model. Processor(s) can: detect audio data that captures a spoken utterance of a user of a client device; process, using a local ML model, the audio data to generate predicted output(s); generate, using unsupervised learning locally at the client device, a gradient based on the predicted output(s); transmit the gradient to a remote system; update weight(s) of the global ML model layers based on the gradient; subsequent to updating the weight(s), train, using supervised learning remotely at the remote system, a combined ML model that includes the updated global ML model layers and additional layer(s); transmit the combined ML model to the client device; and use the combined ML model to make prediction(s) at the client device.
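The server-side step of the federated flow above (collect client gradients, update shared layer weights) can be sketched as plain gradient averaging. The list-of-lists gradient format and the learning rate are assumptions, and real deployments add secure aggregation, clipping, and the like:

```python
def federated_layer_update(global_weights, client_gradients, lr=0.5):
    """One toy server step for the shared global ML model layers:
    average the gradients uploaded by clients, then apply a plain
    gradient-descent update to the layer weights."""
    n = len(client_gradients)
    averaged = [sum(grads[i] for grads in client_gradients) / n
                for i in range(len(global_weights))]
    return [w - lr * g for w, g in zip(global_weights, averaged)]
```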
-
Publication Number: US11978432B2
Publication Date: 2024-05-07
Application Number: US18204324
Filing Date: 2023-05-31
Applicant: GOOGLE LLC
Inventor: Françoise Beaufays, Johan Schalkwyk, Khe Chai Sim
IPC: G10L13/047, G10L15/06
CPC classification number: G10L13/047, G10L15/063, G10L2015/0635
Abstract: Processor(s) of a client device can: identify a textual segment stored locally at the client device; process the textual segment, using a speech synthesis model stored locally at the client device, to generate synthesized speech audio data that includes synthesized speech of the identified textual segment; process the synthesized speech, using an on-device speech recognition model that is stored locally at the client device, to generate predicted output; and generate a gradient based on comparing the predicted output to ground truth output that corresponds to the textual segment. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
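The synthesize-recognize-compare loop above can be illustrated with scalar stand-ins, where "audio" is just a number and the text's own length supplies the ground truth. Every function here is a hypothetical toy, not the patented TTS/ASR pipeline:

```python
def synthesize(text, voice_gain=1.0):
    """Hypothetical 'speech synthesis': encodes the text as a number
    (its length, scaled) so the round trip below stays inspectable."""
    return len(text) * voice_gain

def recognize(audio, asr_weight):
    """Hypothetical 'speech recognition': decodes a length estimate."""
    return audio * asr_weight

def on_device_training_step(text, asr_weight, lr=0.01):
    """One iteration of the loop in the abstract: synthesize local text,
    recognize the synthetic audio, compare against the text itself as
    ground truth, and update the on-device weight from the gradient."""
    audio = synthesize(text)
    predicted = recognize(audio, asr_weight)
    target = len(text)                           # the text supplies the label
    gradient = 2 * (predicted - target) * audio  # d(squared error)/d(weight)
    return asr_weight - lr * gradient
```

Repeated steps drive the toy weight toward the value that makes recognition reproduce the original text, which is the point of using local text as free supervision.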
-
Publication Number: US20240071406A1
Publication Date: 2024-02-29
Application Number: US18074739
Filing Date: 2022-12-05
Applicant: GOOGLE LLC
Inventor: Johan Schalkwyk, Blaise Aguera-Arcas, Diego Melendo Casado, Oren Litvin
CPC classification number: G10L25/51, G10L15/005, G10L15/18
Abstract: Implementations disclosed herein are directed to utilizing ephemeral learning techniques and/or federated learning techniques to update audio-based machine learning (ML) model(s) based on processing streams of audio data generated via radio station(s) across the world. This enables the audio-based ML model(s) to learn representations and/or understand languages across the world, including tail languages for which little or no audio data is available. In various implementations, one or more deduping techniques may be utilized to ensure the same stream of audio data is not overutilized in updating the audio-based ML model(s). In various implementations, a given client device may determine whether to employ an ephemeral learning technique or a federated learning technique based on, for instance, a connection status with a remote system. Generally, the streams of audio data are received at client devices, but the ephemeral learning techniques may be implemented at the client device and/or at the remote system.
-
Publication Number: US20240029711A1
Publication Date: 2024-01-25
Application Number: US18377122
Filing Date: 2023-10-05
Applicant: Google LLC
Inventor: Françoise Beaufays, Johan Schalkwyk, Giovanni Motta
IPC: G10L15/00, G06F3/04842, G06F3/04883, G10L25/51
CPC classification number: G10L15/00, G06F3/04842, G06F3/04883, G10L25/51
Abstract: Processor(s) of a client device can: receive audio data that captures a spoken utterance of a user of the client device; process, using an on-device speech recognition model, the audio data to generate a predicted textual segment that is a prediction of the spoken utterance; cause at least part of the predicted textual segment to be rendered (e.g., visually and/or audibly); receive further user interface input that is a correction of the predicted textual segment to an alternate textual segment; and generate a gradient based on comparing at least part of the predicted output to ground truth output that corresponds to the alternate textual segment. The gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model and/or is transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
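The "compare prediction to the user's correction" step above can be illustrated with a toy per-character error signal. The function name and the mismatch-fraction metric are assumptions; a real ASR trainer would backpropagate a per-token loss rather than compute this scalar:

```python
def correction_error_signal(predicted_text, corrected_text):
    """Toy error derived from a user correction: the fraction of
    character positions where the predicted textual segment disagrees
    with the user's corrected (alternate) textual segment."""
    width = max(len(predicted_text), len(corrected_text))
    if width == 0:
        return 0.0  # both strings empty: nothing to correct
    mismatches = sum(
        1 for p, c in zip(predicted_text.ljust(width),
                          corrected_text.ljust(width))
        if p != c
    )
    return mismatches / width
```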
-
Publication Number: US20230306955A1
Publication Date: 2023-09-28
Application Number: US18204324
Filing Date: 2023-05-31
Applicant: GOOGLE LLC
Inventor: Françoise Beaufays, Johan Schalkwyk, Khe Chai Sim
IPC: G10L13/047, G10L15/06
CPC classification number: G10L13/047, G10L15/063, G10L2015/0635
Abstract: Processor(s) of a client device can: identify a textual segment stored locally at the client device; process the textual segment, using a speech synthesis model stored locally at the client device, to generate synthesized speech audio data that includes synthesized speech of the identified textual segment; process the synthesized speech, using an on-device speech recognition model that is stored locally at the client device, to generate predicted output; and generate a gradient based on comparing the predicted output to ground truth output that corresponds to the textual segment. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.