Ephemeral learning and/or federated learning of audio-based machine learning model(s) from stream(s) of audio data generated via radio station(s)

    Publication Number: US12249345B2

    Publication Date: 2025-03-11

    Application Number: US18074739

    Application Date: 2022-12-05

    Applicant: GOOGLE LLC

    Abstract: Implementations disclosed herein are directed to utilizing ephemeral learning techniques and/or federated learning techniques to update audio-based machine learning (ML) model(s) based on processing streams of audio data generated via radio station(s) across the world. This enables the audio-based ML model(s) to learn representations and/or understand languages across the world, including tail languages for which there is no/minimal audio data. In various implementations, one or more deduping techniques may be utilized to ensure the same stream of audio data is not overutilized in updating the audio-based ML model(s). In various implementations, a given client device may determine whether to employ an ephemeral learning technique or a federated learning technique based on, for instance, a connection status with a remote system. Generally, the streams of audio data are received at client devices, but the ephemeral learning techniques may be implemented at the client device and/or at the remote system.
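
    Below is a minimal Python sketch of the client-side choice the abstract describes: use ephemeral learning when a live connection to the remote system is available, otherwise fall back to federated learning and transmit only a locally computed update later. All names (AudioStreamChunk, choose_learning_technique, and so on) are hypothetical and not taken from the patent.

        from dataclasses import dataclass

        @dataclass
        class AudioStreamChunk:
            station_id: str
            samples: list  # raw audio samples for one buffered segment of the radio stream

        def choose_learning_technique(connected_to_remote: bool) -> str:
            """Pick the technique to apply for the next chunk of radio audio."""
            return "ephemeral" if connected_to_remote else "federated"

        def process_chunk(chunk: AudioStreamChunk, connected_to_remote: bool) -> dict:
            technique = choose_learning_technique(connected_to_remote)
            if technique == "ephemeral":
                # Stream the chunk to the remote system, which updates the global
                # audio-based ML model and then discards the audio.
                return {"action": "stream_to_remote", "station": chunk.station_id}
            # Federated path: compute a gradient locally and transmit only the
            # gradient (never the raw audio) once connectivity is restored.
            return {"action": "compute_local_gradient", "station": chunk.station_id}

        if __name__ == "__main__":
            chunk = AudioStreamChunk(station_id="example-fm", samples=[0.0] * 160)
            print(process_chunk(chunk, connected_to_remote=False))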

    Structured video documents
    Invention Grant

    Publication Number: US12169522B2

    Publication Date: 2024-12-17

    Application Number: US18177747

    Application Date: 2023-03-02

    Applicant: Google LLC

    Abstract: A method includes receiving a content feed that includes audio data corresponding to speech utterances and processing the content feed to generate a semantically-rich, structured document. The structured document includes a transcription of the speech utterances and includes a plurality of words each aligned with a corresponding audio segment of the audio data that indicates a time when the word was recognized in the audio data. During playback of the content feed, the method also includes receiving a query from a user requesting information contained in the content feed and processing, by a large language model, the query and the structured document to generate a response to the query. The response conveys the requested information contained in the content feed. The method also includes providing, for output from a user device associated with the user, the response to the query.
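
    A minimal Python sketch of the structured-document idea in the abstract, with hypothetical types (AlignedWord, StructuredDocument) and a stand-in callable in place of a real large language model:

        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class AlignedWord:
            text: str
            start_sec: float  # time the word was recognized in the audio data

        @dataclass
        class StructuredDocument:
            words: List[AlignedWord]

            def transcript(self) -> str:
                return " ".join(w.text for w in self.words)

        def answer_query(llm: Callable[[str], str], query: str, doc: StructuredDocument) -> str:
            # The model sees both the user query and the time-aligned transcript.
            prompt = f"Document:\n{doc.transcript()}\n\nQuestion: {query}\nAnswer:"
            return llm(prompt)

        if __name__ == "__main__":
            doc = StructuredDocument(words=[AlignedWord("hello", 0.4), AlignedWord("world", 0.9)])
            fake_llm = lambda prompt: "The speaker greets the world."  # stand-in for a real model
            print(answer_query(fake_llm, "What does the speaker say?", doc))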

    Server side hotwording
    Invention Grant

    Publication Number: US12094472B2

    Publication Date: 2024-09-17

    Application Number: US18345077

    Application Date: 2023-06-30

    Applicant: GOOGLE LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting hotwords using a server. One of the methods includes receiving an audio signal encoding one or more utterances including a first utterance; determining whether at least a portion of the first utterance satisfies a first threshold of being at least a portion of a key phrase; in response to determining that at least the portion of the first utterance satisfies the first threshold of being at least a portion of a key phrase, sending the audio signal to a server system that determines whether the first utterance satisfies a second threshold of being the key phrase, the second threshold being more restrictive than the first threshold; and receiving tagged text data representing the one or more utterances encoded in the audio signal when the server system determines that the first utterance satisfies the second threshold.
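
    A minimal Python sketch of the two-threshold flow the abstract describes; the scores and helper names are hypothetical, and in the real system they would come from an on-device hotword detector and a server-side model:

        FIRST_THRESHOLD = 0.5   # client-side, permissive
        SECOND_THRESHOLD = 0.8  # server-side, more restrictive

        def client_detects_key_phrase(client_score: float) -> bool:
            return client_score >= FIRST_THRESHOLD

        def server_confirms_key_phrase(server_score: float) -> bool:
            return server_score >= SECOND_THRESHOLD

        def handle_audio(client_score: float, server_score: float, audio_signal: bytes):
            if not client_detects_key_phrase(client_score):
                return None  # the audio never leaves the device
            if server_confirms_key_phrase(server_score):
                # The described server would run full recognition here; this
                # stand-in just labels the payload it received.
                return {"tagged_text": "<hotword> ...", "bytes_received": len(audio_signal)}
            return None

        if __name__ == "__main__":
            print(handle_audio(client_score=0.6, server_score=0.9, audio_signal=b"\x00" * 320))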

    UNSUPERVISED FEDERATED LEARNING OF MACHINE LEARNING MODEL LAYERS

    Publication Number: US20240296834A1

    Publication Date: 2024-09-05

    Application Number: US18659940

    Application Date: 2024-05-09

    Applicant: GOOGLE LLC

    Abstract: Implementations disclosed herein are directed to unsupervised federated training of global machine learning (“ML”) model layers that, after the federated training, can be combined with additional layer(s), thereby resulting in a combined ML model. Processor(s) can: detect audio data that captures a spoken utterance of a user of a client device; process, using a local ML model, the audio data to generate predicted output(s); generate, using unsupervised learning locally at the client device, a gradient based on the predicted output(s); transmit the gradient to a remote system; update weight(s) of the global ML model layers based on the gradient; subsequent to updating the weight(s), train, using supervised learning remotely at the remote system, a combined ML model that includes the updated global ML model layers and additional layer(s); transmit the combined ML model to the client device; and use the combined ML model to make prediction(s) at the client device.
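
    A minimal Python sketch of the described flow, using pure-Python stand-ins (hypothetical names, toy objective) in place of real ML layers: the client derives a gradient from an unsupervised objective, the server applies it to the shared layers, and the updated layers are later combined with additional supervised layers:

        from typing import List

        def unsupervised_gradient(weights: List[float], audio_features: List[float]) -> List[float]:
            # Toy self-supervised objective: the gradient pulls each weight toward
            # the mean feature value, standing in for a real unsupervised loss.
            target = sum(audio_features) / len(audio_features)
            return [w - target for w in weights]

        def server_apply_gradient(global_weights: List[float], grad: List[float], lr: float = 0.1) -> List[float]:
            return [w - lr * g for w, g in zip(global_weights, grad)]

        def build_combined_model(shared_layer_weights: List[float], extra_layer_weights: List[float]) -> dict:
            # The combined model is the updated shared layers plus additional
            # layers trained with supervision at the remote system.
            return {"shared": shared_layer_weights, "extra": extra_layer_weights}

        if __name__ == "__main__":
            global_w = [0.2, -0.1, 0.4]
            grad = unsupervised_gradient(global_w, audio_features=[0.1, 0.3, 0.2])
            global_w = server_apply_gradient(global_w, grad)
            print(build_combined_model(global_w, extra_layer_weights=[0.05, 0.05]))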

    On-device speech synthesis of textual segments for training of on-device speech recognition model

    Publication Number: US11978432B2

    Publication Date: 2024-05-07

    Application Number: US18204324

    Application Date: 2023-05-31

    Applicant: GOOGLE LLC

    CPC classification number: G10L13/047 G10L15/063 G10L2015/0635

    Abstract: Processor(s) of a client device can: identify a textual segment stored locally at the client device; process the textual segment, using a speech synthesis model stored locally at the client device, to generate synthesized speech audio data that includes synthesized speech of the identified textual segment; process the synthesized speech, using an on-device speech recognition model that is stored locally at the client device, to generate predicted output; and generate a gradient based on comparing the predicted output to ground truth output that corresponds to the textual segment. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
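
    A minimal Python sketch of the loop the abstract describes, with stubbed synthesis and recognition models and hypothetical names; the "gradient" here is a toy scalar standing in for a real backpropagated gradient:

        from typing import List

        def synthesize(text: str) -> List[float]:
            # Stand-in for the on-device speech synthesis model.
            return [float(ord(c)) / 100.0 for c in text]

        def recognize(audio: List[float]) -> str:
            # Stand-in for the on-device speech recognition model.
            return "".join(chr(int(round(a * 100))) for a in audio)

        def gradient_from_mismatch(predicted: str, ground_truth: str) -> float:
            # Toy scalar "gradient": fraction of characters that disagree.
            mismatches = sum(p != g for p, g in zip(predicted, ground_truth))
            mismatches += abs(len(predicted) - len(ground_truth))
            return mismatches / max(len(ground_truth), 1)

        if __name__ == "__main__":
            segment = "turn on the lights"  # textual segment stored locally on the device
            audio = synthesize(segment)
            predicted = recognize(audio)
            grad = gradient_from_mismatch(predicted, segment)
            # The gradient can update local weights and/or be sent to a remote system.
            print(f"predicted={predicted!r} gradient={grad:.3f}")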

    EPHEMERAL LEARNING AND/OR FEDERATED LEARNING OF AUDIO-BASED MACHINE LEARNING MODEL(S) FROM STREAM(S) OF AUDIO DATA GENERATED VIA RADIO STATION(S)

    Publication Number: US20240071406A1

    Publication Date: 2024-02-29

    Application Number: US18074739

    Application Date: 2022-12-05

    Applicant: GOOGLE LLC

    CPC classification number: G10L25/51 G10L15/005 G10L15/18

    Abstract: Implementations disclosed herein are directed to utilizing ephemeral learning techniques and/or federated learning techniques to update audio-based machine learning (ML) model(s) based on processing streams of audio data generated via radio station(s) across the world. This enables the audio-based ML model(s) to learn representations and/or understand languages across the world, including tail languages for which there is no/minimal audio data. In various implementations, one or more deduping techniques may be utilized to ensure the same stream of audio data is not overutilized in updating the audio-based ML model(s). In various implementations, a given client device may determine whether to employ an ephemeral learning technique or a federated learning technique based on, for instance, a connection status with a remote system. Generally, the streams of audio data are received at client devices, but the ephemeral learning techniques may be implemented at the client device and/or at the remote system.
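
    The abstract also mentions deduping so that the same stream of audio data is not overutilized; below is a minimal Python sketch of one possible technique (fingerprinting by quantizing and hashing a segment), which is an illustrative assumption rather than the patent's specific method:

        import hashlib
        from typing import List, Set

        def fingerprint(samples: List[float], bucket: float = 0.05) -> str:
            # Quantize samples so near-identical rebroadcasts map to the same hash.
            quantized = bytes(int(min(max(s, -1.0), 1.0) / bucket) & 0xFF for s in samples)
            return hashlib.sha1(quantized).hexdigest()

        def should_use_for_update(samples: List[float], seen: Set[str]) -> bool:
            fp = fingerprint(samples)
            if fp in seen:
                return False  # duplicate segment: skip it to avoid overutilization
            seen.add(fp)
            return True

        if __name__ == "__main__":
            seen: Set[str] = set()
            segment = [0.01, -0.02, 0.30, 0.29]
            print(should_use_for_update(segment, seen))  # True: first time this segment is seen
            print(should_use_for_update(segment, seen))  # False: duplicate, skipped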

    ON-DEVICE SPEECH SYNTHESIS OF TEXTUAL SEGMENTS FOR TRAINING OF ON-DEVICE SPEECH RECOGNITION MODEL

    Publication Number: US20230306955A1

    Publication Date: 2023-09-28

    Application Number: US18204324

    Application Date: 2023-05-31

    Applicant: GOOGLE LLC

    CPC classification number: G10L13/047 G10L15/063 G10L2015/0635

    Abstract: Processor(s) of a client device can: identify a textual segment stored locally at the client device; process the textual segment, using a speech synthesis model stored locally at the client device, to generate synthesized speech audio data that includes synthesized speech of the identified textual segment; process the synthesized speech, using an on-device speech recognition model that is stored locally at the client device, to generate predicted output; and generate a gradient based on comparing the predicted output to ground truth output that corresponds to the textual segment. In some implementations, the generated gradient is used, by processor(s) of the client device, to update weights of the on-device speech recognition model. In some implementations, the generated gradient is additionally or alternatively transmitted to a remote system for use in remote updating of global weights of a global speech recognition model.
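
    A minimal Python sketch of the remote path mentioned at the end of the abstract, in which gradients transmitted from many client devices are combined into updated global weights; the simple averaging scheme and all names are hypothetical:

        from typing import List

        def aggregate_gradients(client_gradients: List[List[float]]) -> List[float]:
            # Element-wise mean of the gradients received from participating devices.
            n = len(client_gradients)
            return [sum(g[i] for g in client_gradients) / n for i in range(len(client_gradients[0]))]

        def update_global_weights(weights: List[float], avg_gradient: List[float], lr: float = 0.1) -> List[float]:
            return [w - lr * g for w, g in zip(weights, avg_gradient)]

        if __name__ == "__main__":
            global_weights = [0.5, -0.2, 0.1]
            grads = [[0.02, -0.01, 0.00], [0.04, 0.01, -0.02]]  # gradients from two devices
            print(update_global_weights(global_weights, aggregate_gradients(grads)))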
