CORRECTING SPEECH MISRECOGNITION OF SPOKEN UTTERANCES

    Publication Number: US20230059469A1

    Publication Date: 2023-02-23

    Application Number: US17982834

    Filing Date: 2022-11-08

    Applicant: GOOGLE LLC

    Abstract: Implementations can receive audio data corresponding to a spoken utterance of a user, process the audio data to generate a plurality of speech hypotheses, determine an action to be performed by an automated assistant based on the speech hypotheses, and cause the computing device to render an indication of the action. In response to the computing device rendering the indication, implementations can receive additional audio data corresponding to an additional spoken utterance of the user, process the additional audio data to determine that a portion of the spoken utterance is similar to an additional portion of the additional spoken utterance, supplant the action with an alternate action, and cause the automated assistant to initiate performance of the alternate action. Some implementations can determine whether to render the indication of the action based on a confidence level associated with the action.
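
    A minimal Python sketch of the correction flow this abstract describes, assuming hypothetical helpers and data shapes (the hypothesis list, the word-overlap similarity test, and the printed "indication") that stand in for the actual ASR and assistant components:

        # Hypothetical sketch of the misrecognition-correction loop; all helpers are stand-ins.

        def choose_action(hypotheses):
            # Pick the action tied to the highest-confidence speech hypothesis.
            best = max(hypotheses, key=lambda h: h["confidence"])
            return best["action"], best["confidence"]

        def handle_utterance(hypotheses, confidence_threshold=0.8):
            action, confidence = choose_action(hypotheses)
            # Render an indication of the action only when confidence is low enough
            # that the user may want to correct it.
            if confidence < confidence_threshold:
                print(f"About to: {action}")
            return action

        def handle_correction(original_text, correction_text, original_action, alternate_action):
            # If a portion of the new utterance is similar to a portion of the original,
            # treat it as a correction and supplant the action with the alternate action.
            shared = set(original_text.split()) & set(correction_text.split())
            return alternate_action if shared else original_action

        hypotheses = [
            {"text": "play song two", "confidence": 0.62, "action": "play 'Song Two'"},
            {"text": "play song too", "confidence": 0.58, "action": "play 'Song Too'"},
        ]
        action = handle_utterance(hypotheses)
        action = handle_correction("play song two", "no, play song too", action, "play 'Song Too'")
        print("Performing:", action)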

    Contextual suppression of assistant command(s)

    Publication Number: US11557293B2

    Publication Date: 2023-01-17

    Application Number: US17321994

    Filing Date: 2021-05-17

    Applicant: GOOGLE LLC

    Abstract: Some implementations process, using warm word model(s), a stream of audio data to determine a portion of the audio data that corresponds to particular word(s) and/or phrase(s) (e.g., a warm word) associated with an assistant command, process, using an automatic speech recognition (ASR) model, a preamble portion of the audio data (e.g., that precedes the warm word) and/or a postamble portion of the audio data (e.g., that follows the warm word) to generate ASR output, and determine, based on processing the ASR output, whether a user intended the assistant command to be performed. Additional or alternative implementations can process the stream of audio data using a speaker identification (SID) model to determine whether the audio data is sufficient to identify the user that provided a spoken utterance captured in the stream of audio data, and determine if that user is authorized to cause performance of the assistant command.
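
    The warm-word flow above can be illustrated with a small Python sketch; warm-word spotting, ASR on the preamble/postamble, and speaker identification are reduced here to token matching and simple lookups, an assumption made for readability rather than the models the abstract names:

        WARM_WORDS = {"stop", "pause", "next"}

        def find_warm_word(tokens):
            # Stand-in for the warm word model scanning the audio stream.
            for i, token in enumerate(tokens):
                if token in WARM_WORDS:
                    return i, token
            return None, None

        def command_intended(preamble, postamble):
            # Suppress the command when the surrounding context suggests the warm word
            # was not directed at the assistant (e.g. reported speech).
            context = " ".join(preamble + postamble)
            return not any(cue in context for cue in ("said", "asked me to", "told him to"))

        def speaker_authorized(speaker_id, authorized=("user_1",)):
            # Stand-in for speaker identification (SID) plus an authorization check.
            return speaker_id in authorized

        utterance = "and then she said stop bothering me".split()
        index, warm_word = find_warm_word(utterance)
        if warm_word is not None:
            preamble, postamble = utterance[:index], utterance[index + 1:]
            if command_intended(preamble, postamble) and speaker_authorized("user_1"):
                print(f"Performing assistant command for warm word '{warm_word}'")
            else:
                print(f"Suppressing assistant command for warm word '{warm_word}'")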

    Inferring semantic label(s) for assistant device(s) based on device-specific signal(s)

    Publication Number: US11514109B2

    Publication Date: 2022-11-29

    Application Number: US17083613

    Filing Date: 2020-10-29

    Applicant: Google LLC

    Abstract: Implementations can identify a given assistant device from among a plurality of assistant devices in an ecosystem, obtain device-specific signal(s) that are generated by the given assistant device, process the device-specific signal(s) to generate candidate semantic label(s) for the given assistant device, select a given semantic label for the given assistant device from among the candidate semantic label(s), and assign, in a device topology representation of the ecosystem, the given semantic label to the given assistant device. Implementations can optionally receive a spoken utterance that includes a query or command at the assistant device(s), determine that a semantic property of the query or command matches the given semantic label assigned to the given assistant device, and cause the given assistant device to satisfy the query or command.
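
    A rough Python sketch of the label-inference flow, with assumed signal names, scoring heuristics, and topology structure that are illustrative rather than taken from the patent:

        def candidate_labels(signals):
            # Map device-specific signals (here, queries the device commonly handles)
            # to candidate semantic labels with rough scores.
            candidates = {}
            for query in signals.get("frequent_queries", []):
                if "recipe" in query or "timer" in query:
                    candidates["kitchen speaker"] = candidates.get("kitchen speaker", 0) + 1
                if "movie" in query:
                    candidates["living room display"] = candidates.get("living room display", 0) + 1
            return candidates

        def assign_label(topology, device_id, signals):
            candidates = candidate_labels(signals)
            if candidates:
                label = max(candidates, key=candidates.get)   # select the best candidate label
                topology[device_id] = label                   # assign it in the device topology
            return topology

        def route_query(topology, query):
            # Satisfy the query on the device whose semantic label matches a semantic
            # property of the query.
            for device_id, label in topology.items():
                if any(word in query for word in label.split()):
                    return device_id
            return None

        topology = {}
        signals = {"frequent_queries": ["set a pasta timer", "show me a recipe"]}
        assign_label(topology, "device_42", signals)
        print(route_query(topology, "play music in the kitchen"))   # -> device_42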

    DETECTING AND HANDLING DRIVING EVENT SOUNDS DURING A NAVIGATION SESSION

    Publication Number: US20220355814A1

    Publication Date: 2022-11-10

    Application Number: US17273673

    Filing Date: 2020-11-18

    Applicant: GOOGLE LLC

    Abstract: To identify driving event sounds during navigation, a client device in a vehicle provides a set of navigation directions for traversing from a starting location to a destination location along a route. During navigation to the destination location, the client device identifies audio that includes a driving event sound from within the vehicle or an area surrounding the vehicle. In response to determining that the audio includes the driving event sound, the client device determines whether the driving event sound is artificial. In response to determining that the driving event sound is artificial, the client device presents a notification to the driver indicating that the driving event sound is artificial or masks the driving event sound to prevent the driver from hearing the driving event sound.
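
    A simplified Python sketch of the decision logic; the sound classifier and the artificial-versus-real test are placeholder heuristics (comparing against audio known to be playing inside the vehicle), not the detection approach the abstract implies:

        DRIVING_EVENT_SOUNDS = {"siren", "horn", "train crossing bell"}

        def detect_driving_event(audio_labels):
            # Stand-in for classifying audio captured during the navigation session.
            return next((s for s in audio_labels if s in DRIVING_EVENT_SOUNDS), None)

        def is_artificial(sound, media_now_playing):
            # Treat the sound as artificial if it matches in-vehicle media (radio,
            # podcast, a passenger's phone) rather than the surrounding environment.
            return sound in media_now_playing

        def handle_audio(audio_labels, media_now_playing):
            sound = detect_driving_event(audio_labels)
            if sound is None:
                return "continue navigation"
            if is_artificial(sound, media_now_playing):
                return f"notify driver: the {sound} is artificial (or mask it)"
            return f"alert driver: real {sound} nearby"

        print(handle_audio(["speech", "siren"], media_now_playing={"siren"}))
        print(handle_audio(["speech", "horn"], media_now_playing=set()))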

    Voice filtering other speakers from calls and audio messages

    Publication Number: US11462219B2

    Publication Date: 2022-10-04

    Application Number: US17086296

    Filing Date: 2020-10-30

    Applicant: Google LLC

    Abstract: A method includes receiving a first instance of raw audio data corresponding to a voice-based command and receiving a second instance of the raw audio data corresponding to an utterance of audible contents for an audio-based communication spoken by a user. When a voice filtering recognition routine determines to activate voice filtering for at least the voice of the user, the method also includes obtaining a respective speaker embedding of the user and processing, using the respective speaker embedding, the second instance of the raw audio data to generate enhanced audio data for the audio-based communication that isolates the utterance of the audible contents spoken by the user and excludes at least a portion of the one or more additional sounds that are not spoken by the user. The method also includes executing.
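
    A conceptual Python sketch of the filtering step: in practice the speaker embedding would condition a neural source-separation model over raw audio, so the frames below are pre-tagged with a speaker id purely so the masking logic stays runnable:

        def speaker_embedding_for(user_id):
            # Stand-in for looking up the user's enrolled speaker embedding.
            return {"user_id": user_id}

        def filter_to_speaker(frames, embedding):
            # Keep only the frames attributed to the target speaker, excluding the
            # additional sounds and voices captured in the raw audio.
            return [frame for frame in frames if frame["speaker"] == embedding["user_id"]]

        raw_frames = [
            {"speaker": "alice", "samples": [0.1, 0.2]},
            {"speaker": "tv",    "samples": [0.7, 0.6]},
            {"speaker": "alice", "samples": [0.3, 0.1]},
        ]
        enhanced = filter_to_speaker(raw_frames, speaker_embedding_for("alice"))
        print(len(enhanced), "frames kept for the audio-based communication")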

    Training Keyword Spotters

    Publication Number: US20220262345A1

    Publication Date: 2022-08-18

    Application Number: US17662021

    Filing Date: 2022-05-04

    Applicant: Google LLC

    Abstract: A method of training a custom hotword model includes receiving a first set of training audio samples. The method also includes generating, using a speech embedding model configured to receive the first set of training audio samples as input, a corresponding hotword embedding representative of a custom hotword for each training audio sample of the first set of training audio samples. The speech embedding model is pre-trained on a different set of training audio samples with a greater number of training audio samples than the first set of training audio samples. The method further includes training the custom hotword model to detect a presence of the custom hotword in audio data. The custom hotword model is configured to receive, as input, each corresponding hotword embedding and to classify, as output, each corresponding hotword embedding as corresponding to the custom hotword.
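
    The two-stage idea (a frozen embedding model pre-trained on a large corpus, plus a small custom-hotword model trained on a few samples) can be sketched in Python; the embedding function and nearest-centroid "model" below are toy stand-ins, not the models from the abstract:

        def speech_embedding(audio_sample):
            # Stand-in for a pre-trained speech embedding model.
            return [sum(audio_sample) / len(audio_sample), max(audio_sample) - min(audio_sample)]

        def train_custom_hotword_model(positive_samples):
            # "Training" here is just averaging the embeddings of the few recordings
            # of the custom hotword.
            embeddings = [speech_embedding(sample) for sample in positive_samples]
            dims = len(embeddings[0])
            return [sum(e[d] for e in embeddings) / len(embeddings) for d in range(dims)]

        def detect_hotword(centroid, audio_sample, threshold=0.2):
            # Classify the embedding as the custom hotword if it is close to the centroid.
            emb = speech_embedding(audio_sample)
            distance = sum((a - b) ** 2 for a, b in zip(emb, centroid)) ** 0.5
            return distance < threshold

        hotword_recordings = [[0.2, 0.8, 0.4], [0.3, 0.7, 0.5]]
        centroid = train_custom_hotword_model(hotword_recordings)
        print(detect_hotword(centroid, [0.25, 0.75, 0.45]))   # True
        print(detect_hotword(centroid, [0.9, 0.1, 0.9]))      # False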

    Providing traffic warnings to a user based on return journey

    Publication Number: US11415427B2

    Publication Date: 2022-08-16

    Application Number: US16852982

    Filing Date: 2020-04-20

    Applicant: Google LLC

    Abstract: Systems and methods for generating return journey notifications include obtaining a request for navigational directions to a target destination. An outbound journey route from an initial location to the target destination can be determined, wherein the outbound journey route includes an estimated outbound journey time. A return journey route from the target destination to a return destination can be determined, wherein the return journey route includes an estimated return journey time. The outbound journey route and/or return journey route can be determined at least in part from one or more of current traffic conditions or historical traffic conditions. One or more notifications regarding the return journey route can be generated when comparing the estimated outbound journey time to the estimated return journey time results in a determination that one or more predetermined criteria are met.
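
    A minimal Python sketch of the comparison step; real journey times would come from a routing service using current and historical traffic, so the estimator and threshold below are placeholders:

        def estimated_journey_time(origin, destination, traffic_factor):
            base_minutes = 30                      # placeholder for a routing-service estimate
            return base_minutes * traffic_factor

        def return_journey_notification(origin, destination,
                                        outbound_traffic=1.0, return_traffic=1.6,
                                        threshold_ratio=1.25):
            outbound = estimated_journey_time(origin, destination, outbound_traffic)
            inbound = estimated_journey_time(destination, origin, return_traffic)
            # Notify only when the predetermined criterion is met: the return journey
            # is expected to take meaningfully longer than the outbound journey.
            if inbound > outbound * threshold_ratio:
                return (f"Heads up: the trip back is expected to take about "
                        f"{inbound - outbound:.0f} minutes longer than the drive there.")
            return None

        print(return_journey_notification("home", "stadium"))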

    Speaker Dependent Follow Up Actions And Warm Words

    Publication Number: US20220189465A1

    Publication Date: 2022-06-16

    Application Number: US17117799

    Filing Date: 2020-12-10

    Applicant: Google LLC

    Abstract: A method includes receiving audio data corresponding to an utterance spoken by a user that includes a command for a digital assistant to perform a long-standing operation, activating a set of one or more warm words associated with a respective action for controlling the long-standing operation, and associating the activated set of one or more warm words with only the user. While the digital assistant is performing the long-standing operation, the method includes receiving additional audio data corresponding to an additional utterance, identifying one of the warm words from the activated set of warm words, and performing speaker verification on the additional audio data. The method further includes performing the respective action associated with the identified one of the warm words for controlling the long-standing operation when the additional utterance was spoken by the same user that is associated with the activated set of one or more warm words.
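
    A Python sketch of the speaker-dependent warm-word flow under assumed names; warm-word spotting and speaker verification are reduced to dictionary lookups and string comparison so the control flow stays runnable:

        class LongStandingOperation:
            def __init__(self, owner, warm_words):
                self.owner = owner               # only this user may trigger the warm words
                self.warm_words = warm_words     # e.g. {"pause": "pause_music", ...}

            def handle(self, spoken_word, speaker):
                action = self.warm_words.get(spoken_word)
                if action is None:
                    return "no warm word detected"
                # Speaker verification: perform the follow-up action only when the
                # additional utterance was spoken by the user who issued the command.
                if speaker != self.owner:
                    return f"ignored '{spoken_word}' from unverified speaker {speaker}"
                return f"performing {action}"

        music = LongStandingOperation(owner="alice",
                                      warm_words={"pause": "pause_music", "next": "skip_track"})
        print(music.handle("pause", speaker="alice"))   # performing pause_music
        print(music.handle("next", speaker="bob"))      # ignored: wrong speaker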
