-
Publication Number: US20190287528A1
Publication Date: 2019-09-19
Application Number: US16362831
Filing Date: 2019-03-25
Applicant: Google LLC
Inventor: Christopher Thaddeus Hughes, Ignacio Lopez Moreno, Aleksandar Kracun
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for contextual hotwords are disclosed. In one aspect, a method, during a boot process of a computing device, includes the actions of determining, by a computing device, a context associated with the computing device. The actions further include, based on the context associated with the computing device, determining a hotword. The actions further include, after determining the hotword, receiving audio data that corresponds to an utterance. The actions further include determining that the audio data includes the hotword. The actions further include, in response to determining that the audio data includes the hotword, performing an operation associated with the hotword.
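A rough illustration of the flow this abstract describes: the set of active hotwords depends on the device context, and detecting one triggers the operation tied to it. The table, function names, and simple string matching below are assumptions for illustration, not the patented implementation.

```python
from typing import Optional

# Hypothetical mapping from device context to active hotwords and their operations.
CONTEXT_HOTWORDS = {
    "music_playing": {"stop": "pause_playback", "next": "skip_track"},
    "timer_running": {"stop": "cancel_timer"},
    "idle": {"ok computer": "start_assistant"},
}

def hotwords_for_context(context: str) -> dict:
    """Pick the hotword-to-operation map for the current device context."""
    return CONTEXT_HOTWORDS.get(context, CONTEXT_HOTWORDS["idle"])

def handle_utterance(context: str, transcript: str) -> Optional[str]:
    """Return the operation to perform if the utterance contains an active hotword."""
    for hotword, operation in hotwords_for_context(context).items():
        if hotword in transcript.lower():
            return operation
    return None

# The same word maps to different operations depending on context:
print(handle_utterance("music_playing", "Stop"))  # pause_playback
print(handle_utterance("timer_running", "Stop"))  # cancel_timer
```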
-
Publication Number: US10276161B2
Publication Date: 2019-04-30
Application Number: US15391358
Filing Date: 2016-12-27
Applicant: Google LLC
Inventor: Christopher Thaddeus Hughes, Ignacio Lopez Moreno, Aleksandar Kracun
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for contextual hotwords are disclosed. In one aspect, a method, during a boot process of a computing device, includes the actions of determining, by a computing device, a context associated with the computing device. The actions further include, based on the context associated with the computing device, determining a hotword. The actions further include, after determining the hotword, receiving audio data that corresponds to an utterance. The actions further include determining that the audio data includes the hotword. The actions further include, in response to determining that the audio data includes the hotword, performing an operation associated with the hotword.
-
Publication Number: US20180315430A1
Publication Date: 2018-11-01
Application Number: US15966667
Filing Date: 2018-04-30
Applicant: Google LLC
Inventor: Georg Heigold, Samuel Bengio, Ignacio Lopez Moreno
Abstract: This document generally describes systems, methods, devices, and other techniques related to speaker verification, including (i) training a neural network for a speaker verification model, (ii) enrolling users at a client device, and (iii) verifying identities of users based on characteristics of the users' voices. Some implementations include a computer-implemented method. The method can include receiving, at a computing device, data that characterizes an utterance of a user of the computing device. A speaker representation can be generated, at the computing device, for the utterance using a neural network on the computing device. The neural network can be trained based on a plurality of training samples that each: (i) include data that characterizes a first utterance and data that characterizes one or more second utterances, and (ii) are labeled as a matching speakers sample or a non-matching speakers sample.
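As a sketch of the enrollment and verification steps described above, the code below assumes a pre-trained on-device embedding network (stood in for by a hypothetical `embed` function) and compares utterances with cosine similarity; none of these names or thresholds come from the patent.

```python
from typing import List
import numpy as np

def embed(utterance_features: np.ndarray) -> np.ndarray:
    """Stand-in for the on-device neural network that maps an utterance to a speaker representation."""
    rng = np.random.default_rng(int(np.sum(utterance_features)) % (2**32))
    return rng.standard_normal(256)

def enroll(enrollment_utterances: List[np.ndarray]) -> np.ndarray:
    """Average the representations of a user's enrollment utterances into one profile vector."""
    return np.mean([embed(u) for u in enrollment_utterances], axis=0)

def verify(profile: np.ndarray, utterance_features: np.ndarray, threshold: float = 0.7) -> bool:
    """Accept the claimed identity if cosine similarity to the enrolled profile passes a threshold."""
    candidate = embed(utterance_features)
    score = float(np.dot(profile, candidate) /
                  (np.linalg.norm(profile) * np.linalg.norm(candidate)))
    return score >= threshold
```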
-
Publication Number: US20250095630A1
Publication Date: 2025-03-20
Application Number: US18966088
Filing Date: 2024-12-02
Applicant: Google LLC
Inventor: Ye Jia, Zhifeng Chen, Yonghui Wu, Jonathan Shen, Ruoming Pang, Ron J. Weiss, Ignacio Lopez Moreno, Fei Ren, Yu Zhang, Quan Wang, Patrick An Phu Nguyen
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech synthesis. The methods, systems, and apparatus include actions of obtaining an audio representation of speech of a target speaker, obtaining input text for which speech is to be synthesized in a voice of the target speaker, generating a speaker vector by providing the audio representation to a speaker encoder engine that is trained to distinguish speakers from one another, generating an audio representation of the input text spoken in the voice of the target speaker by providing the input text and the speaker vector to a spectrogram generation engine that is trained using voices of reference speakers to generate audio representations, and providing the audio representation of the input text spoken in the voice of the target speaker for output.
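A minimal sketch of the pipeline the abstract describes, with placeholder classes for the speaker encoder and the spectrogram generation engine; the class names, vector size, and frame counts are illustrative assumptions rather than the patented models.

```python
import numpy as np

class SpeakerEncoder:
    """Placeholder for the engine trained to distinguish speakers from one another."""
    def encode(self, reference_audio: np.ndarray) -> np.ndarray:
        return np.zeros(256)  # a real encoder would produce a learned speaker vector

class SpectrogramGenerator:
    """Placeholder for the engine trained on reference speakers to generate audio representations."""
    def generate(self, text: str, speaker_vector: np.ndarray) -> np.ndarray:
        n_frames, n_mels = 10 * max(len(text), 1), 80
        return np.zeros((n_frames, n_mels))  # a real model would emit a mel spectrogram

def synthesize_in_voice(text: str, reference_audio: np.ndarray) -> np.ndarray:
    """Condition spectrogram generation on a speaker vector derived from the reference audio."""
    speaker_vector = SpeakerEncoder().encode(reference_audio)
    return SpectrogramGenerator().generate(text, speaker_vector)
```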
-
Publication Number: US12254891B2
Publication Date: 2025-03-18
Application Number: US17619648
Filing Date: 2019-10-10
Applicant: GOOGLE LLC
Inventor: Quan Wang, Ignacio Lopez Moreno, Li Wan
IPC: G10L21/028, G10L17/02, G10L17/04, G10L17/18, G10L21/0232
Abstract: Processing of acoustic features of audio data to generate one or more revised versions of the acoustic features, where each of the revised versions of the acoustic features isolates one or more utterances of a single respective human speaker. Various implementations generate the acoustic features by processing audio data using portion(s) of an automatic speech recognition system. Various implementations generate the revised acoustic features by processing the acoustic features using a mask generated by processing the acoustic features and a speaker embedding for the single human speaker using a trained voice filter model. Output generated over the trained voice filter model is processed using the automatic speech recognition system to generate a predicted text representation of the utterance(s) of the single human speaker without reconstructing the audio data.
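The sketch below illustrates the masking step described in the abstract: acoustic features from the ASR front end are multiplied element-wise by a mask predicted from those features and a speaker embedding, and the revised features go to the recognizer without reconstructing audio. The mask function here is a trivial placeholder, not the trained voice filter model.

```python
import numpy as np

def voice_filter_mask(features: np.ndarray, speaker_embedding: np.ndarray) -> np.ndarray:
    """Placeholder for the trained voice filter model; a real model predicts a soft mask in [0, 1]."""
    return np.ones_like(features)

def isolate_target_speaker(features: np.ndarray, speaker_embedding: np.ndarray) -> np.ndarray:
    """Revise the acoustic features so they keep only the target speaker's utterances."""
    mask = voice_filter_mask(features, speaker_embedding)
    # The masked features feed directly into the ASR decoder; no waveform is reconstructed.
    return features * mask
```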
-
Publication Number: US12254876B2
Publication Date: 2025-03-18
Application Number: US18609542
Filing Date: 2024-03-19
Applicant: Google LLC
IPC: G10L15/00, G06F3/16, G10L15/20, G10L15/22, G10L17/06, G10L21/034, G10L25/84, H03G3/30, G10L15/26, G10L17/00
Abstract: The technology described in this document can be embodied in a computer-implemented method that includes receiving, at a processing system, a first signal including an output of a speaker device and an additional audio signal. The method also includes determining, by the processing system, based at least in part on a model trained to identify the output of the speaker device, that the additional audio signal corresponds to an utterance of a user. The method further includes initiating a reduction in an audio output level of the speaker device based on determining that the additional audio signal corresponds to the utterance of the user.
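As a hedged sketch of the decision the abstract describes, the code below treats the speaker device's output as known, checks whether the remaining audio in the first signal looks like a user utterance, and reduces the output level if so; the energy heuristic stands in for the trained model, and all names are illustrative.

```python
import numpy as np

def is_user_utterance(mixed_signal: np.ndarray, speaker_output: np.ndarray) -> bool:
    """Placeholder for the trained model that identifies the speaker device's own output."""
    n = min(len(mixed_signal), len(speaker_output))
    residual = mixed_signal[:n] - speaker_output[:n]
    return float(np.mean(residual ** 2)) > 1e-3  # crude energy test instead of a learned model

def adjust_output_level(mixed_signal: np.ndarray, speaker_output: np.ndarray,
                        current_level: float) -> float:
    """Initiate a reduction in the speaker's output level when a user utterance is detected."""
    return current_level * 0.3 if is_user_utterance(mixed_signal, speaker_output) else current_level
```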
-
Publication Number: US12148433B2
Publication Date: 2024-11-19
Application Number: US18485069
Filing Date: 2023-10-11
Applicant: Google LLC
Inventor: Georg Heigold, Samuel Bengio, Ignacio Lopez Moreno
Abstract: This document generally describes systems, methods, devices, and other techniques related to speaker verification, including (i) training a neural network for a speaker verification model, (ii) enrolling users at a client device, and (iii) verifying identities of users based on characteristics of the users' voices. Some implementations include a computer-implemented method. The method can include receiving, at a computing device, data that characterizes an utterance of a user of the computing device. A speaker representation can be generated, at the computing device, for the utterance using a neural network on the computing device. The neural network can be trained based on a plurality of training samples that each: (i) include data that characterizes a first utterance and data that characterizes one or more second utterances, and (ii) are labeled as a matching speakers sample or a non-matching speakers sample.
-
Publication Number: US12046233B2
Publication Date: 2024-07-23
Application Number: US18361408
Filing Date: 2023-07-28
Applicant: GOOGLE LLC
Inventor: Pu-sen Chao, Diego Melendo Casado, Ignacio Lopez Moreno, William Zhang
CPC classification number: G10L15/197, G10L13/00, G10L15/005, G10L15/08, G10L15/14, G10L15/1822, G10L15/22, G10L15/30, G10L2015/088, G10L2015/223, G10L2015/228
Abstract: Determining a language for speech recognition of a spoken utterance received via an automated assistant interface for interacting with an automated assistant. Implementations can enable multilingual interaction with the automated assistant, without necessitating a user explicitly designate a language to be utilized for each interaction. Implementations determine a user profile that corresponds to audio data that captures a spoken utterance, and utilize language(s), and optionally corresponding probabilities, assigned to the user profile in determining a language for speech recognition of the spoken utterance. Some implementations select only a subset of languages, assigned to the user profile, to utilize in speech recognition of a given spoken utterance of the user. Some implementations perform speech recognition in each of multiple languages assigned to the user profile, and utilize criteria to select only one of the speech recognitions as appropriate for generating and providing content that is responsive to the spoken utterance.
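A small sketch of the selection strategy in the abstract, assuming a per-language recognizer stub and a user profile that maps languages to probabilities; the `recognize` signature and the scoring rule are illustrative assumptions.

```python
from typing import Dict, Tuple

def recognize(audio: bytes, language: str) -> Tuple[str, float]:
    """Placeholder per-language speech recognizer returning (transcript, confidence)."""
    return f"<transcript in {language}>", 0.5

def choose_transcript(audio: bytes, profile_languages: Dict[str, float], top_k: int = 2) -> str:
    """Recognize only the user's most likely languages and keep the highest-scoring result."""
    candidates = sorted(profile_languages, key=profile_languages.get, reverse=True)[:top_k]
    results = {lang: recognize(audio, lang) for lang in candidates}
    best = max(results, key=lambda lang: results[lang][1] * profile_languages[lang])
    return results[best][0]

# Example: a bilingual profile where Spanish is more likely than English.
print(choose_transcript(b"...", {"es-ES": 0.7, "en-US": 0.3}))
```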
-
Publication Number: US20240221737A1
Publication Date: 2024-07-04
Application Number: US18609542
Filing Date: 2024-03-19
Applicant: Google LLC
IPC: G10L15/20, G06F3/16, G10L15/22, G10L15/26, G10L17/00, G10L17/06, G10L21/034, G10L25/84, H03G3/30
CPC classification number: G10L15/20, G06F3/165, G06F3/167, G10L15/222, G10L17/06, G10L21/034, G10L25/84, H03G3/3005, G10L15/26, G10L17/00
Abstract: The technology described in this document can be embodied in a computer-implemented method that includes receiving, at a processing system, a first signal including an output of a speaker device and an additional audio signal. The method also includes determining, by the processing system, based at least in part on a model trained to identify the output of the speaker device, that the additional audio signal corresponds to an utterance of a user. The method further includes initiating a reduction in an audio output level of the speaker device based on determining that the additional audio signal corresponds to the utterance of the user.
-
Publication Number: US20240203426A1
Publication Date: 2024-06-20
Application Number: US18594833
Filing Date: 2024-03-04
Applicant: GOOGLE LLC
Inventor: Quan Wang, Prashant Sridhar, Ignacio Lopez Moreno, Hannah Muckenhirn
Abstract: Techniques are disclosed that enable processing of audio data to generate one or more refined versions of audio data, where each of the refined versions of audio data isolate one or more utterances of a single respective human speaker. Various implementations generate a refined version of audio data that isolates utterance(s) of a single human speaker by processing a spectrogram representation of the audio data (generated by processing the audio data with a frequency transformation) using a mask generated by processing the spectrogram of the audio data and a speaker embedding for the single human speaker using a trained voice filter model. Output generated over the trained voice filter model is processed using an inverse of the frequency transformation to generate the refined audio data.
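The sketch below mirrors the flow in this abstract: a frequency transform of the audio yields a spectrogram, a mask derived from that spectrogram and a speaker embedding is applied, and the inverse transform returns refined audio. SciPy's STFT is used here only as a generic frequency transform, and the mask predictor is a placeholder for the trained voice filter model.

```python
import numpy as np
from scipy.signal import stft, istft

def predict_mask(spectrogram: np.ndarray, speaker_embedding: np.ndarray) -> np.ndarray:
    """Placeholder for the trained voice filter model; a real model predicts a soft mask."""
    return np.ones_like(np.abs(spectrogram))

def refine_audio(audio: np.ndarray, speaker_embedding: np.ndarray, sample_rate: int = 16000) -> np.ndarray:
    """Isolate the target speaker by masking in the frequency domain and inverting the transform."""
    _, _, spectrogram = stft(audio, fs=sample_rate)
    masked = spectrogram * predict_mask(spectrogram, speaker_embedding)
    _, refined = istft(masked, fs=sample_rate)
    return refined
```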