-
Publication No.: US20240194191A1
Publication Date: 2024-06-13
Application No.: US18389033
Filing Date: 2023-11-13
Applicant: GOOGLE LLC
Inventor: Pu-sen Chao , Diego Melendo Casado , Ignacio Lopez Moreno
IPC: G10L15/14 , G06F3/16 , G10L15/00 , G10L15/02 , G10L15/08 , G10L15/18 , G10L15/183 , G10L15/22 , G10L15/30
CPC classification number: G10L15/14 , G06F3/167 , G10L15/005 , G10L15/02 , G10L15/1822 , G10L15/183 , G10L15/22 , G10L15/30 , G10L2015/088 , G10L2015/223 , G10L2015/228
Abstract: Implementations relate to determining a language for speech recognition of a spoken utterance, received via an automated assistant interface, for interacting with an automated assistant. Implementations can enable multilingual interaction with the automated assistant, without requiring a user to explicitly designate a language to be utilized for each interaction. Selection of a speech recognition model for a particular language can be based on one or more interaction characteristics exhibited during a dialog session between a user and an automated assistant. Such interaction characteristics can include anticipated user input types, anticipated user input durations, a duration for monitoring for a user response, and/or an actual duration of a provided user response.
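Since the abstract describes choosing a per-language speech recognition model from interaction characteristics of the dialog turn, a minimal Python sketch of such a selection step is given below. Every name, field, and the scoring heuristic are assumptions for illustration; the patent does not publish code.

```python
# Hypothetical sketch: choose which per-language speech recognition model to
# run from interaction characteristics of the current dialog turn. All names
# and the scoring heuristic are illustrative, not taken from the patent.
from dataclasses import dataclass

@dataclass
class InteractionCharacteristics:
    anticipated_input_type: str     # e.g. "confirmation" or "free_form"
    anticipated_duration_s: float   # expected length of the user's reply
    monitoring_duration_s: float    # how long the assistant listened for a reply
    actual_duration_s: float        # measured length of the reply

def select_language(chars: InteractionCharacteristics,
                    language_priors: dict[str, float],
                    prompt_language: str) -> str:
    """Pick the language whose model should transcribe this utterance."""
    scores = dict(language_priors)
    # Illustrative heuristic: a short reply to a yes/no prompt very likely
    # stays in the language of the prompt itself.
    if chars.anticipated_input_type == "confirmation" and chars.actual_duration_s < 1.5:
        scores[prompt_language] = scores.get(prompt_language, 0.0) + 0.5
    return max(scores, key=scores.get)

chars = InteractionCharacteristics("confirmation", 1.0, 3.0, 0.8)
print(select_language(chars, {"en-US": 0.4, "es-ES": 0.6}, prompt_language="en-US"))
```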
-
Publication No.: US20240038245A1
Publication Date: 2024-02-01
Application No.: US18485069
Filing Date: 2023-10-11
Applicant: Google LLC
Inventor: Georg Heigold , Samuel Bengio , Ignacio Lopez Moreno
Abstract: This document generally describes systems, methods, devices, and other techniques related to speaker verification, including (i) training a neural network for a speaker verification model, (ii) enrolling users at a client device, and (iii) verifying identities of users based on characteristics of the users' voices. Some implementations include a computer-implemented method. The method can include receiving, at a computing device, data that characterizes an utterance of a user of the computing device. A speaker representation can be generated, at the computing device, for the utterance using a neural network on the computing device. The neural network can be trained based on a plurality of training samples that each: (i) include data that characterizes a first utterance and data that characterizes one or more second utterances, and (ii) are labeled as a matching speakers sample or a non-matching speakers sample.
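The abstract describes generating an on-device speaker representation and training on samples labeled as matching or non-matching speaker pairs. As a hedged illustration of the verification step only, the sketch below compares an utterance embedding against an enrolled embedding; the embedding model, function names, and threshold are placeholders rather than the patent's method.

```python
# Hypothetical sketch of on-device speaker verification scoring: compare the
# embedding of a new utterance against an enrolled speaker representation.
# The embedding model and the acceptance threshold are placeholders.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(utterance_embedding: np.ndarray,
           enrolled_embedding: np.ndarray,
           threshold: float = 0.75) -> bool:
    """Accept the speaker if the utterance matches the enrolled voice."""
    return cosine_similarity(utterance_embedding, enrolled_embedding) >= threshold

# Training pairs would be labeled as matching / non-matching speaker samples.
rng = np.random.default_rng(0)
anchor, other = rng.normal(size=256), rng.normal(size=256)
print(verify(anchor, anchor))  # same speaker embedding -> True
print(verify(anchor, other))   # unrelated embedding -> likely False
```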
-
Publication No.: US20230395069A1
Publication Date: 2023-12-07
Application No.: US18236302
Filing Date: 2023-08-21
Applicant: GOOGLE LLC
Inventor: Ignacio Lopez Moreno , Luis Carlos Cobo Rus
CPC classification number: G10L15/20 , G10L15/30 , G10L15/02 , G10L15/22 , G10L21/0208
Abstract: Speaker diarization techniques that enable processing of audio data to generate one or more refined versions of the audio data, where each of the refined versions of the audio data isolates one or more utterances of a single respective human speaker. Various implementations generate a refined version of audio data that isolates utterance(s) of a single human speaker by generating a speaker embedding for the single human speaker, and processing the audio data using a trained generative model—and using the speaker embedding in determining activations for hidden layers of the trained generative model during the processing. Output is generated over the trained generative model based on the processing, and the output is the refined version of the audio data.
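The abstract describes using a speaker embedding to determine activations of a trained generative model's hidden layers so that its output isolates one speaker. A minimal sketch of one common way to condition activations on an embedding (a feature-wise scale-and-shift) is below; the layer sizes and modulation scheme are assumptions for illustration, not the patent's architecture.

```python
# Hypothetical sketch: condition a hidden layer's activations on a speaker
# embedding via a learned scale-and-shift modulation. This only illustrates
# "using the speaker embedding in determining activations for hidden layers";
# the actual generative model in the patent is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, embed_dim = 128, 64

# Placeholder learned projections from the speaker embedding to per-channel
# scale and shift parameters.
W_scale = rng.normal(scale=0.1, size=(embed_dim, hidden_dim))
W_shift = rng.normal(scale=0.1, size=(embed_dim, hidden_dim))

def condition_activations(hidden: np.ndarray, speaker_embedding: np.ndarray) -> np.ndarray:
    """Modulate hidden activations so the model focuses on one speaker."""
    scale = 1.0 + speaker_embedding @ W_scale   # per-channel gain
    shift = speaker_embedding @ W_shift         # per-channel bias
    return hidden * scale + shift

hidden = rng.normal(size=(10, hidden_dim))      # 10 time steps of activations
speaker_embedding = rng.normal(size=embed_dim)
print(condition_activations(hidden, speaker_embedding).shape)  # (10, 128)
```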
-
Publication No.: US11817084B2
Publication Date: 2023-11-14
Application No.: US16880647
Filing Date: 2020-05-21
Applicant: GOOGLE LLC
Inventor: Pu-sen Chao , Diego Melendo Casado , Ignacio Lopez Moreno
IPC: G10L15/14 , G10L15/02 , G10L15/18 , G06F3/16 , G10L15/00 , G10L15/183 , G10L15/22 , G10L15/30 , G10L15/08
CPC classification number: G10L15/14 , G06F3/167 , G10L15/005 , G10L15/02 , G10L15/183 , G10L15/1822 , G10L15/22 , G10L15/30 , G10L2015/088 , G10L2015/223 , G10L2015/228
Abstract: The present disclosure relates generally to determining a language for speech recognition of a spoken utterance, received via an automated assistant interface, for interacting with an automated assistant. The system can enable multilingual interaction with the automated assistant, without requiring a user to explicitly designate a language to be utilized for each interaction. Selection of a speech recognition model for a particular language can be based on one or more interaction characteristics exhibited during a dialog session between a user and an automated assistant. Such interaction characteristics can include anticipated user input types, anticipated user input durations, a duration for monitoring for a user response, and/or an actual duration of a provided user response.
-
Publication No.: US11798562B2
Publication Date: 2023-10-24
Application No.: US17302926
Filing Date: 2021-05-16
Applicant: Google LLC
Inventor: Ignacio Lopez Moreno , Quan Wang , Jason Pelecanos , Yiling Huang , Mert Saglam
IPC: G10L17/06 , G06N3/08 , G10L17/04 , G10L17/18 , G06F16/245
CPC classification number: G10L17/06 , G06F16/245 , G06N3/08 , G10L17/04 , G10L17/18
Abstract: A speaker verification method includes receiving audio data corresponding to an utterance and processing the audio data to generate an evaluation attentive d-vector (ad-vector) representing voice characteristics of the utterance, where the evaluation ad-vector includes n_e style classes each including a respective value vector concatenated with a corresponding routing vector. The method also includes generating, using a self-attention mechanism, at least one multi-condition attention score that indicates a likelihood that the evaluation ad-vector matches a respective reference ad-vector associated with a respective user. The method also includes identifying the speaker of the utterance as the respective user associated with the respective reference ad-vector based on the multi-condition attention score.
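The abstract's ad-vectors pair a value vector with a routing vector per style class and score an evaluation vector against a reference vector with self-attention. The rough sketch below shows one way an attention-weighted match score could be computed; the patented scoring function is defined in the claims, and every shape and operation here is an assumption.

```python
# Hypothetical sketch of an attention-weighted match score between an
# "evaluation" ad-vector and a "reference" ad-vector, each made of several
# style classes holding a value vector and a routing vector. The weighting
# scheme below only illustrates the idea, not the patented function.
import numpy as np

def attentive_score(eval_values, eval_routes, ref_values, ref_routes):
    """Weight per-class value similarities by routing-vector agreement."""
    logits = eval_routes @ ref_routes.T                    # (n_e, n_r)
    weights = np.exp(logits) / np.exp(logits).sum()        # attention weights
    ev = eval_values / np.linalg.norm(eval_values, axis=1, keepdims=True)
    rv = ref_values / np.linalg.norm(ref_values, axis=1, keepdims=True)
    sims = ev @ rv.T                                       # cosine similarities
    return float((weights * sims).sum())

rng = np.random.default_rng(0)
n_e, n_r, dim = 4, 4, 32
score = attentive_score(rng.normal(size=(n_e, dim)), rng.normal(size=(n_e, dim)),
                        rng.normal(size=(n_r, dim)), rng.normal(size=(n_r, dim)))
print(score)  # a higher score would indicate a likelier speaker match
```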
-
Publication No.: US11798541B2
Publication Date: 2023-10-24
Application No.: US17099367
Filing Date: 2020-11-16
Applicant: Google LLC
Inventor: Pu-sen Chao , Diego Melendo Casado , Ignacio Lopez Moreno
CPC classification number: G10L15/197 , G10L13/00 , G10L15/005 , G10L15/08 , G10L15/14 , G10L15/1822 , G10L15/22 , G10L15/30 , G10L2015/088 , G10L2015/223 , G10L2015/228
Abstract: Determining a language for speech recognition of a spoken utterance received via an automated assistant interface for interacting with an automated assistant. Implementations can enable multilingual interaction with the automated assistant, without requiring a user to explicitly designate a language to be utilized for each interaction. Implementations determine a user profile that corresponds to audio data that captures a spoken utterance, and utilize language(s), and optionally corresponding probabilities, assigned to the user profile in determining a language for speech recognition of the spoken utterance. Some implementations select only a subset of languages, assigned to the user profile, to utilize in speech recognition of a given spoken utterance of the user. Some implementations perform speech recognition in each of multiple languages assigned to the user profile, and utilize criteria to select only one of the speech recognitions as appropriate for generating and providing content that is responsive to the spoken utterance.
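Where the abstract describes running recognition in a subset of the user profile's languages and keeping a single result, a hedged Python sketch of that selection step could look like the following; the recognizer interface, confidence field, and confidence-times-prior criterion are all invented for illustration.

```python
# Hypothetical sketch: recognize an utterance in each of the top profile
# languages and keep the single best result. The recognize() callable and the
# selection criterion are placeholders, not the patent's method.
from typing import Callable

def pick_transcription(audio: bytes,
                       profile_languages: dict[str, float],
                       recognize: Callable[[bytes, str], tuple[str, float]],
                       top_n: int = 2) -> str:
    """Run recognition only for the top-N profile languages, keep the best."""
    candidates = sorted(profile_languages, key=profile_languages.get, reverse=True)[:top_n]
    best_text, best_score = "", float("-inf")
    for lang in candidates:
        text, confidence = recognize(audio, lang)        # per-language recognizer
        score = confidence * profile_languages[lang]     # weight by profile prior
        if score > best_score:
            best_text, best_score = text, score
    return best_text

# Toy recognizer standing in for real per-language models.
fake = lambda audio, lang: ("hola" if lang == "es-ES" else "hello",
                            0.9 if lang == "es-ES" else 0.4)
print(pick_transcription(b"...", {"es-ES": 0.6, "en-US": 0.4}, fake))
```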
-
Publication No.: US20230335116A1
Publication Date: 2023-10-19
Application No.: US18210963
Filing Date: 2023-06-16
Applicant: GOOGLE LLC
Inventor: Meltem Oktem , Taral Pradeep Joglekar , Fnu Heryandi , Pu-sen Chao , Ignacio Lopez Moreno , Salil Rajadhyaksha , Alexander H. Gruenstein , Diego Melendo Casado
CPC classification number: G10L15/08 , G06F16/636 , G06F21/32 , G06V40/10 , G10L15/07 , G10L15/22 , G10L17/00 , G10L17/06 , G10L2015/088 , G10L15/26
Abstract: In some implementations, processor(s) can receive an utterance from a speaker, and determine whether the speaker is a known user of a user device or not a known user of the user device. The user device can be shared by a plurality of known users. Further, the processor(s) can determine whether the utterance corresponds to a personal request or a non-personal request. Moreover, in response to determining that the speaker is not a known user of the user device and that the utterance corresponds to a non-personal request, the processor(s) can cause a response to the utterance to be provided for presentation to the speaker at the user device, or can cause an action to be performed by the user device responsive to the utterance.
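The gating described in the abstract (an unknown speaker is still served when the request is non-personal, while personal requests require a known user) can be summarized in a short control-flow sketch; the helper inputs below stand in for speaker identification and request classification, which the patent leaves to other components.

```python
# Hypothetical sketch of the gating logic in the abstract: unknown speakers
# are served for non-personal requests only. The boolean inputs stand in for
# separate speaker-identification and request-classification steps.

def handle_utterance(utterance: str, is_known_user: bool, is_personal: bool) -> str:
    if is_personal and not is_known_user:
        return "I can only do that for a known user of this device."
    return f"Responding to: {utterance}"

print(handle_utterance("what's the weather", is_known_user=False, is_personal=False))
print(handle_utterance("read my messages", is_known_user=False, is_personal=True))
```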
-
Publication No.: US20220351713A1
Publication Date: 2022-11-03
Application No.: US17813361
Filing Date: 2022-07-19
Applicant: Google LLC
Inventor: Ye Jia , Zhifeng Chen , Yonghui Wu , Jonathan Shen , Ruoming Pang , Ron J. Weiss , Ignacio Lopez Moreno , Fei Ren , Yu Zhang , Quan Wang , Patrick An Phu Nguyen
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech synthesis. The methods, systems, and apparatus include actions of obtaining an audio representation of speech of a target speaker, obtaining input text for which speech is to be synthesized in a voice of the target speaker, generating a speaker vector by providing the audio representation to a speaker encoder engine that is trained to distinguish speakers from one another, generating an audio representation of the input text spoken in the voice of the target speaker by providing the input text and the speaker vector to a spectrogram generation engine that is trained using voices of reference speakers to generate audio representations, and providing the audio representation of the input text spoken in the voice of the target speaker for output.
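The pipeline in the abstract (a speaker encoder produces a speaker vector that conditions a spectrogram generator on the input text) can be outlined as below. The classes are stand-ins with dummy bodies, since the patent describes an architecture rather than an API.

```python
# Hypothetical outline of the synthesis pipeline in the abstract: a speaker
# encoder turns reference audio into a speaker vector, and a spectrogram
# generator conditions on that vector plus input text. All classes here are
# illustrative stand-ins with placeholder math, not an actual Google API.
import numpy as np

class SpeakerEncoder:
    def embed(self, reference_audio: np.ndarray) -> np.ndarray:
        # A trained encoder would map audio to a fixed-size speaker vector.
        return np.tanh(reference_audio[:256]) if reference_audio.size >= 256 else np.zeros(256)

class SpectrogramGenerator:
    def synthesize(self, text: str, speaker_vector: np.ndarray) -> np.ndarray:
        # A trained generator would emit a mel spectrogram in the target voice.
        frames = max(len(text), 1)
        return np.zeros((frames, 80)) + speaker_vector[:80]

reference_audio = np.random.default_rng(0).normal(size=16000)  # ~1 s of audio
speaker_vector = SpeakerEncoder().embed(reference_audio)
mel = SpectrogramGenerator().synthesize("Hello there", speaker_vector)
print(mel.shape)  # (frames, mel_bins); a vocoder would turn this into audio
```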
-
Publication No.: US11488575B2
Publication Date: 2022-11-01
Application No.: US17055951
Filing Date: 2019-05-17
Applicant: Google LLC
Inventor: Ye Jia , Zhifeng Chen , Yonghui Wu , Jonathan Shen , Ruoming Pang , Ron J. Weiss , Ignacio Lopez Moreno , Fei Ren , Yu Zhang , Quan Wang , Patrick Nguyen
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech synthesis. The methods, systems, and apparatus include actions of obtaining an audio representation of speech of a target speaker, obtaining input text for which speech is to be synthesized in a voice of the target speaker, generating a speaker vector by providing the audio representation to a speaker encoder engine that is trained to distinguish speakers from one another, generating an audio representation of the input text spoken in the voice of the target speaker by providing the input text and the speaker vector to a spectrogram generation engine that is trained using voices of reference speakers to generate audio representations, and providing the audio representation of the input text spoken in the voice of the target speaker for output.
-
Publication No.: US20220328035A1
Publication Date: 2022-10-13
Application No.: US17846287
Filing Date: 2022-06-22
Applicant: Google LLC
Inventor: Li Wan , Yang Yu , Prashant Sridhar , Ignacio Lopez Moreno , Quan Wang
IPC: G10L15/00
Abstract: Methods and systems for training and/or using a language selection model for use in determining a particular language of a spoken utterance captured in audio data. Features of the audio data can be processed using the trained language selection model to generate a predicted probability for each of N different languages, and a particular language selected based on the generated probabilities. Speech recognition results for the particular language can be utilized responsive to selecting the particular language of the spoken utterance. Many implementations are directed to training the language selection model utilizing tuple losses in lieu of traditional cross-entropy losses. Training the language selection model utilizing the tuple losses can result in more efficient training and/or can result in a more accurate and/or robust model—thereby mitigating erroneous language selections for spoken utterances.
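The abstract contrasts tuple losses with traditional cross-entropy for training the language selection model. As a hedged illustration, one pairwise form of such a loss (comparing the correct language against each other candidate in the tuple rather than against all N languages at once) could be computed as follows; the exact loss used in the patent may differ from this sketch.

```python
# Hypothetical sketch of a pairwise "tuple" loss for language identification:
# instead of a softmax cross-entropy over all N languages, the correct
# language is compared against each other candidate language in the tuple.
# The exact formulation in the patent may differ from this illustration.
import numpy as np

def tuple_loss(logits: np.ndarray, correct_idx: int, candidate_idxs: list[int]) -> float:
    """Average pairwise log-loss of the correct language vs. each other candidate."""
    losses = []
    for idx in candidate_idxs:
        if idx == correct_idx:
            continue
        pair = np.array([logits[correct_idx], logits[idx]])
        pair -= pair.max()                        # numerical stability
        p_correct = np.exp(pair[0]) / np.exp(pair).sum()
        losses.append(-np.log(p_correct))
    return float(np.mean(losses))

logits = np.array([2.0, 0.5, -1.0, 0.1])          # model scores for 4 languages
print(tuple_loss(logits, correct_idx=0, candidate_idxs=[0, 1, 3]))
```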