-
Publication No.: US11854533B2
Publication Date: 2023-12-26
Application No.: US17587424
Filing Date: 2022-01-28
Applicant: GOOGLE LLC
Inventor: Ignacio Lopez Moreno , Quan Wang , Jason Pelecanos , Li Wan , Alexander Gruenstein , Hakan Erdogan
IPC: G10L15/16 , G10L15/06 , G10L15/07 , G10L15/20 , G10L17/04 , G10L17/20 , G10L21/0208 , G10L15/08
CPC classification number: G10L15/063 , G10L15/07 , G10L15/20 , G10L17/04 , G10L17/20 , G10L21/0208 , G10L2015/088
Abstract: Techniques disclosed herein enable training and/or utilizing speaker dependent (SD) speech models which are personalizable to any user of a client device. Various implementations include personalizing an SD speech model for a target user by processing, using the SD speech model, a speaker embedding corresponding to the target user along with an instance of audio data. The SD speech model can be personalized for an additional target user by processing, using the SD speech model, an additional speaker embedding, corresponding to the additional target user, along with another instance of audio data. Additional or alternative implementations include training the SD speech model based on a speaker independent speech model using teacher-student learning.
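The conditioning mechanism this abstract describes — a single shared model that behaves differently per user because a per-user speaker embedding is processed alongside the audio — can be sketched as follows. All names, dimensions, and the concatenation scheme are hypothetical illustrations, not the patented implementation.

```python
import math
import random

random.seed(0)

EMB_DIM = 4   # speaker-embedding size (hypothetical)
FEAT_DIM = 6  # per-frame audio feature size (hypothetical)

# One shared weight matrix: the model weights are speaker independent;
# personalization comes only from the embedding appended to the input.
W = [[random.uniform(-0.1, 0.1) for _ in range(FEAT_DIM + EMB_DIM)]
     for _ in range(2)]  # 2 outputs, e.g. "target speaker" vs "other"

def sd_speech_model(frame, speaker_embedding):
    """Score one audio frame, conditioned on a speaker embedding."""
    x = frame + speaker_embedding  # concatenate along the feature axis
    logits = [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]  # softmax probabilities

frame = [0.5] * FEAT_DIM
user_a = [0.9, -0.2, 0.1, 0.4]   # embedding for one target user
user_b = [-0.7, 0.3, 0.8, -0.1]  # embedding for an additional target user

# Same model, same audio, different speaker embedding -> different output:
print(sd_speech_model(frame, user_a))
print(sd_speech_model(frame, user_b))
```

Swapping in a different user's embedding is all that "personalizing for an additional target user" requires under this scheme; no per-user weights are stored.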
-
Publication No.: US11646011B2
Publication Date: 2023-05-09
Application No.: US17846287
Filing Date: 2022-06-22
Applicant: Google LLC
Inventor: Li Wan , Yang Yu , Prashant Sridhar , Ignacio Lopez Moreno , Quan Wang
IPC: G10L15/00
CPC classification number: G10L15/005
Abstract: Methods and systems for training and/or using a language selection model for use in determining a particular language of a spoken utterance captured in audio data. Features of the audio data can be processed using the trained language selection model to generate a predicted probability for each of N different languages, and a particular language selected based on the generated probabilities. Speech recognition results for the particular language can be utilized responsive to selecting the particular language of the spoken utterance. Many implementations are directed to training the language selection model utilizing tuple losses in lieu of traditional cross-entropy losses. Training the language selection model utilizing the tuple losses can result in more efficient training and/or can result in a more accurate and/or robust model—thereby mitigating erroneous language selections for spoken utterances.
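One way a "tuple loss" can differ from standard cross-entropy is by computing the loss over a sampled tuple of candidate languages that contains the target, rather than over all N languages at once. That reading, and everything below, is an illustrative guess, not the patent's definition.

```python
import math
import random

random.seed(1)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, target):
    """Standard loss over all N language scores."""
    return -math.log(softmax(logits)[target])

def tuple_loss(logits, target, tuple_size=3):
    """Cross-entropy restricted to a sampled tuple of languages that
    always includes the target (hypothetical tuple-loss variant)."""
    others = [i for i in range(len(logits)) if i != target]
    tup = [target] + random.sample(others, tuple_size - 1)
    sub_logits = [logits[i] for i in tup]
    return -math.log(softmax(sub_logits)[0])  # target sits at tuple index 0

logits = [2.0, 0.5, -1.0, 0.3, 1.2]  # scores for N = 5 candidate languages
print(cross_entropy(logits, target=0))
print(tuple_loss(logits, target=0))
```

Restricting each update to a small tuple of confusable candidates is one plausible route to the "more efficient training" the abstract claims.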
-
Publication No.: US11594230B2
Publication Date: 2023-02-28
Application No.: US17307704
Filing Date: 2021-05-04
Applicant: Google LLC
Inventor: Ignacio Lopez Moreno , Li Wan , Quan Wang
Abstract: Methods, systems, apparatus, including computer programs encoded on computer storage medium, to facilitate language independent-speaker verification. In one aspect, a method includes actions of receiving, by a user device, audio data representing an utterance of a user. Other actions may include providing, to a neural network stored on the user device, input data derived from the audio data and a language identifier. The neural network may be trained using speech data representing speech in different languages or dialects. The method may include additional actions of generating, based on output of the neural network, a speaker representation and determining, based on the speaker representation and a second representation, that the utterance is an utterance of the user. The method may provide the user with access to the user device based on determining that the utterance is an utterance of the user.
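The final verification step — comparing the speaker representation produced by the network against a second, enrolled representation — is commonly a cosine-similarity threshold. The sketch below assumes the network has already mapped audio features plus a language identifier to a fixed-size vector; the vectors, threshold, and function names are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two speaker representations."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(speaker_rep, enrolled_rep, threshold=0.7):
    """Grant device access when the utterance's representation
    matches the enrolled user's representation closely enough."""
    return cosine(speaker_rep, enrolled_rep) >= threshold

enrolled = [0.6, 0.8, 0.0]          # stored during enrollment
same_user = [0.55, 0.83, 0.05]      # new utterance, same speaker
other_user = [-0.7, 0.1, 0.7]       # new utterance, different speaker

print(verify(same_user, enrolled))   # True
print(verify(other_user, enrolled))  # False
```

Because language is an explicit input to the network, the representations being compared are intended to stay comparable even when enrollment and verification happen in different languages or dialects.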
-
Publication No.: US20230015169A1
Publication Date: 2023-01-19
Application No.: US17933164
Filing Date: 2022-09-19
Applicant: Google LLC
Inventor: Yeming Fang , Quan Wang , Pedro Jose Moreno Mengibar , Ignacio Lopez Moreno , Gang Feng , Fang Chu , Jin Shi , Jason William Pelecanos
IPC: G10L17/06
Abstract: A method of generating an accurate speaker representation for an audio sample includes receiving a first audio sample from a first speaker and a second audio sample from a second speaker. The method includes dividing a respective audio sample into a plurality of audio slices. The method also includes, based on the plurality of slices, generating a set of candidate acoustic embeddings where each candidate acoustic embedding includes a vector representation of acoustic features. The method further includes removing a subset of the candidate acoustic embeddings from the set of candidate acoustic embeddings. The method additionally includes generating an aggregate acoustic embedding from the remaining candidate acoustic embeddings in the set of candidate acoustic embeddings after removing the subset of the candidate acoustic embeddings.
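The slice-filter-aggregate pipeline above can be sketched as: embed each slice, drop the candidates that sit farthest from the centroid (e.g. slices where a second speaker bled in), and average what remains. The distance metric, drop fraction, and toy embeddings are assumptions for illustration.

```python
import math

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def aggregate_embedding(candidates, drop_fraction=0.25):
    """Remove the subset of candidate embeddings farthest from the
    centroid, then average the remaining candidates."""
    centroid = mean(candidates)
    ranked = sorted(candidates, key=lambda c: l2(c, centroid))
    keep = ranked[: max(1, int(len(ranked) * (1 - drop_fraction)))]
    return mean(keep)

# Four slice embeddings from speaker 1; the last slice is contaminated
# by a second speaker and should be filtered out.
candidates = [[1.0, 0.1], [0.9, 0.0], [1.1, 0.2], [-0.8, 0.9]]
print(aggregate_embedding(candidates))
```

With the outlier removed, the aggregate lands near the true speaker's cluster rather than being dragged toward the contaminated slice.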
-
Publication No.: US11545157B2
Publication Date: 2023-01-03
Application No.: US16617219
Filing Date: 2019-04-15
Applicant: Google LLC
Inventor: Quan Wang , Yash Sheth , Ignacio Lopez Moreno , Li Wan
Abstract: Techniques are described for training and/or utilizing an end-to-end speaker diarization model. In various implementations, the model is a recurrent neural network (RNN) model, such as an RNN model that includes at least one memory layer, such as a long short-term memory (LSTM) layer. Audio features of audio data can be applied as input to an end-to-end speaker diarization model trained according to implementations disclosed herein, and the model utilized to process the audio features to generate, as direct output over the model, speaker diarization results. Further, the end-to-end speaker diarization model can be a sequence-to-sequence model, where the sequence can have variable length. Accordingly, the model can be utilized to generate speaker diarization results for any of various length audio segments.
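The sequence-to-sequence property — audio features in, one speaker label per frame out, for any input length — can be illustrated with a bare recurrent cell. A real implementation would use trained LSTM layers; the vanilla RNN, random weights, and dimensions here are stand-ins.

```python
import math
import random

random.seed(2)

FEAT, HID, SPK = 3, 4, 2  # feature dim, hidden dim, max speakers (hypothetical)

Wx = [[random.uniform(-0.5, 0.5) for _ in range(FEAT)] for _ in range(HID)]
Wh = [[random.uniform(-0.5, 0.5) for _ in range(HID)] for _ in range(HID)]
Wo = [[random.uniform(-0.5, 0.5) for _ in range(HID)] for _ in range(SPK)]

def diarize(frames):
    """Map a variable-length feature sequence directly to per-frame
    speaker labels: sequence in, sequence out, no fixed length."""
    h = [0.0] * HID
    labels = []
    for x in frames:
        h = [math.tanh(sum(Wx[i][j] * x[j] for j in range(FEAT)) +
                       sum(Wh[i][j] * h[j] for j in range(HID)))
             for i in range(HID)]
        logits = [sum(Wo[k][i] * h[i] for i in range(HID)) for k in range(SPK)]
        labels.append(max(range(SPK), key=lambda k: logits[k]))
    return labels

short_clip = [[0.1, 0.2, 0.3]] * 5
long_clip = [[0.3, -0.1, 0.4]] * 12
print(diarize(short_clip))       # one speaker label per frame
print(len(diarize(long_clip)))   # same model handles a longer segment
```

Because the recurrence carries state frame to frame, diarization results come as direct model output with no separate clustering stage.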
-
Publication No.: US20220157298A1
Publication Date: 2022-05-19
Application No.: US17587424
Filing Date: 2022-01-28
Applicant: GOOGLE LLC
Inventor: Ignacio Lopez Moreno , Quan Wang , Jason Pelecanos , Li Wan , Alexander Gruenstein , Hakan Erdogan
Abstract: Techniques disclosed herein enable training and/or utilizing speaker dependent (SD) speech models which are personalizable to any user of a client device. Various implementations include personalizing an SD speech model for a target user by processing, using the SD speech model, a speaker embedding corresponding to the target user along with an instance of audio data. The SD speech model can be personalized for an additional target user by processing, using the SD speech model, an additional speaker embedding, corresponding to the additional target user, along with another instance of audio data. Additional or alternative implementations include training the SD speech model based on a speaker independent speech model using teacher-student learning.
-
Publication No.: US20210312907A1
Publication Date: 2021-10-07
Application No.: US17251163
Filing Date: 2019-12-04
Applicant: GOOGLE LLC
Inventor: Ignacio Lopez Moreno , Quan Wang , Jason Pelecanos , Li Wan , Alexander Gruenstein , Hakan Erdogan
Abstract: Techniques disclosed herein enable training and/or utilizing speaker dependent (SD) speech models which are personalizable to any user of a client device. Various implementations include personalizing an SD speech model for a target user by processing, using the SD speech model, a speaker embedding corresponding to the target user along with an instance of audio data. The SD speech model can be personalized for an additional target user by processing, using the SD speech model, an additional speaker embedding, corresponding to the additional target user, along with another instance of audio data. Additional or alternative implementations include training the SD speech model based on a speaker independent speech model using teacher-student learning.
-
Publication No.: US20200335083A1
Publication Date: 2020-10-22
Application No.: US16959037
Filing Date: 2019-11-27
Applicant: Google LLC
Inventor: Li Wan , Yang Yu , Prashant Sridhar , Ignacio Lopez Moreno , Quan Wang
IPC: G10L15/00
Abstract: Methods and systems for training and/or using a language selection model for use in determining a particular language of a spoken utterance captured in audio data. Features of the audio data can be processed using the trained language selection model to generate a predicted probability for each of N different languages, and a particular language selected based on the generated probabilities. Speech recognition results for the particular language can be utilized responsive to selecting the particular language of the spoken utterance. Many implementations are directed to training the language selection model utilizing tuple losses in lieu of traditional cross-entropy losses. Training the language selection model utilizing the tuple losses can result in more efficient training and/or can result in a more accurate and/or robust model—thereby mitigating erroneous language selections for spoken utterances.
-
Publication No.: US20250078840A1
Publication Date: 2025-03-06
Application No.: US18812338
Filing Date: 2024-08-22
Applicant: Google LLC
Inventor: Pai Zhu , Beltrán Labrador Serrano , Guanlong Zhao , Angelo Alfredo Scorza Scarpati , Quan Wang , Alex Seungryong Park , Ignacio Lopez Moreno
Abstract: A method includes receiving audio data corresponding to an utterance spoken by a particular user and captured in streaming audio by a user device. The method also includes performing speaker identification on the audio data to identify an identity of the particular user that spoke the utterance. The method also includes obtaining a keyword detection model personalized for the particular user based on the identity of the particular user that spoke the utterance. The keyword detection model is conditioned on speaker characteristic information associated with the particular user to adapt the keyword detection model to detect a presence of a keyword in audio for the particular user. The method also includes determining that the utterance includes the keyword using the keyword detection model personalized for the particular user.
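The two-stage flow above — first identify who is speaking, then run a keyword detector conditioned on that speaker's characteristic information — can be sketched as below. The similarity measure, the conditioning-by-concatenation scheme, and all names and values are hypothetical.

```python
def identify_speaker(utterance_embedding, enrolled):
    """Speaker identification: pick the enrolled user whose stored
    embedding best matches the utterance (dot-product similarity)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(enrolled, key=lambda user: dot(enrolled[user], utterance_embedding))

def keyword_score(audio_features, speaker_characteristics, weights):
    """Keyword detector conditioned on per-speaker characteristics:
    the conditioning vector is appended to the audio features."""
    x = audio_features + speaker_characteristics
    return sum(w * v for w, v in zip(weights, x))

enrolled = {"alice": [0.9, 0.1], "bob": [0.1, 0.9]}          # hypothetical users
characteristics = {"alice": [0.5, -0.2], "bob": [-0.3, 0.7]}  # per-user conditioning
weights = [0.2, 0.4, 1.0, 1.0]

who = identify_speaker([0.8, 0.2], enrolled)
score = keyword_score([0.6, 0.3], characteristics[who], weights)
detected = score > 0.5
print(who, detected)  # alice True
```

The same detector weights serve every user; only the conditioning vector fetched after speaker identification changes, which is what adapts keyword detection to the particular speaker.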
-
Publication No.: US12159622B2
Publication Date: 2024-12-03
Application No.: US18078476
Filing Date: 2022-12-09
Applicant: GOOGLE LLC
Inventor: Pu-sen Chao , Diego Melendo Casado , Ignacio Lopez Moreno , Quan Wang
Abstract: Text independent speaker recognition models can be utilized by an automated assistant to verify a particular user spoke a spoken utterance and/or to identify the user who spoke a spoken utterance. Implementations can include automatically updating a speaker embedding for a particular user based on previous utterances by the particular user. Additionally or alternatively, implementations can include verifying a particular user spoke a spoken utterance using output generated by both a text independent speaker recognition model as well as a text dependent speaker recognition model. Furthermore, implementations can additionally or alternatively include prefetching content for several users associated with a spoken utterance prior to determining which user spoke the spoken utterance.
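Two of the pieces above lend themselves to short sketches: automatically folding each verified utterance into the stored speaker embedding, and fusing text-dependent and text-independent scores into one verification decision. The exponential-moving-average update rule, fusion weights, and threshold are illustrative assumptions.

```python
def update_speaker_embedding(stored, new, alpha=0.9):
    """Exponential moving average: fold a verified utterance's embedding
    into the stored profile (hypothetical update rule)."""
    return [alpha * s + (1 - alpha) * n for s, n in zip(stored, new)]

def combined_verification(td_score, ti_score, w_td=0.5, threshold=0.6):
    """Fuse a text-dependent (hotword) score with a text-independent
    (free-form query) score into one accept/reject decision."""
    return w_td * td_score + (1 - w_td) * ti_score >= threshold

stored = [0.4, 0.6]
after = update_speaker_embedding(stored, [0.8, 0.2])
print(after)                            # nudged toward the new utterance
print(combined_verification(0.9, 0.5))  # True: fused score 0.7 >= 0.6
```

Updating the embedding over time keeps the profile current as a user's voice or recording conditions drift, while score fusion lets a weak hotword match be rescued (or rejected) by the longer query audio.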