-
Publication Number: US20240135934A1
Publication Date: 2024-04-25
Application Number: US18483492
Application Date: 2023-10-09
Applicant: Google LLC
Inventor: Guanlong Zhao , Quan Wang , Han Lu , Yiling Huang , Jason Pelecanos
Abstract: A method includes obtaining a multi-utterance training sample that includes audio data characterizing utterances spoken by two or more different speakers and obtaining ground-truth speaker change intervals indicating time intervals in the audio data where speaker changes among the two or more different speakers occur. The method also includes processing the audio data to generate a sequence of predicted speaker change tokens using a sequence transduction model. For each corresponding predicted speaker change token, the method includes labeling the corresponding predicted speaker change token as correct when the predicted speaker change token overlaps with one of the ground-truth speaker change intervals. The method also includes determining a precision metric of the sequence transduction model based on a number of the predicted speaker change tokens labeled as correct and a total number of the predicted speaker change tokens in the sequence of predicted speaker change tokens.
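A minimal sketch of the precision metric this abstract describes, not the patent's implementation: a predicted speaker-change token is labeled correct when its timestamp overlaps one of the ground-truth speaker-change intervals, and precision is the number of correct tokens over the total number of predicted tokens. The interval and timestamp formats below are assumptions.

```python
from typing import List, Tuple


def speaker_change_precision(
    predicted_times: List[float],
    ground_truth_intervals: List[Tuple[float, float]],
) -> float:
    """Precision of predicted speaker-change tokens against ground-truth intervals."""
    if not predicted_times:
        return 0.0
    correct = 0
    for t in predicted_times:
        # Label the token correct if it falls inside any ground-truth interval.
        if any(start <= t <= end for start, end in ground_truth_intervals):
            correct += 1
    return correct / len(predicted_times)


if __name__ == "__main__":
    # Hypothetical example: three predicted change points, two ground-truth intervals.
    preds = [1.9, 4.5, 7.2]
    truth = [(1.8, 2.2), (7.0, 7.5)]
    print(speaker_change_precision(preds, truth))  # -> 0.666...
```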
-
Publication Number: US20240029742A1
Publication Date: 2024-01-25
Application Number: US18479615
Application Date: 2023-10-02
Applicant: Google LLC
Inventor: Ignacio Lopez Moreno , Quan Wang , Jason Pelecanos , Yiling Huang , Mert Saglam
IPC: G10L17/06 , G06F16/245 , G06N3/08 , G10L17/04 , G10L17/18
CPC classification number: G10L17/06 , G06F16/245 , G06N3/08 , G10L17/04 , G10L17/18
Abstract: A speaker verification method includes receiving audio data corresponding to an utterance and processing the audio data to generate an evaluation attentive d-vector (ad-vector) representing voice characteristics of the utterance, where the evaluation ad-vector includes a number of style classes, each including a respective value vector concatenated with a corresponding routing vector. The method also includes generating, using a self-attention mechanism, at least one multi-condition attention score that indicates a likelihood that the evaluation ad-vector matches a respective reference ad-vector associated with a respective user. The method also includes identifying the speaker of the utterance as the respective user associated with the respective reference ad-vector based on the multi-condition attention score.
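A hypothetical sketch of scoring an evaluation ad-vector against a reference ad-vector along the lines the abstract describes: each ad-vector carries several style classes, each a value vector paired with a routing vector, and an attention-style mechanism yields a single multi-condition score. The weighting scheme (routing-similarity softmax over value-vector cosine similarities) is an illustrative choice, not the patent's exact method.

```python
import numpy as np


def multi_condition_attention_score(eval_values, eval_routes, ref_values, ref_routes):
    """Score how well the evaluation ad-vector matches the reference ad-vector.

    eval_values, ref_values: (num_styles, value_dim) arrays of value vectors.
    eval_routes, ref_routes: (num_styles, route_dim) arrays of routing vectors.
    """
    # Attention weights from routing-vector similarity (softmax over all style-class pairs).
    logits = np.einsum("id,jd->ij", eval_routes, ref_routes) / np.sqrt(eval_routes.shape[-1])
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Cosine similarity between value vectors for every pair of style classes.
    ev = eval_values / np.linalg.norm(eval_values, axis=-1, keepdims=True)
    rv = ref_values / np.linalg.norm(ref_values, axis=-1, keepdims=True)
    sims = ev @ rv.T

    # Multi-condition score: attention-weighted similarity, compared to a threshold downstream.
    return float((weights * sims).sum())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    score = multi_condition_attention_score(
        rng.normal(size=(4, 64)), rng.normal(size=(4, 16)),
        rng.normal(size=(4, 64)), rng.normal(size=(4, 16)),
    )
    print(score)
```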
-
Publication Number: US20230260521A1
Publication Date: 2023-08-17
Application Number: US18167815
Application Date: 2023-02-10
Applicant: Google LLC
Inventor: Alanna Foster Slocum , Yiling Huang , Shelly Bensal , Quan Wang
Abstract: A method includes obtaining a speaker identification (SID) model trained to predict speaker embeddings from utterances spoken by different speakers, where the SID model includes a trained audio encoder and a trained SID head. The method also includes receiving a plurality of synthetic speech detection (SSD) training utterances that include a set of human-originated speech samples and a set of synthetic speech samples. The method also includes training, using the trained audio encoder, an SSD head on the SSD training utterances to learn to detect the presence of synthetic speech in audio encodings encoded by the trained audio encoder. The method also includes providing, for execution on a computing device, a multitask neural network model for performing both SID tasks and SSD tasks on input audio data in parallel.
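A minimal PyTorch sketch of the multitask arrangement the abstract describes, under assumed module sizes and interfaces: a shared, already-trained audio encoder feeds both the trained SID head and a newly trained SSD head, so SID and SSD run on the same input in parallel, and only the SSD head's parameters are updated.

```python
import torch
import torch.nn as nn


class MultitaskSidSsdModel(nn.Module):
    def __init__(self, encoder: nn.Module, sid_head: nn.Module, ssd_head: nn.Module):
        super().__init__()
        self.encoder = encoder      # trained audio encoder (kept frozen below)
        self.sid_head = sid_head    # trained SID head producing speaker embeddings
        self.ssd_head = ssd_head    # SSD head trained to detect synthetic speech

    def forward(self, audio_features: torch.Tensor):
        encoding = self.encoder(audio_features)
        speaker_embedding = self.sid_head(encoding)   # SID task
        synthetic_logit = self.ssd_head(encoding)     # SSD task, in parallel
        return speaker_embedding, synthetic_logit


def train_ssd_head(model: MultitaskSidSsdModel, loader, epochs: int = 1, lr: float = 1e-3):
    """Train only the SSD head on human vs. synthetic speech samples (illustrative loop)."""
    for p in model.encoder.parameters():
        p.requires_grad_(False)
    for p in model.sid_head.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(model.ssd_head.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for features, is_synthetic in loader:  # is_synthetic: 1.0 synthetic, 0.0 human
            _, logit = model(features)
            loss = loss_fn(logit.squeeze(-1), is_synthetic)
            opt.zero_grad()
            loss.backward()
            opt.step()
```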
-
Publication Number: US11688404B2
Publication Date: 2023-06-27
Application Number: US17303283
Application Date: 2021-05-26
Applicant: Google LLC
Inventor: Chong Wang , Aonan Zhang , Quan Wang , Zhenyao Zhu
CPC classification number: G10L17/04 , G10L15/04 , G10L15/075 , G10L15/26 , G10L17/00 , G10L17/02 , G10L17/18
Abstract: A method includes receiving an utterance of speech and segmenting the utterance of speech into a plurality of segments. For each segment of the utterance of speech, the method also includes extracting a speaker-discriminative embedding from the segment and predicting a probability distribution over possible speakers for the segment using a probabilistic generative model configured to receive the extracted speaker-discriminative embedding as a feature input. The probabilistic generative model is trained on a corpus of training speech utterances, each segmented into a plurality of training segments, with each training segment including a corresponding speaker-discriminative embedding and a corresponding speaker label. The method also includes assigning a speaker label to each segment of the utterance of speech based on the probability distribution over possible speakers for the corresponding segment.
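An illustrative sketch only, not the patent's probabilistic generative model: a toy per-speaker Gaussian fit on labeled training embeddings stands in for the generative model, each segment embedding gets a probability distribution over speakers, and the segment is assigned the most probable speaker.

```python
import numpy as np


class SimpleGenerativeSpeakerModel:
    """Toy stand-in: one spherical Gaussian per speaker, fit from labeled training embeddings."""

    def fit(self, embeddings: np.ndarray, labels: list):
        self.means = {
            s: embeddings[[l == s for l in labels]].mean(axis=0) for s in sorted(set(labels))
        }
        return self

    def predict_proba(self, embedding: np.ndarray) -> dict:
        # Unnormalized Gaussian likelihood per speaker, normalized into a distribution.
        scores = {
            s: np.exp(-0.5 * np.sum((embedding - mu) ** 2)) for s, mu in self.means.items()
        }
        total = sum(scores.values())
        return {s: v / total for s, v in scores.items()}


def diarize(segment_embeddings: np.ndarray, model: SimpleGenerativeSpeakerModel) -> list:
    """Assign a speaker label to each segment from its predicted speaker distribution."""
    labels = []
    for emb in segment_embeddings:
        probs = model.predict_proba(emb)
        labels.append(max(probs, key=probs.get))
    return labels
```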
-
Publication Number: US20230113617A1
Publication Date: 2023-04-13
Application Number: US18078476
Application Date: 2022-12-09
Applicant: GOOGLE LLC
Inventor: Pu-sen Chao , Diego Melendo Casado , Ignacio Lopez Moreno , Quan Wang
Abstract: Text independent speaker recognition models can be utilized by an automated assistant to verify a particular user spoke a spoken utterance and/or to identify the user who spoke a spoken utterance. Implementations can include automatically updating a speaker embedding for a particular user based on previous utterances by the particular user. Additionally or alternatively, implementations can include verifying a particular user spoke a spoken utterance using output generated by both a text independent speaker recognition model as well as a text dependent speaker recognition model. Furthermore, implementations can additionally or alternatively include prefetching content for several users associated with a spoken utterance prior to determining which user spoke the spoken utterance.
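A hedged sketch of two ideas from this abstract, with all names, weights, and thresholds as illustrative assumptions rather than the patent's specified implementation: fusing text-dependent (TD) and text-independent (TI) speaker recognition scores to verify a speaker, and prefetching content for the top candidate users before the final identification is resolved.

```python
def verify_speaker(td_score: float, ti_score: float,
                   td_weight: float = 0.5, threshold: float = 0.7) -> bool:
    """Accept the claimed speaker when the fused TD/TI score clears a threshold."""
    fused = td_weight * td_score + (1.0 - td_weight) * ti_score
    return fused >= threshold


def prefetch_then_resolve(candidate_scores: dict, fetch_fn, top_k: int = 2):
    """Prefetch content for the top-k candidate speakers before deciding who spoke."""
    top = sorted(candidate_scores, key=candidate_scores.get, reverse=True)[:top_k]
    prefetched = {user: fetch_fn(user) for user in top}  # fetched ahead of identification
    best = max(candidate_scores, key=candidate_scores.get)
    return prefetched.get(best)


if __name__ == "__main__":
    # Hypothetical scores: the TD model scored the hotword, the TI model the full query.
    print(verify_speaker(td_score=0.82, ti_score=0.74))  # -> True
```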
-
Publication Number: US11527235B2
Publication Date: 2022-12-13
Application Number: US17046994
Application Date: 2019-12-02
Applicant: Google LLC
Inventor: Pu-sen Chao , Diego Melendo Casado , Ignacio Lopez Moreno , Quan Wang
Abstract: Text independent speaker recognition models can be utilized by an automated assistant to verify a particular user spoke a spoken utterance and/or to identify the user who spoke a spoken utterance. Implementations can include automatically updating a speaker embedding for a particular user based on previous utterances by the particular user. Additionally or alternatively, implementations can include verifying a particular user spoke a spoken utterance using output generated by both a text independent speaker recognition model as well as a text dependent speaker recognition model. Furthermore, implementations can additionally or alternatively include prefetching content for several users associated with a spoken utterance prior to determining which user spoke the spoken utterance.
-
Publication Number: US20220122611A1
Publication Date: 2022-04-21
Application Number: US17567590
Application Date: 2022-01-03
Applicant: GOOGLE LLC
Inventor: Quan Wang , Prashant Sridhar , Ignacio Lopez Moreno , Hannah Muckenhirn
Abstract: Techniques are disclosed that enable processing of audio data to generate one or more refined versions of audio data, where each of the refined versions of audio data isolates one or more utterances of a single respective human speaker. Various implementations generate a refined version of audio data that isolates utterance(s) of a single human speaker by processing a spectrogram representation of the audio data (generated by processing the audio data with a frequency transformation) using a mask generated by processing the spectrogram of the audio data and a speaker embedding for the single human speaker using a trained voice filter model. Output generated over the trained voice filter model is processed using an inverse of the frequency transformation to generate the refined audio data.
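A minimal sketch of the processing flow this abstract describes, assuming an STFT as the frequency transformation and treating the trained voice filter model as a placeholder callable with an assumed interface: compute the spectrogram, predict a mask from the spectrogram and the target speaker's embedding, apply the mask, and invert the transform to obtain the refined audio.

```python
import numpy as np
from scipy.signal import stft, istft


def isolate_speaker(audio: np.ndarray, sample_rate: int,
                    speaker_embedding: np.ndarray, voice_filter_model) -> np.ndarray:
    # Frequency transformation: audio -> complex spectrogram (STFT settings are illustrative).
    _, _, spectrogram = stft(audio, fs=sample_rate, nperseg=512)

    # Trained voice filter model predicts a soft mask from the magnitude spectrogram
    # and the target speaker's embedding (placeholder call, assumed interface).
    mask = voice_filter_model(np.abs(spectrogram), speaker_embedding)

    # Apply the mask and invert the frequency transformation to get refined audio.
    _, refined_audio = istft(mask * spectrogram, fs=sample_rate, nperseg=512)
    return refined_audio
```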
-
Publication Number: US11238847B2
Publication Date: 2022-02-01
Application Number: US17251163
Application Date: 2019-12-04
Applicant: GOOGLE LLC
Inventor: Ignacio Lopez Moreno , Quan Wang , Jason Pelecanos , Li Wan , Alexander Gruenstein , Hakan Erdogan
IPC: G10L17/00 , G10L15/06 , G10L15/07 , G10L15/20 , G10L17/04 , G10L17/20 , G10L21/0208 , G10L15/08
Abstract: Techniques disclosed herein enable training and/or utilizing speaker dependent (SD) speech models which are personalizable to any user of a client device. Various implementations include personalizing a SD speech model for a target user by processing, using the SD speech model, a speaker embedding corresponding to the target user along with an instance of audio data. The SD speech model can be personalized for an additional target user by processing, using the SD speech model, an additional speaker embedding, corresponding to the additional target user, along with another instance of audio data. Additional or alternative implementations include training the SD speech model based on a speaker independent speech model using teacher student learning.
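An illustrative PyTorch sketch of the two ideas in this abstract, with sizes, losses, and names as assumptions: a speaker-dependent (SD) model personalized by conditioning on whichever speaker embedding it is given, and a teacher-student step in which a speaker-independent (SI) teacher supervises the SD student.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpeakerConditionedModel(nn.Module):
    """SD speech model: the same weights serve any user via the speaker embedding input."""

    def __init__(self, feature_dim: int = 80, embedding_dim: int = 256, num_outputs: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim + embedding_dim, 256), nn.ReLU(),
            nn.Linear(256, num_outputs),
        )

    def forward(self, audio_features: torch.Tensor, speaker_embedding: torch.Tensor):
        # Personalization: concatenate the target user's embedding with the audio features.
        return self.net(torch.cat([audio_features, speaker_embedding], dim=-1))


def teacher_student_step(student, teacher, audio_features, speaker_embedding, optimizer):
    """One distillation step: the SI teacher's outputs supervise the SD student (assumed setup)."""
    with torch.no_grad():
        teacher_logits = teacher(audio_features)  # SI teacher sees audio features only
    student_logits = student(audio_features, speaker_embedding)
    loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                    F.softmax(teacher_logits, dim=-1), reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```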
-
Publication Number: US11217254B2
Publication Date: 2022-01-04
Application Number: US16598172
Application Date: 2019-10-10
Applicant: Google LLC
Inventor: Quan Wang , Prashant Sridhar , Ignacio Lopez Moreno , Hannah Muckenhirn
Abstract: Techniques are disclosed that enable processing of audio data to generate one or more refined versions of audio data, where each of the refined versions of audio data isolates one or more utterances of a single respective human speaker. Various implementations generate a refined version of audio data that isolates utterance(s) of a single human speaker by processing a spectrogram representation of the audio data (generated by processing the audio data with a frequency transformation) using a mask generated by processing the spectrogram of the audio data and a speaker embedding for the single human speaker using a trained voice filter model. Output generated over the trained voice filter model is processed using an inverse of the frequency transformation to generate the refined audio data.
-
Publication Number: US20210217404A1
Publication Date: 2021-07-15
Application Number: US17055951
Application Date: 2019-05-17
Applicant: Google LLC
Inventor: Ye Jia , Zhifeng Chen , Yonghui Wu , Jonathan Shen , Ruoming Pang , Ron J. Weiss , Ignacio Lopez Moreno , Fei Ren , Yu Zhang , Quan Wang , Patrick Nguyen
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech synthesis. The methods, systems, and apparatus include actions of obtaining an audio representation of speech of a target speaker, obtaining input text for which speech is to be synthesized in a voice of the target speaker, generating a speaker vector by providing the audio representation to a speaker encoder engine that is trained to distinguish speakers from one another, generating an audio representation of the input text spoken in the voice of the target speaker by providing the input text and the speaker vector to a spectrogram generation engine that is trained using voices of reference speakers to generate audio representations, and providing the audio representation of the input text spoken in the voice of the target speaker for output.
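A minimal sketch of the pipeline this abstract outlines, with all model interfaces as placeholders: a speaker encoder maps a reference audio sample of the target speaker to a speaker vector, and a spectrogram generation engine maps the input text plus that speaker vector to an audio representation of the text spoken in the target speaker's voice.

```python
import numpy as np


def synthesize_in_target_voice(reference_audio: np.ndarray, input_text: str,
                               speaker_encoder, spectrogram_generator, vocoder=None):
    # Speaker vector from the target speaker's reference audio (assumed encoder interface).
    speaker_vector = speaker_encoder(reference_audio)

    # Audio representation (e.g., a spectrogram) of the input text in the target voice.
    spectrogram = spectrogram_generator(input_text, speaker_vector)

    # Optionally convert the spectrogram to a waveform before providing it for output.
    return vocoder(spectrogram) if vocoder is not None else spectrogram
```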