-
Publication No.: US20240203426A1
Publication Date: 2024-06-20
Application No.: US18594833
Application Date: 2024-03-04
Applicant: GOOGLE LLC
Inventor: Quan Wang , Prashant Sridhar , Ignacio Lopez Moreno , Hannah Muckenhirn
Abstract: Techniques are disclosed that enable processing of audio data to generate one or more refined versions of audio data, where each of the refined versions of audio data isolates one or more utterances of a single respective human speaker. Various implementations generate a refined version of audio data that isolates utterance(s) of a single human speaker by processing a spectrogram representation of the audio data (generated by processing the audio data with a frequency transformation) using a mask generated by processing the spectrogram of the audio data and a speaker embedding for the single human speaker using a trained voice filter model. Output generated over the trained voice filter model is processed using an inverse of the frequency transformation to generate the refined audio data.
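The abstract describes a mask-based pipeline: a frequency transformation produces a spectrogram, a trained voice filter model turns that spectrogram plus a speaker embedding into a mask, and the inverse transformation recovers the refined audio. The Python sketch below illustrates that flow under stated assumptions; `voice_filter_model` and `speaker_embedding` are hypothetical stand-ins for the trained model and the enrolled speaker representation, and the STFT parameters are arbitrary choices, not values from the patent.

```python
# A minimal sketch of the mask-based filtering pipeline described in the abstract.
# `voice_filter_model` and `speaker_embedding` are hypothetical placeholders, not a real API.
import numpy as np
from scipy.signal import stft, istft

def refine_audio(audio, speaker_embedding, voice_filter_model,
                 sample_rate=16000, nperseg=400):
    # Frequency transformation: spectrogram representation of the mixed audio.
    _, _, spectrogram = stft(audio, fs=sample_rate, nperseg=nperseg)
    magnitude, phase = np.abs(spectrogram), np.angle(spectrogram)

    # The trained voice filter model consumes the spectrogram and the speaker
    # embedding and emits a soft mask of the same shape (values in [0, 1]).
    mask = voice_filter_model(magnitude, speaker_embedding)

    # Apply the mask, reuse the mixture phase, and invert the frequency transformation.
    refined_spectrogram = (mask * magnitude) * np.exp(1j * phase)
    _, refined_audio = istft(refined_spectrogram, fs=sample_rate, nperseg=nperseg)
    return refined_audio
```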
-
Publication No.: US11922951B2
Publication Date: 2024-03-05
Application No.: US17567590
Application Date: 2022-01-03
Applicant: GOOGLE LLC
Inventor: Quan Wang , Prashant Sridhar , Ignacio Lopez Moreno , Hannah Muckenhirn
Abstract: Techniques are disclosed that enable processing of audio data to generate one or more refined versions of audio data, where each of the refined versions of audio data isolates one or more utterances of a single respective human speaker. Various implementations generate a refined version of audio data that isolates utterance(s) of a single human speaker by processing a spectrogram representation of the audio data (generated by processing the audio data with a frequency transformation) using a mask generated by processing the spectrogram of the audio data and a speaker embedding for the single human speaker using a trained voice filter model. Output generated over the trained voice filter model is processed using an inverse of the frequency transformation to generate the refined audio data.
-
Publication No.: US11646011B2
Publication Date: 2023-05-09
Application No.: US17846287
Application Date: 2022-06-22
Applicant: Google LLC
Inventor: Li Wan , Yang Yu , Prashant Sridhar , Ignacio Lopez Moreno , Quan Wang
IPC: G10L15/00
CPC classification number: G10L15/005
Abstract: Methods and systems for training and/or using a language selection model for use in determining a particular language of a spoken utterance captured in audio data. Features of the audio data can be processed using the trained language selection model to generate a predicted probability for each of N different languages, and a particular language selected based on the generated probabilities. Speech recognition results for the particular language can be utilized responsive to selecting the particular language of the spoken utterance. Many implementations are directed to training the language selection model utilizing tuple losses in lieu of traditional cross-entropy losses. Training the language selection model utilizing the tuple losses can result in more efficient training and/or can result in a more accurate and/or robust model—thereby mitigating erroneous language selections for spoken utterances.
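The abstract outlines inference with the language selection model: audio features are scored into one predicted probability per candidate language, a single language is selected from those probabilities, and that language's speech recognition result is used. A minimal sketch follows, assuming hypothetical placeholders for the model, the features, and the per-language recognition results.

```python
# A minimal sketch of language selection over N candidate languages.
# `language_model`, `audio_features`, and `recognition_results` are hypothetical placeholders.
import numpy as np

def select_language(audio_features, language_model, languages, recognition_results):
    # The trained language selection model yields one score per candidate language.
    logits = language_model(audio_features)                  # shape: (N,)
    probabilities = np.exp(logits) / np.sum(np.exp(logits))  # softmax over N languages

    # Pick the most probable language and use its speech recognition hypothesis.
    selected = languages[int(np.argmax(probabilities))]
    return selected, recognition_results[selected]
```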
-
Publication No.: US20200335083A1
Publication Date: 2020-10-22
Application No.: US16959037
Application Date: 2019-11-27
Applicant: Google LLC
Inventor: Li Wan , Yang Yu , Prashant Sridhar , Ignacio Lopez Moreno , Quan Wang
IPC: G10L15/00
Abstract: Methods and systems for training and/or using a language selection model for use in determining a particular language of a spoken utterance captured in audio data. Features of the audio data can be processed using the trained language selection model to generate a predicted probability for each of N different languages, and a particular language selected based on the generated probabilities. Speech recognition results for the particular language can be utilized responsive to selecting the particular language of the spoken utterance. Many implementations are directed to training the language selection model utilizing tuple losses in lieu of traditional cross-entropy losses. Training the language selection model utilizing the tuple losses can result in more efficient training and/or can result in a more accurate and/or robust model—thereby mitigating erroneous language selections for spoken utterances.
-
Publication No.: US20220122611A1
Publication Date: 2022-04-21
Application No.: US17567590
Application Date: 2022-01-03
Applicant: GOOGLE LLC
Inventor: Quan Wang , Prashant Sridhar , Ignacio Lopez Moreno , Hannah Muckenhirn
Abstract: Techniques are disclosed that enable processing of audio data to generate one or more refined versions of audio data, where each of the refined versions of audio data isolates one or more utterances of a single respective human speaker. Various implementations generate a refined version of audio data that isolates utterance(s) of a single human speaker by processing a spectrogram representation of the audio data (generated by processing the audio data with a frequency transformation) using a mask generated by processing the spectrogram of the audio data and a speaker embedding for the single human speaker using a trained voice filter model. Output generated over the trained voice filter model is processed using an inverse of the frequency transformation to generate the refined audio data.
-
Publication No.: US11217254B2
Publication Date: 2022-01-04
Application No.: US16598172
Application Date: 2019-10-10
Applicant: Google LLC
Inventor: Quan Wang , Prashant Sridhar , Ignacio Lopez Moreno , Hannah Muckenhirn
Abstract: Techniques are disclosed that enable processing of audio data to generate one or more refined versions of audio data, where each of the refined versions of audio data isolates one or more utterances of a single respective human speaker. Various implementations generate a refined version of audio data that isolates utterance(s) of a single human speaker by processing a spectrogram representation of the audio data (generated by processing the audio data with a frequency transformation) using a mask generated by processing the spectrogram of the audio data and a speaker embedding for the single human speaker using a trained voice filter model. Output generated over the trained voice filter model is processed using an inverse of the frequency transformation to generate the refined audio data.
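The abstracts in this family do not spell out the training objective for the voice filter model. One plausible objective, sketched below purely as an assumption, is an L2 loss between the masked (refined) magnitude spectrogram and the clean target-speaker spectrogram; the function and argument names are illustrative only.

```python
# A hedged sketch of one plausible training objective for a voice filter model:
# L2 distance between the masked mixture spectrogram and the clean target spectrogram.
# The patent abstract does not specify the actual loss; this is illustrative only.
import numpy as np

def spectrogram_masking_loss(noisy_magnitude, clean_magnitude, mask):
    # Refined estimate obtained by applying the predicted mask to the mixture.
    refined_magnitude = mask * noisy_magnitude
    return float(np.mean((refined_magnitude - clean_magnitude) ** 2))
```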
-
Publication No.: US20220328035A1
Publication Date: 2022-10-13
Application No.: US17846287
Application Date: 2022-06-22
Applicant: Google LLC
Inventor: Li Wan , Yang Yu , Prashant Sridhar , Ignacio Lopez Moreno , Quan Wang
IPC: G10L15/00
Abstract: Methods and systems for training and/or using a language selection model for use in determining a particular language of a spoken utterance captured in audio data. Features of the audio data can be processed using the trained language selection model to generate a predicted probability for each of N different languages, and a particular language selected based on the generated probabilities. Speech recognition results for the particular language can be utilized responsive to selecting the particular language of the spoken utterance. Many implementations are directed to training the language selection model utilizing tuple losses in lieu of traditional cross-entropy losses. Training the language selection model utilizing the tuple losses can result in more efficient training and/or can result in a more accurate and/or robust model—thereby mitigating erroneous language selections for spoken utterances.
-
Publication No.: US11410641B2
Publication Date: 2022-08-09
Application No.: US16959037
Application Date: 2019-11-27
Applicant: Google LLC
Inventor: Li Wan , Yang Yu , Prashant Sridhar , Ignacio Lopez Moreno , Quan Wang
IPC: G10L15/00
Abstract: Methods and systems for training and/or using a language selection model for use in determining a particular language of a spoken utterance captured in audio data. Features of the audio data can be processed using the trained language selection model to generate a predicted probability for each of N different languages, and a particular language selected based on the generated probabilities. Speech recognition results for the particular language can be utilized responsive to selecting the particular language of the spoken utterance. Many implementations are directed to training the language selection model utilizing tuple losses in lieu of traditional cross-entropy losses. Training the language selection model utilizing the tuple losses can result in more efficient training and/or can result in a more accurate and/or robust model—thereby mitigating erroneous language selections for spoken utterances.
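The abstract contrasts tuple losses with traditional cross-entropy. One illustrative reading, sketched below, restricts the comparison to a tuple of candidate languages (for example, the languages enabled for a particular user) and averages pairwise softmax losses between the target language and each other language in the tuple. The patent's exact formulation may differ; every name here is a hypothetical placeholder.

```python
# An illustrative tuple loss: rather than a softmax cross-entropy over all N languages,
# average pairwise losses between the target language and each competing language in the
# candidate tuple. This is a hedged approximation, not the patent's exact definition.
import numpy as np

def tuple_loss(logits, target_index, tuple_indices):
    # `tuple_indices` must contain the target and at least one competing language.
    losses = []
    for k in tuple_indices:
        if k == target_index:
            continue
        # Pairwise softmax between the target language and one competing language.
        pair = np.array([logits[target_index], logits[k]])
        pair_probs = np.exp(pair) / np.sum(np.exp(pair))
        losses.append(-np.log(pair_probs[0]))
    return float(np.mean(losses))
```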