-
Publication No.: US11468900B2
Publication Date: 2022-10-11
Application No.: US17071223
Filing Date: 2020-10-15
Applicant: Google LLC
Inventor: Yeming Fang , Quan Wang , Pedro Jose Moreno Mengibar , Ignacio Lopez Moreno , Gang Feng , Fang Chu , Jin Shi , Jason William Pelecanos
Abstract: A method of generating an accurate speaker representation for an audio sample includes receiving a first audio sample from a first speaker and a second audio sample from a second speaker. The method includes dividing a respective audio sample into a plurality of audio slices. The method also includes, based on the plurality of slices, generating a set of candidate acoustic embeddings where each candidate acoustic embedding includes a vector representation of acoustic features. The method further includes removing a subset of the candidate acoustic embeddings from the set of candidate acoustic embeddings. The method additionally includes generating an aggregate acoustic embedding from the remaining candidate acoustic embeddings in the set of candidate acoustic embeddings after removing the subset of the candidate acoustic embeddings.
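The abstract does not specify the removal criterion or the aggregation rule; a minimal Python sketch, assuming the candidates least similar to the centroid (by cosine similarity) form the removed subset and the remainder is mean-pooled, with `keep_ratio` as an illustrative parameter:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def aggregate_embedding(candidates, keep_ratio=0.75):
    """Drop the candidate embeddings least similar to the centroid,
    then average the remaining candidates into one aggregate embedding."""
    dim = len(candidates[0])
    centroid = [sum(v[i] for v in candidates) / len(candidates) for i in range(dim)]
    ranked = sorted(candidates, key=lambda v: cosine_sim(v, centroid), reverse=True)
    kept = ranked[:max(1, int(len(ranked) * keep_ratio))]
    return [sum(v[i] for v in kept) / len(kept) for i in range(dim)]
```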
-
Publication No.: US20220122614A1
Publication Date: 2022-04-21
Application No.: US17076743
Filing Date: 2020-10-21
Applicant: Google LLC
Inventor: Jason Pelecanos , Pu-sen Chao , Yiling Huang , Quan Wang
Abstract: A method for evaluating a verification model includes receiving a first and a second set of verification results where each verification result indicates whether a primary model or an alternative model verifies an identity of a user as a registered user. The method further includes identifying each verification result in the first and second sets that includes a performance metric. The method also includes determining a first score of the primary model based on a number of the verification results identified in the first set that includes the performance metric and determining a second score of the alternative model based on a number of the verification results identified in the second set that includes the performance metric. The method further includes determining whether a verification capability of the alternative model is better than a verification capability of the primary model based on the first score and the second score.
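A toy sketch of the scoring comparison, assuming the tracked performance metric flags verification errors so that a lower rate is better; the metric name `"mismatch"` and the result-dict layout are illustrative assumptions, not taken from the source:

```python
def score_model(results, metric="mismatch"):
    """Fraction of a model's verification results flagged with the metric."""
    flagged = sum(1 for r in results if metric in r["metrics"])
    return flagged / len(results)

def alternative_is_better(primary_results, alternative_results, metric="mismatch"):
    """Compare verification capability via the two scores; here the metric is
    assumed to mark errors, so a lower rate indicates the better model."""
    return score_model(alternative_results, metric) < score_model(primary_results, metric)
```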
-
Publication No.: US20210280197A1
Publication Date: 2021-09-09
Application No.: US17303283
Filing Date: 2021-05-26
Applicant: Google LLC
Inventor: Chong Wang , Aonan Zhang , Quan Wang , Zhenyao Zhu
Abstract: A method includes receiving an utterance of speech and segmenting the utterance of speech into a plurality of segments. For each segment of the utterance of speech, the method also includes extracting a speaker-discriminative embedding from the segment and predicting a probability distribution over possible speakers for the segment using a probabilistic generative model configured to receive the extracted speaker-discriminative embedding as a feature input. The probabilistic generative model is trained on a corpus of training speech utterances each segmented into a plurality of training segments. Each training segment includes a corresponding speaker-discriminative embedding and a corresponding speaker label. The method also includes assigning a speaker label to each segment of the utterance of speech based on the probability distribution over possible speakers for the corresponding segment.
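The final assignment step can be sketched as an argmax over each segment's predicted distribution; the probabilistic generative model itself is not reproduced here, only the label-assignment rule:

```python
def assign_speaker_labels(segment_distributions, speakers):
    """For each segment, assign the speaker with the highest predicted
    probability under the model's distribution over possible speakers."""
    return [speakers[max(range(len(dist)), key=lambda i: dist[i])]
            for dist in segment_distributions]
```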
-
Publication No.: US20210043191A1
Publication Date: 2021-02-11
Application No.: US17046994
Filing Date: 2019-12-02
Applicant: Google LLC
Inventor: Pu-sen Chao , Diego Melendo Casado , Ignacio Lopez Moreno , Quan Wang
Abstract: Text independent speaker recognition models can be utilized by an automated assistant to verify a particular user spoke a spoken utterance and/or to identify the user who spoke a spoken utterance. Implementations can include automatically updating a speaker embedding for a particular user based on previous utterances by the particular user. Additionally or alternatively, implementations can include verifying a particular user spoke a spoken utterance using output generated by both a text independent speaker recognition model as well as a text dependent speaker recognition model. Furthermore, implementations can additionally or alternatively include prefetching content for several users associated with a spoken utterance prior to determining which user spoke the spoken utterance.
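One plausible form of the automatic speaker-embedding update is an exponential moving average over embeddings of the user's previous utterances; the blending `weight` is an illustrative assumption, not a value from the source:

```python
def update_speaker_embedding(stored, new_embedding, weight=0.1):
    """Blend the stored speaker embedding with the embedding of a newly
    verified utterance (exponential moving average update)."""
    return [(1.0 - weight) * s + weight * n for s, n in zip(stored, new_embedding)]
```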
-
Publication No.: US20250029624A1
Publication Date: 2025-01-23
Application No.: US18906761
Filing Date: 2024-10-04
Applicant: Google LLC
Inventor: Arun Narayanan , Tom O'Malley , Quan Wang , Alex Park , James Walker , Nathan David Howard , Yanzhang He , Chung-Cheng Chiu
IPC: G10L21/0216 , G06N3/04 , G10L15/06 , G10L21/0208 , H04R3/04
Abstract: A method for automatic speech recognition using joint acoustic echo cancellation, speech enhancement, and voice separation includes receiving, at a contextual frontend processing model, input speech features corresponding to a target utterance. The method also includes receiving, at the contextual frontend processing model, at least one of a reference audio signal, a contextual noise signal including noise prior to the target utterance, or a speaker embedding including voice characteristics of a target speaker that spoke the target utterance. The method further includes processing, using the contextual frontend processing model, the input speech features and the at least one of the reference audio signal, the contextual noise signal, or the speaker embedding vector to generate enhanced speech features.
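A minimal sketch of a frontend that consumes the optional contextual inputs. The real contextual frontend processing model combines these signals with learned layers; this illustration simply appends whichever signals are present to every feature frame, to show the input contract:

```python
def contextual_frontend(speech_features, reference=None, noise_context=None,
                        speaker_embedding=None):
    """Combine per-frame input speech features with whichever contextual
    signals (reference audio, prior noise, speaker embedding) are available."""
    context = []
    for signal in (reference, noise_context, speaker_embedding):
        if signal is not None:
            context.extend(signal)
    return [frame + context for frame in speech_features]
```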
-
Publication No.: US12175963B2
Publication Date: 2024-12-24
Application No.: US18525475
Filing Date: 2023-11-30
Applicant: Google LLC
Inventor: Ye Jia , Zhifeng Chen , Yonghui Wu , Jonathan Shen , Ruoming Pang , Ron J. Weiss , Ignacio Lopez Moreno , Fei Ren , Yu Zhang , Quan Wang , Patrick An Phu Nguyen
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech synthesis. The methods, systems, and apparatus include actions of obtaining an audio representation of speech of a target speaker, obtaining input text for which speech is to be synthesized in a voice of the target speaker, generating a speaker vector by providing the audio representation to a speaker encoder engine that is trained to distinguish speakers from one another, generating an audio representation of the input text spoken in the voice of the target speaker by providing the input text and the speaker vector to a spectrogram generation engine that is trained using voices of reference speakers to generate audio representations, and providing the audio representation of the input text spoken in the voice of the target speaker for output.
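The pipeline structure (a speaker encoder feeding a spectrogram generation engine) can be sketched as plain function composition; the encoder and generator passed in below are stand-ins for the trained networks:

```python
def synthesize(input_text, reference_audio, speaker_encoder, spectrogram_generator):
    """Derive a speaker vector from reference audio of the target speaker,
    then condition spectrogram generation on that vector."""
    speaker_vector = speaker_encoder(reference_audio)  # trained to distinguish speakers
    return spectrogram_generator(input_text, speaker_vector)
```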
-
Publication No.: US20240079013A1
Publication Date: 2024-03-07
Application No.: US18506105
Filing Date: 2023-11-09
Applicant: Google LLC
Inventor: Jason Pelecanos , Pu-sen Chao , Yiling Huang , Quan Wang
Abstract: A method for evaluating a verification model includes receiving a first and a second set of verification results where each verification result indicates whether a primary model or an alternative model verifies an identity of a user as a registered user. The method further includes identifying each verification result in the first and second sets that includes a performance metric. The method also includes determining a first score of the primary model based on a number of the verification results identified in the first set that includes the performance metric and determining a second score of the alternative model based on a number of the verification results identified in the second set that includes the performance metric. The method further includes determining whether a verification capability of the alternative model is better than a verification capability of the primary model based on the first score and the second score.
-
Publication No.: US11798562B2
Publication Date: 2023-10-24
Application No.: US17302926
Filing Date: 2021-05-16
Applicant: Google LLC
Inventor: Ignacio Lopez Moreno , Quan Wang , Jason Pelecanos , Yiling Huang , Mert Saglam
IPC: G10L17/06 , G06N3/08 , G10L17/04 , G10L17/18 , G06F16/245
CPC classification number: G10L17/06 , G06F16/245 , G06N3/08 , G10L17/04 , G10L17/18
Abstract: A speaker verification method includes receiving audio data corresponding to an utterance and processing the audio data to generate an evaluation attentive d-vector (ad-vector) representing voice characteristics of the utterance, where the evaluation ad-vector includes n_e style classes, each including a respective value vector concatenated with a corresponding routing vector. The method also includes generating, using a self-attention mechanism, at least one multi-condition attention score that indicates a likelihood that the evaluation ad-vector matches a respective reference ad-vector associated with a respective user. The method also includes identifying the speaker of the utterance as the respective user associated with the respective reference ad-vector based on the multi-condition attention score.
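A toy version of the multi-condition attention score, assuming per-class cosine similarity between value vectors pooled with softmax attention weights derived from the routing vectors; the exact scoring function is defined in the patent claims, not the abstract, so this is only a sketch of the shape of the computation:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def multi_condition_score(evaluation, reference):
    """evaluation/reference: one (value_vector, routing_vector) pair per style
    class. Per-class value similarities are pooled with attention weights
    computed from routing-vector affinities."""
    sims = [cosine(ve, vr) for (ve, _), (vr, _) in zip(evaluation, reference)]
    logits = [sum(x * y for x, y in zip(re, rr))
              for (_, re), (_, rr) in zip(evaluation, reference)]
    weights = softmax(logits)
    return sum(w * s for w, s in zip(weights, sims))
```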
-
Publication No.: US11776563B2
Publication Date: 2023-10-03
Application No.: US18045168
Filing Date: 2022-10-09
Applicant: Google LLC
Inventor: Quan Wang
CPC classification number: G10L25/93 , G10L13/00 , G10L15/063 , G10L21/02 , G10L21/0208 , G10L25/30 , G10L2021/02082
Abstract: A method includes receiving an overlapped audio signal that includes audio spoken by a speaker that overlaps a segment of synthesized playback audio. The method also includes encoding a sequence of characters that correspond to the synthesized playback audio into a text embedding representation. For each character in the sequence of characters, the method also includes generating a respective cancelation probability using the text embedding representation. The cancelation probability indicates a likelihood that the corresponding character is associated with the segment of the synthesized playback audio overlapped by the audio spoken by the speaker in the overlapped audio signal.
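One simple way to realize a per-character cancelation probability is a sigmoid over the affinity between each character's text embedding and an embedding of the overlapped segment; the embedding shapes here are illustrative, not taken from the source:

```python
import math

def cancelation_probabilities(char_embeddings, overlap_embedding):
    """For each character embedding, the probability that the character is
    associated with the overlapped segment of synthesized playback audio."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    return [sigmoid(sum(c * o for c, o in zip(ce, overlap_embedding)))
            for ce in char_embeddings]
```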
-
Publication No.: US20230298609A1
Publication Date: 2023-09-21
Application No.: US18171368
Filing Date: 2023-02-19
Applicant: Google LLC
Inventor: Tom O'Malley , Quan Wang , Arun Narayanan
IPC: G10L21/0208 , G10L15/06
CPC classification number: G10L21/0208 , G10L15/063 , G10L2021/02082
Abstract: A method for training a generalized automatic speech recognition model for joint acoustic echo cancellation, speech enhancement, and voice separation includes receiving a plurality of training utterances paired with corresponding training contextual signals. The training contextual signals include a training contextual noise signal including noise prior to the corresponding training utterance, a training reference audio signal, and a training speaker vector including voice characteristics of a target speaker that spoke the corresponding training utterance. The method also includes training, using a contextual signal dropout strategy, a contextual frontend processing model on the training utterances to learn how to predict enhanced speech features. Here, the contextual signal dropout strategy uses a predetermined probability to drop out each of the training contextual signals during training of the contextual frontend processing model.
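The contextual signal dropout strategy itself is straightforward to sketch: each contextual signal is independently zeroed out with a fixed probability during training, so the model learns to cope with missing context. The signal names and zero-fill convention below are illustrative assumptions:

```python
import random

def dropout_contextual_signals(signals, p=0.3, rng=random):
    """Independently zero out each contextual signal with probability p
    (the predetermined dropout probability used during training)."""
    return {name: [0.0] * len(sig) if rng.random() < p else sig
            for name, sig in signals.items()}
```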
-