-
Publication No.: US20220301573A1
Publication Date: 2022-09-22
Application No.: US17619648
Filing Date: 2019-10-10
Applicant: GOOGLE LLC
Inventor: Quan Wang , Ignacio Lopez Moreno , Li Wan
IPC: G10L21/028 , G10L17/04 , G10L17/18 , G10L17/02 , G10L21/0232
Abstract: Processing of acoustic features of audio data to generate one or more revised versions of the acoustic features, where each of the revised versions of the acoustic features isolates one or more utterances of a single respective human speaker. Various implementations generate the acoustic features by processing audio data using portion(s) of an automatic speech recognition system. Various implementations generate the revised acoustic features by processing the acoustic features using a mask generated by processing the acoustic features and a speaker embedding for the single human speaker using a trained voice filter model. Output generated over the trained voice filter model is processed using the automatic speech recognition system to generate a predicted text representation of the utterance(s) of the single human speaker without reconstructing the audio data.
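A minimal sketch of the claimed feature-domain masking flow, in which a trained voice filter model predicts a soft mask from the acoustic features and a target-speaker embedding, and the masked features are passed directly to the recognizer. The `voice_filter` and `asr_decoder` callables are hypothetical stand-ins, not the patent's implementation:

```python
# VoiceFilter-style masking over ASR acoustic features (illustrative sketch).
import numpy as np

def isolate_speaker(features: np.ndarray, speaker_embedding: np.ndarray,
                    voice_filter, asr_decoder) -> str:
    """features: (frames, mel_bins) log-mel features from the ASR front end;
    speaker_embedding: (d,) embedding for the single target speaker."""
    # The trained voice filter model predicts a soft mask in [0, 1] per
    # time-frequency bin, conditioned on the target speaker's embedding.
    mask = voice_filter(features, speaker_embedding)   # (frames, mel_bins)
    revised = features * mask                          # revised acoustic features
    # The revised features feed straight into the recognizer; the audio
    # waveform is never reconstructed.
    return asr_decoder(revised)
```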
-
Publication No.: US20220122612A1
Publication Date: 2022-04-21
Application No.: US17071223
Filing Date: 2020-10-15
Applicant: Google LLC
Inventor: Yeming Fang , Quan Wang , Pedro Jose Moreno Mengibar , Ignacio Lopez Moreno , Gang Feng , Fang Chu , Jin Shi , Jason William Pelecanos
IPC: G10L17/06
Abstract: A method of generating an accurate speaker representation for an audio sample includes receiving a first audio sample from a first speaker and a second audio sample from a second speaker. The method includes dividing a respective audio sample into a plurality of audio slices. The method also includes, based on the plurality of slices, generating a set of candidate acoustic embeddings where each candidate acoustic embedding includes a vector representation of acoustic features. The method further includes removing a subset of the candidate acoustic embeddings from the set of candidate acoustic embeddings. The method additionally includes generating an aggregate acoustic embedding from the remaining candidate acoustic embeddings in the set of candidate acoustic embeddings after removing the subset of the candidate acoustic embeddings.
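One plausible reading of the claimed slice-embed-filter-aggregate steps, sketched below. The `embed` encoder is a hypothetical per-slice embedding model, and distance from the centroid is an assumed removal criterion for the discarded subset:

```python
# Outlier-robust aggregation of per-slice acoustic embeddings (sketch).
import numpy as np

def aggregate_embedding(audio: np.ndarray, slice_len: int, embed,
                        drop_fraction: float = 0.2) -> np.ndarray:
    # Divide the audio sample into fixed-length slices.
    slices = [audio[i:i + slice_len]
              for i in range(0, len(audio) - slice_len + 1, slice_len)]
    candidates = np.stack([embed(s) for s in slices])   # (n, d) candidate set
    # Remove the subset of candidates farthest from the centroid.
    centroid = candidates.mean(axis=0)
    dists = np.linalg.norm(candidates - centroid, axis=1)
    keep = dists.argsort()[:max(1, int(len(slices) * (1 - drop_fraction)))]
    # Aggregate the remaining candidates into a single speaker representation.
    aggregate = candidates[keep].mean(axis=0)
    return aggregate / np.linalg.norm(aggregate)        # unit-norm embedding
```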
-
Publication No.: US11031017B2
Publication Date: 2021-06-08
Application No.: US16242541
Filing Date: 2019-01-08
Applicant: Google LLC
Inventor: Chong Wang , Aonan Zhang , Quan Wang , Zhenyao Zhu
Abstract: A method includes receiving an utterance of speech and segmenting the utterance of speech into a plurality of segments. For each segment of the utterance of speech, the method also includes extracting a speaker-discriminative embedding from the segment and predicting a probability distribution over possible speakers for the segment using a probabilistic generative model configured to receive the extracted speaker-discriminative embedding as a feature input. The probabilistic generative model is trained on a corpus of training speech utterances, each segmented into a plurality of training segments, where each training segment includes a corresponding speaker-discriminative embedding and a corresponding speaker label. The method also includes assigning a speaker label to each segment of the utterance of speech based on the probability distribution over possible speakers for the corresponding segment.
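A schematic decoding loop for the claimed assignment step. The `extract_embedding` function and the `generative_model.predict_proba` interface, which returns a distribution over possible speakers given the labels assigned so far, are illustrative assumptions:

```python
# Per-segment speaker assignment via a probabilistic generative model (sketch).
import numpy as np

def diarize(segments, extract_embedding, generative_model):
    labels = []
    for segment in segments:
        emb = extract_embedding(segment)                     # (d,) embedding
        # Distribution over possible speakers, conditioned on prior labels.
        probs = generative_model.predict_proba(emb, labels)
        labels.append(int(np.argmax(probs)))                 # most likely speaker
    return labels
```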
-
Publication No.: US20200152207A1
Publication Date: 2020-05-14
Application No.: US16617219
Filing Date: 2019-04-15
Applicant: Google LLC
Inventor: Quan Wang , Yash Sheth , Ignacio Lopez Moreno , Li Wan
Abstract: Techniques are described for training and/or utilizing an end-to-end speaker diarization model. In various implementations, the model is a recurrent neural network (RNN) model, such as an RNN model that includes at least one memory layer, such as a long short-term memory (LSTM) layer. Audio features of audio data can be applied as input to an end-to-end speaker diarization model trained according to implementations disclosed herein, and the model is utilized to process the audio features to generate, as direct output of the model, speaker diarization results. Further, the end-to-end speaker diarization model can be a sequence-to-sequence model, where the sequence can have variable length. Accordingly, the model can be utilized to generate speaker diarization results for audio segments of various lengths.
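A minimal PyTorch sketch of such a sequence-to-sequence model with an LSTM memory layer producing per-frame diarization output directly. The layer sizes and the fixed speaker inventory are illustrative assumptions, not taken from the patent:

```python
# End-to-end diarization as a variable-length sequence-to-sequence model.
import torch
import torch.nn as nn

class E2EDiarizer(nn.Module):
    def __init__(self, n_features=40, hidden=256, max_speakers=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)  # memory layer
        self.head = nn.Linear(hidden, max_speakers)

    def forward(self, audio_features):       # (batch, frames, n_features)
        out, _ = self.lstm(audio_features)   # handles any frame count
        return self.head(out)                # per-frame speaker logits

logits = E2EDiarizer()(torch.randn(1, 500, 40))  # works for variable lengths
```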
-
Publication No.: US20180277124A1
Publication Date: 2018-09-27
Application No.: US15995480
Filing Date: 2018-06-01
Applicant: Google LLC
Inventor: Ignacio Lopez Moreno , Li Wan , Quan Wang
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, to facilitate language-independent speaker verification. In one aspect, a method includes actions of receiving, by a user device, audio data representing an utterance of a user. Other actions may include providing, to a neural network stored on the user device, input data derived from the audio data and a language identifier. The neural network may be trained using speech data representing speech in different languages or dialects. The method may include additional actions of generating, based on output of the neural network, a speaker representation and determining, based on the speaker representation and a second representation, that the utterance is an utterance of the user. The method may provide the user with access to the user device based on determining that the utterance is an utterance of the user.
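A hedged sketch of verification conditioned on a language identifier. The one-hot language conditioning, the cosine comparison against an enrolled representation, and the threshold value are illustrative assumptions about how the claimed steps might fit together:

```python
# Language-conditioned speaker verification (illustrative sketch).
import numpy as np

def verify(audio_features, lang_id, n_langs, network, enrolled, thresh=0.7):
    """audio_features: 1-D feature vector derived from the audio data;
    lang_id: integer language/dialect identifier."""
    lang_onehot = np.eye(n_langs)[lang_id]
    net_input = np.concatenate([audio_features, lang_onehot])
    speaker_rep = network(net_input)                 # speaker representation
    # Cosine similarity against the second (enrolled) representation.
    score = speaker_rep @ enrolled / (
        np.linalg.norm(speaker_rep) * np.linalg.norm(enrolled))
    return score >= thresh                           # grant device access?
```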
-
Publication No.: US20240363122A1
Publication Date: 2024-10-31
Application No.: US18765108
Filing Date: 2024-07-05
Applicant: GOOGLE LLC
Inventor: Rajeev Rikhye , Quan Wang , Yanzhang He , Qiao Liang , Ian C. McGraw
IPC: G10L17/24 , G10L15/26 , G10L17/06 , G10L21/028
CPC classification number: G10L17/24 , G10L15/26 , G10L17/06 , G10L21/028
Abstract: Techniques disclosed herein are directed towards streaming keyphrase detection which can be customized to detect one or more particular keyphrases, without requiring retraining of any model(s) for those particular keyphrase(s). Many implementations include processing audio data using a speaker separation model to generate separated audio data which isolates an utterance spoken by a human speaker from one or more additional sounds not spoken by the human speaker, and processing the separated audio data using a text independent speaker identification model to determine whether a verified and/or registered user spoke the utterance captured in the audio data. Various implementations include processing the audio data and/or the separated audio data using an automatic speech recognition model to generate a text representation of the utterance. Additionally or alternatively, the text representation of the utterance can be processed to determine whether at least a portion of the text representation of the utterance captures a particular keyphrase. When the system determines the registered and/or verified user spoke the utterance and the text representation captures the particular keyphrase, the system can cause a computing device to perform one or more actions corresponding to the particular keyphrase.
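An end-to-end pipeline sketch of the claimed flow: separation, then text-independent speaker identification, then recognition and keyphrase matching. The `separation_model`, `speaker_id_model`, `asr`, and `actions` objects are hypothetical stand-ins for the models the abstract names:

```python
# Streaming keyphrase detection pipeline (illustrative sketch).
def handle_audio(audio, separation_model, speaker_id_model, asr,
                 enrolled_embedding, actions):
    separated = separation_model(audio)              # isolate the utterance
    if not speaker_id_model.verify(separated, enrolled_embedding):
        return None                                  # not a registered user
    text = asr(separated)                            # text representation
    for keyphrase, action in actions.items():
        if keyphrase in text.lower():                # keyphrase captured?
            return action()                          # perform mapped action
    return None
```

Because the keyphrase test operates on the recognized text rather than on model outputs tuned to a fixed phrase, new keyphrases can be added to `actions` without retraining any model.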
-
Publication No.: US20240331700A1
Publication Date: 2024-10-03
Application No.: US18191711
Filing Date: 2023-03-28
Applicant: Google LLC
Inventor: Yang Yu , Quan Wang , Ignacio Lopez Moreno
Abstract: A method includes receiving a sequence of input audio frames and processing each corresponding input audio frame to determine a language ID event that indicates a predicted language. The method also includes obtaining speech recognition events each including a respective speech recognition result determined by a first language pack. Based on determining that the utterance includes a language switch from the first language to a second language, the method also includes loading a second language pack onto the client device and rewinding the input audio data buffered by an audio buffer to a time of the corresponding input audio frame associated with the language ID event that first indicated the second language as the predicted language. The method also includes emitting a first transcription and processing, using the second language pack loaded onto the client device, the rewound buffered audio data to generate a second transcription.
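A sketch of the rewind-and-redecode flow described above. The buffer layout, the `load_pack` loader, and the `pack.decode` interface are assumptions made for illustration only:

```python
# Language-switch handling with buffered audio rewind (illustrative sketch).
def transcribe_with_switch(frames, lang_id_model, load_pack, first_lang):
    pack, buffer, switch_at = load_pack(first_lang), [], None
    for i, frame in enumerate(frames):
        buffer.append(frame)                      # audio buffer keeps every frame
        predicted = lang_id_model(frame)          # per-frame language ID event
        if predicted != first_lang and switch_at is None:
            switch_at = i                         # first frame of the new language
    first_text = pack.decode(buffer[:switch_at])  # emit first transcription
    if switch_at is None:
        return first_text, ""                     # no switch detected
    # Load the second language pack, then "rewind" the buffered audio to the
    # frame whose language ID event first indicated the second language.
    second_pack = load_pack(lang_id_model(frames[switch_at]))
    second_text = second_pack.decode(buffer[switch_at:])
    return first_text, second_text
```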
-
Publication No.: US20240304181A1
Publication Date: 2024-09-12
Application No.: US18598523
Filing Date: 2024-03-07
Applicant: Google LLC
Inventor: Guru Prakash Arumugam , Shuo-yiin Chang , Shaan Jagdeep Patrick Bijwadia , Weiran Wang , Quan Wang , Rohit Prakash Prabhavalkar , Tara N. Sainath
IPC: G10L15/06
CPC classification number: G10L15/063
Abstract: A method includes receiving a plurality of training samples spanning multiple different domains. Each corresponding training sample includes audio data characterizing an utterance paired with a corresponding transcription of the utterance. The method also includes re-labeling each corresponding training sample of the plurality of training samples by annotating the corresponding transcription of the utterance with one or more speaker tags. Each speaker tag indicates a respective segment of the transcription for speech that was spoken by a particular type of speaker. The method also includes training a multi-domain speech recognition model on the re-labeled training samples to teach the multi-domain speech recognition model to learn to share parameters for recognizing speech across the multiple different domains.
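An illustrative sketch of the re-labeling step alone. The inline tag format (e.g. `<spk:patient>`) and the `(speaker_type, text)` segment structure are assumptions about one plausible encoding, not the patent's format:

```python
# Annotating a transcription with speaker-type tags (illustrative sketch).
def relabel(transcript_segments):
    """transcript_segments: list of (speaker_type, text) tuples drawn from a
    single training sample's transcription."""
    tagged = []
    for speaker_type, text in transcript_segments:
        tagged.append(f"<spk:{speaker_type}> {text}")  # tag per segment
    return " ".join(tagged)

sample = [("doctor", "how are you feeling"), ("patient", "much better")]
print(relabel(sample))
# <spk:doctor> how are you feeling <spk:patient> much better
```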
-
Publication No.: US12033641B2
Publication Date: 2024-07-09
Application No.: US18103324
Filing Date: 2023-01-30
Applicant: Google LLC
Inventor: Rajeev Rikhye , Quan Wang , Yanzhang He , Qiao Liang , Ian C. McGraw
IPC: G10L17/24 , G10L15/26 , G10L17/06 , G10L21/028
CPC classification number: G10L17/24 , G10L15/26 , G10L17/06 , G10L21/028
Abstract: Techniques disclosed herein are directed towards streaming keyphrase detection which can be customized to detect one or more particular keyphrases, without requiring retraining of any model(s) for those particular keyphrase(s). Many implementations include processing audio data using a speaker separation model to generate separated audio data which isolates an utterance spoken by a human speaker from one or more additional sounds not spoken by the human speaker, and processing the separated audio data using a text independent speaker identification model to determine whether a verified and/or registered user spoke the utterance captured in the audio data. Various implementations include processing the audio data and/or the separated audio data using an automatic speech recognition model to generate a text representation of the utterance. Additionally or alternatively, the text representation of the utterance can be processed to determine whether at least a portion of the text representation of the utterance captures a particular keyphrase. When the system determines the registered and/or verified user spoke the utterance and the text representation captures the particular keyphrase, the system can cause a computing device to perform one or more actions corresponding to the particular keyphrase.
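Complementing the pipeline sketch given for the related publication above, here is a closer look at just the keyphrase matching step: a normalized word-sequence test over the ASR text, which is what lets new keyphrases be added without retraining any model. The normalization details are assumed for illustration:

```python
# Text-level keyphrase matching over an ASR transcription (sketch).
import re

def matches_keyphrase(text: str, keyphrase: str) -> bool:
    # Lowercase, strip punctuation, and compare word sequences so that
    # capitalization and punctuation in the transcription do not matter.
    norm = lambda s: re.sub(r"[^a-z0-9 ]", "", s.lower()).split()
    words, target = norm(text), norm(keyphrase)
    return any(words[i:i + len(target)] == target
               for i in range(len(words) - len(target) + 1))

matches_keyphrase("please turn ON the lights!", "turn on the lights")  # True
```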
-
Publication No.: US20240203400A1
Publication Date: 2024-06-20
Application No.: US18394632
Filing Date: 2023-12-22
Applicant: GOOGLE LLC
Inventor: Ignacio Lopez Moreno , Quan Wang , Jason Pelecanos , Li Wan , Alexander Gruenstein , Hakan Erdogan
CPC classification number: G10L15/063 , G10L15/07 , G10L15/20 , G10L17/04 , G10L17/20 , G10L21/0208 , G10L2015/088
Abstract: Implementations relate to an automated assistant that can bypass invocation phrase detection when an estimation of device-to-device distance satisfies a distance threshold. The estimation of distance can be performed for a set of devices, such as a computerized watch and a cellular phone, and/or any other combination of devices. The devices can communicate ultrasonic signals between each other, and the estimated distance can be determined based on when the ultrasonic signals are sent and/or received by each respective device. When an estimated distance satisfies the distance threshold, the automated assistant can operate as if the user is holding onto their cellular phone while wearing their computerized watch. This scenario can indicate that the user may be intending to hold their device to interact with the automated assistant and, based on this indication, the automated assistant can temporarily bypass invocation phrase detection (e.g., invoke the automated assistant).
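A back-of-envelope sketch of time-of-flight distance estimation between two devices exchanging ultrasonic signals. Clock synchronization is simplified away, and the threshold value is an assumption, not taken from the patent:

```python
# Ultrasonic device-to-device distance estimation (illustrative sketch).
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def estimated_distance(t_sent: float, t_received: float) -> float:
    """One-way time of flight in seconds -> distance in meters."""
    return (t_received - t_sent) * SPEED_OF_SOUND

def should_bypass_invocation(t_sent, t_received, threshold_m=0.5) -> bool:
    # Within arm's reach: treat the user as holding the phone while wearing
    # the watch, and temporarily bypass invocation phrase detection.
    return estimated_distance(t_sent, t_received) <= threshold_m

should_bypass_invocation(0.0, 0.001)  # 0.343 m -> True, invoke the assistant
```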