Neural Networks For Speaker Verification
    Invention Publication

    Publication number: US20240038245A1

    Publication date: 2024-02-01

    Application number: US18485069

    Filing date: 2023-10-11

    Applicant: Google LLC

    CPC classification number: G10L17/18 G10L17/04 G10L17/02

    Abstract: This document generally describes systems, methods, devices, and other techniques related to speaker verification, including (i) training a neural network for a speaker verification model, (ii) enrolling users at a client device, and (iii) verifying identities of users based on characteristics of the users' voices. Some implementations include a computer-implemented method. The method can include receiving, at a computing device, data that characterizes an utterance of a user of the computing device. A speaker representation can be generated, at the computing device, for the utterance using a neural network on the computing device. The neural network can be trained based on a plurality of training samples that each: (i) include data that characterizes a first utterance and data that characterizes one or more second utterances, and (ii) are labeled as a matching speakers sample or a non-matching speakers sample.
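The abstract describes training on samples that pair a first utterance with one or more second utterances, each labeled as matching or non-matching speakers. A minimal sketch of such a pairwise objective (function names and the cosine-based score are illustrative assumptions, not taken from the patent) might look like:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def pair_loss(emb_first, emb_seconds, is_match):
    """Binary loss on the similarity between the first utterance's
    embedding and the average embedding of the second utterance(s).
    Matching pairs are pushed toward similarity 1, non-matching
    pairs toward 0 (hypothetical formulation for illustration)."""
    avg = [sum(col) / len(emb_seconds) for col in zip(*emb_seconds)]
    score = (cosine_similarity(emb_first, avg) + 1) / 2  # map [-1, 1] to [0, 1]
    target = 1.0 if is_match else 0.0
    eps = 1e-9
    return -(target * math.log(score + eps)
             + (1 - target) * math.log(1 - score + eps))
```

A matching pair with near-identical embeddings yields a loss near zero, while a non-matching pair with similar embeddings is penalized.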

    SPEAKER DIARIZATION USING SPEAKER EMBEDDING(S) AND TRAINED GENERATIVE MODEL

    Publication number: US20230395069A1

    Publication date: 2023-12-07

    Application number: US18236302

    Filing date: 2023-08-21

    Applicant: GOOGLE LLC

    CPC classification number: G10L15/20 G10L15/30 G10L15/02 G10L15/22 G10L21/0208

    Abstract: Speaker diarization techniques that enable processing of audio data to generate one or more refined versions of the audio data, where each of the refined versions of the audio data isolates one or more utterances of a single respective human speaker. Various implementations generate a refined version of audio data that isolates utterance(s) of a single human speaker by generating a speaker embedding for the single human speaker, and processing the audio data using a trained generative model—and using the speaker embedding in determining activations for hidden layers of the trained generative model during the processing. Output is generated over the trained generative model based on the processing, and the output is the refined version of the audio data.
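The abstract states that the speaker embedding is used in determining activations for hidden layers of the generative model. One common way to condition a layer's activations on an embedding is a feature-wise scale-and-shift; the sketch below assumes that realization (the function names and the tanh nonlinearity are illustrative, not from the patent):

```python
import math

def condition_layer(activations, speaker_embedding, w_scale, w_shift):
    """Feature-wise conditioning: project the speaker embedding to a
    per-channel scale and shift, then apply them to the layer's
    activations before the nonlinearity (assumed formulation)."""
    scale = [sum(w * e for w, e in zip(row, speaker_embedding)) for row in w_scale]
    shift = [sum(w * e for w, e in zip(row, speaker_embedding)) for row in w_shift]
    return [math.tanh(a * (1 + s) + b)
            for a, s, b in zip(activations, scale, shift)]
```

With zero conditioning weights the layer reduces to a plain tanh, so the speaker embedding only modulates, rather than replaces, the layer's computation.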

    Attentive scoring function for speaker identification

    Publication number: US11798562B2

    Publication date: 2023-10-24

    Application number: US17302926

    Filing date: 2021-05-16

    Applicant: Google LLC

    CPC classification number: G10L17/06 G06F16/245 G06N3/08 G10L17/04 G10L17/18

    Abstract: A speaker verification method includes receiving audio data corresponding to an utterance and processing the audio data to generate an evaluation attentive d-vector (ad-vector) representing voice characteristics of the utterance, where the evaluation ad-vector includes n_e style classes each including a respective value vector concatenated with a corresponding routing vector. The method also includes generating, using a self-attention mechanism, at least one multi-condition attention score that indicates a likelihood that the evaluation ad-vector matches a respective reference ad-vector associated with a respective user. The method also includes identifying the speaker of the utterance as the respective user associated with the respective reference ad-vector based on the multi-condition attention score.
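The abstract describes ad-vectors composed of style classes, each a value vector paired with a routing vector, compared via self-attention. A minimal sketch of one plausible scoring function in that spirit (the pairing of routing vectors for attention weights and value vectors for similarity is an assumption; the patent's exact formulation may differ):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attentive_score(eval_classes, ref_classes):
    """Each class is a (value_vector, routing_vector) pair. Routing
    vectors determine attention weights over reference classes; value
    vectors are compared under those weights (illustrative sketch)."""
    score = 0.0
    for v_e, r_e in eval_classes:
        logits = [sum(a * b for a, b in zip(r_e, r_r)) for _, r_r in ref_classes]
        weights = softmax(logits)
        for w, (v_r, _) in zip(weights, ref_classes):
            sim = sum(a * b for a, b in zip(v_e, v_r))
            score += w * sim
    return score / len(eval_classes)
```

When the evaluation and reference ad-vectors are identical unit vectors, the score is maximal, consistent with identifying the speaker via the best-matching reference.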

    TRAINING AND/OR USING A LANGUAGE SELECTION MODEL FOR AUTOMATICALLY DETERMINING LANGUAGE FOR SPEECH RECOGNITION OF SPOKEN UTTERANCE

    Publication number: US20220328035A1

    Publication date: 2022-10-13

    Application number: US17846287

    Filing date: 2022-06-22

    Applicant: Google LLC

    Abstract: Methods and systems for training and/or using a language selection model for use in determining a particular language of a spoken utterance captured in audio data. Features of the audio data can be processed using the trained language selection model to generate a predicted probability for each of N different languages, and a particular language selected based on the generated probabilities. Speech recognition results for the particular language can be utilized responsive to selecting the particular language of the spoken utterance. Many implementations are directed to training the language selection model utilizing tuple losses in lieu of traditional cross-entropy losses. Training the language selection model utilizing the tuple losses can result in more efficient training and/or can result in a more accurate and/or robust model—thereby mitigating erroneous language selections for spoken utterances.
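The abstract contrasts tuple losses with traditional cross-entropy over all N languages. One way to read that contrast is a cross-entropy restricted to a tuple of candidate languages containing the target; the sketch below assumes that interpretation (the function and its arguments are hypothetical, not the patent's definition):

```python
import math

def tuple_loss(logits, target_idx, tuple_indices):
    """Cross-entropy restricted to a tuple of candidate language
    indices that contains the target, rather than normalizing over
    all N languages (assumed formulation for illustration)."""
    assert target_idx in tuple_indices
    selected = [logits[i] for i in tuple_indices]
    # Stable log-sum-exp over the tuple only.
    m = max(selected)
    log_z = m + math.log(sum(math.exp(x - m) for x in selected))
    return log_z - logits[target_idx]
```

Restricting normalization to the tuple never increases the loss relative to full cross-entropy, since the partition function sums over fewer competitors.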
