EVALUATION-BASED SPEAKER CHANGE DETECTION EVALUATION METRICS

    Publication Number: US20240135934A1

    Publication Date: 2024-04-25

    Application Number: US18483492

    Filing Date: 2023-10-09

    Applicant: Google LLC

    CPC classification number: G10L17/06 G10L17/02 G10L17/04

    Abstract: A method includes obtaining a multi-utterance training sample that includes audio data characterizing utterances spoken by two or more different speakers, and obtaining ground-truth speaker change intervals indicating time intervals in the audio data where speaker changes among the two or more different speakers occur. The method also includes processing the audio data, using a sequence transduction model, to generate a sequence of predicted speaker change tokens. The method labels each predicted speaker change token as correct when it overlaps with one of the ground-truth speaker change intervals. The method also includes determining a precision metric of the sequence transduction model based on the number of predicted speaker change tokens labeled as correct and the total number of predicted speaker change tokens in the sequence.
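
    The precision computation this abstract describes is simple to sketch. Below is a minimal, hypothetical Python illustration (the patent publishes no code; function and variable names are invented for clarity): each predicted speaker change token carries a timestamp, and a token counts as correct when it falls inside any ground-truth speaker change interval.

```python
from typing import List, Tuple

def speaker_change_precision(
    predicted_token_times: List[float],
    ground_truth_intervals: List[Tuple[float, float]],
) -> float:
    """Precision = (# predicted tokens overlapping a ground-truth
    speaker change interval) / (total # of predicted tokens)."""
    if not predicted_token_times:
        return 0.0
    correct = sum(
        1
        for t in predicted_token_times
        if any(start <= t <= end for start, end in ground_truth_intervals)
    )
    return correct / len(predicted_token_times)

# Example: three predicted change tokens, two fall inside labeled intervals.
print(speaker_change_precision([1.2, 4.7, 9.0], [(1.0, 1.5), (4.5, 5.0)]))
# -> 0.666...
```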

    ATTENTIVE SCORING FUNCTION FOR SPEAKER IDENTIFICATION

    Publication Number: US20240029742A1

    Publication Date: 2024-01-25

    Application Number: US18479615

    Filing Date: 2023-10-02

    Applicant: Google LLC

    CPC classification number: G10L17/06 G06F16/245 G06N3/08 G10L17/04 G10L17/18

    Abstract: A speaker verification method includes receiving audio data corresponding to an utterance and processing the audio data to generate an evaluation attentive d-vector (ad-vector) representing voice characteristics of the utterance, where the evaluation ad-vector includes ne style classes, each including a respective value vector concatenated with a corresponding routing vector. The method also includes generating, using a self-attention mechanism, at least one multi-condition attention score that indicates a likelihood that the evaluation ad-vector matches a respective reference ad-vector associated with a respective user. The method also includes identifying the speaker of the utterance as the respective user associated with the respective reference ad-vector based on the multi-condition attention score.
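
    A rough sketch of how such an attentive score might be computed, purely as an illustrative reconstruction and not the patented implementation: routing vectors determine attention weights between the evaluation and reference style classes, and the score aggregates the similarities of the corresponding value vectors.

```python
import numpy as np

def attentive_score(eval_values, eval_routing, ref_values, ref_routing):
    """Illustrative multi-condition attention score between an evaluation
    ad-vector and a reference ad-vector (both with the same number of
    style classes).

    eval_values, ref_values:   (num_classes, value_dim) value vectors
    eval_routing, ref_routing: (num_classes, routing_dim) routing vectors
    """
    # Attention logits from routing-vector similarity across style classes.
    logits = eval_routing @ ref_routing.T               # (ne, ne)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                            # softmax over all pairs
    # Cosine similarity between every pair of value vectors.
    ev = eval_values / np.linalg.norm(eval_values, axis=1, keepdims=True)
    rv = ref_values / np.linalg.norm(ref_values, axis=1, keepdims=True)
    sims = ev @ rv.T                                    # (ne, ne)
    # Score: attention-weighted aggregate of the pairwise similarities.
    return float((weights * sims).sum())
```

    Identification then reduces to scoring the evaluation ad-vector against each enrolled user's reference ad-vector and picking the highest-scoring user.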

    Speaker Verification with Multitask Speech Models

    Publication Number: US20230260521A1

    Publication Date: 2023-08-17

    Application Number: US18167815

    Filing Date: 2023-02-10

    Applicant: Google LLC

    CPC classification number: G10L17/18 G10L17/04 G10L17/06

    Abstract: A method includes obtaining a speaker identification (SID) model trained to predict speaker embeddings from utterances spoken by different speakers, where the SID model includes a trained audio encoder and a trained SID head. The method also includes receiving a plurality of synthetic speech detection (SSD) training utterances that include a set of human-originated speech samples and a set of synthetic speech samples. The method also includes training, using the trained audio encoder, an SSD head on the SSD training utterances to learn to detect the presence of synthetic speech in audio encodings encoded by the trained audio encoder. The method also includes providing, for execution on a computing device, a multitask neural network model for performing both SID tasks and SSD tasks on input audio data in parallel.
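
    In a framework like PyTorch, the resulting multitask model might look like the sketch below (module names and dimensions are hypothetical assumptions, not taken from the patent): a frozen, pre-trained audio encoder feeds both the SID head and the separately trained SSD head, so both tasks run on the same encoding in parallel.

```python
import torch
import torch.nn as nn

class MultitaskSpeakerModel(nn.Module):
    """Shared frozen encoder with parallel SID and SSD heads (sketch)."""

    def __init__(self, encoder: nn.Module, enc_dim: int = 256, emb_dim: int = 128):
        super().__init__()
        self.encoder = encoder                 # trained audio encoder, frozen
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.sid_head = nn.Linear(enc_dim, emb_dim)  # speaker embedding
        self.ssd_head = nn.Linear(enc_dim, 1)        # synthetic-speech logit

    def forward(self, audio_features: torch.Tensor):
        enc = self.encoder(audio_features)           # (batch, enc_dim)
        speaker_embedding = self.sid_head(enc)       # SID task output
        synthetic_logit = self.ssd_head(enc)         # sigmoid -> P(synthetic)
        return speaker_embedding, synthetic_logit

# Example with a stand-in encoder over 40-dim acoustic features:
model = MultitaskSpeakerModel(nn.Sequential(nn.Linear(40, 256), nn.ReLU()))
emb, logit = model(torch.randn(8, 40))
```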

    Fully supervised speaker diarization

    Publication Number: US11688404B2

    Publication Date: 2023-06-27

    Application Number: US17303283

    Filing Date: 2021-05-26

    Applicant: Google LLC

    Abstract: A method includes receiving an utterance of speech and segmenting the utterance of speech into a plurality of segments. For each segment of the utterance of speech, the method also includes extracting a speaker-discriminative embedding from the segment and predicting a probability distribution over possible speakers for the segment using a probabilistic generative model configured to receive the extracted speaker-discriminative embedding as a feature input. The probabilistic generative model is trained on a corpus of training speech utterances, each segmented into a plurality of training segments, where each training segment includes a corresponding speaker-discriminative embedding and a corresponding speaker label. The method also includes assigning a speaker label to each segment of the utterance of speech based on the probability distribution over possible speakers for the corresponding segment.
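
    The pipeline reads as: segment, embed, score with the generative model, assign. A schematic outline follows, where embed_fn and generative_model are stand-ins for the trained components the patent assumes (their interfaces here are invented for illustration):

```python
def diarize(audio_segments, embed_fn, generative_model):
    """Assign a speaker label to each segment (schematic pipeline).

    embed_fn:         maps a segment to a speaker-discriminative embedding
    generative_model: exposes predict_proba(embedding) -> {speaker: prob},
                      trained on labeled (embedding, speaker) segments
    """
    labels = []
    for segment in audio_segments:
        embedding = embed_fn(segment)
        # Probability distribution over possible speakers for this segment.
        distribution = generative_model.predict_proba(embedding)
        # Assign the most probable speaker label.
        labels.append(max(distribution, key=distribution.get))
    return labels
```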

    TEXT INDEPENDENT SPEAKER RECOGNITION

    Publication Number: US20230113617A1

    Publication Date: 2023-04-13

    Application Number: US18078476

    Filing Date: 2022-12-09

    Applicant: GOOGLE LLC

    Abstract: Text independent speaker recognition models can be utilized by an automated assistant to verify that a particular user spoke an utterance and/or to identify the user who spoke an utterance. Implementations can include automatically updating a speaker embedding for a particular user based on previous utterances by the particular user. Additionally or alternatively, implementations can include verifying that a particular user spoke an utterance using output generated by both a text independent speaker recognition model and a text dependent speaker recognition model. Furthermore, implementations can additionally or alternatively include prefetching content for several users associated with a spoken utterance prior to determining which user spoke the utterance.
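
    One way to realize the combined verification and the embedding refresh described here, sketched with hypothetical fusion weights, thresholds, and profile layout (none of these specifics come from the patent):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(td_embedding, ti_embedding, profile, w_td=0.5, thresh=0.7):
    """Fuse text-dependent (TD) and text-independent (TI) scores."""
    score = (w_td * cosine(td_embedding, profile["td"])
             + (1 - w_td) * cosine(ti_embedding, profile["ti"]))
    return score >= thresh

def update_ti_embedding(profile, new_embedding, alpha=0.1):
    """Refresh the stored TI embedding from a newly verified utterance
    via an exponential moving average, then re-normalize."""
    profile["ti"] = (1 - alpha) * profile["ti"] + alpha * new_embedding
    profile["ti"] /= np.linalg.norm(profile["ti"])
```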

    Text independent speaker recognition

    Publication Number: US11527235B2

    Publication Date: 2022-12-13

    Application Number: US17046994

    Filing Date: 2019-12-02

    Applicant: Google LLC

    Abstract: Text independent speaker recognition models can be utilized by an automated assistant to verify that a particular user spoke an utterance and/or to identify the user who spoke an utterance. Implementations can include automatically updating a speaker embedding for a particular user based on previous utterances by the particular user. Additionally or alternatively, implementations can include verifying that a particular user spoke an utterance using output generated by both a text independent speaker recognition model and a text dependent speaker recognition model. Furthermore, implementations can additionally or alternatively include prefetching content for several users associated with a spoken utterance prior to determining which user spoke the utterance.

    TARGETED VOICE SEPARATION BY SPEAKER CONDITIONED ON SPECTROGRAM MASKING

    Publication Number: US20220122611A1

    Publication Date: 2022-04-21

    Application Number: US17567590

    Filing Date: 2022-01-03

    Applicant: GOOGLE LLC

    Abstract: Techniques are disclosed that enable processing of audio data to generate one or more refined versions of the audio data, where each refined version isolates one or more utterances of a single respective human speaker. Various implementations generate a refined version of audio data that isolates utterance(s) of a single human speaker by processing a spectrogram representation of the audio data (generated by processing the audio data with a frequency transformation) using a mask, where the mask is generated by processing the spectrogram of the audio data and a speaker embedding for the single human speaker using a trained voice filter model. Output generated by the trained voice filter model is processed using an inverse of the frequency transformation to generate the refined audio data.
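
    The signal path can be summarized as: STFT, mask, inverse STFT. A sketch follows, in which voice_filter_model is a stand-in for the trained voice filter network (the STFT here uses SciPy; the patent does not specify an implementation):

```python
import numpy as np
from scipy.signal import stft, istft

def isolate_speaker(audio, speaker_embedding, voice_filter_model, sr=16000):
    """Refine audio to keep only the target speaker (schematic).

    voice_filter_model(magnitude, embedding) -> soft mask in [0, 1],
    standing in for the trained voice filter model.
    """
    # Frequency transformation: complex spectrogram of the mixture.
    f, t, spec = stft(audio, fs=sr, nperseg=512)
    magnitude = np.abs(spec)
    # Mask predicted from the spectrogram plus the speaker embedding.
    mask = voice_filter_model(magnitude, speaker_embedding)
    # Apply the mask, then invert the frequency transformation.
    _, refined = istft(spec * mask, fs=sr, nperseg=512)
    return refined
```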

    Targeted voice separation by speaker conditioned on spectrogram masking

    Publication Number: US11217254B2

    Publication Date: 2022-01-04

    Application Number: US16598172

    Filing Date: 2019-10-10

    Applicant: Google LLC

    Abstract: Techniques are disclosed that enable processing of audio data to generate one or more refined versions of the audio data, where each refined version isolates one or more utterances of a single respective human speaker. Various implementations generate a refined version of audio data that isolates utterance(s) of a single human speaker by processing a spectrogram representation of the audio data (generated by processing the audio data with a frequency transformation) using a mask, where the mask is generated by processing the spectrogram of the audio data and a speaker embedding for the single human speaker using a trained voice filter model. Output generated by the trained voice filter model is processed using an inverse of the frequency transformation to generate the refined audio data.
