CROSS-LINGUAL SPEAKER RECOGNITION

    Publication No.: US20230137652A1

    Publication Date: 2023-05-04

    Application No.: US17977521

    Filing Date: 2022-10-31

    IPC Classification: G10L17/04 G10L17/10

    Abstract: Disclosed are systems and methods including computing processes executing machine-learning architectures for voice biometrics, in which the machine-learning architecture implements one or more language compensation functions. Embodiments include an embedding extraction engine (sometimes referred to as an “embedding extractor”) that extracts speaker embeddings and determines a speaker similarity score for determining or verifying the likelihood that speakers in different audio signals are the same speaker. The machine-learning architecture further includes a multi-class language classifier that determines a language likelihood score indicating the likelihood that a particular audio signal contains speech in a given language. The features and functions of the machine-learning architecture described herein may implement the various language compensation techniques to provide more accurate speaker recognition results, regardless of the language spoken by the speaker.
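
    The abstract above describes an embedding extractor that scores speaker similarity and a multi-class language classifier whose output drives language compensation. The sketch below shows one way such scores could be combined; it is not the patented method, and the cosine scoring, softmax, and mismatch penalty are assumptions made only for illustration.

        import numpy as np

        def cosine_similarity(a, b):
            # Speaker similarity between two fixed-length speaker embeddings.
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        def language_likelihoods(logits):
            # Softmax over the multi-class language classifier's raw outputs,
            # yielding a per-language likelihood score.
            exp = np.exp(logits - logits.max())
            return exp / exp.sum()

        def compensated_score(enroll_emb, test_emb, enroll_logits, test_logits,
                              penalty=0.05):
            # Hypothetical compensation rule: if the most likely language differs
            # between enrollment and test audio, relax the raw similarity slightly
            # so a language mismatch is not mistaken for a speaker mismatch.
            score = cosine_similarity(enroll_emb, test_emb)
            same_language = (np.argmax(language_likelihoods(enroll_logits))
                             == np.argmax(language_likelihoods(test_logits)))
            return score if same_language else score + penalty

    In a deployed system the compensation would more likely be learned (for example, score calibration conditioned on the language likelihood scores) rather than the fixed penalty used in this toy example.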

    PASSIVE AND CONTINUOUS MULTI-SPEAKER VOICE BIOMETRICS

    Publication No.: US20210326421A1

    Publication Date: 2021-10-21

    Application No.: US17231672

    Filing Date: 2021-04-15

    Abstract: Embodiments described herein provide for a voice biometrics system that executes machine-learning architectures capable of passive, active, continuous, or static operations, or a combination thereof. The system passively and/or continuously, in some cases in addition to actively and/or statically, enrolls speakers as they speak into or around an edge device (e.g., car, television, radio, phone). The system identifies users on the fly without requiring a new speaker to mirror prompted utterances for reconfiguring operations. The system manages speaker profiles as speakers provide utterances to the system. The machine-learning architectures implement a passive and continuous voice biometrics system, possibly without knowledge of speaker identities. The system creates identities in an unsupervised manner, sometimes passively enrolling and recognizing known or unknown speakers. The system offers personalization and security across a wide range of applications, including media content for over-the-top services, IoT devices (e.g., personal assistants, vehicles), and call centers.
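
    As a rough illustration of the passive, unsupervised enrollment described above, the sketch below clusters incoming utterance embeddings into speaker profiles on the fly: an embedding either refines the closest existing profile or creates a new one. The similarity threshold, running-mean update, and class name are assumptions, not the claimed implementation.

        import numpy as np

        class PassiveEnroller:
            """Toy unsupervised profile manager for passively observed utterances."""

            def __init__(self, threshold=0.7):
                self.threshold = threshold  # minimum similarity to reuse a profile
                self.centroids = []         # one centroid embedding per speaker profile
                self.counts = []            # utterances absorbed into each profile

            @staticmethod
            def _cosine(a, b):
                return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

            def observe(self, embedding):
                """Assign an utterance embedding to a profile (returns its index),
                enrolling a new, possibly unknown speaker when nothing matches."""
                if self.centroids:
                    scores = [self._cosine(embedding, c) for c in self.centroids]
                    best = int(np.argmax(scores))
                    if scores[best] >= self.threshold:
                        # Continuous enrollment: refine the matched profile with a running mean.
                        n = self.counts[best]
                        self.centroids[best] = (self.centroids[best] * n + embedding) / (n + 1)
                        self.counts[best] += 1
                        return best
                # No sufficiently similar profile: create one without any prompted utterance.
                self.centroids.append(np.array(embedding, dtype=float))
                self.counts.append(1)
                return len(self.centroids) - 1

    Profile indices can later be bound to known identities if a speaker is ever actively verified, which is how a passive system can recognize both known and unknown speakers.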

    SPEAKER SPECIFIC SPEECH ENHANCEMENT

    Publication No.: US20220084509A1

    Publication Date: 2022-03-17

    Application No.: US17475226

    Filing Date: 2021-09-14

    IPC Classification: G10L15/16 G10L17/22

    Abstract: Embodiments described herein provide for a machine-learning architecture system that enhances the speech audio of a user-defined target speaker by suppressing interfering speakers, as well as background noise and reverberations. The machine-learning architecture includes a speech separation engine for separating the speech signal of a target speaker from a mixture of multiple speakers' speech, and a noise suppression engine for suppressing various types of noise in the input audio signal. The speaker-specific speech enhancement architecture performs speaker mixture separation and background noise suppression to enhance the perceptual quality of the speech audio. The output of the machine-learning architecture is an enhanced audio signal improving the voice quality of a target speaker on a single-channel audio input containing a mixture of speaker speech signals and various types of noise.
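
    To make the two-stage structure concrete, here is a minimal pipeline sketch in the spirit of the abstract: a separation stage conditioned on the target speaker, followed by noise suppression. The trained engines are replaced by placeholders (an identity pass-through and simple spectral subtraction), so this mirrors only the data flow, not the patented networks; all names and parameters are assumptions.

        import numpy as np
        from scipy.signal import stft, istft

        def separate_target(mixture, target_emb):
            # Placeholder for the speech-separation engine conditioned on the target
            # speaker's embedding; a real system would predict a speaker-specific mask.
            return mixture

        def suppress_noise(audio, sr=16000, noise_secs=0.5):
            # Simple spectral subtraction stand-in for the noise suppression engine:
            # estimate the noise magnitude from the first noise_secs of audio and
            # subtract it from every frame's magnitude spectrum.
            _, _, Z = stft(audio, fs=sr, nperseg=512)
            mag, phase = np.abs(Z), np.angle(Z)
            noise_frames = max(1, int(noise_secs * sr / 256))  # hop size is nperseg // 2
            noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
            clean_mag = np.maximum(mag - noise_mag, 0.0)
            _, enhanced = istft(clean_mag * np.exp(1j * phase), fs=sr, nperseg=512)
            return enhanced

        def enhance(mixture, target_emb, sr=16000):
            # Single-channel input containing a mixture of speakers and noise.
            return suppress_noise(separate_target(mixture, target_emb), sr=sr)

    In the described architecture both stages would be learned models, with the separation engine keeping only the speech of the user-defined target speaker.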

    CROSS-CHANNEL ENROLLMENT AND AUTHENTICATION OF VOICE BIOMETRICS

    Publication No.: US20210241776A1

    Publication Date: 2021-08-05

    Application No.: US17165180

    Filing Date: 2021-02-02

    IPC Classification: G10L17/04 G10L17/18 G06N3/04

    Abstract: Embodiments described herein provide for systems and methods for voice-based cross-channel enrollment and authentication. The systems control for and mitigate variations in audio signals received across any number of communication channels by training and employing a neural network architecture comprising a speaker verification neural network and a bandwidth expansion neural network. The bandwidth expansion neural network is trained on narrowband audio signals to generate estimated wideband audio signals corresponding to the narrowband audio signals. These estimated wideband audio signals may be fed into one or more downstream applications, such as the speaker verification neural network or an embedding extraction neural network. The speaker verification neural network can then compare and score inbound embeddings for a current call against enrolled embeddings, regardless of the channel used to receive the inbound signal or enrollment signal.
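
    The flow in the abstract (narrowband input, bandwidth expansion, embedding extraction, scoring against enrolled embeddings) can be sketched as below. The bandwidth-expansion and embedding networks are replaced with placeholders (polyphase resampling and a crude spectral summary), and the threshold is an assumption, so this is illustrative only and not the claimed neural network architecture.

        import numpy as np
        from scipy.signal import resample_poly

        def expand_bandwidth(narrowband, sr_in=8000, sr_out=16000):
            # Placeholder for the bandwidth expansion neural network: the real system
            # generates estimated wideband content rather than merely resampling.
            return resample_poly(narrowband, sr_out, sr_in)

        def extract_embedding(audio, dim=192):
            # Placeholder for the embedding extraction network: summarizes the
            # magnitude spectrum into a fixed-length vector (a trained speaker
            # encoder would be used in practice).
            spec = np.abs(np.fft.rfft(audio))
            return np.array([chunk.mean() for chunk in np.array_split(spec, dim)])

        def verify(inbound_narrowband, enrolled_emb, threshold=0.7):
            # Score an inbound (e.g., telephony) call against an enrolled embedding,
            # regardless of the channel the enrollment audio came from.
            wideband = expand_bandwidth(inbound_narrowband)
            emb = extract_embedding(wideband)
            score = float(np.dot(emb, enrolled_emb) /
                          (np.linalg.norm(emb) * np.linalg.norm(enrolled_emb)))
            return score >= threshold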