-
Publication No.: US20230137652A1
Publication Date: 2023-05-04
Application No.: US17977521
Filing Date: 2022-10-31
Inventors: Elie KHOURY, Tianxiang CHEN, Avrosh KUMAR, Ganesh SIVARAMAN, Kedar PHATAK
Abstract: Disclosed are systems and methods including computing processes executing machine-learning architectures for voice biometrics, in which the machine-learning architecture implements one or more language compensation functions. Embodiments include an embedding extraction engine (sometimes referred to as an “embedding extractor”) that extracts speaker embeddings and determines a speaker similarity score for determining or verifying the likelihood that speakers in different audio signals are the same speaker. The machine-learning architecture further includes a multi-class language classifier that determines a language likelihood score indicating the likelihood that a particular audio signal includes a particular spoken language. The features and functions of the machine-learning architecture described herein may implement the various language compensation techniques to provide more accurate speaker recognition results, regardless of the language spoken by the speaker.
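For illustration only, a minimal sketch of how a language likelihood score could compensate a cosine-similarity speaker score. The cosine scorer, the per-language offsets, and every function name here are assumptions for this sketch, not the architecture claimed in the filing.
```python
# Hypothetical language-compensated speaker scoring; the extractor output,
# the classifier posteriors, and the offset table are all assumed values.
import numpy as np

def cosine_score(enroll_emb: np.ndarray, test_emb: np.ndarray) -> float:
    """Speaker similarity as cosine similarity between two embeddings."""
    return float(np.dot(enroll_emb, test_emb)
                 / (np.linalg.norm(enroll_emb) * np.linalg.norm(test_emb)))

def compensate_for_language(raw_score: float,
                            language_probs: dict[str, float],
                            offsets: dict[str, float]) -> float:
    """Shift the raw speaker score by per-language offsets weighted by the
    multi-class language classifier's posterior probabilities."""
    shift = sum(p * offsets.get(lang, 0.0) for lang, p in language_probs.items())
    return raw_score + shift

# Toy usage with random "embeddings" and made-up offsets.
rng = np.random.default_rng(0)
enroll, test = rng.normal(size=256), rng.normal(size=256)
raw = cosine_score(enroll, test)
probs = {"en": 0.2, "es": 0.8}        # posteriors from a language classifier
offsets = {"en": 0.0, "es": 0.05}     # calibration offsets (assumed)
print(compensate_for_language(raw, probs, offsets))
```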
-
Publication No.: US20240363123A1
Publication Date: 2024-10-31
Application No.: US18646310
Filing Date: 2024-04-25
Inventors: Elie KHOURY, Ganesh SIVARAMAN, Tianxiang CHEN, Nikolay GAUBITCH, David LOONEY, Amit GUPTA, Vijay BALASUBRAMANIYAN, Nicholas KLEIN, Anthony STANKUS
Abstract: Disclosed are systems and methods including software processes executed by a server that detect audio-based synthetic speech (“deepfakes”) in a call conversation. Embodiments include systems and methods for detecting fraudulent presentation attacks using multiple functional engines that implement various fraud-detection techniques to produce calibrated and/or fused scores. A computer may, for example, evaluate the audio quality of speech signals within audio signals, where the speech signals are the portions of the audio containing speaker utterances.
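As a sketch of what fusing the outputs of multiple fraud-detection engines might look like, the snippet below combines per-engine scores with a weighted sum and a sigmoid. The engine names, weights, and logistic form are assumptions for illustration, not the method described in the filing.
```python
# Hypothetical fusion of per-engine fraud scores into one fused score.
import math

def fuse_scores(engine_scores: dict[str, float],
                weights: dict[str, float],
                bias: float = 0.0) -> float:
    """Weighted sum of engine scores passed through a sigmoid -> [0, 1]."""
    z = bias + sum(weights[name] * s for name, s in engine_scores.items())
    return 1.0 / (1.0 + math.exp(-z))

scores = {"deepfake": 1.8, "replay": -0.4, "audio_quality": 0.6}   # toy values
weights = {"deepfake": 1.0, "replay": 0.7, "audio_quality": 0.3}   # assumed weights
print(fuse_scores(scores, weights))
```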
-
Publication No.: US20210110813A1
Publication Date: 2021-04-15
Application No.: US17066210
Filing Date: 2020-10-08
Inventors: Elie KHOURY, Ganesh SIVARAMAN, Tianxiang CHEN, Amruta VIDWANS
Abstract: Described herein are systems and methods for improved audio analysis using a computer-executed neural network having one or more in-network data augmentation layers. The systems described herein help ease or avoid unwanted strain on computing resources by employing the data augmentation techniques within the layers of the neural network. The in-network data augmentation layers produce various types of simulated audio data when the computer applies the neural network to an input audio signal during a training, enrollment, and/or testing phase. Subsequent layers of the neural network (e.g., a convolutional layer, pooling layer, or data augmentation layer) ingest the simulated audio data and the input audio signal and perform various operations.
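To make the idea of an in-network augmentation layer concrete, here is a minimal PyTorch sketch that appends a simulated noisy copy of each waveform to the batch during training and acts as an identity at inference. The noise-only augmentation, SNR range, and class name are assumptions; the patent's layers may simulate other degradations.
```python
# Sketch of an in-network data augmentation layer (assumed design).
import torch
import torch.nn as nn

class NoiseAugmentLayer(nn.Module):
    """During training, appends a simulated noisy copy of each input
    waveform to the batch; at inference it passes audio through unchanged."""
    def __init__(self, snr_db_range=(5.0, 20.0)):
        super().__init__()
        self.snr_db_range = snr_db_range

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        # audio: (batch, num_samples)
        if not self.training:
            return audio
        snr_db = torch.empty(audio.shape[0], 1).uniform_(*self.snr_db_range)
        signal_power = audio.pow(2).mean(dim=1, keepdim=True).clamp_min(1e-8)
        noise = torch.randn_like(audio) * (signal_power / 10 ** (snr_db / 10)).sqrt()
        return torch.cat([audio, audio + noise], dim=0)   # original + simulated

layer = NoiseAugmentLayer()
layer.train()
batch = torch.randn(4, 16000)          # four 1-second clips at 16 kHz
print(layer(batch).shape)              # torch.Size([8, 16000])
```
Subsequent layers would then consume both the original and the simulated signals, which is what lets augmentation happen inside the network rather than in a separate preprocessing pass.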
-
Publication No.: US20240363124A1
Publication Date: 2024-10-31
Application No.: US18646431
Filing Date: 2024-04-25
Inventors: Elie KHOURY, Ganesh SIVARAMAN, Tianxiang CHEN, Nikolay GAUBITCH, David LOONEY, Amit GUPTA, Vijay BALASUBRAMANIYAN, Nicholas KLEIN, Anthony STANKUS
Abstract: Disclosed are systems and methods including software processes executed by a server that detect audio-based synthetic speech (“deepfakes”) in a call conversation. Embodiments include systems and methods for detecting fraudulent presentation attacks using multiple functional engines that implement various fraud-detection techniques to produce calibrated and/or fused scores. A computer may, for example, evaluate the audio quality of speech signals within audio signals, where the speech signals are the portions of the audio containing speaker utterances.
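On the audio-quality point, one illustrative (and deliberately crude) way to score the quality of the speech portions is an energy-based split into speech and background frames followed by an SNR estimate. The framing, thresholding, and function name below are assumptions, not the filing's method.
```python
# Hypothetical audio-quality check: energy-based speech/background split
# and a rough SNR estimate over the speech portions of the signal.
import numpy as np

def estimate_speech_snr_db(audio: np.ndarray, frame_len: int = 400) -> float:
    """Treat the loudest frames as speech and the quietest as background,
    and report the ratio of their mean energies in decibels."""
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)
    threshold = np.median(energy)
    speech_e = energy[energy > threshold]
    background_e = energy[energy <= threshold]
    return 10.0 * np.log10(speech_e.mean() / max(background_e.mean(), 1e-12))

rng = np.random.default_rng(1)
signal = 0.01 * rng.normal(size=16000)                       # background noise
signal[4000:12000] += 0.5 * np.sin(2 * np.pi * 220 * np.arange(8000) / 16000)
print(round(estimate_speech_snr_db(signal), 1))   # higher value => cleaner speech
```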
-
Publication No.: US20210326421A1
Publication Date: 2021-10-21
Application No.: US17231672
Filing Date: 2021-04-15
Abstract: Embodiments described herein provide for a voice biometrics system that executes machine-learning architectures capable of passive, active, continuous, or static operations, or a combination thereof. The system passively and/or continuously (and, in some cases, also actively and/or statically) enrolls speakers as they speak into or around an edge device (e.g., a car, television, radio, or phone). The system identifies users on the fly without requiring a new speaker to repeat prompted utterances to reconfigure operations. The system manages speaker profiles as speakers provide utterances to the system. Machine-learning architectures implement a passive and continuous voice biometrics system, possibly without knowledge of speaker identities. The system creates identities in an unsupervised manner, sometimes passively enrolling and recognizing known or unknown speakers. The system offers personalization and security across a wide range of applications, including media content for over-the-top services and IoT devices (e.g., personal assistants, vehicles), and call centers.
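A minimal sketch of passive, unsupervised enrollment, assuming embeddings are already extracted per utterance: each new embedding either joins the closest existing profile or opens a new one. The similarity threshold, centroid update rule, and class name are assumptions for illustration.
```python
# Illustrative unsupervised enrollment loop (assumed design, not the patent's).
import numpy as np

class PassiveEnroller:
    def __init__(self, threshold: float = 0.7):
        self.threshold = threshold
        self.profiles: list[np.ndarray] = []   # one centroid per speaker

    def observe(self, embedding: np.ndarray) -> int:
        """Return the index of the matched (or newly created) speaker profile."""
        emb = embedding / np.linalg.norm(embedding)
        if self.profiles:
            sims = [float(emb @ (c / np.linalg.norm(c))) for c in self.profiles]
            best = int(np.argmax(sims))
            if sims[best] >= self.threshold:
                # known voice: refine the matched profile's centroid
                self.profiles[best] = 0.9 * self.profiles[best] + 0.1 * emb
                return best
        self.profiles.append(emb)               # unknown voice: enroll on the fly
        return len(self.profiles) - 1

enroller = PassiveEnroller()
rng = np.random.default_rng(2)
print([enroller.observe(rng.normal(size=192)) for _ in range(3)])   # e.g. [0, 1, 2]
```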
-
Publication No.: US20240363119A1
Publication Date: 2024-10-31
Application No.: US18646375
Filing Date: 2024-04-25
Inventors: Elie KHOURY, Ganesh SIVARAMAN, Tianxiang CHEN, Nikolay GAUBITCH, David LOONEY, Amit GUPTA, Vijay BALASUBRAMANIYAN, Nicholas KLEIN, Anthony STANKUS
IPC Class: G10L17/00
CPC Class: G10L17/00
Abstract: Disclosed are systems and methods including software processes executed by a server that detect audio-based synthetic speech (“deepfakes”) in a call conversation. Embodiments include systems and methods for detecting fraudulent presentation attacks using multiple functional engines that implement various fraud-detection techniques to produce calibrated and/or fused scores. A computer may, for example, evaluate the audio quality of speech signals within audio signals, where the speech signals are the portions of the audio containing speaker utterances.
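On score calibration specifically, a common way to turn a raw detector score into a probability is a Platt-style logistic transform; the snippet below shows that form with made-up parameters. The scale and offset would normally be fit on held-out labeled calls, and nothing here is taken from the filing.
```python
# Hypothetical Platt-style calibration of a raw engine score.
import math

def calibrate(raw_score: float, scale: float = 1.5, offset: float = -0.3) -> float:
    """Map a raw score to a calibrated likelihood that the audio is synthetic."""
    return 1.0 / (1.0 + math.exp(-(scale * raw_score + offset)))

print(calibrate(0.8))   # ~0.71 with the assumed parameters
```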
-
Publication No.: US20240363100A1
Publication Date: 2024-10-31
Application No.: US18646228
Filing Date: 2024-04-25
Inventors: Elie KHOURY, Ganesh SIVARAMAN, Tianxiang CHEN, Nikolay GAUBITCH, David LOONEY, Amit GUPTA, Vijay BALASUBRAMANIYAN, Nicholas KLEIN, Anthony STANKUS
IPC Class: G10L15/02
CPC Class: G10L15/02
Abstract: Disclosed are systems and methods including software processes executed by a server that detect audio-based synthetic speech (“deepfakes”) in a call conversation. Embodiments include systems and methods for detecting fraudulent presentation attacks using multiple functional engines that implement various fraud-detection techniques to produce calibrated and/or fused scores. A computer may, for example, evaluate the audio quality of speech signals within audio signals, where the speech signals are the portions of the audio containing speaker utterances.
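The classification code listed for this publication (G10L15/02) covers speech feature extraction. Purely as an assumed illustration of the kind of front-end such detection engines often consume, here is a minimal log-magnitude spectrogram in NumPy; the frame size, hop, and function name are not taken from the filing.
```python
# Hypothetical feature front-end: framed, windowed, log-magnitude spectrum.
import numpy as np

def log_spectrogram(audio: np.ndarray, frame: int = 400, hop: int = 160) -> np.ndarray:
    """Frame the waveform, apply a Hann window, return log-magnitude FFT features."""
    n_frames = 1 + (len(audio) - frame) // hop
    window = np.hanning(frame)
    frames = np.stack([audio[i * hop:i * hop + frame] * window
                       for i in range(n_frames)])
    return np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-8)

features = log_spectrogram(np.random.default_rng(3).normal(size=16000))
print(features.shape)    # (frames, frequency bins) = (98, 201) for this input
```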
-
Publication No.: US20240363125A1
Publication Date: 2024-10-31
Application No.: US18646493
Filing Date: 2024-04-25
Inventors: Elie KHOURY, Ganesh SIVARAMAN, Tianxiang CHEN, Nikolay GAUBITCH, David LOONEY, Amit GUPTA, Vijay BALASUBRAMANIYAN, Nicholas KLEIN, Anthony STANKUS
CPC Class: G10L17/26, G10L17/02, G10L17/04, G10L25/60, H04M3/2281, H04M3/5183, H04M2201/405
Abstract: Disclosed are systems and methods including software processes executed by a server that detect audio-based synthetic speech (“deepfakes”) in a call conversation. Embodiments include systems and methods for detecting fraudulent presentation attacks using multiple functional engines that implement various fraud-detection techniques to produce calibrated and/or fused scores. A computer may, for example, evaluate the audio quality of speech signals within audio signals, where the speech signals are the portions of the audio containing speaker utterances.
-
Publication No.: US20220084509A1
Publication Date: 2022-03-17
Application No.: US17475226
Filing Date: 2021-09-14
Inventors: Ganesh SIVARAMAN, Avrosh KUMAR, Elie KHOURY
Abstract: Embodiments described herein provide for a machine-learning architecture system that enhances the speech audio of a user-defined target speaker by suppressing interfering speakers, as well as background noise and reverberations. The machine-learning architecture includes a speech separation engine for separating the speech signal of a target speaker from a mixture of multiple speakers' speech, and a noise suppression engine for suppressing various types of noise in the input audio signal. The speaker-specific speech enhancement architecture performs speaker mixture separation and background noise suppression to enhance the perceptual quality of the speech audio. The output of the machine-learning architecture is an enhanced audio signal improving the voice quality of a target speaker on a single-channel audio input containing a mixture of speaker speech signals and various types of noise.
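The two-stage structure (separation, then noise suppression) can be sketched as a simple composition of functions. In this sketch the speaker-conditioned separation engine is a pass-through stub (a real system would use a learned model), and noise suppression is shown as crude spectral gating; none of the function names or parameters come from the filing.
```python
# Pipeline sketch only: stubbed separation engine + simple spectral gating.
import numpy as np

def separate_target(mixture: np.ndarray, target_embedding: np.ndarray) -> np.ndarray:
    """Placeholder for the speaker-conditioned separation engine."""
    return mixture                      # stub: pass-through

def suppress_noise(speech: np.ndarray, frame: int = 512) -> np.ndarray:
    """Crude spectral gating: attenuate FFT bins below the median magnitude."""
    n = len(speech) // frame
    out = np.zeros(n * frame)
    for i in range(n):
        spec = np.fft.rfft(speech[i * frame:(i + 1) * frame])
        gate = np.abs(spec) > np.median(np.abs(spec))
        out[i * frame:(i + 1) * frame] = np.fft.irfft(spec * gate, n=frame)
    return out

def enhance(mixture: np.ndarray, target_embedding: np.ndarray) -> np.ndarray:
    """Separate the target speaker, then suppress residual noise."""
    return suppress_noise(separate_target(mixture, target_embedding))

enhanced = enhance(np.random.default_rng(4).normal(size=16000), np.zeros(192))
```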
-
Publication No.: US20210241776A1
Publication Date: 2021-08-05
Application No.: US17165180
Filing Date: 2021-02-02
Inventors: Ganesh SIVARAMAN, Elie KHOURY, Avrosh KUMAR
Abstract: Embodiments described herein provide for systems and methods for voice-based cross-channel enrollment and authentication. The systems control for and mitigate variations in audio signals received across any number of communication channels by training and employing a neural network architecture comprising a speaker verification neural network and a bandwidth expansion neural network. The bandwidth expansion neural network is trained on narrowband audio signals to generate estimated wideband audio signals corresponding to the narrowband audio signals. These estimated wideband audio signals may be fed into one or more downstream applications, such as the speaker verification neural network or an embedding extraction neural network. The speaker verification neural network can then compare and score inbound embeddings for a current call against enrolled embeddings, regardless of the channel used to receive the inbound signal or the enrollment signal.
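A structural sketch of the cross-channel flow: narrowband call audio is expanded to wideband before embedding extraction and scoring. Here the "expansion" is naive resampling and the extractor is a two-number stub; in the described system both stages would be trained neural networks, and every name and value below is an assumption.
```python
# Structural sketch: expand bandwidth -> extract embedding -> score (assumed stubs).
import numpy as np

def expand_bandwidth(narrowband_8k: np.ndarray) -> np.ndarray:
    """Stand-in for the bandwidth expansion network: naive 2x upsampling."""
    wideband = np.zeros(2 * len(narrowband_8k))
    wideband[::2] = narrowband_8k
    wideband[1::2] = narrowband_8k      # zero-order hold to a 16 kHz rate
    return wideband

def extract_embedding(wideband_16k: np.ndarray) -> np.ndarray:
    """Stand-in for the embedding extraction network."""
    return np.array([wideband_16k.mean(), wideband_16k.std()])

def verify(inbound_8k: np.ndarray, enrolled_emb: np.ndarray) -> float:
    """Cosine score between the inbound-call embedding and an enrolled embedding."""
    emb = extract_embedding(expand_bandwidth(inbound_8k))
    return float(emb @ enrolled_emb /
                 (np.linalg.norm(emb) * np.linalg.norm(enrolled_emb) + 1e-12))

enrolled = extract_embedding(expand_bandwidth(np.random.default_rng(5).normal(size=8000)))
print(round(verify(np.random.default_rng(5).normal(size=8000), enrolled), 3))  # 1.0 for identical audio
```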