CHANNEL-COMPENSATED LOW-LEVEL FEATURES FOR SPEAKER RECOGNITION

    Publication Number: US20230290357A1

    Publication Date: 2023-09-14

    Application Number: US18321353

    Filing Date: 2023-05-22

    CPC classification number: G10L17/20 G10L17/02 G10L17/04 G10L17/18 G10L19/028

    Abstract: A system for generating channel-compensated features of a speech signal includes a channel noise simulator that degrades the speech signal, a feed forward convolutional neural network (CNN) that generates channel-compensated features of the degraded speech signal, and a loss function that computes a difference between the channel-compensated features and handcrafted features for the same raw speech signal. Each loss result may be used to update connection weights of the CNN until a predetermined threshold loss is satisfied, and the CNN may be used as a front-end for a deep neural network (DNN) for speaker recognition/verification. The DNN may include convolutional layers, a bottleneck features layer, multiple fully-connected layers, and an output layer. The bottleneck features may be used to update connection weights of the convolutional layers, and dropout may be applied to the convolutional layers.
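
    A minimal sketch of the training loop described above, assuming a PyTorch implementation: a small 1-D CNN front-end is trained so that its output on a channel-degraded waveform approximates handcrafted features computed from the clean signal. The layer sizes, the toy channel simulator, and the random stand-in tensors are assumptions for illustration, not the patented design.

        # Hypothetical sketch, not the patented implementation: a 1-D CNN front-end
        # trained so its output on a channel-degraded waveform approximates
        # handcrafted features (random stand-in targets here) of the clean waveform.
        import torch
        import torch.nn as nn

        class ChannelCompensatedFrontEnd(nn.Module):
            def __init__(self, n_feats: int = 40):
                super().__init__()
                # Strided 1-D convolutions map raw samples to frame-level features.
                self.net = nn.Sequential(
                    nn.Conv1d(1, 64, kernel_size=400, stride=160, padding=200),  # ~25 ms window, 10 ms hop
                    nn.ReLU(),
                    nn.Conv1d(64, 64, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.Conv1d(64, n_feats, kernel_size=1),
                )

            def forward(self, wav: torch.Tensor) -> torch.Tensor:
                # wav: (batch, samples) -> (batch, n_feats, frames)
                return self.net(wav.unsqueeze(1))

        def simulate_channel(wav: torch.Tensor) -> torch.Tensor:
            # Toy channel degradation: gain change plus additive noise.
            return 0.8 * wav + 0.01 * torch.randn_like(wav)

        model = ChannelCompensatedFrontEnd()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        clean = torch.randn(8, 16000)      # stand-in for 1 s of raw speech per utterance
        target = torch.randn(8, 40, 101)   # stand-in for handcrafted features of `clean`

        for step in range(10):             # in practice, iterate until a loss threshold is met
            pred = model(simulate_channel(clean))
            loss = loss_fn(pred, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    Once trained, the front-end's output (or a bottleneck layer on top of it) would feed the speaker-recognition DNN in place of the handcrafted features.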

    CROSS-LINGUAL SPEAKER RECOGNITION

    Publication Number: US20230137652A1

    Publication Date: 2023-05-04

    Application Number: US17977521

    Filing Date: 2022-10-31

    Abstract: Disclosed are systems and methods including computing-processes executing machine-learning architectures for voice biometrics, in which the machine-learning architecture implements one or more language compensation functions. Embodiments include an embedding extraction engine (sometimes referred to as an “embedding extractor”) that extracts speaker embeddings and determines a speaker similarity score for determining or verifying the likelihood that speakers in different audio signals are the same speaker. The machine-learning architecture further includes a multi-class language classifier that determines a language likelihood score that indicates the likelihood that a particular audio signal includes a spoken language. The features and functions of the machine-learning architecture described herein may implement the various language compensation techniques to provide more accurate speaker recognition results, regardless of the language spoken by the speaker.
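
    One way the two scores described above could be combined is sketched below; the cosine scoring, the single-layer softmax language classifier applied to embeddings, and the additive mismatch penalty are assumptions for illustration, not the claimed architecture.

        # Hypothetical sketch: a speaker similarity score from cosine similarity of
        # embeddings, adjusted by a language classifier's estimate of whether the
        # two utterances are in the same language (one possible compensation scheme).
        import numpy as np

        def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        def language_probs(embedding: np.ndarray, W: np.ndarray) -> np.ndarray:
            # Multi-class language classifier reduced to a single softmax layer (stand-in).
            logits = W @ embedding
            e = np.exp(logits - logits.max())
            return e / e.sum()

        def compensated_score(emb_a, emb_b, W, penalty=0.1):
            score = cosine_similarity(emb_a, emb_b)
            p_a, p_b = language_probs(emb_a, W), language_probs(emb_b, W)
            p_same_language = float(p_a @ p_b)   # chance both utterances share a language
            # Relax the score when a cross-lingual comparison is likely.
            return score + penalty * (1.0 - p_same_language)

        rng = np.random.default_rng(0)
        emb_enroll, emb_test = rng.standard_normal(192), rng.standard_normal(192)
        W = rng.standard_normal((5, 192))        # 5 candidate languages (assumed)
        print(compensated_score(emb_enroll, emb_test, W))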

    CALLER VERIFICATION VIA CARRIER METADATA

    Publication Number: US20230014180A1

    Publication Date: 2023-01-19

    Application Number: US17948991

    Filing Date: 2022-09-20

    Abstract: Embodiments described herein provide for passive caller verification and/or passive fraud risk assessments for calls to customer call centers. Systems and methods may be used in real time as a call is coming into a call center. An analytics server of an analytics service looks at the purported Caller ID of the call, as well as the unaltered carrier metadata, which the analytics server then uses to generate or retrieve one or more probability scores using one or more lookup tables and/or a machine-learning model. A probability score indicates the likelihood that information derived using the Caller ID information has occurred or should occur given the carrier metadata received with the inbound call. The one or more probability scores may be used to generate a risk score for the current call that indicates the probability of the call being valid (e.g., originated from a verified caller or calling device, non-fraudulent).
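
    A rough illustration of the scoring flow described above: the carrier-conditional probabilities in the lookup table, the metadata fields, and the geometric-mean combination are invented for the example and are not taken from the embodiments.

        # Hypothetical sketch: estimate how plausible the claimed Caller ID is given
        # the unaltered carrier metadata, using a lookup table of (made-up) historical
        # rates, then fold the probability scores into a single validity/risk score.
        from dataclasses import dataclass

        # P(observed carrier | line type implied by the Caller ID) -- illustrative values.
        CARRIER_GIVEN_LINE_TYPE = {
            ("mobile", "Carrier A"): 0.62,
            ("mobile", "Carrier B"): 0.30,
            ("voip",   "Carrier A"): 0.05,
            ("voip",   "Carrier B"): 0.55,
        }

        @dataclass
        class InboundCall:
            claimed_line_type: str    # derived from the purported Caller ID
            carrier: str              # from the unaltered carrier metadata
            originating_switch_ok: bool

        def probability_scores(call: InboundCall) -> list:
            p_carrier = CARRIER_GIVEN_LINE_TYPE.get((call.claimed_line_type, call.carrier), 0.01)
            p_switch = 0.9 if call.originating_switch_ok else 0.1
            return [p_carrier, p_switch]

        def risk_score(call: InboundCall) -> float:
            # Higher = more likely the call is valid; geometric mean of the scores.
            scores = probability_scores(call)
            product = 1.0
            for s in scores:
                product *= s
            return product ** (1.0 / len(scores))

        print(risk_score(InboundCall("voip", "Carrier A", originating_switch_ok=False)))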

    LIMITING IDENTITY SPACE FOR VOICE BIOMETRIC AUTHENTICATION

    Publication Number: US20220392453A1

    Publication Date: 2022-12-08

    Application Number: US17832404

    Filing Date: 2022-06-03

    Abstract: Disclosed are systems and methods including computing-processes executing machine-learning architectures that extract vectors representing disparate types of data and output predicted identities of users accessing computing services, without express identity assertions, across multiple computing services, analyzing data from multiple modalities, for various user devices, and agnostic to the architectures hosting the disparate computing services. The system invokes the identification operations of the machine-learning architecture, which extracts biometric embeddings from biometric data and context embeddings representing all or most of the types of metadata features analyzed by the system. The context embeddings help identify a subset of potentially matching identities of possible users, which limits the number of biometric-prints the system compares against an inbound biometric embedding for authentication. The types of extracted features originate from multiple modalities, including metadata from data communications, audio signals, and images. In this way, the embodiments apply a multi-modality machine-learning architecture.
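
    The shortlist-then-verify flow can be sketched as follows; the embedding dimensions, shortlist size, cosine scoring, and acceptance threshold are assumptions for illustration rather than details of the disclosed system.

        # Hypothetical sketch: a context embedding narrows the identity space to a
        # shortlist, and the inbound biometric embedding is compared only against the
        # biometric-prints of that shortlist.
        import numpy as np

        rng = np.random.default_rng(1)
        N_IDENTITIES, CTX_DIM, BIO_DIM = 1000, 64, 192
        context_prints = rng.standard_normal((N_IDENTITIES, CTX_DIM))     # per-identity context prints
        biometric_prints = rng.standard_normal((N_IDENTITIES, BIO_DIM))   # per-identity voice prints

        def cosine(matrix: np.ndarray, vector: np.ndarray) -> np.ndarray:
            return matrix @ vector / (np.linalg.norm(matrix, axis=-1) * np.linalg.norm(vector) + 1e-9)

        def identify(ctx_emb: np.ndarray, bio_emb: np.ndarray, shortlist_size=20, threshold=0.5):
            # Step 1: the context embedding limits the identity space.
            candidates = np.argsort(cosine(context_prints, ctx_emb))[-shortlist_size:]
            # Step 2: biometric comparison against the shortlisted prints only.
            scores = cosine(biometric_prints[candidates], bio_emb)
            best = int(candidates[int(np.argmax(scores))])
            return (best, float(scores.max())) if scores.max() >= threshold else (None, float(scores.max()))

        print(identify(rng.standard_normal(CTX_DIM), rng.standard_normal(BIO_DIM)))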

    UNSUPERVISED KEYWORD SPOTTING AND WORD DISCOVERY FOR FRAUD ANALYTICS

    Publication Number: US20220301554A1

    Publication Date: 2022-09-22

    Application Number: US17833674

    Filing Date: 2022-06-06

    Inventor: Hrishikesh Rao

    Abstract: Embodiments described herein provide for a computer that detects one or more keywords of interest using acoustic features, to detect or query commonalities across multiple fraud calls. Embodiments described herein may implement unsupervised keyword spotting (UKWS) or unsupervised word discovery (UWD) in order to identify commonalities across a set of calls, where both UKWS and UWD employ Gaussian Mixture Models (GMM) and one or more dynamic time-warping algorithms. A user may indicate a training exemplar or occurrence of call-specific information, referred to herein as “a named entity,” such as a person's name, an account number, account balance, or order number. The computer may perform a redaction process that computationally nullifies the import of the named entity in the modeling processes described herein.
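
    A rough sketch of the GMM-plus-dynamic-time-warping matching described above, with random stand-in features and invented dimensions: frames are mapped to GMM posteriorgrams and a user-marked query exemplar is aligned against another call with DTW; the redaction step would additionally drop or nullify the frames covering a named entity before modeling.

        # Hypothetical sketch, not the patented method: GMM posteriorgrams plus DTW
        # to score how closely a query exemplar matches another call.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(2)
        features_corpus = rng.standard_normal((2000, 20))   # stand-in frame features pooled from calls
        gmm = GaussianMixture(n_components=8, random_state=0).fit(features_corpus)

        def posteriorgram(frames: np.ndarray) -> np.ndarray:
            # Per-frame posterior probabilities over the GMM components.
            return gmm.predict_proba(frames)

        def dtw_distance(q: np.ndarray, c: np.ndarray) -> float:
            # Classic DTW over Euclidean distances between posterior vectors;
            # lower values mean the query and call are acoustically more similar.
            nq, nc = len(q), len(c)
            D = np.full((nq + 1, nc + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, nq + 1):
                for j in range(1, nc + 1):
                    cost = np.linalg.norm(q[i - 1] - c[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[nq, nc] / (nq + nc)

        query_frames = rng.standard_normal((30, 20))    # user-marked keyword exemplar
        call_frames = rng.standard_normal((300, 20))    # frames from another call
        print(dtw_distance(posteriorgram(query_frames), posteriorgram(call_frames)))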
