DEEPFAKE DETECTION
    Invention Publication; status: Pending (Published)

    Publication No.: US20240355334A1

    Publication Date: 2024-10-24

    Application No.: US18388457

    Filing Date: 2023-11-09

    IPC Classification: G10L17/06

    CPC Classification: G10L17/06

    Abstract: Disclosed are systems and methods including software processes executed by a server that detect audio-based synthetic speech (“deepfakes”) in a call conversation. The server applies an NLP engine to transcribe call audio and analyze the text for anomalous patterns to detect synthetic speech. Additionally or alternatively, the server executes a voice “liveness” detection system for detecting machine speech, such as synthetic speech or replayed speech. The system performs phrase repetition detection, background change detection, and passive voice liveness detection in call audio signals to detect liveness of a speech utterance. An automated model update module allows the liveness detection model to adapt to new types of presentation attacks, based on human-provided feedback.
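    As one illustration of the phrase repetition detection mentioned in the abstract, a minimal sketch (hypothetical; not the patented implementation) might flag transcripts in which the same word n-gram recurs verbatim, one weak signal of replayed or scripted synthetic prompts:

    ```python
    from collections import Counter

    def repeated_phrases(transcript: str, n: int = 3, min_count: int = 2):
        """Return word n-grams occurring at least `min_count` times.

        Verbatim repetition of multi-word phrases can be one weak signal
        of replayed or scripted speech (illustrative heuristic only).
        """
        words = transcript.lower().split()
        ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
        counts = Counter(ngrams)
        return {" ".join(g): c for g, c in counts.items() if c >= min_count}

    # Toy example: the phrase "please confirm your account" repeats verbatim.
    hits = repeated_phrases(
        "please confirm your account now please confirm your account", n=4
    )
    ```

    A real system would of course combine such a transcript-level cue with the acoustic liveness scores described above rather than use it alone.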

    CROSS-LINGUAL SPEAKER RECOGNITION

    Publication No.: US20230137652A1

    Publication Date: 2023-05-04

    Application No.: US17977521

    Filing Date: 2022-10-31

    IPC Classification: G10L17/04 G10L17/10

    Abstract: Disclosed are systems and methods including computing processes executing machine-learning architectures for voice biometrics, in which the machine-learning architecture implements one or more language compensation functions. Embodiments include an embedding extraction engine (sometimes referred to as an “embedding extractor”) that extracts speaker embeddings and determines a speaker similarity score for determining or verifying the likelihood that speakers in different audio signals are the same speaker. The machine-learning architecture further includes a multi-class language classifier that determines a language likelihood score that indicates the likelihood that a particular audio signal includes a spoken language. The features and functions of the machine-learning architecture described herein may implement the various language compensation techniques to provide more accurate speaker recognition results, regardless of the language spoken by the speaker.
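    Speaker similarity scoring between extracted embeddings is commonly computed as a cosine similarity; a minimal sketch, assuming toy low-dimensional vectors in place of real extracted embeddings:

    ```python
    import math

    def cosine_similarity(a, b):
        """Cosine similarity between two speaker embedding vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # Toy 4-dimensional "embeddings"; real extractors emit hundreds of dims.
    enrolled = [0.2, 0.8, 0.1, 0.5]
    test_utt = [0.25, 0.75, 0.15, 0.45]
    score = cosine_similarity(enrolled, test_utt)  # near 1.0 => same speaker
    ```

    In a language-compensated system like the one described, this raw score would then be adjusted using the language likelihood scores before thresholding.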

    AUDIOVISUAL DEEPFAKE DETECTION
    Invention Application

    Publication No.: US20220121868A1

    Publication Date: 2022-04-21

    Application No.: US17503152

    Filing Date: 2021-10-15

    IPC Classification: G06K9/00 G06K9/62 G10L17/22

    Abstract: The embodiments execute machine-learning architectures for biometric-based identity recognition (e.g., speaker recognition, facial recognition) and deepfake detection (e.g., speaker deepfake detection, facial deepfake detection). The machine-learning architecture includes layers defining multiple scoring components, including sub-architectures for speaker deepfake detection, speaker recognition, facial deepfake detection, facial recognition, and a lip-sync estimation engine. The machine-learning architecture extracts and analyzes various types of low-level features from both audio data and visual data, combines the various scores, and uses the scores to determine the likelihood that the audiovisual data contains deepfake content and the likelihood that a claimed identity of a person in the video matches the identity of an expected or enrolled person. This enables the machine-learning architecture to perform identity recognition and verification, and deepfake detection, in an integrated fashion, for both audio data and visual data.
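    The combination of the multiple component scores can be sketched as a simple weighted linear fusion; the component names, weights, and threshold below are hypothetical placeholders, not values from the application:

    ```python
    def fuse_scores(scores: dict, weights: dict) -> float:
        """Weighted linear fusion of component scores in [0, 1].

        A higher fused score indicates stronger evidence that the
        audiovisual input is genuine and matches the claimed identity.
        """
        total_w = sum(weights.values())
        return sum(weights[k] * scores[k] for k in weights) / total_w

    component_scores = {
        "speaker_deepfake": 0.9,   # 1.0 = audio likely genuine
        "speaker_id": 0.8,         # identity match on voice
        "face_deepfake": 0.85,     # 1.0 = video likely genuine
        "face_id": 0.75,           # identity match on face
        "lip_sync": 0.7,           # audio/video synchrony estimate
    }
    weights = {k: 1.0 for k in component_scores}  # equal weights for sketch
    fused = fuse_scores(component_scores, weights)
    ```

    Real systems typically learn the fusion weights (or replace linear fusion with a trained classifier) rather than fixing them by hand as done here.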

    ROBUST SPOOFING DETECTION SYSTEM USING DEEP RESIDUAL NEURAL NETWORKS

    Publication No.: US20210233541A1

    Publication Date: 2021-07-29

    Application No.: US17155851

    Filing Date: 2021-01-22

    Abstract: Embodiments described herein provide for systems and methods for implementing a neural network architecture for spoof detection in audio signals. The neural network architecture contains layers defining embedding extractors that extract embeddings from input audio signals. Spoofprint embeddings are generated for particular system enrollees to detect attempts to spoof the enrollee's voice. Optionally, voiceprint embeddings are generated for the system enrollees to recognize the enrollee's voice. The voiceprints are extracted using features related to the enrollee's voice. The spoofprints are extracted using features related to how the enrollee speaks, as well as other artifacts. The spoofprints facilitate detection of efforts to fool voice biometrics using synthesized speech (e.g., deepfakes) that spoofs and emulates the enrollee's voice.
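    The dual-embedding decision logic can be illustrated with a toy sketch: a voiceprint match verifies identity while a separate spoofprint comparison flags a likely spoofing attempt. The cosine scoring, thresholds, and vectors here are illustrative assumptions, not the patented residual-network method:

    ```python
    import math

    def cosine(a, b):
        """Cosine similarity between two embedding vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    def verify(test_emb, voiceprint, spoofprint,
               id_threshold=0.8, spoof_threshold=0.8):
        """Accept only if the utterance matches the enrolled voiceprint
        AND does not resemble the enrollee's spoofprint pattern."""
        is_same_speaker = cosine(test_emb, voiceprint) >= id_threshold
        looks_spoofed = cosine(test_emb, spoofprint) >= spoof_threshold
        return is_same_speaker and not looks_spoofed

    # Toy 3-dimensional embeddings (real extractors emit far larger vectors).
    enrolled_voiceprint = [0.3, 0.7, 0.2]
    enrolled_spoofprint = [0.9, 0.1, 0.4]
    test_embedding = [0.32, 0.68, 0.22]
    accepted = verify(test_embedding, enrolled_voiceprint, enrolled_spoofprint)
    ```

    The key design idea from the abstract survives even in this sketch: identity and spoof evidence are scored against separate enrolled embeddings rather than a single template.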