CONTEXT-BASED SPEECH ENHANCEMENT
    Invention Publication

    Publication No.: US20230326477A1

    Publication Date: 2023-10-12

    Application No.: US18334641

    Application Date: 2023-06-14

    CPC classification number: G10L21/0232 G10L21/038 G10L21/02

    Abstract: A device to perform speech enhancement includes one or more processors configured to process image data to detect at least one of an emotion, a speaker characteristic, or a noise type. The one or more processors are also configured to generate context data based at least in part on the at least one of the emotion, the speaker characteristic, or the noise type. The one or more processors are further configured to obtain input spectral data based on an input signal. The input signal represents sound that includes speech. The one or more processors are also configured to process, using a multi-encoder transformer, the input spectral data and the context data to generate output spectral data that represents a speech enhanced version of the input signal.
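    The abstract describes two encoder paths, one over the input spectral frames and one over the context data, whose outputs are fused before producing the enhanced spectrum. Below is a minimal PyTorch sketch of that multi-encoder shape; the layer sizes, the cross-attention fusion, and the masking output head are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch of a multi-encoder transformer for context-conditioned
# speech enhancement. All module names and dimensions are assumptions.
import torch
import torch.nn as nn

class MultiEncoderEnhancer(nn.Module):
    """Two encoders (spectral + context) fused by cross-attention."""
    def __init__(self, n_bins=257, ctx_dim=32, d_model=256, n_heads=4):
        super().__init__()
        self.spec_proj = nn.Linear(n_bins, d_model)  # noisy magnitude frames
        self.ctx_proj = nn.Linear(ctx_dim, d_model)  # context vectors -> tokens
        spec_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        ctx_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.spec_encoder = nn.TransformerEncoder(spec_layer, num_layers=4)
        self.ctx_encoder = nn.TransformerEncoder(ctx_layer, num_layers=2)
        # Decoder layers let spectral tokens attend to the encoded context.
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.fusion = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.mask_head = nn.Linear(d_model, n_bins)

    def forward(self, spec, ctx):
        # spec: (batch, frames, n_bins) noisy magnitude spectrogram
        # ctx:  (batch, n_ctx, ctx_dim) context embeddings
        s = self.spec_encoder(self.spec_proj(spec))
        c = self.ctx_encoder(self.ctx_proj(ctx))
        fused = self.fusion(tgt=s, memory=c)
        mask = torch.sigmoid(self.mask_head(fused))  # per-bin gain in [0, 1]
        return mask * spec                           # enhanced spectrum

model = MultiEncoderEnhancer()
noisy = torch.rand(1, 100, 257)   # 100 STFT frames, 257 frequency bins
context = torch.rand(1, 3, 32)    # e.g. emotion / speaker / noise-type vectors
enhanced = model(noisy, context)  # (1, 100, 257)
```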

    SOURCE SPEECH MODIFICATION BASED ON AN INPUT SPEECH CHARACTERISTIC

    Publication No.: US20240087597A1

    Publication Date: 2024-03-14

    Application No.: US17931755

    Application Date: 2022-09-13

    CPC classification number: G10L25/63 G10L25/21

    Abstract: A device includes one or more processors configured to process an input audio spectrum of input speech to detect a first characteristic associated with the input speech. The one or more processors are also configured to select, based at least in part on the first characteristic, one or more reference embeddings from among multiple reference embeddings. The one or more processors are further configured to process a representation of source speech, using the one or more reference embeddings, to generate an output audio spectrum of output speech.
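    The data flow here is a detector that infers a characteristic from the input spectrum, a bank of reference embeddings indexed by that characteristic, and a generator conditioned on the selected embedding. A toy PyTorch sketch of that selection-and-conditioning flow follows; the mean-pooled classifier, embedding table, and conditioning network are all hypothetical stand-ins.

```python
# Illustrative sketch of characteristic-driven reference-embedding selection.
# Every layer and dimension below is an assumption, not from the patent.
import torch
import torch.nn as nn

class ReferenceSelector(nn.Module):
    def __init__(self, n_bins=257, n_characteristics=8, emb_dim=128):
        super().__init__()
        # Detect a characteristic (e.g. an emotion class) from the spectrum.
        self.detector = nn.Sequential(
            nn.Linear(n_bins, 256), nn.ReLU(), nn.Linear(256, n_characteristics)
        )
        # One learned reference embedding per characteristic.
        self.references = nn.Embedding(n_characteristics, emb_dim)

    def forward(self, input_spec):
        # input_spec: (batch, frames, n_bins); pool over time, then classify.
        logits = self.detector(input_spec.mean(dim=1))
        char_id = logits.argmax(dim=-1)    # detected first characteristic
        return self.references(char_id)    # selected embedding, (batch, emb_dim)

class ConditionedModifier(nn.Module):
    def __init__(self, n_bins=257, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins + emb_dim, 512), nn.ReLU(), nn.Linear(512, n_bins)
        )

    def forward(self, source_spec, ref_emb):
        # Broadcast the reference embedding across source frames, then map.
        cond = ref_emb.unsqueeze(1).expand(-1, source_spec.size(1), -1)
        return self.net(torch.cat([source_spec, cond], dim=-1))

selector, modifier = ReferenceSelector(), ConditionedModifier()
input_spec = torch.rand(1, 80, 257)    # speech carrying the characteristic
source_spec = torch.rand(1, 120, 257)  # source speech to be modified
output_spec = modifier(source_spec, selector(input_spec))  # (1, 120, 257)
```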

    CONTROLLABLE DIFFUSION-BASED SPEECH GENERATIVE MODEL

    Publication No.: US20250078810A1

    Publication Date: 2025-03-06

    Application No.: US18494640

    Application Date: 2023-10-25

    Abstract: Systems and techniques described herein relate to a diffusion-based model for generating converted speech from a source speech based on target speech. For example, a device may extract first prosody data from input data and may generate a content embedding based on the input data. The device may extract second prosody data from target speech, generate a speaker embedding from the target speech, and generate a prosody embedding from the second prosody data. The device may generate, based on the first prosody data and the prosody embedding, converted prosody data. The device may then generate a converted spectrogram based on the converted prosody data, the speaker embedding, and the content embedding.
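    As a rough illustration of how the content, speaker, and converted-prosody embeddings might condition a reverse-diffusion loop, the sketch below starts from noise and iteratively denoises a spectrogram. The denoiser architecture, the simplistic update rule, and the stubbed upstream embeddings are all assumptions; the abstract fixes only which inputs feed which stage.

```python
# Hypothetical sketch of the conditioning flow in a diffusion-based voice
# converter. Every module below is a placeholder architecture.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Predicts noise on a spectrogram given three conditioning vectors."""
    def __init__(self, n_bins=80, cond_dim=3 * 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins + cond_dim + 1, 512), nn.ReLU(), nn.Linear(512, n_bins)
        )

    def forward(self, noisy_spec, cond, t):
        # noisy_spec: (batch, frames, n_bins); cond: (batch, cond_dim); t: step
        frames = noisy_spec.size(1)
        c = cond.unsqueeze(1).expand(-1, frames, -1)
        ts = torch.full_like(noisy_spec[..., :1], float(t))  # step as a feature
        return self.net(torch.cat([noisy_spec, c, ts], dim=-1))

def convert(content_emb, speaker_emb, converted_prosody, steps=50):
    """Toy reverse-diffusion loop: start from noise and iteratively denoise a
    spectrogram conditioned on content, speaker, and converted prosody."""
    denoiser = Denoiser()
    cond = torch.cat([content_emb, speaker_emb, converted_prosody], dim=-1)
    spec = torch.randn(content_emb.size(0), 120, 80)  # pure-noise spectrogram
    for t in reversed(range(steps)):
        predicted_noise = denoiser(spec, cond, t)
        spec = spec - predicted_noise / steps         # crude update rule
    return spec

# Upstream stages stubbed as random vectors: a content embedding from the
# input, a speaker embedding from the target speech, and converted prosody
# built from the input prosody plus the target's prosody embedding.
content_emb = torch.rand(1, 128)
speaker_emb = torch.rand(1, 128)
converted_prosody = torch.rand(1, 128)
converted_spec = convert(content_emb, speaker_emb, converted_prosody)  # (1, 120, 80)
```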
