CONTEXT-BASED SPEECH ENHANCEMENT
    Invention Publication

    Publication Number: US20230326477A1

    Publication Date: 2023-10-12

    Application Number: US18334641

    Application Date: 2023-06-14

    CPC classification number: G10L21/0232 G10L21/038 G10L21/02

    Abstract: A device to perform speech enhancement includes one or more processors configured to process image data to detect at least one of an emotion, a speaker characteristic, or a noise type. The one or more processors are also configured to generate context data based at least in part on the at least one of the emotion, the speaker characteristic, or the noise type. The one or more processors are further configured to obtain input spectral data based on an input signal. The input signal represents sound that includes speech. The one or more processors are also configured to process, using a multi-encoder transformer, the input spectral data and the context data to generate output spectral data that represents a speech enhanced version of the input signal.
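A minimal sketch of the fusion idea in this abstract: one encoder stream for the input spectral frames and a second for the context data, with the two encoder outputs fused so the decoder can attend over both. The single-head attention, the linear "encoders", and all shapes are illustrative assumptions, not the patented multi-encoder transformer.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention over the last axis."""
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def multi_encoder_enhance(spectral, context, rng):
    d = spectral.shape[-1]
    # Two separate projections stand in for the two encoders.
    w_spec = rng.standard_normal((d, d)) * 0.1
    w_ctx = rng.standard_normal((context.shape[-1], d)) * 0.1
    enc_spec = spectral @ w_spec             # spectral encoder output
    enc_ctx = context @ w_ctx                # context encoder output
    memory = np.vstack([enc_spec, enc_ctx])  # fuse both encoder streams
    # The decoder queries the fused memory to produce enhanced spectra.
    return attention(spectral, memory, memory)

rng = np.random.default_rng(0)
spectral = rng.standard_normal((4, 8))  # 4 frames, 8 frequency bins
context = rng.standard_normal((2, 3))   # e.g. emotion + noise-type vectors
out = multi_encoder_enhance(spectral, context, rng)
print(out.shape)  # same frame/bin layout as the input spectral data
```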

    USER SPEECH PROFILE MANAGEMENT
    Invention Application

    Publication Number: US20220180859A1

    Publication Date: 2022-06-09

    Application Number: US17115158

    Application Date: 2020-12-08

    Abstract: A device includes processors configured to determine, in a first power mode, whether an audio stream corresponds to speech of at least two talkers. The processors are configured to, based on determining that the audio stream corresponds to speech of at least two talkers, analyze, in a second power mode, audio feature data of the audio stream to generate a segmentation result. The processors are configured to perform a comparison of a plurality of user speech profiles to an audio feature data set of a plurality of audio feature data sets of a talker-homogeneous audio segment to determine whether the audio feature data set matches any of the user speech profiles. The processors are configured to, based on determining that the audio feature data set does not match any of the plurality of user speech profiles, generate a user speech profile based on the plurality of audio feature data sets.
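The match-or-enroll flow described above can be sketched as follows. The cosine-similarity comparison, the fixed threshold, and averaging a segment's feature sets into a new profile are illustrative assumptions standing in for the abstract's unspecified comparison and profile-generation steps.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_or_enroll(profiles, segment_features, threshold=0.8):
    """Return (profile index, profiles): match the segment against stored
    user speech profiles, or enroll a new profile when nothing matches."""
    query = segment_features.mean(axis=0)  # summarize the segment's feature sets
    for i, profile in enumerate(profiles):
        if cosine(query, profile) >= threshold:
            return i, profiles
    profiles = profiles + [query]          # enroll a new user speech profile
    return len(profiles) - 1, profiles

profiles = [np.array([1.0, 0.0, 0.0])]
# Feature sets from a talker-homogeneous segment, unlike profile 0:
segment = np.array([[0.0, 1.0, 0.1], [0.0, 0.9, 0.0]])
idx, profiles = match_or_enroll(profiles, segment)
print(idx, len(profiles))  # no match, so a second profile is enrolled
```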

    SOURCE SPEECH MODIFICATION BASED ON AN INPUT SPEECH CHARACTERISTIC

    Publication Number: US20240087597A1

    Publication Date: 2024-03-14

    Application Number: US17931755

    Application Date: 2022-09-13

    CPC classification number: G10L25/63 G10L25/21

    Abstract: A device includes one or more processors configured to process an input audio spectrum of input speech to detect a first characteristic associated with the input speech. The one or more processors are also configured to select, based at least in part on the first characteristic, one or more reference embeddings from among multiple reference embeddings. The one or more processors are further configured to process a representation of source speech, using the one or more reference embeddings, to generate an output audio spectrum of output speech.
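A toy sketch of the selection step: reference embeddings are keyed by a labelled characteristic (emotion labels here, matching the G10L25/63 classification), the references matching the detected characteristic are selected, and the source representation is shifted toward them. The labels, the two-dimensional embeddings, and the additive blend are hypothetical, not the patented processing.

```python
import numpy as np

# Hypothetical reference embeddings, labelled by speech characteristic.
REFERENCES = {
    "happy": np.array([0.9, 0.1]),
    "sad": np.array([0.1, 0.9]),
}

def select_references(detected, references):
    """Select reference embeddings whose label matches the detected
    characteristic of the input speech."""
    return [emb for label, emb in references.items() if label == detected]

def modify_source(source_repr, detected):
    refs = select_references(detected, REFERENCES)
    # Shift the source-speech representation toward the selected reference(s).
    return source_repr + np.mean(refs, axis=0)

out = modify_source(np.array([0.0, 0.0]), "sad")
print(out)  # source representation pulled toward the "sad" reference
```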

    CONTROLLABLE DIFFUSION-BASED SPEECH GENERATIVE MODEL

    Publication Number: US20250078810A1

    Publication Date: 2025-03-06

    Application Number: US18494640

    Application Date: 2023-10-25

    Abstract: Systems and techniques described herein relate to a diffusion-based model for generating converted speech from a source speech based on target speech. For example, a device may extract first prosody data from input data and may generate a content embedding based on the input data. The device may extract second prosody data from target speech, generate a speaker embedding from the target speech, and generate a prosody embedding from the second prosody data. The device may generate, based on the first prosody data and the prosody embedding, converted prosody data. The device may then generate a converted spectrogram based on the converted prosody data, the speaker embedding, and the content embedding.
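The conditioning pipeline above (source prosody + target prosody → converted prosody; speaker and content embeddings → converted spectrogram) can be sketched with toy stand-ins. The energy-contour "prosody", mean-pooled "speaker embedding", and multiplicative decode are assumptions; the actual model is diffusion-based and learned.

```python
import numpy as np

def extract_prosody(frames):
    # Stand-in for prosody extraction: a per-frame energy contour.
    return np.linalg.norm(frames, axis=-1)

def convert(source, target, content_dim=4):
    src_prosody = extract_prosody(source)   # first prosody data (from input)
    tgt_prosody = extract_prosody(target)   # second prosody data (from target)
    speaker_emb = target.mean(axis=0)       # speaker embedding from target speech
    # Converted prosody: source timing rescaled to the target's level
    # (an illustrative blend, not the patented conversion).
    conv_prosody = src_prosody * tgt_prosody.mean() / (src_prosody.mean() + 1e-8)
    content_emb = source[:, :content_dim]   # content embedding from input data
    # "Spectrogram" decoded from the three conditioning signals.
    return conv_prosody[:, None] * content_emb + speaker_emb[:content_dim]

rng = np.random.default_rng(1)
source = rng.standard_normal((5, 4))  # 5 source frames
target = rng.standard_normal((3, 4))  # 3 target-speech frames
spec = convert(source, target)
print(spec.shape)  # one converted frame per source frame
```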

    SPEAKER VERIFICATION BASED ON A SPEAKER TEMPLATE

    Publication Number: US20200286491A1

    Publication Date: 2020-09-10

    Application Number: US16296733

    Application Date: 2019-03-08

    Abstract: A device includes a processor configured to determine a feature vector based on an utterance and to determine a first embedding vector by processing the feature vector using a trained embedding network. The processor is configured to determine a first distance metric based on distances between the first embedding vector and each embedding vector of a speaker template. The processor is configured to determine, based on the first distance metric, that the utterance is verified to be from a particular user. The processor is configured to, based on a comparison of a first particular distance metric associated with the first embedding vector to a second distance metric associated with a first test embedding vector of the speaker template, generate an updated speaker template by adding the first embedding vector as a second test embedding vector and removing the first test embedding vector from test embedding vectors of the speaker template.
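The verify-then-update flow above can be sketched as follows: score a new embedding against every embedding in the speaker template, and on verification swap it in for the template's most distant test embedding. Mean Euclidean distance, the fixed threshold, and the centroid-based swap rule are illustrative assumptions for the abstract's first and second distance metrics.

```python
import numpy as np

def verify_and_update(template, new_emb, threshold=1.0):
    """Return (verified, template) after checking new_emb against the
    speaker template and possibly replacing its worst test embedding."""
    # First distance metric: distances to every template embedding.
    first_metric = float(np.mean([np.linalg.norm(new_emb - e) for e in template]))
    verified = first_metric <= threshold
    if verified:
        centroid = np.mean(template, axis=0)
        # Second metric: how far each stored test embedding sits from the
        # template centroid; replace the farthest one if new_emb is closer.
        second = [np.linalg.norm(e - centroid) for e in template]
        worst = int(np.argmax(second))
        if np.linalg.norm(new_emb - centroid) < second[worst]:
            template = template[:worst] + [new_emb] + template[worst + 1:]
    return verified, template

template = [np.array([0.0, 0.0]), np.array([0.2, 0.0]), np.array([2.0, 0.0])]
verified, template = verify_and_update(template, np.array([0.1, 0.0]))
print(verified)  # utterance verified; outlier test embedding replaced
```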

    SHARED SPEECH PROCESSING NETWORK FOR MULTIPLE SPEECH APPLICATIONS

    Publication Number: US20220165285A1

    Publication Date: 2022-05-26

    Application Number: US17650595

    Application Date: 2022-02-10

    Abstract: A device to process speech includes a speech processing network that includes an input configured to receive audio data corresponding to audio captured by one or more microphones. The speech processing network also includes one or more network layers configured to process the audio data to generate a network output. The speech processing network includes an output configured to be coupled to multiple speech application modules to enable the network output to be provided as a common input to each of the multiple speech application modules. A first speech application module corresponds to a speaker verifier, and a second speech application module corresponds to a speech recognition network.
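The sharing pattern described above, one backbone computed once and its output fanned out to multiple application modules, can be sketched as follows. The tanh projection backbone and the toy verifier/recognizer heads are placeholder assumptions, not the patented network.

```python
import numpy as np

def shared_backbone(audio, w):
    # One shared set of network layers produces a common network output.
    return np.tanh(audio @ w)

def speaker_verifier(features):
    return float(np.linalg.norm(features))  # toy verification score

def speech_recognizer(features):
    return int(np.argmax(features))         # toy token id

rng = np.random.default_rng(2)
audio = rng.standard_normal((8,))           # features of captured audio
w = rng.standard_normal((8, 6)) * 0.5
features = shared_backbone(audio, w)        # computed once
# Both speech application modules consume the same common input.
score = speaker_verifier(features)
token = speech_recognizer(features)
print(score >= 0.0, 0 <= token < 6)
```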
