Speech separation model training method and apparatus, storage medium and computer device

    Publication number: US11908455B2

    Publication date: 2024-02-20

    Application number: US17672565

    Filing date: 2022-02-15

    CPC classification number: G10L15/063 G10L15/05 G10L15/16

    Abstract: A speech separation model training method and apparatus, a computer-readable storage medium, and a computer device are provided, the method including: obtaining first audio and second audio, the first audio including target audio and having corresponding labeled audio, and the second audio including noise audio; obtaining an encoding model, an extraction model, and an initial estimation model; performing unsupervised training on the encoding model, the extraction model, and the estimation model according to the second audio, and adjusting model parameters of the extraction model and the estimation model; performing supervised training on the encoding model and the extraction model according to the first audio and its corresponding labeled audio, and adjusting a model parameter of the encoding model; and continuing to alternate the unsupervised training and the supervised training, so that the two overlap, until a training stop condition is met.
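The alternating semi-supervised schedule described in this abstract can be sketched in a few lines of Python. This is purely illustrative: each of the three models is reduced to a single scalar parameter, the gradient steps are fixed stand-ins for real backpropagation, and all function names and the stop condition are assumptions, not the patent's actual implementation.

```python
# Toy sketch of the alternating training schedule: unsupervised steps adjust
# the extraction and estimation parameters, supervised steps adjust the
# encoder parameter, and the two kinds of steps are interleaved.

def unsupervised_step(params, noisy_batch, lr=0.1):
    # Adjust only the extraction and estimation parameters (encoder frozen).
    params["extraction"] -= lr * 0.5   # stand-in for a real gradient
    params["estimation"] -= lr * 0.5
    return params

def supervised_step(params, labeled_batch, lr=0.1):
    # Adjust only the encoder parameter against the labeled target audio.
    params["encoder"] -= lr * 0.5
    return params

def train(first_audio, second_audio, max_rounds=10):
    params = {"encoder": 1.0, "extraction": 1.0, "estimation": 1.0}
    for _ in range(max_rounds):
        params = unsupervised_step(params, second_audio)  # noise audio
        params = supervised_step(params, first_audio)     # labeled audio
        if all(abs(v) < 0.2 for v in params.values()):    # toy stop condition
            break
    return params
```

The point of the sketch is the control flow: the two training modes overlap round by round rather than running to completion one after the other.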

    Voice synthesis method, model training method, device and computer device

    Publication number: US12014720B2

    Publication date: 2024-06-18

    Application number: US16999989

    Filing date: 2020-08-21

    CPC classification number: G10L13/00 G10L19/02

    Abstract: This application relates to a speech synthesis method and apparatus, a model training method and apparatus, and a computer device. The method includes: obtaining to-be-processed linguistic data; encoding the linguistic data to obtain encoded linguistic data; obtaining an embedded vector for speech feature conversion, the embedded vector being generated according to a residual between synthesized reference speech data and reference speech data that correspond to the same reference linguistic data; and decoding the encoded linguistic data according to the embedded vector, to obtain target synthesized speech data on which the speech feature conversion is performed. The solution provided in this application can prevent the quality of the synthesized speech from being affected by semantic features in the mel-frequency cepstrum.
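The residual-based conditioning described above can be illustrated with a minimal sketch. All names here are hypothetical, the "encoder" and "decoder" are toy arithmetic placeholders, and the additive conditioning is just one simple way an embedding could influence decoding; the patent does not specify these details.

```python
def residual_embedding(synth_ref, real_ref):
    # Element-wise residual between synthesized reference speech frames and
    # the real reference speech frames for the same linguistic input.
    return [r - s for s, r in zip(synth_ref, real_ref)]

def encode(linguistic):
    # Toy encoder: scales each linguistic feature (stand-in for a real model).
    return [2.0 * x for x in linguistic]

def decode(encoded, embedding):
    # Toy decoder conditioned on the residual embedding: here, additive
    # conditioning with the mean of the embedding as a bias.
    bias = sum(embedding) / len(embedding)
    return [e + bias for e in encoded]
```

The key idea the sketch preserves is that the embedding is derived from a residual (what the synthesizer got wrong on reference data), not from the target speech itself.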

    Inter-channel feature extraction method, audio separation method and apparatus, and computing device

    Publication number: US11908483B2

    Publication date: 2024-02-20

    Application number: US17401125

    Filing date: 2021-08-12

    CPC classification number: G10L19/008 G10L25/03 G10L25/30 H04S3/02 H04S5/00

    Abstract: This application relates to a method of extracting an inter-channel feature from a multi-channel multi-sound source mixed audio signal, performed at a computing device. The method includes: transforming one channel component of a multi-channel multi-sound source mixed audio signal into a single-channel multi-sound source mixed audio representation in a feature space; performing a two-dimensional dilated convolution on the multi-channel multi-sound source mixed audio signal to extract inter-channel features; performing a feature fusion on the single-channel multi-sound source mixed audio representation and the inter-channel features; estimating respective weights of the sound sources in the single-channel multi-sound source mixed audio representation based on the fused multi-channel multi-sound source mixed audio feature; obtaining respective representations of the sound sources according to the single-channel multi-sound source mixed audio representation and the respective weights; and transforming the respective representations of the sound sources into respective audio signals of the sound sources.
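The central operation in this abstract, a two-dimensional dilated convolution over a channel-by-time signal, can be sketched directly. This is a bare, unoptimized reference implementation under assumed conventions (valid padding, dilation applied along the time axis only); the kernel values and signal shapes are illustrative, not from the patent.

```python
def dilated_conv2d(signal, kernel, dilation):
    # signal: 2-D list, channels x time samples.
    # kernel: 2-D list, spanning kc channels and kt time taps.
    # The dilation spaces out the kernel's time taps, widening the receptive
    # field along time without adding parameters; across channels the kernel
    # is applied densely, which is what picks up inter-channel structure.
    C, T = len(signal), len(signal[0])
    kc, kt = len(kernel), len(kernel[0])
    time_span = (kt - 1) * dilation
    out = []
    for c in range(C - kc + 1):
        row = []
        for t in range(T - time_span):
            acc = 0.0
            for i in range(kc):
                for j in range(kt):
                    acc += kernel[i][j] * signal[c + i][t + j * dilation]
            row.append(acc)
        out.append(row)
    return out
```

For a 2-channel signal and a 2x2 kernel with dilation 2, each output value sums samples from both channels at time offsets 0 and 2, so the output mixes information across channels, the "inter-channel feature" the abstract refers to.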

    TRAINING METHOD AND DEVICE FOR AUDIO SEPARATION NETWORK, AUDIO SEPARATION METHOD AND DEVICE, AND MEDIUM

    Publication number: US20220180882A1

    Publication date: 2022-06-09

    Application number: US17682399

    Filing date: 2022-02-28

    Abstract: A method of training an audio separation network is provided. The method includes obtaining a first separation sample set, the first separation sample set including at least two types of audio with dummy labels, obtaining a first sample set by performing interpolation on the first separation sample set based on perturbation data, obtaining a second separation sample set by separating the first sample set using an unsupervised network, determining losses of second separation samples in the second separation sample set, and adjusting network parameters of the unsupervised network based on the losses of the second separation samples, such that a first loss of a first separation result outputted by an adjusted unsupervised network meets a convergence condition.
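The interpolate-then-separate loop in this abstract can be sketched with scalars standing in for audio. Everything here is an assumption for illustration: the "unsupervised network" is collapsed to a single gain, the loss is a mean-squared error against the interpolated samples, and the interpolation weight `alpha` is fixed; the real method operates on waveforms with a neural separator.

```python
def interpolate(samples, perturbation, alpha=0.5):
    # Build interpolated training inputs from the first separation sample
    # set and the perturbation data (a mixup-style combination).
    return [alpha * s + (1 - alpha) * p for s, p in zip(samples, perturbation)]

def train_separator(dummy_labeled, perturbation, lr=0.05, tol=1e-4):
    # Toy "network": one gain applied to each interpolated input. The loss
    # on the second separation samples drives the gain by gradient descent
    # until the convergence condition (loss < tol) is met.
    mixed = interpolate(dummy_labeled, perturbation)
    gain = 0.0
    loss = float("inf")
    for _ in range(200):
        preds = [gain * m for m in mixed]
        loss = sum((p - m) ** 2 for p, m in zip(preds, mixed)) / len(mixed)
        if loss < tol:
            break
        grad = sum(2 * (p - m) * m for p, m in zip(preds, mixed)) / len(mixed)
        gain -= lr * grad
    return gain, loss
```

The sketch keeps the abstract's structure: interpolate the sample set, separate with the current network, score the separation, and adjust parameters until the loss converges.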

    Speech keyword recognition method and apparatus, computer-readable storage medium, and computer device

    Publication number: US11222623B2

    Publication date: 2022-01-11

    Application number: US16884350

    Filing date: 2020-05-27

    Inventors: Jun Wang; Dan Su; Dong Yu

    Abstract: A speech keyword recognition method includes: obtaining first speech segments based on a to-be-recognized speech signal; obtaining first probabilities respectively corresponding to the first speech segments by using a preset first classification model. A first probability of a first speech segment is obtained from probabilities of the first speech segment respectively corresponding to pre-determined word segmentation units of a pre-determined keyword. The method also includes obtaining second speech segments based on the to-be-recognized speech signal, and respectively generating first prediction characteristics of the second speech segments based on first probabilities of first speech segments that correspond to each second speech segment; performing classification based on the first prediction characteristics by using a preset second classification model, to obtain second probabilities respectively corresponding to the second speech segments related to the pre-determined keyword; and determining, based on the second probabilities, whether the pre-determined keyword exists in the to-be-recognized speech signal.
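The two-stage pipeline in this abstract can be sketched end to end. Both classifiers below are deliberately trivial stand-ins (exact string match for the first stage, a threshold on averaged features for the second); the segment windows, feature construction, and threshold are all assumptions, since the patent's models are learned classifiers over acoustic features.

```python
def first_stage(segments, units):
    # Toy first classification model: per-segment probability for each
    # word-segmentation unit of the keyword (here, exact-match scoring).
    return [[1.0 if seg == u else 0.0 for u in units] for seg in segments]

def second_stage(features, threshold=0.5):
    # Toy second classification model: score each second-segment feature
    # vector and threshold it into a keyword / non-keyword decision.
    scores = [sum(f) / len(f) for f in features]
    return [s >= threshold for s in scores]

def keyword_present(segments, keyword_units):
    probs = first_stage(segments, keyword_units)
    # Prediction features for the second stage, derived from the
    # first-stage probabilities (here, the max probability per segment).
    feats = [[max(p)] for p in probs]
    return any(second_stage(feats))
```

The structural point matches the abstract: the second classifier never sees the audio directly, only prediction features built from the first classifier's per-unit probabilities.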
