Abstract:
An approach for improving the quality of speech synthesized using analysis-by-synthesis (ABS) coders is presented. Perceptual quality in ABS-type speech coding (e.g. CELP) can be unstable because the degree of periodicity in a voiced speech signal may vary significantly across different segments of the voiced speech. The present invention therefore uses a voicing index, which indicates the degree of periodicity of the speech signal, to control and improve ABS-type speech coding. The voicing index may improve quality stability by controlling the encoder and/or decoder in: fixed-codebook (301) short-term enhancement, including the spectral tilt; the perceptual weighting filter; sub-fixed-codebook determination; LPC interpolation (304); fixed-codebook pitch enhancement; post-pitch enhancement; noise injection into the high-frequency band at the decoder; the LTP sync window; signal decomposition; etc.
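Since the abstract centers on a voicing index that measures the degree of periodicity, a minimal sketch of how such an index could be computed is given below. The normalized-autocorrelation formulation, the function name, and the frame and lag values are illustrative assumptions, not the patent's definition:

```python
import numpy as np

def voicing_index(frame: np.ndarray, pitch_lag: int) -> float:
    """Degree of periodicity as the normalized autocorrelation of the
    frame at the candidate pitch lag: near 1 for strongly periodic
    (voiced) segments, near 0 for noise-like ones."""
    x = frame[pitch_lag:]
    y = frame[:-pitch_lag]
    denom = np.sqrt(np.dot(x, x) * np.dot(y, y))
    if denom == 0.0:
        return 0.0
    return max(0.0, float(np.dot(x, y) / denom))

# A perfectly periodic signal scores near 1; white noise scores near 0.
t = np.arange(320)
voiced = np.sin(2 * np.pi * t / 80)                    # period of 80 samples
noise = np.random.default_rng(0).standard_normal(320)
print(voicing_index(voiced, 80))   # close to 1.0
print(voicing_index(noise, 80))    # close to 0.0
```

An encoder could, for example, switch to stronger LPC interpolation smoothing or a different sub-fixed codebook whenever this index falls below a threshold.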
Abstract:
Vector quantization techniques reduce the effective bit rate to 600 bps while maintaining intelligible speech. Four frames of speech are combined into one frame (104). The system quantizes mixed excitation linear prediction (MELP) speech model parameters for the combined frame to achieve a fixed rate of 600 bps (104). The system allows voice communication over bandwidth-constrained channels.
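At the core of such a coder is a vector-quantization codebook search. Assuming the standard 22.5 ms MELP frame, four frames span 90 ms, so a fixed 600 bps rate leaves 600 × 0.090 = 54 bits for the whole combined frame. A toy sketch of the nearest-neighbor search that spends such bits follows; the codebook contents and dimensions are invented for illustration:

```python
import numpy as np

def vq_quantize(vec, codebook):
    """Return the index of the nearest codebook entry under squared
    Euclidean distortion, i.e. the bits actually transmitted."""
    dists = np.sum((codebook - vec) ** 2, axis=1)
    return int(np.argmin(dists))

# Toy illustration: a 2-bit codebook (4 entries) for a
# 3-dimensional parameter vector.
codebook = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
idx = vq_quantize(np.array([0.9, 0.1, 0.0]), codebook)
print(idx)  # → 1 (closest entry)
```

A real 600 bps coder would use trained, often multi-stage, codebooks per parameter group rather than this toy table.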
Abstract:
Encoding a sequence of digital speech samples into a bit stream includes dividing the digital speech samples into one or more frames, computing model parameters for a frame, and quantizing the model parameters to produce pitch bits conveying pitch information, voicing bits conveying voicing information, and gain bits conveying signal level information. One or more of the pitch bits are combined with one or more of the voicing bits and one or more of the gain bits to create a first parameter codeword that is encoded with an error control code to produce a first FEC codeword that is included in a bit stream for the frame. The process may be reversed to decode the bit stream.
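The bit-combining and FEC step described above can be sketched with a toy code. The sketch below combines the most significant pitch, voicing, and gain bits into a 4-bit parameter codeword and protects it with a Hamming(7,4) code; both the bit allocation and the choice of Hamming(7,4) are illustrative stand-ins, since the abstract does not name the actual error control code:

```python
import numpy as np

# Systematic Hamming(7,4) generator matrix: 4 data bits -> 7-bit
# codeword able to correct any single bit error (a stand-in for the
# stronger FEC a real vocoder would likely use).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def make_parameter_codeword(pitch_bits, voicing_bits, gain_bits):
    """Combine the most perceptually important bits of each parameter
    into one 4-bit parameter codeword (allocation is illustrative)."""
    return np.array([pitch_bits[0], pitch_bits[1],
                     voicing_bits[0], gain_bits[0]])

def fec_encode(data_bits):
    """Encode the parameter codeword into an FEC codeword."""
    return data_bits @ G % 2

cw = make_parameter_codeword([1, 0], [1], [1])
print(fec_encode(cw))  # → [1 0 1 1 0 1 0]
```

The decoder reverses the process: FEC-decode the codeword, then split the recovered bits back into their pitch, voicing, and gain fields.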
Abstract:
A system or method for modeling a signal, such as a speech signal, wherein harmonic frequencies and amplitudes are identified (106) and the harmonic magnitudes are interpolated (110) to obtain spectral magnitudes at a set of fixed frequencies. An inverse transform is applied (112) to the spectral magnitudes to obtain a pseudo-autocorrelation sequence, from which linear prediction coefficients are calculated (114). From the linear prediction coefficients, model harmonic magnitudes are generated by sampling the spectral envelope (118) defined by the linear prediction coefficients. A set of scale factors is then calculated (120) as the ratio of the harmonic magnitudes to the model harmonic magnitudes and interpolated to obtain a second set of scale factors (122) at the set of fixed frequencies. The spectral envelope magnitudes at the set of fixed frequencies (124) are multiplied by the second set of scale factors (126) to obtain new spectral magnitudes, and the process is iterated to obtain the final linear prediction coefficients.
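The iterative fitting loop in this abstract maps naturally onto code. The sketch below follows the numbered steps (interpolation, inverse transform to a pseudo-autocorrelation, Levinson-Durbin, envelope sampling, scale factors, iteration); the frequency-grid size, LPC order, and iteration count are assumptions, not values from the source:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: autocorrelation -> LPC coefficients,
    with a[0] = 1."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = sum(a[j] * r[i - j] for j in range(i))
        k = -acc / err
        prev = a.copy()
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        a[i] = k
        err *= 1.0 - k * k
    return a

def envelope(a, freqs):
    """LPC spectral envelope magnitude 1/|A(e^jw)| at the given
    normalized frequencies (radians/sample)."""
    n = np.arange(len(a))
    A = np.array([np.sum(a * np.exp(-1j * w * n)) for w in freqs])
    return 1.0 / np.abs(A)

def fit_lpc_to_harmonics(harm_freqs, harm_mags, order=10,
                         n_fixed=128, n_iters=4):
    """Iteratively fit an all-pole envelope to measured harmonic
    magnitudes, following the steps in the abstract."""
    fixed = np.linspace(0.0, np.pi, n_fixed)
    # (110) interpolate harmonic magnitudes onto the fixed grid
    mags = np.interp(fixed, harm_freqs, harm_mags)
    for _ in range(n_iters):
        # (112) inverse transform of the power spectrum gives a
        # pseudo-autocorrelation sequence
        power = mags ** 2
        r = np.array([np.sum(power * np.cos(k * fixed))
                      for k in range(order + 1)])
        # (114) linear prediction coefficients via Levinson-Durbin
        a = levinson_durbin(r, order)
        # (118) model magnitudes: envelope sampled at the harmonics
        model = envelope(a, harm_freqs)
        # (120, 122) scale factors, interpolated to the fixed grid
        scale = np.interp(fixed, harm_freqs, harm_mags / model)
        # (124, 126) new spectral magnitudes for the next iteration
        mags = envelope(a, fixed) * scale
    return a
```

Each pass reshapes the target spectrum by the scale factors so that the all-pole model is pulled toward the measured harmonic magnitudes rather than toward the interpolation artifacts between them.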
Abstract:
An enhanced low-bit-rate parametric voice coder that groups a number of frames from an underlying frame-based vocoder, such as MELP, into a superframe structure. Parameters are extracted from the group of underlying frames and quantized into the superframe, which allows the bit rate of the underlying coding to be reduced without increasing the distortion. The speech data coded in the superframe structure can then be synthesized directly to speech, or may be transcoded to a format in which the underlying frame-based vocoder performs the synthesis. The superframe structure includes additional error detection and correction data to reduce the distortion caused by bit errors introduced in communication.
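A minimal sketch of the superframe packing idea follows, using a CRC-32 purely as a stand-in for the unspecified error detection and correction scheme; the function names and byte-level layout are invented for illustration:

```python
import zlib

def pack_superframe(frame_params, n_group=4):
    """Group n_group frames' quantized parameter bytes into one
    superframe and append a CRC-32 so the decoder can detect bit
    errors (a real coder would also add error *correction* data)."""
    payload = b"".join(frame_params[:n_group])
    crc = zlib.crc32(payload).to_bytes(4, "big")
    return payload + crc

def check_superframe(superframe):
    """True if the superframe's payload matches its checksum."""
    payload, crc = superframe[:-4], superframe[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == crc

sf = pack_superframe([b"ab", b"cd", b"ef", b"gh"])
print(check_superframe(sf))  # → True
```

On a detected error, the decoder could fall back to concealment (e.g. repeating the previous superframe's parameters) instead of decoding corrupted bits.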
Abstract:
A method includes determining an error condition during a bandwidth transition period of an encoded audio signal. The error condition corresponds to a second frame of the encoded audio signal, where the second frame sequentially follows a first frame in the encoded audio signal. The method also includes generating audio data corresponding to a first frequency band of the second frame based on audio data corresponding to the first frequency band of the first frame. The method further includes re-using a signal corresponding to a second frequency band of the first frame to synthesize audio data corresponding to the second frequency band of the second frame.
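A hedged sketch of the two concealment branches described above; all names, the attenuated-repetition strategy, and the fade factor are assumptions, and the actual low-band generation in the method may be far more elaborate:

```python
import numpy as np

def conceal_bandwidth_transition(prev_low, prev_high, fade=0.9):
    """Conceal an erroneous second frame during a bandwidth transition:
    generate its first-band (low-band) audio from the first frame's
    low-band audio (simple attenuated repetition here), and re-use the
    first frame's second-band (high-band) signal as-is."""
    low = fade * np.asarray(prev_low, dtype=float)
    high = np.asarray(prev_high, dtype=float)  # re-used, not regenerated
    return low, high

low, high = conceal_bandwidth_transition([1.0, -1.0], [0.5, 0.5])
```

Re-using the previous frame's high-band signal avoids synthesizing the hard-to-predict band from scratch while the transition completes.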
Abstract:
Embodiments of the present invention provide a frame loss compensation processing method and apparatus. The method includes: determining, by using a lost-frame flag bit, whether the i-th frame is a lost frame; and, when the i-th frame is a lost frame, estimating a spectral frequency parameter, a pitch period, and a gain of the i-th frame according to at least one of an inter-frame relationship or an intra-frame relationship among the N frames preceding the i-th frame, where the inter-frame relationship includes at least one of correlation between those N frames or energy stability between those N frames, and the intra-frame relationship includes at least one of inter-subframe correlation or inter-subframe energy stability within those N frames. Because the parameters of the i-th frame are determined using the signal correlation between the preceding N frames, the signal energy stability between those frames, the intra-frame signal correlation of each frame, and the intra-frame signal energy stability of each frame, the relationships between signals are taken into account, yielding a more accurate estimate of the i-th frame's parameters and improving voice signal decoding quality.
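The parameter estimation described above can be sketched as a simple decision rule over the previous N frames. The thresholds, the attenuation factor, and the extrapolation rule below are illustrative assumptions, not the patent's exact criteria:

```python
import numpy as np

def estimate_lost_params(prev_pitch, prev_gain, stab_thresh=0.15):
    """Estimate pitch and gain of a lost frame from the previous
    N frames' parameters."""
    prev_pitch = np.asarray(prev_pitch, dtype=float)
    prev_gain = np.asarray(prev_gain, dtype=float)
    # Energy stability: relative spread of recent gains. If the
    # signal energy is stable, hold the last gain; otherwise attenuate
    # it to avoid amplifying a wrong guess.
    stable = np.std(prev_gain) / (np.mean(prev_gain) + 1e-9) < stab_thresh
    gain = prev_gain[-1] if stable else 0.7 * prev_gain[-1]
    # Pitch correlation: a smooth pitch trend is extrapolated linearly,
    # an erratic one is simply held.
    diffs = np.diff(prev_pitch)
    if np.all(np.abs(diffs) < 0.05 * prev_pitch[-1]):
        pitch = prev_pitch[-1] + diffs[-1]
    else:
        pitch = prev_pitch[-1]
    return pitch, gain
```

The same pattern extends to the spectral frequency parameter, and to subframe-level decisions using the intra-frame correlation and energy-stability measures the abstract mentions.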
Abstract:
A method includes generating a high-band residual signal based on a high-band portion of an audio signal. The method also includes generating a harmonically extended signal at least partially based on a low-band portion of the audio signal. The method further includes determining a mixing factor based on the high-band residual signal, the harmonically extended signal, and modulated noise. The modulated noise is at least partially based on the harmonically extended signal and white noise.
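A minimal sketch of the mixing step, assuming a normalized-correlation mixing factor and a moving-average envelope for the noise modulation; both are plausible stand-ins, since the source does not specify these formulas:

```python
import numpy as np

def modulate_noise(harmonic, rng):
    """White noise amplitude-modulated by the envelope (here a
    moving-average magnitude) of the harmonically extended signal."""
    env = np.convolve(np.abs(harmonic), np.ones(8) / 8, mode="same")
    return env * rng.standard_normal(len(harmonic))

def mixing_factor(residual, harmonic):
    """Normalized correlation between the high-band residual and the
    harmonically extended signal: close to 1 when the residual is well
    modeled harmonically, lower when it is noise-like."""
    denom = np.sqrt(np.dot(residual, residual) * np.dot(harmonic, harmonic))
    if denom == 0.0:
        return 0.0
    return float(np.clip(np.dot(residual, harmonic) / denom, 0.0, 1.0))

def mix_excitation(residual, harmonic, rng):
    """Blend harmonic and modulated-noise components; the power-
    preserving sqrt weighting is an illustrative choice."""
    a = mixing_factor(residual, harmonic)
    noise = modulate_noise(harmonic, rng)
    return a * harmonic + np.sqrt(1.0 - a * a) * noise
```

A strongly harmonic high band thus gets mostly the harmonically extended signal, while a breathy or unvoiced one gets mostly envelope-shaped noise.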