Abstract:
A method for speech processing includes determining an unvoicing parameter for a first frame of a speech signal and determining a smoothed unvoicing parameter for the first frame by weighting the unvoicing parameter of the first frame and a smoothed unvoicing parameter of a second frame. The unvoicing parameter reflects a speech characteristic of the first frame. The smoothed unvoicing parameter of the second frame is weighted less heavily when the smoothed unvoicing parameter of the second frame is greater than the unvoicing parameter of the first frame. The method further includes computing, by a processor, a difference between the unvoicing parameter of the first frame and the smoothed unvoicing parameter of the first frame, and determining a classification of the first frame according to the computed difference. The classification is either unvoiced speech or voiced speech. The first frame is then processed in accordance with its classification.
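As an illustration of the asymmetric smoothing and difference-based decision described above, the following C sketch uses assumed weights (ALPHA_UP, ALPHA_DOWN), an assumed decision threshold (UNVOICED_THRESHOLD), and toy input values; none of these names or numbers come from the abstract.

#include <stdio.h>

/* Assumed smoothing weights: the previous smoothed value is weighted less
 * heavily (ALPHA_DOWN) when it exceeds the current unvoicing parameter. */
#define ALPHA_UP           0.9f
#define ALPHA_DOWN         0.5f
#define UNVOICED_THRESHOLD 0.1f   /* hypothetical decision threshold */

typedef enum { VOICED_SPEECH, UNVOICED_SPEECH } frame_class;

/* Update the smoothed unvoicing parameter for the current frame. */
static float smooth_unvoicing(float unvoicing, float prev_smoothed)
{
    float alpha = (prev_smoothed > unvoicing) ? ALPHA_DOWN : ALPHA_UP;
    return alpha * prev_smoothed + (1.0f - alpha) * unvoicing;
}

/* Classify the frame from the difference between the raw and smoothed
 * unvoicing parameters. */
static frame_class classify_frame(float unvoicing, float smoothed)
{
    float diff = unvoicing - smoothed;
    return (diff > UNVOICED_THRESHOLD) ? UNVOICED_SPEECH : VOICED_SPEECH;
}

int main(void)
{
    /* Toy per-frame unvoicing parameters; a sudden rise marks unvoiced speech. */
    const float unvoicing[] = { 0.05f, 0.08f, 0.60f, 0.65f, 0.10f };
    float smoothed = 0.0f;

    for (int i = 0; i < 5; i++) {
        smoothed = smooth_unvoicing(unvoicing[i], smoothed);
        frame_class c = classify_frame(unvoicing[i], smoothed);
        printf("frame %d: raw=%.2f smoothed=%.2f -> %s\n", i, unvoicing[i],
               smoothed, c == UNVOICED_SPEECH ? "UNVOICED" : "VOICED");
    }
    return 0;
}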
Abstract:
A method for processing speech signals prior to encoding a digital signal comprising audio data includes selecting frequency domain coding or time domain coding based on the coding bit rate to be used for coding the digital signal and on a short pitch lag detection of the digital signal.
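A minimal C sketch of this kind of mode selection is given below; the rate threshold, and in particular the direction in which a detected short pitch lag tips the decision at a given bit rate, are assumptions made purely for illustration.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical rate boundary; the abstract does not give a value. */
#define RATE_THRESHOLD_BPS 24400

typedef enum { TIME_DOMAIN_CODING, FREQUENCY_DOMAIN_CODING } coding_mode;

/* Combine the target bit rate with a flag from a short pitch lag detector.
 * Assumed rule: at lower rates, a detected short pitch lag pushes the
 * selection toward frequency domain coding; otherwise the time domain
 * (CELP-style) path is kept. The actual decision logic is not specified
 * at this level of detail in the abstract. */
static coding_mode select_coding_mode(int bit_rate_bps, bool short_pitch_detected)
{
    if (short_pitch_detected && bit_rate_bps < RATE_THRESHOLD_BPS)
        return FREQUENCY_DOMAIN_CODING;
    return TIME_DOMAIN_CODING;
}

int main(void)
{
    coding_mode m = select_coding_mode(13200, true);
    printf("%s\n", m == FREQUENCY_DOMAIN_CODING ? "frequency domain" : "time domain");
    return 0;
}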
Abstract:
In accordance with an embodiment of the present invention, a method for speech processing includes determining an unvoicing/voicing parameter reflecting a characteristic of unvoiced/voiced speech in a current frame of a speech signal comprising a plurality of frames. A smoothed unvoicing/voicing parameter is determined to include information about the unvoicing/voicing parameter in a frame prior to the current frame of the speech signal. A difference between the unvoicing/voicing parameter and the smoothed unvoicing/voicing parameter is computed. The method further includes generating an unvoiced/voiced decision point for determining whether the current frame comprises unvoiced speech or voiced speech using the computed difference as a decision parameter.
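The abstract does not say how the unvoicing/voicing parameter itself is obtained. One common proxy, shown in the C sketch below purely as an assumption, is the normalized pitch correlation of the frame, with the unvoicing parameter taken as its complement.

#include <math.h>
#include <stdio.h>

/* Normalized cross-correlation of the frame with itself shifted by the pitch
 * lag: values near 1 indicate strongly voiced speech, values near 0 indicate
 * unvoiced speech. The unvoicing parameter is taken here as 1 - voicing. */
static float unvoicing_parameter(const float *frame, int len, int pitch_lag)
{
    float num = 0.0f, den1 = 1e-9f, den2 = 1e-9f;
    for (int n = pitch_lag; n < len; n++) {
        num  += frame[n] * frame[n - pitch_lag];
        den1 += frame[n] * frame[n];
        den2 += frame[n - pitch_lag] * frame[n - pitch_lag];
    }
    float voicing = num / sqrtf(den1 * den2);
    if (voicing < 0.0f) voicing = 0.0f;   /* clamp so the result stays in [0, 1] */
    return 1.0f - voicing;
}

int main(void)
{
    /* A pure tone with period 40 samples behaves like strongly voiced speech,
     * so the unvoicing parameter should come out near zero. */
    float frame[160];
    for (int n = 0; n < 160; n++)
        frame[n] = sinf(2.0f * 3.14159265f * n / 40.0f);
    printf("unvoicing = %.2f\n", unvoicing_parameter(frame, 160, 40));
    return 0;
}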
Abstract:
The quality of encoded signals can be improved by reclassifying AUDIO signals carrying non-speech data as VOICED signals when periodicity parameters of the signal satisfy one or more criteria. In some embodiments, only low or medium bit rate signals are considered for reclassification. The periodicity parameters can include any characteristic or set of characteristics indicative of periodicity. For example, the periodicity parameters may include pitch differences between subframes in the audio signal, a normalized pitch correlation for one or more subframes, an average normalized pitch correlation for the audio signal, or combinations thereof. Audio signals that are reclassified as VOICED signals may be encoded in the time domain, while audio signals that remain classified as AUDIO signals may be encoded in the frequency domain.
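The following C sketch shows how such a periodicity check might be assembled from the listed parameters; the number of subframes, the thresholds, and the exact combination of criteria are illustrative assumptions rather than the claimed decision rules.

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

#define NUM_SUBFRAMES        4
#define MAX_PITCH_DIFF       10.0f   /* hypothetical max lag change between subframes */
#define MIN_SUBFRAME_VOICING 0.5f    /* hypothetical per-subframe correlation floor */
#define MIN_AVG_VOICING      0.7f    /* hypothetical average correlation threshold */

typedef enum { CLASS_AUDIO, CLASS_VOICED } signal_class;

/* Reclassify an AUDIO frame as VOICED when its subframe pitch lags are stable
 * and its normalized pitch correlations are high; only low or medium bit rate
 * signals are considered. */
static signal_class maybe_reclassify(const float pitch_lag[NUM_SUBFRAMES],
                                     const float voicing[NUM_SUBFRAMES],
                                     bool low_or_medium_rate)
{
    if (!low_or_medium_rate)
        return CLASS_AUDIO;

    float avg_voicing = 0.0f;
    for (int i = 0; i < NUM_SUBFRAMES; i++) {
        if (voicing[i] < MIN_SUBFRAME_VOICING)
            return CLASS_AUDIO;                     /* weakly periodic subframe */
        if (i > 0 && fabsf(pitch_lag[i] - pitch_lag[i - 1]) > MAX_PITCH_DIFF)
            return CLASS_AUDIO;                     /* unstable pitch track */
        avg_voicing += voicing[i];
    }
    avg_voicing /= NUM_SUBFRAMES;
    return (avg_voicing >= MIN_AVG_VOICING) ? CLASS_VOICED : CLASS_AUDIO;
}

int main(void)
{
    const float lags[NUM_SUBFRAMES]    = { 52.0f, 53.0f, 52.0f, 54.0f };
    const float voicing[NUM_SUBFRAMES] = { 0.85f, 0.80f, 0.90f, 0.82f };
    printf("%s\n", maybe_reclassify(lags, voicing, true) == CLASS_VOICED
                       ? "reclassified as VOICED" : "stays AUDIO");
    return 0;
}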
Abstract:
In an embodiment, a method of transmitting an input audio signal is disclosed. A first coding error of the input audio signal is encoded with a scalable codec having a first enhancement layer, and a second coding error is encoded using a second enhancement layer that follows the first enhancement layer. Encoding the second coding error includes coding fine spectrum coefficients of the second coding error to produce coded fine spectrum coefficients, and coding a spectral envelope of the second coding error to produce a coded spectral envelope. The coded fine spectrum coefficients and the coded spectral envelope are transmitted.
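As a rough illustration of separating a second-layer coding error into a spectral envelope and fine spectrum coefficients, the C sketch below uses an assumed uniform band layout and a band-RMS envelope; the actual band structure, quantizers, and bit allocation are not specified by the abstract.

#include <math.h>
#include <stdio.h>

#define NUM_BANDS    8
#define BAND_SIZE    16
#define SPECTRUM_LEN (NUM_BANDS * BAND_SIZE)

/* Compute a per-band envelope (band RMS) of the second-layer coding error and
 * the fine spectrum obtained by normalizing each band by its envelope value.
 * In a real encoder both outputs would then be quantized and transmitted. */
static void split_envelope_and_fine(const float error_spectrum[SPECTRUM_LEN],
                                    float envelope[NUM_BANDS],
                                    float fine[SPECTRUM_LEN])
{
    for (int b = 0; b < NUM_BANDS; b++) {
        float energy = 1e-9f;                        /* avoid division by zero */
        for (int k = 0; k < BAND_SIZE; k++) {
            float c = error_spectrum[b * BAND_SIZE + k];
            energy += c * c;
        }
        envelope[b] = sqrtf(energy / BAND_SIZE);
        for (int k = 0; k < BAND_SIZE; k++)
            fine[b * BAND_SIZE + k] = error_spectrum[b * BAND_SIZE + k] / envelope[b];
    }
}

int main(void)
{
    float err[SPECTRUM_LEN], env[NUM_BANDS], fine[SPECTRUM_LEN];
    for (int k = 0; k < SPECTRUM_LEN; k++)
        err[k] = (k % 2 == 0) ? 0.5f : -0.25f;       /* toy error spectrum */
    split_envelope_and_fine(err, env, fine);
    printf("band 0 envelope = %.3f, first fine coefficient = %.3f\n", env[0], fine[0]);
    return 0;
}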