Abstract:
A speech/audio encoding device for selectively allocating bits for higher precision encoding. The speech/audio encoding device receives a time-domain speech/audio input signal, transforms the speech/audio input signal into a frequency domain, and quantizes an energy envelope corresponding to an energy level for a frequency spectrum of the speech/audio input signal. The speech/audio encoding device further groups quantized energy envelopes into a plurality of groups, determines a perceptually significant group including one or more significant bands and a local-peak frequency, and allocates bits to a plurality of subbands corresponding to the grouped quantized energy envelopes, in which each of the subbands is obtained by splitting the frequency spectrum of the speech/audio input signal. The speech/audio encoding device encodes the frequency spectrum using the bits allocated to the subbands.
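The bit-allocation step described above can be sketched as splitting a bit budget across subbands in proportion to their quantized energy envelopes, with extra weight for the perceptually significant group. This is a minimal illustration only; the weighting scheme and the factor-of-two boost are assumptions, not taken from the abstract.

```python
import numpy as np

def allocate_bits(env_db, significant, total_bits):
    """Split total_bits across subbands in proportion to quantized
    energy envelopes (env_db), boosting perceptually significant bands.
    The proportional rule and the 2x boost are illustrative assumptions."""
    weights = np.maximum(env_db, 0.0).astype(float)
    weights[significant] *= 2.0  # assumed boost for the significant group
    if weights.sum() == 0:
        weights[:] = 1.0
    bits = np.floor(total_bits * weights / weights.sum()).astype(int)
    # Hand the rounding leftovers to the highest-weighted band.
    bits[np.argmax(weights)] += total_bits - bits.sum()
    return bits
```

With this rule, a band flagged as significant receives a larger share of the budget than an unflagged band of equal envelope energy.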
Abstract:
The embodiments of the present invention improve conventional attenuation schemes by replacing constant attenuation with an adaptive attenuation scheme that allows more aggressive attenuation without introducing audible changes in the signal's frequency characteristics.
Abstract:
A method for processing audio data includes determining a first common scalefactor value for representing quantized audio data in a frame. A second common scalefactor value is determined for representing the quantized audio data in the frame. A line equation common scalefactor value is determined from the first and second common scalefactor values.
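The "line equation" step above can be read as fitting a line through two (common scalefactor, resulting bit count) measurements and solving for the scalefactor expected to hit a target bit count. The exact relation is not given in the abstract, so the formulation below is a hypothetical sketch.

```python
def line_equation_scalefactor(sf1, bits1, sf2, bits2, target_bits):
    """Fit a line through two (scalefactor, bit-count) points and solve
    for the scalefactor expected to produce target_bits.
    Hypothetical formulation; the abstract does not state the relation."""
    if bits1 == bits2:
        return sf1  # degenerate case: no slope information
    slope = (sf2 - sf1) / (bits2 - bits1)
    return sf1 + slope * (target_bits - bits1)
```

Such an estimate can seed an iterative rate-control loop so it converges in fewer quantization passes than a blind search.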
Abstract:
An encoder for providing an audio stream on the basis of a transform-domain representation of an input audio signal includes a quantization error calculator configured to determine a multi-band quantization error over a plurality of frequency bands of the input audio signal for which separate band gain information is available. The encoder also includes an audio stream provider for providing the audio stream such that the audio stream includes information describing an audio content of the frequency bands and information describing the multi-band quantization error. A decoder for providing a decoded representation of an audio signal on the basis of an encoded audio stream representing spectral components of frequency bands of the audio signal includes a noise filler for introducing noise into spectral components of a plurality of frequency bands to which separate frequency band gain information is associated on the basis of a common multi-band noise intensity value.
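The decoder-side noise filling above can be sketched as injecting noise into zero-quantized spectral lines of each band, scaling one common multi-band noise intensity by that band's gain. The data layout and the multiplicative scaling law are assumptions for illustration.

```python
import numpy as np

def noise_fill(spectrum, band_edges, band_gains, noise_level, seed=0):
    """Fill zero-quantized lines of each band with noise whose level is
    a single multi-band intensity scaled by the band's gain.
    Minimal sketch; the exact scaling law is an assumption."""
    rng = np.random.default_rng(seed)
    out = spectrum.astype(float).copy()
    for b, (lo, hi) in enumerate(zip(band_edges[:-1], band_edges[1:])):
        zeros = np.where(out[lo:hi] == 0.0)[0] + lo  # zero-quantized lines
        out[zeros] = rng.standard_normal(zeros.size) * noise_level * band_gains[b]
    return out
```

Because only one noise intensity value is transmitted for all bands, the per-band gains recover band-appropriate noise levels without extra side information.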
Abstract:
Disclosed, inter alia, is a method comprising: estimating a value of entropy for a multi-channel audio signal; determining a channel configuration of the multi-channel audio signal from the value of entropy; and encoding the multi-channel audio signal, wherein the mode of encoding is dependent on the channel configuration.
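One way to read the entropy-driven configuration choice above: if the entropy of a channel-difference signal is lower than that of an individual channel, a joint (e.g. mid/side) mode is cheaper to code. The zeroth-order entropy estimate and the decision rule below are illustrative assumptions, not the abstract's method.

```python
import math
from collections import Counter

def estimate_entropy(samples):
    """Zeroth-order entropy (bits per sample) of a quantized sequence."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def choose_channel_config(channels):
    """Pick a coding mode from entropy: if the left/right difference has
    lower entropy than the left channel alone, joint coding is favoured.
    Assumed decision rule for illustration only."""
    left, right = channels[0], channels[1]
    diff = [l - r for l, r in zip(left, right)]
    return "mid_side" if estimate_entropy(diff) < estimate_entropy(left) else "left_right"
```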
Abstract:
Provided are an apparatus and a method for encoding and decoding audio signals in which, when determining a masking threshold according to a psychoacoustic model, accurate results may be obtained for a short window-based audio signal as well as for a long window-based audio signal. The apparatus for encoding audio signals according to the present invention comprises a masking threshold determining unit configured to determine, on the basis of the frame length of a first window into which the audio signal is divided, a masking threshold for a second window that has a different frame length from that of the first window.
Abstract:
A method includes determining a first modeled high-band signal based on a low-band excitation signal of an audio signal, where the audio signal includes a high-band portion and a low-band portion. The method also includes determining scaling factors based on energy of sub-frames of the first modeled high-band signal and energy of corresponding sub-frames of the high-band portion of the audio signal. The method includes applying the scaling factors to a modeled high-band excitation signal to determine a scaled high-band excitation signal and determining a second modeled high-band signal based on the scaled high-band excitation signal. The method includes determining gain parameters based on the second modeled high-band signal and the high-band portion of the audio signal.
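The scaling-factor step above amounts to matching, sub-frame by sub-frame, the energy of the modeled high-band signal to the energy of the reference high-band portion. A minimal sketch, in which the sub-frame count and the small epsilon guard are assumptions:

```python
import numpy as np

def subframe_scaling_factors(modeled, reference, num_subframes=4):
    """Per-sub-frame gains equating the modeled high-band's energy to
    the reference high-band's energy (energy-ratio sketch; sub-frame
    count and epsilon are illustrative assumptions)."""
    eps = 1e-12  # guard against division by zero on silent sub-frames
    m = modeled.reshape(num_subframes, -1)
    r = reference.reshape(num_subframes, -1)
    return np.sqrt((r ** 2).sum(axis=1) / ((m ** 2).sum(axis=1) + eps))
```

Applying these gains to the modeled high-band excitation yields a scaled excitation whose sub-frame energies track the original high band, after which the gain parameters can be re-estimated on the improved model.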
Abstract:
An apparatus and method for decoding audio data. The apparatus for decoding the audio data may perform block data unpacking by preferring a channel order over a block order from a bitstream, and perform dithering by preferring a block order over a channel order. Decoding complexity may be reduced by integrating bitstream searching with the block data unpacking, and a dithering error may be prevented by processing the block data unpacking and the dithering separately.
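The two loop orderings described above can be sketched directly: unpacking iterates channels in the outer loop (channel order preferred), while dithering iterates blocks in the outer loop (block order preferred), in two separate passes. The toy data layout and dither callback are assumptions.

```python
def unpack_then_dither(bitstream_blocks, num_channels, dither):
    """Two-pass sketch of the ordering in the abstract: unpack with
    channel order preferred, then dither with block order preferred.
    bitstream_blocks is an assumed toy layout: one list per block,
    one entry per channel."""
    # Pass 1: block data unpacking, channel order over block order.
    channels = []
    for ch in range(num_channels):
        channels.append([blk[ch] for blk in bitstream_blocks])
    # Pass 2: dithering, block order over channel order, kept separate
    # from unpacking so no block is dithered twice or out of sequence.
    for blk_idx in range(len(bitstream_blocks)):
        for ch in range(num_channels):
            channels[ch][blk_idx] = dither(channels[ch][blk_idx])
    return channels
```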
Abstract:
In a method of perceptual transform coding of audio signals in a telecommunication system, the following steps are performed: determining transform coefficients representative of a time-to-frequency transformation of a time-segmented input audio signal; determining a spectrum of perceptual sub-bands for said input audio signal based on said determined transform coefficients; determining masking thresholds for each said sub-band based on said determined spectrum; computing scale factors for each said sub-band based on said determined masking thresholds; and finally adapting said computed scale factors for each said sub-band to prevent energy loss in perceptually relevant sub-bands.
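The final adaptation step above can be sketched as capping the threshold-derived scale factor of any perceptually relevant sub-band (one whose energy exceeds its masking threshold) so a minimum SNR headroom is preserved. The dB mapping and the 6 dB floor are assumptions for illustration, not the patented rule.

```python
import numpy as np

def adapt_scale_factors(band_energy, masking_threshold, min_snr_db=6.0):
    """Scale factors from masking thresholds, adapted so perceptually
    relevant bands keep at least min_snr_db of headroom and are not
    quantized to silence. dB mapping and floor are assumptions."""
    sf = 10.0 * np.log10(masking_threshold)              # raw scale factors
    relevant = band_energy > masking_threshold           # audible bands
    ceiling = 10.0 * np.log10(band_energy) - min_snr_db  # energy-loss guard
    sf[relevant] = np.minimum(sf[relevant], ceiling[relevant])
    return sf
```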
Abstract:
A pitch-synchronous method and system for speech coding using timbre vectors is disclosed. On the encoder side, the speech signal is segmented into pitch-synchronous frames without overlap and then converted into a pitch-synchronous amplitude spectrum using an FFT. Using Laguerre functions, the amplitude spectrum is transformed into a timbre vector. Using vector quantization, each timbre vector is converted to a timbre index based on a timbre codebook. The intensity and pitch are likewise converted into indices using scalar quantization. These indices are transmitted as the encoded speech. On the decoder side, by looking up the same codebooks, the pitch, intensity, and timbre vector are recovered. Using Laguerre functions, the amplitude spectrum is recovered. Using the Kramers-Kronig relations, the phase spectrum is recovered. Using an FFT, the elementary waves are regenerated and superposed to become the speech signal.
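The vector-quantization step in the encoder above reduces to a nearest-neighbour codebook search: each timbre vector is mapped to the index of the closest codebook entry, and only that index is transmitted. A minimal sketch, with an illustrative Euclidean distance metric and toy codebook:

```python
import numpy as np

def quantize_timbre(timbre_vector, codebook):
    """Nearest-neighbour vector quantization: return the index of the
    codebook row closest to timbre_vector (Euclidean distance assumed).
    The decoder recovers an approximation by looking up codebook[index]."""
    distances = np.linalg.norm(codebook - timbre_vector, axis=1)
    return int(np.argmin(distances))
```

Because encoder and decoder share the same codebook, transmitting the index alone suffices; the decoder's lookup recovers the quantized timbre vector for spectrum reconstruction.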