Abstract:
An encoding method includes determining an adaptive broadening factor based on a quantized line spectral frequency (LSF) vector of a first channel of a current frame of an audio signal and an LSF vector of a second channel of the current frame, and writing the quantized LSF vector and the adaptive broadening factor into a bitstream.
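The abstract does not state how the broadening factor is computed. The following Python sketch illustrates one plausible reading, in which the factor grows with the spectral distance between the first channel's quantized LSF vector and the second channel's LSF vector; the function name, the distance measure, and the normalization are assumptions, not the patented rule.

```python
import numpy as np

def adaptive_broadening_factor(lsf_q_ch1, lsf_ch2, min_beta=0.0, max_beta=1.0):
    """Hypothetical sketch: map the distance between the first channel's
    quantized LSF vector and the second channel's LSF vector (both in
    radians, ascending order) to a broadening factor in [min_beta, max_beta]."""
    dist = np.mean(np.abs(np.asarray(lsf_q_ch1) - np.asarray(lsf_ch2)))
    # Similar channels -> small distance -> small broadening factor.
    return float(np.clip(dist / np.pi, min_beta, max_beta))

# Both the quantized LSF vector and the factor would then be written to the bitstream.
lsf_q_ch1 = np.linspace(0.2, 3.0, 10)                 # example quantized LSFs
lsf_ch2 = lsf_q_ch1 + 0.05 * np.random.randn(10)      # second-channel LSFs
beta = adaptive_broadening_factor(lsf_q_ch1, lsf_ch2)
```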
Abstract:
This application discloses an inter-channel phase difference (IPD) parameter encoding method, including: obtaining a reference parameter used to determine an IPD parameter encoding scheme of a current frame of a multi-channel signal; determining the IPD parameter encoding scheme of the current frame based on the reference parameter, where the determined IPD parameter encoding scheme of the current frame is one of at least two preset IPD parameter encoding schemes; and processing an IPD parameter of the current frame based on the determined IPD parameter encoding scheme of the current frame. The technical solutions provided in this application can improve the encoding quality of the multi-channel signal.
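As a rough illustration of the scheme-selection step, the sketch below picks one of two preset IPD encoding schemes from a scalar reference parameter and quantizes the IPD values accordingly; the scheme names, the threshold, and the quantization step are assumptions and are not taken from the application.

```python
import numpy as np

# Hypothetical sketch of the scheme-selection logic: the abstract only says a
# reference parameter selects one of at least two preset IPD encoding schemes.
SCHEME_FULL_BAND = 0      # e.g. quantize the IPD in every sub-band
SCHEME_LOW_BAND_ONLY = 1  # e.g. quantize the IPD only in low sub-bands

def select_ipd_scheme(reference_parameter, threshold=0.5):
    """Pick an IPD encoding scheme for the current frame; the reference
    parameter and threshold are placeholders, not the patented criterion."""
    return SCHEME_FULL_BAND if reference_parameter >= threshold else SCHEME_LOW_BAND_ONLY

def process_ipd(ipd_per_subband, scheme, low_band_count=4):
    """Process the IPD values of the current frame under the chosen scheme."""
    ipd = np.asarray(ipd_per_subband, dtype=float)
    if scheme == SCHEME_LOW_BAND_ONLY:
        ipd = ipd[:low_band_count]                    # keep only the low sub-bands
    return np.round(ipd / (np.pi / 8)).astype(int)    # coarse uniform quantization

scheme = select_ipd_scheme(reference_parameter=0.7)
codes = process_ipd([0.4, -1.1, 2.0, 0.3, -0.2, 1.5], scheme)
```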
Abstract:
A stereo signal encoding method includes: obtaining a residual signal encoding parameter of a current frame of a stereo signal based on downmixed signal energy and residual signal energy of each of M sub-bands of the current frame, where the residual signal encoding parameter indicates whether to encode residual signals of the M sub-bands; determining whether to encode the residual signals based on the residual signal encoding parameter; and encoding the residual signals when it is determined that the residual signals need to be encoded.
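A minimal sketch of the decision step, assuming the residual signal encoding parameter is a one-bit flag derived from the ratio of total residual energy to total downmixed energy over the M sub-bands; the ratio test and the threshold are illustrative assumptions.

```python
import numpy as np

def residual_encoding_flag(downmix_energy, residual_energy, ratio_threshold=0.1):
    """Hypothetical decision rule: encode the residual signals of the M
    sub-bands only if their total energy is a non-negligible fraction of
    the downmixed signal energy. The actual criterion is not specified."""
    downmix_energy = np.asarray(downmix_energy, dtype=float)    # length M
    residual_energy = np.asarray(residual_energy, dtype=float)  # length M
    ratio = residual_energy.sum() / max(downmix_energy.sum(), 1e-12)
    return 1 if ratio > ratio_threshold else 0

flag = residual_encoding_flag([4.0, 3.2, 1.5], [0.6, 0.4, 0.2])
if flag:
    pass  # encode the residual signals of the M sub-bands
```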
Abstract:
A delay estimation method includes determining a cross-correlation coefficient of a multi-channel signal of a current frame, determining a delay track estimation value of the current frame based on buffered inter-channel time difference information of at least one past frame, determining an adaptive window function of the current frame, performing weighting on the cross-correlation coefficient based on the delay track estimation value of the current frame and the adaptive window function of the current frame, to obtain a weighted cross-correlation coefficient, and determining an inter-channel time difference of the current frame based on the weighted cross-correlation coefficient.
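The sketch below shows the overall flow under simplifying assumptions: the delay track estimate is taken as the mean of the buffered past inter-channel time differences, and the adaptive window is a raised-cosine bump centered on that estimate. Neither choice is specified by the abstract.

```python
import numpy as np

def estimate_itd(ch1, ch2, past_itds, max_lag=40, window_width=8):
    """Sketch of the weighted cross-correlation search. Both channels are
    assumed to be equal-length 1-D arrays; the raised-cosine window and the
    simple average used as the delay track estimate are illustrative
    assumptions, not the patented adaptive window function."""
    ch1 = np.asarray(ch1, dtype=float)
    ch2 = np.asarray(ch2, dtype=float)
    lags = np.arange(-max_lag, max_lag + 1)
    xcorr = np.array([np.sum(ch1[max_lag + k:len(ch1) - max_lag + k] *
                             ch2[max_lag:len(ch2) - max_lag]) for k in lags])

    track = np.mean(past_itds) if len(past_itds) else 0.0   # delay track estimate

    # Adaptive window: emphasize lags near the track, de-emphasize outliers.
    w = 0.5 * (1.0 + np.cos(np.pi * np.clip((lags - track) / window_width, -1, 1)))
    weighted = xcorr * w
    return int(lags[np.argmax(weighted)])                   # ITD of the current frame
```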
Abstract:
In a stereo encoding method, a channel combination encoding solution of a current frame is first obtained, and a quantized channel combination ratio factor of the current frame and an encoding index of the quantized channel combination ratio factor are then obtained based on the obtained channel combination encoding solution, so that the primary channel signal and the secondary channel signal obtained for the current frame match the characteristic of the current frame.
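For illustration only, the sketch below quantizes a channel combination ratio factor with a uniform scalar quantizer and uses it to derive primary and secondary channel signals; the 5-bit quantizer and the particular downmix form are assumptions, not the encoding solution selected by the method.

```python
import numpy as np

def quantize_ratio_factor(ratio, num_bits=5):
    """Uniformly quantize a channel combination ratio factor in [0, 1] and
    return (quantized value, encoding index). The 5-bit uniform quantizer
    is an illustrative assumption."""
    levels = (1 << num_bits) - 1
    index = int(round(np.clip(ratio, 0.0, 1.0) * levels))
    return index / levels, index

def downmix(left, right, ratio_q):
    """Combine the channels into primary/secondary signals using the
    quantized ratio factor (one common correlated-channel form, assumed here)."""
    primary = ratio_q * left + (1.0 - ratio_q) * right
    secondary = (1.0 - ratio_q) * left - ratio_q * right
    return primary, secondary
```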
Abstract:
A method for encoding a multi-channel signal and an encoder are provided. The encoding method includes: obtaining a multi-channel signal of a current frame; determining an initial inter-channel time difference (ITD) value of the current frame; controlling, based on characteristic information of the multi-channel signal, a quantity of target frames that are allowed to appear consecutively, where the characteristic information includes at least one of a signal-to-noise ratio of the multi-channel signal or a peak feature of cross-correlation coefficients of the multi-channel signal, and a target frame is a frame whose ITD value reuses the ITD value of its previous frame; determining an ITD value of the current frame based on the initial ITD value and the quantity of target frames allowed to appear consecutively; and encoding the multi-channel signal based on the ITD value of the current frame.
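A minimal sketch of the control logic, assuming the allowed run of target frames shrinks when the signal-to-noise ratio is high and the cross-correlation peak is sharp, and that a frame becomes a target frame when its initial ITD estimate is judged unreliable; the thresholds and the reliability flag are hypothetical.

```python
def allowed_consecutive_target_frames(snr_db, xcorr_peak):
    """Hypothetical control rule: a clean signal with a sharp cross-correlation
    peak tolerates only a short run of target frames; a noisy signal allows a
    longer run. The threshold values are illustrative, not the patented ones."""
    if snr_db > 20.0 and xcorr_peak > 0.8:
        return 1
    if snr_db > 10.0:
        return 3
    return 5

def decide_itd(initial_itd, prev_itd, estimate_is_reliable, run_length, max_run):
    """A target frame reuses the previous frame's ITD; the run of consecutive
    target frames is capped at max_run. Returns (current-frame ITD, new run length)."""
    if not estimate_is_reliable and run_length < max_run:
        return prev_itd, run_length + 1   # target frame: reuse the previous ITD
    return initial_itd, 0                 # accept the initial estimate, reset the run
```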
Abstract:
A method and an apparatus for determining a stability factor of an adaptive filter are presented. The method includes: determining, according to a first input signal that is input to an adaptive filter, a reference input matrix of the first input signal; determining a stability parameter of the first input signal according to the reference input matrix; and determining a stability factor of the adaptive filter according to the stability parameter. According to the method and apparatus for determining a stability factor of an adaptive filter provided in the embodiments of the present application, the stability factor of the adaptive filter can be adaptively obtained according to a stability feature of the first input signal, and the adaptive filter can reach a balance between convergence speed and steady-state error performance.
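The abstract leaves the reference input matrix and the stability parameter unspecified. The sketch below assumes the matrix stacks frames of the first input signal, uses the condition number of its autocorrelation estimate as the stability parameter, and maps it to a regularization-style stability factor for an NLMS-type filter; all of these choices are assumptions.

```python
import numpy as np

def stability_factor(first_input, frame_len=64, delta_min=1e-4, delta_max=1e-1):
    """Sketch: build a reference input matrix from frames of the first input
    signal (assumed to contain at least frame_len samples), measure how
    ill-conditioned its autocorrelation estimate is, and map that to a
    regularization (stability) factor. The mapping is an illustrative assumption."""
    x = np.asarray(first_input, dtype=float)
    n_frames = len(x) // frame_len
    X = x[:n_frames * frame_len].reshape(n_frames, frame_len)   # reference input matrix
    R = X.T @ X / max(n_frames, 1)                              # autocorrelation estimate
    cond = np.linalg.cond(R)                                    # stability parameter
    # More ill-conditioned input -> larger factor (slower but safer adaptation).
    alpha = float(np.clip(np.log10(cond) / 6.0, 0.0, 1.0))
    return delta_min + alpha * (delta_max - delta_min)
```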
Abstract:
An audio signal processing method and apparatus and a differential beamforming method and apparatus are provided to resolve a problem that an existing audio signal processing system cannot process audio signals in multiple application scenarios at the same time. The method includes: determining a super-directional differential beamforming weighting coefficient; acquiring an audio input signal and determining a current application scenario and an audio output signal; acquiring a weighting coefficient corresponding to the current application scenario; performing super-directional differential beamforming processing on the audio input signal using the acquired weighting coefficient, in order to obtain a super-directional differential beamforming signal in the current application scenario; and processing the formed signal to obtain a final audio signal required by the current application scenario. With this method, the requirement that different application scenarios need different audio signal processing manners can be met.
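A toy sketch of the scenario-dependent selection and weighting, assuming the weighting coefficients are precomputed per scenario and applied to per-microphone spectra in the frequency domain; the scenario names, array shapes, and placeholder weight values are not taken from the application.

```python
import numpy as np

# Hypothetical table of precomputed differential beamforming weights, one set
# per application scenario. Shape per entry: (num_mics, num_freq_bins); the
# values below are placeholders, not actual super-directional designs.
WEIGHTS = {
    "handset_call": np.ones((2, 257)) * 0.5,
    "conference":   np.ones((2, 257)) * 0.5,
}

def beamform(mic_spectra, scenario):
    """Apply the weighting coefficients corresponding to the detected
    scenario to the per-microphone spectra (num_mics x num_freq_bins)."""
    w = WEIGHTS[scenario]
    return np.sum(np.conj(w) * mic_spectra, axis=0)   # beamformed spectrum

out = beamform(np.random.randn(2, 257) + 1j * np.random.randn(2, 257), "conference")
```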
Abstract:
An audio information processing method and apparatus are provided. The method includes: determining a first camera; acquiring first audio information collected by a first audio collecting unit; acquiring second audio information collected by a second audio collecting unit; processing the first audio information and the second audio information to obtain third audio information, where in the third audio information, a gain of a sound signal coming from the shooting direction of the first camera is a first gain and a gain of a sound signal coming from the direction opposite to the shooting direction is a second gain; and outputting the third audio information. When the method or the apparatus of the present application is adopted, in the synchronously output audio information, the volume of a target sound source within the final video image is higher than the volume of noise or of an interfering sound source outside the video image.
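As a rough illustration, the sketch below forms forward- and backward-facing first-order differential signals from the two collecting units and mixes them with different gains so that sound from the shooting direction is louder than sound from the opposite direction; the delay-and-subtract structure and the gain values are assumptions.

```python
import numpy as np

def emphasize_camera_direction(front_mic, back_mic, delay_samples=1,
                               front_gain=1.0, back_gain=0.2):
    """Illustrative sketch only: combine forward- and backward-facing
    first-order differential (cardioid-like) signals with different gains
    so sound from the shooting direction receives the larger gain. The
    circular shift used for the delay is a simplification."""
    f = np.asarray(front_mic, dtype=float)
    b = np.asarray(back_mic, dtype=float)
    fwd = f - np.roll(b, delay_samples)    # null toward the back of the camera
    bwd = b - np.roll(f, delay_samples)    # null toward the shooting direction
    return front_gain * fwd + back_gain * bwd   # the third audio information
```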