Abstract:
One or more attributes (e.g., pan, gain, etc.) associated with one or more objects (e.g., an instrument) of a stereo or multi-channel audio signal can be modified to provide remix capability.
Abstract:
The purpose of the invention is to bridge the gap between parametric multi-channel audio coding and matrixed-surround multi-channel coding by gradually improving the sound of an up-mix signal as the bit-rate consumed by the side information is raised from zero up to the bit-rates of the parametric methods. More specifically, it provides a method of flexibly choosing an “operating point” somewhere between matrixed surround (no side information, limited audio quality) and fully parametric reconstruction (full side-information rate required, good quality). This operating point can be chosen dynamically (i.e., varying over time) and in response to the permissible side-information rate, as dictated by the individual application.
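The operating-point idea above can be illustrated with a minimal Python sketch: a decoder blends a matrixed-surround up-mix (available with no side information) and a parametric reconstruction in proportion to how much of the full side-information rate is actually available. The function name, the linear blend, and the rate-to-weight mapping are illustrative assumptions, not the patented method.

```python
def blended_upmix(matrix_estimate, parametric_estimate, side_info_rate, full_rate):
    """Blend a matrixed-surround up-mix with a parametric reconstruction.
    alpha = 0 means no side information (pure matrix up-mix); alpha = 1
    means the full side-information rate is available (pure parametric
    reconstruction). alpha may be chosen anew for every frame."""
    alpha = max(0.0, min(1.0, side_info_rate / full_rate))
    return [(1.0 - alpha) * m + alpha * p
            for m, p in zip(matrix_estimate, parametric_estimate)]
```

Because the weight is clamped to [0, 1], any permissible side-information rate maps to a valid operating point between the two extremes.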
Abstract:
An auditory scene is synthesized by applying two or more different sets of one or more spatial parameters (e.g., an inter-ear level difference (ILD), inter-ear time difference (ITD), and/or head-related transfer function (HRTF)) to two or more different frequency bands of a combined audio signal, where each different frequency band is treated as if it corresponded to a single audio source in the auditory scene. In one embodiment, the combined audio signal corresponds to the combination of two or more different source signals, where each different frequency band corresponds to a region of the combined audio signal in which one of the source signals dominates the others. In this embodiment, the different sets of spatial parameters are applied to synthesize an auditory scene comprising the different source signals. In another embodiment, the combined audio signal corresponds to the combination of the left and right audio signals of a binaural signal corresponding to an input auditory scene. In this embodiment, the different sets of spatial parameters are applied to reconstruct the input auditory scene. In either case, transmission bandwidth requirements are reduced by reducing to one the number of different audio signals that need to be transmitted to a receiver configured to synthesize/reconstruct the auditory scene.
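The per-band synthesis described above can be sketched in Python: each frequency band of the combined signal is treated as a single source and rendered with its own inter-ear level difference (ILD) and inter-ear time difference (ITD). The band decomposition is assumed to be given (`band_signals`), the symmetric ILD split and integer-sample ITD are simplifying assumptions, and HRTF filtering is omitted.

```python
import math

def apply_ild_itd(band_signals, ilds_db, itds_samples):
    """Synthesize left/right channels from per-band signals of a combined
    (mono) audio signal by applying a per-band ILD (in dB) and ITD (in
    samples). Each band is treated as one source in the auditory scene."""
    n = len(band_signals[0])
    left = [0.0] * n
    right = [0.0] * n
    for band, ild_db, itd in zip(band_signals, ilds_db, itds_samples):
        g = 10.0 ** (ild_db / 20.0)   # ILD as a linear level ratio
        gl = math.sqrt(g)             # split the level difference
        gr = 1.0 / gl                 # symmetrically between the ears
        for i, s in enumerate(band):
            left[i] += gl * s
            j = i + itd               # delay the right ear by the ITD
            if 0 <= j < n:
                right[j] += gr * s
    return left, right
```

A positive ILD pans a band toward the left ear; a positive ITD delays its arrival at the right ear, which is the dominant localization cue at low frequencies.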
Abstract:
Methods and apparatus are disclosed for controlling a buffer in a communication system, such as a digital audio broadcasting (DAB) communication system. A more consistent perceptual quality over time provides a more pleasing auditory experience for the listener. The disclosed bit allocation process determines, for each frame, a distortion d[k] at which the frame is to be encoded. The distortion d[k] is determined to minimize (i) the probability of a buffer overflow, and (ii) the variation of perceived distortion over time. The buffer level is controlled by partitioning a signal into a sequence of successive frames; estimating a distortion rate for a number of frames; and selecting a distortion such that the variance of the buffer level is bounded by a specified value.
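The control loop described above can be sketched as a toy Python simulation: a per-frame distortion scale d[k] is nudged toward the value that keeps the encoder buffer near half-full, so the buffer level stays bounded while d[k] varies smoothly over time. The inverse rate model (`bits = rate / d`), the gain constant, and the clamps are illustrative assumptions, not the disclosed bit allocation process.

```python
def control_buffer(frame_rates, channel_rate, buffer_size, gain=0.1):
    """Toy buffer controller. frame_rates[k] is the bit demand of frame k
    at nominal distortion; raising d[k] lowers the bits actually spent.
    Returns the per-frame distortions and buffer levels."""
    level = buffer_size / 2.0
    d = 1.0
    distortions, levels = [], []
    for r in frame_rates:
        bits = r / d                            # fewer bits at higher distortion
        level += bits - channel_rate            # encoder fills, channel drains
        level = max(0.0, min(buffer_size, level))
        # smooth correction: raise d above half-full, lower it below
        d *= 1.0 + gain * (level - buffer_size / 2.0) / buffer_size
        d = max(0.25, d)
        distortions.append(d)
        levels.append(level)
    return distortions, levels
```

Because d[k] changes by only a small multiplicative step per frame, the perceived distortion varies gradually rather than jumping frame to frame.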
Abstract:
A method and apparatus are disclosed for controlling a buffer in a digital audio broadcasting (DAB) communication system. An audio encoder marks a frame as “dropped” whenever a buffer overflow might occur. Only a small number of bits are used to process a lost frame, thereby preventing the buffer from overflowing and allowing the encoder buffer level to quickly recover from the potential overflow condition. The audio encoder optionally sets a flag that provides an indication to the receivers that a frame has been lost. If a “frame lost” condition is detected by a receiver, the receiver can optionally employ mitigation techniques to reduce the impact of the lost frame(s).
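The frame-dropping behavior can be sketched in a few lines of Python: whenever encoding the next frame would overflow the encoder buffer, the frame is marked as dropped and only a few bits (standing in for the “frame lost” flag) are spent, letting the buffer level recover. The fixed `drop_cost` and the simple fill/drain model are illustrative assumptions.

```python
def encode_frames(frame_bits, channel_rate, buffer_size, drop_cost=8):
    """Simulate an encoder buffer: each frame adds its bit cost, the
    channel drains channel_rate bits per frame. A frame that would
    overflow the buffer is dropped and costs only the lost-frame flag.
    Returns the indices of dropped frames and the final buffer level."""
    level = 0.0
    dropped = []
    for k, bits in enumerate(frame_bits):
        cost = bits
        if level + bits - channel_rate > buffer_size:
            cost = drop_cost          # only the "frame lost" flag is sent
            dropped.append(k)
        level = max(0.0, level + cost - channel_rate)
    return dropped, level
```

A receiver that sees the flag for frame k can then conceal the gap, e.g. by repeating or interpolating neighboring frames.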
Abstract:
The following coding scenario is addressed: a number of audio source signals need to be transmitted or stored for the purpose of mixing wave field synthesis, multi-channel surround, or stereo signals after decoding the source signals. The proposed technique offers significant coding gain when the source signals are coded jointly rather than separately, even when no redundancy is present between them. This is possible by considering the statistical properties of the source signals, the properties of mixing techniques, and spatial hearing. The sum of the source signals is transmitted, along with the statistical properties of the source signals, which largely determine the perceptually important spatial cues of the final mixed audio channels. Source signals are recovered at the receiver such that their statistical properties approximate the corresponding properties of the original source signals. Subjective evaluations indicate that high audio quality is achieved by the proposed scheme.
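The scheme above can be sketched in Python under a deliberately crude statistical model: the encoder transmits only the sum signal plus each source's average power, and the decoder scales the sum so that each estimate's power matches the transmitted value. Real systems work per time-frequency tile with richer statistics; the single-power-per-source model here is an illustrative assumption.

```python
import math

def encode(sources):
    """Transmit the sum of the source signals plus, per source, its
    average power (the statistical side information)."""
    n = len(sources[0])
    mix = [sum(s[i] for s in sources) for i in range(n)]
    powers = [sum(x * x for x in s) / n for s in sources]
    return mix, powers

def decode(mix, powers):
    """Recover source estimates by scaling the sum so that each
    estimate's power equals the transmitted power. (A real decoder
    would guard against a zero-power mix; omitted for brevity.)"""
    n = len(mix)
    mix_power = sum(x * x for x in mix) / n
    return [[math.sqrt(p / mix_power) * x for x in mix] for p in powers]
```

The waveforms are not reconstructed exactly; only the statistical properties that drive the spatial cues of the final mix are preserved, which is what enables the coding gain.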
Abstract:
One or more attributes (e.g., pan, gain, etc.) associated with one or more objects (e.g., an instrument) of a stereo or multi-channel audio signal can be modified to provide remix capability. An audio decoding apparatus obtains an audio signal having a set of objects and side information. The apparatus obtains a set of mix parameters from a user input and an attenuation factor from the set of mix parameters. The apparatus then generates a plural-channel audio signal using at least one of the side information, the attenuation factor or the set of mix parameters.
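The remix operation can be illustrated with a minimal Python sketch: assuming the decoder knows (from side information) the per-channel gains with which an object was originally mixed, it removes that contribution and re-inserts the object with user-chosen gains, e.g. to re-pan or attenuate an instrument. The known-object signal model and the function signature are illustrative assumptions; the patent estimates the object contribution from side information rather than receiving it directly.

```python
def remix(mix_left, mix_right, obj, old_gains, new_gains):
    """Replace an object's contribution in a stereo mix.
    old_gains = (gL, gR): gains from the side information.
    new_gains = (gL, gR): gains from the user's mix parameters
    (an attenuation factor or pan change maps to these)."""
    out_l = [ml - (old_gains[0] - new_gains[0]) * o
             for ml, o in zip(mix_left, obj)]
    out_r = [mr - (old_gains[1] - new_gains[1]) * o
             for mr, o in zip(mix_right, obj)]
    return out_l, out_r
```

Setting both new gains to zero mutes the object; scaling them relative to the old gains applies the attenuation factor from the mix parameters.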
Abstract:
A plural-channel audio signal (e.g., a stereo audio signal) is processed to modify a gain (e.g., a volume or loudness) of a speech component signal (e.g., dialogue spoken by actors in a movie) relative to an ambient component signal (e.g., reflected or reverberated sound) or other component signals. In one aspect, the speech component signal is identified and modified. In one aspect, the speech component signal is identified by assuming that the speech source (e.g., the actor currently speaking) is in the center of a stereo sound image of the plural-channel audio signal and by considering the spectral content of the speech component signal.
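The center-of-image assumption can be sketched in Python with a simple mid/side decomposition: the correlated mid signal m = (L+R)/2 stands in for the speech component and the side signal s = (L-R)/2 for ambience, and only the mid is rescaled. This is a crude stand-in; the abstract's refinement based on the spectral content of the speech component is omitted here.

```python
def adjust_dialogue(left, right, speech_gain):
    """Scale the center (mid) component of a stereo signal by
    speech_gain while leaving the side (ambience) component unchanged.
    speech_gain > 1 boosts dialogue; < 1 attenuates it."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        m = 0.5 * (l + r)   # mid: center-panned (speech) component
        s = 0.5 * (l - r)   # side: decorrelated (ambience) component
        out_l.append(speech_gain * m + s)
        out_r.append(speech_gain * m - s)
    return out_l, out_r
```

In practice this decomposition is done per frequency band, so that only bands where speech plausibly dominates are rescaled.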
Abstract:
An apparatus for processing an audio signal, and a method thereof, are disclosed, by which a local dynamic range of an audio signal, as well as its maximum dynamic range, can be adaptively normalized. The present invention includes receiving a signal, by an audio processing apparatus; computing a long-term power and a short-term power by estimating the power of the signal; generating a slow gain based on the long-term power; generating a fast gain based on the short-term power; obtaining a final gain by combining the slow gain and the fast gain; and modifying the signal using the final gain.
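The claimed steps map naturally onto a two-time-constant gain control, sketched below in Python: two one-pole smoothers estimate long-term and short-term power, a slow gain normalizes the overall level toward a target, a fast gain gently counteracts local deviations, and their product is the final gain applied to the signal. The smoothing coefficients, target level, and fourth-root shaping of the fast gain are illustrative assumptions, not values from the patent.

```python
import math

def normalize(signal, slow_coef=0.999, fast_coef=0.9, target=0.1, eps=1e-9):
    """Per-sample two-stage gain control: slow gain from long-term power,
    fast gain from the ratio of long-term to short-term power, final
    gain = slow * fast applied to each sample."""
    long_p = fast_p = eps
    out = []
    for x in signal:
        long_p = slow_coef * long_p + (1.0 - slow_coef) * x * x   # long-term power
        fast_p = fast_coef * fast_p + (1.0 - fast_coef) * x * x   # short-term power
        slow_gain = math.sqrt(target / (long_p + eps))
        # gentle local correction: attenuate when short-term power
        # exceeds the long-term average, boost when it falls below
        fast_gain = math.sqrt(math.sqrt((long_p + eps) / (fast_p + eps)))
        out.append(slow_gain * fast_gain * x)
    return out
```

In steady state the short-term and long-term powers agree, the fast gain tends to one, and the output power settles near the target; transient bursts are softened by the fast gain before the slow gain catches up.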