Abstract:
Methods and apparatuses are disclosed for streaming audio between a source device and a destination device. An example method may include determining an available bandwidth between the source device and the destination device. The example method may also include determining a bit rate for streaming audio from the source device to the destination device, wherein the bit rate is based on the available bandwidth. The example method may further include determining a preferred audio characteristic for streaming audio from the source device to the destination device, wherein the preferred audio characteristic is based on a user preference. The example method may also include determining encoded audio to be transmitted from the source device to the destination device based on the preferred audio characteristic and the bit rate.
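The described method can be sketched as follows. This is an illustrative, hedged example: the function names, headroom factor, and the specific sample-rate/channel trade-offs are assumptions for the sketch, not details from the disclosure.

```python
# Hypothetical sketch: pick a streaming bit rate from the measured available
# bandwidth, then choose encoded audio that honors a user-preferred audio
# characteristic. All names and thresholds are illustrative.

def select_bit_rate(available_bandwidth_kbps, headroom=0.8):
    """Reserve some headroom so the stream survives bandwidth dips."""
    return int(available_bandwidth_kbps * headroom)

def select_encoding(bit_rate_kbps, preference):
    """Trade sample rate against channel count under the bit-rate budget.

    preference: 'sample_rate' or 'channels' (the user-preferred characteristic).
    """
    if preference == "sample_rate":
        # Spend the budget on a higher sample rate, fewer channels.
        return {"sample_rate_hz": 48000 if bit_rate_kbps >= 128 else 32000,
                "channels": 2 if bit_rate_kbps >= 256 else 1}
    # Otherwise favor more channels at a lower sample rate.
    return {"sample_rate_hz": 32000,
            "channels": 6 if bit_rate_kbps >= 384 else 2}

encoded = select_encoding(select_bit_rate(1000), "sample_rate")
```

In this sketch the bandwidth measurement drives the bit rate and the user preference drives which audio characteristic the bit-rate budget is spent on, mirroring the two inputs the abstract names.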
Abstract:
In one example, a device for retrieving audio data includes one or more processors configured to receive availability data representative of a plurality of available adaptation sets, the available adaptation sets including a scene-based audio adaptation set and one or more object-based audio adaptation sets, receive selection data identifying which of the scene-based audio adaptation set and the one or more object-based audio adaptation sets are to be retrieved, and provide instruction data to a streaming client to cause the streaming client to retrieve data for each of the adaptation sets identified by the selection data, and a memory configured to store the retrieved data for the audio adaptation sets.
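A minimal sketch of the retrieval flow, assuming a dictionary-based stand-in for the availability data and streaming client (none of these names come from the patent): selection data identifies which adaptation sets to fetch, and the client stores the retrieved data in memory.

```python
# Illustrative sketch (not the patent's API): filter availability data for a
# scene-based audio adaptation set plus object-based sets, then hand the
# selected set IDs to a streaming client for retrieval.

def choose_adaptation_sets(available, selection):
    """available: list of dicts with 'id' and 'kind' ('scene' or 'object').
    selection: set of adaptation-set IDs to be retrieved."""
    return [a for a in available if a["id"] in selection]

class StreamingClient:
    def __init__(self):
        self.retrieved = {}  # memory storing retrieved data per adaptation set

    def retrieve(self, adaptation_sets):
        for a in adaptation_sets:
            # A real client would issue HTTP requests for media segments here.
            self.retrieved[a["id"]] = f"segments-for-{a['id']}"

available = [{"id": "hoa0", "kind": "scene"},
             {"id": "obj1", "kind": "object"},
             {"id": "obj2", "kind": "object"}]
client = StreamingClient()
client.retrieve(choose_adaptation_sets(available, {"hoa0", "obj2"}))
```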
Abstract:
This disclosure describes techniques for coding of higher-order ambisonics audio data comprising at least one higher-order ambisonic (HOA) coefficient corresponding to a spherical harmonic basis function having an order greater than one. This disclosure describes techniques for adjusting HOA soundfields to potentially improve spatial alignment of the acoustic elements to the visual component in a mixed audio/video reproduction scenario. In one example, a device for rendering an HOA audio signal includes one or more processors configured to render the HOA audio signal over one or more speakers based on one or more field of view (FOV) parameters of a reference screen and one or more FOV parameters of a viewing window.
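One plausible reading of the FOV-based adjustment can be sketched as a linear remapping of source azimuths from the reference screen's field of view to the viewing window's field of view. The function and parameter names are assumptions, not the disclosure's syntax.

```python
# Hedged sketch of screen-related soundfield adaptation: linearly remap a
# source azimuth from the reference screen's FOV to the local viewing
# window's FOV so audio stays spatially aligned with the scaled video.

def remap_azimuth(azimuth_deg, ref_fov_deg, view_fov_deg):
    """Scale an azimuth inside the reference FOV so it lands at the
    proportionally equivalent position inside the viewing-window FOV."""
    half_ref, half_view = ref_fov_deg / 2.0, view_fov_deg / 2.0
    if abs(azimuth_deg) > half_ref:
        return azimuth_deg            # outside the screen region: leave as-is
    return azimuth_deg * (half_view / half_ref)

# A source at the right edge of a 60-degree reference screen maps to the
# right edge of a 90-degree viewing window.
edge = remap_azimuth(30.0, 60.0, 90.0)
```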
Abstract:
In general, techniques are described for coding of spherical harmonic coefficients representative of a three-dimensional soundfield. A device comprising a memory and one or more processors may be configured to perform the techniques. The memory may be configured to store a plurality of spherical harmonic coefficients. The one or more processors may be configured to perform an energy analysis with respect to the plurality of spherical harmonic coefficients to determine a reduced version of the plurality of spherical harmonic coefficients.
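The energy analysis can be illustrated with a pure-Python sketch: compute the energy of each coefficient channel over a frame and keep only the most energetic channels as the reduced version. The keep-count and data layout are assumptions for the sketch.

```python
# Illustrative energy analysis: per-channel energy over a frame, then
# truncation to the highest-energy channels. Not the patent's algorithm,
# just a minimal instance of energy-based reduction.

def reduce_shc(shc_frames, keep):
    """shc_frames: list of per-sample coefficient vectors (rows = samples).
    Returns the indices of the `keep` highest-energy coefficient channels
    and the frames restricted to those channels."""
    n_coeffs = len(shc_frames[0])
    energy = [sum(frame[c] ** 2 for frame in shc_frames) for c in range(n_coeffs)]
    kept = sorted(range(n_coeffs), key=lambda c: energy[c], reverse=True)[:keep]
    kept.sort()  # preserve the original channel ordering
    reduced = [[frame[c] for c in kept] for frame in shc_frames]
    return kept, reduced

frames = [[1.0, 0.1, 0.9, 0.0],
          [1.0, 0.1, 0.8, 0.0]]
kept, reduced = reduce_shc(frames, keep=2)   # keeps channels 0 and 2
```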
Abstract:
A device comprises one or more processors configured to apply a binaural room impulse response filter to spherical harmonic coefficients representative of a sound field in three dimensions so as to render the sound field.
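A minimal time-domain sketch of applying a binaural room impulse response (BRIR) filter to coefficient channels: convolve each channel with its BRIR and sum, producing one ear's feed. Real renderers work per ear and typically in the frequency domain; the signals here are toy values.

```python
# Hedged sketch: per-channel BRIR convolution and summation for one ear.

def convolve(signal, impulse):
    """Direct-form discrete convolution."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

def binaural_render(shc_channels, brirs):
    """shc_channels: list of coefficient signals; brirs: matching BRIR list."""
    length = len(shc_channels[0]) + len(brirs[0]) - 1
    ear = [0.0] * length
    for sig, h in zip(shc_channels, brirs):
        for n, v in enumerate(convolve(sig, h)):
            ear[n] += v
    return ear

feed = binaural_render([[1.0, 0.0], [0.0, 1.0]], [[0.5, 0.25], [1.0, 0.0]])
```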
Abstract:
A device comprising one or more processors is configured to determine a plurality of segments for each of a plurality of binaural room impulse response filters, wherein each of the plurality of binaural room impulse response filters comprises a residual room response segment and at least one direction-dependent segment for which a filter response depends on a location within a sound field; transform each of at least one direction-dependent segment of the plurality of binaural room impulse response filters to a domain corresponding to a domain of a plurality of hierarchical elements to generate a plurality of transformed binaural room impulse response filters, wherein the plurality of hierarchical elements describe a sound field; and perform a fast convolution of the plurality of transformed binaural room impulse response filters and the plurality of hierarchical elements to render the sound field.
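The segmentation above can be sketched as follows, under the assumption that the residual room response segment is shared across directions: each hierarchical element is filtered with its direction-dependent segment, and the residual is applied once to the summed signal, saving per-direction convolutions. Direct convolution is used here for clarity where a real renderer would use FFT-based fast convolution.

```python
# Hedged sketch of segmented BRIR rendering: per-element direction-dependent
# heads, one shared residual-room tail applied to the mix.

def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xv in enumerate(x):
        for j, hv in enumerate(h):
            y[i + j] += xv * hv
    return y

def render_segmented(elements, direction_segments, residual):
    """elements: hierarchical element signals; direction_segments: matching
    direction-dependent BRIR segments; residual: shared room response."""
    mixed = [0.0] * (len(elements[0]) + len(direction_segments[0]) - 1)
    for sig, head in zip(elements, direction_segments):
        for n, v in enumerate(convolve(sig, head)):
            mixed[n] += v
    return convolve(mixed, residual)  # one shared residual-room convolution

out = render_segmented([[1.0], [2.0]], [[1.0], [0.5]], [1.0, 0.5])
```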
Abstract:
In general, techniques are described for transforming spherical harmonic coefficients. A device comprising one or more processors may perform the techniques. The processors may be configured to parse a bitstream to determine transformation information describing how the sound field was transformed to reduce a number of the plurality of hierarchical elements that provide information relevant in describing the sound field. The processors may further be configured to, when reproducing the sound field based on those of the plurality of hierarchical elements that provide information relevant in describing the sound field, transform the sound field based on the transformation information to reverse the transformation performed to reduce the number of the plurality of hierarchical elements.
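As one concrete, assumed instance of the idea: if the transformation information carried in the bitstream were a rotation angle, reproduction would apply the inverse rotation. The syntax-element name and the 2D simplification are hypothetical.

```python
import math

# Illustrative sketch: the bitstream (here a dict standing in for parsed
# syntax elements) carries an assumed rotation angle describing how the
# sound field was rotated; reproduction applies the inverse rotation.

def parse_transformation_info(bitstream):
    return bitstream.get("rotation_deg", 0.0)

def inverse_rotate(position, rotation_deg):
    """Undo a 2D rotation of a source position (x, y)."""
    t = math.radians(-rotation_deg)   # negative angle reverses the transform
    x, y = position
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

angle = parse_transformation_info({"rotation_deg": 90.0})
x, y = inverse_rotate((0.0, 1.0), angle)   # source returns to the +x axis
```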
Abstract:
In general, techniques are described for compensating for loudspeaker positions using hierarchical three-dimensional (3D) audio coding. An apparatus comprising one or more processors may perform the techniques. The processors may be configured to perform a first transform that is based on a spherical wave model on a first set of audio channel information for a first geometry of speakers to generate a first hierarchical set of elements that describes a sound field. The processors may further be configured to perform a second transform in a frequency domain on the first hierarchical set of elements to generate a second set of audio channel information for a second geometry of speakers.
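The two-stage structure can be sketched with toy matrices: a first transform maps channel feeds for one speaker geometry into a small hierarchical set (simple sum/difference elements stand in for the spherical-wave model), and a second transform maps those elements onto a different geometry. The matrices are illustrative, not the patent's, and the frequency-domain aspect is omitted.

```python
# Minimal pure-Python sketch of geometry compensation via a hierarchical
# intermediate representation. Matrices are hand-picked toy values.

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

# First transform: two speakers of the first geometry -> hierarchical elements.
TO_ELEMENTS = [[0.5, 0.5],     # omni-like element (W)
               [0.5, -0.5]]    # directional element

# Second transform: hierarchical elements -> the second speaker geometry.
TO_SPEAKERS = [[1.0, 1.0],     # new left  = W + directional
               [1.0, -1.0]]    # new right = W - directional

def compensate(channels):
    return matvec(TO_SPEAKERS, matvec(TO_ELEMENTS, channels))

feeds = compensate([1.0, 0.0])
```

For this particular matrix pair the second transform inverts the first, so a left-only input stays left-only; with mismatched geometries the second matrix would redistribute the elements to the new speaker positions.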
Abstract:
A device and method provide backward compatibility for virtual reality (VR), mixed reality (MR), augmented reality (AR), computer vision, and graphics systems. The device and method enable rendering audio data with more degrees of freedom on devices that support fewer degrees of freedom. The device includes memory configured to store audio data representative of a soundfield captured at a plurality of capture locations, metadata that enables the audio data to be rendered to support N degrees of freedom, and adaptation metadata that enables the audio data to be rendered to support M degrees of freedom. The device also includes one or more processors coupled to the memory, and configured to adapt, based on the adaptation metadata, the audio data to provide the M degrees of freedom, and generate speaker feeds based on the adapted audio data.
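A small sketch of the adaptation step, with assumed field names: a device supporting only M degrees of freedom restricts N-DOF listener state per the adaptation metadata, for example dropping translation and keeping rotation when adapting 6DOF content to a 3DOF device.

```python
# Hedged sketch: adaptation metadata tells a playback device supporting
# fewer degrees of freedom how to restrict the listener state before
# rendering. All field names are assumptions.

def adapt_listener_state(state, adaptation_metadata):
    """state: dict with 'rotation' (yaw, pitch, roll) and 'translation' (x, y, z)."""
    m = adaptation_metadata["supported_dof"]
    adapted = {"rotation": state["rotation"]}
    if m >= 6:
        adapted["translation"] = state["translation"]
    else:
        # 3DOF device: pin the listener at the metadata's reference position.
        adapted["translation"] = adaptation_metadata["reference_position"]
    return adapted

meta = {"supported_dof": 3, "reference_position": (0.0, 0.0, 0.0)}
adapted = adapt_listener_state(
    {"rotation": (10.0, 0.0, 0.0), "translation": (1.0, 2.0, 0.0)}, meta)
```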
Abstract:
In general, techniques are described by which to render different portions of audio data using different renderers. A device comprising a memory and one or more processors may be configured to perform the techniques. The memory may store audio renderers. The processor(s) may obtain a first audio renderer of the plurality of audio renderers, and apply the first audio renderer with respect to a first portion of the audio data to obtain one or more first speaker feeds. The processor(s) may next obtain a second audio renderer of the plurality of audio renderers, and apply the second audio renderer with respect to a second portion of the audio data to obtain one or more second speaker feeds. The processor(s) may output, to one or more speakers, the one or more first speaker feeds and the one or more second speaker feeds.
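The per-portion dispatch can be illustrated as follows; the renderer implementations here are trivial gain stages standing in for real renderers (e.g. an ambisonic decoder for one portion and an object renderer for another), and all names are assumptions.

```python
# Illustrative sketch: apply one renderer to a first portion of the audio
# data and a different renderer to a second portion, then mix the resulting
# speaker feeds for output to the speakers.

def gain_renderer(gain):
    """Returns a toy renderer: scales a mono portion into two speaker feeds."""
    def render(samples):
        return [[s * gain for s in samples], [s * gain for s in samples]]
    return render

def mix(feeds_a, feeds_b):
    """Sum per-speaker feeds element-wise."""
    return [[a + b for a, b in zip(ca, cb)] for ca, cb in zip(feeds_a, feeds_b)]

renderers = {"ambient": gain_renderer(0.5), "objects": gain_renderer(1.0)}
first = renderers["ambient"]([1.0, 1.0])    # first portion, first renderer
second = renderers["objects"]([0.0, 1.0])   # second portion, second renderer
speaker_feeds = mix(first, second)
```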