Abstract:
Embodiments are directed to a method of rendering adaptive audio by receiving input audio comprising channel-based audio, audio objects, and dynamic objects, wherein the dynamic objects are classified as sets of low-priority dynamic objects and high-priority dynamic objects, rendering the channel-based audio, the audio objects, and the low-priority dynamic objects in a first rendering processor of an audio processing system, and rendering the high-priority dynamic objects in a second rendering processor of the audio processing system. The rendered audio is then subject to virtualization and post-processing steps for playback through soundbars and other similar speakers with limited height-rendering capability.
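The priority-based split described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the names (`AudioElement`, `route_to_renderers`) and the numeric priority threshold are assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class AudioElement:
    name: str
    kind: str              # "channel", "object", or "dynamic"
    priority: float = 0.0  # only meaningful for dynamic objects

PRIORITY_THRESHOLD = 0.5   # assumed cutoff between low- and high-priority

def route_to_renderers(elements):
    """Split input audio between two rendering processors: channel beds,
    static audio objects, and low-priority dynamic objects go to the first
    renderer; high-priority dynamic objects go to the second renderer."""
    first, second = [], []
    for e in elements:
        if e.kind == "dynamic" and e.priority >= PRIORITY_THRESHOLD:
            second.append(e)
        else:
            first.append(e)
    return first, second
```

In a real system the classification would come from authored metadata rather than a fixed threshold; the routing logic, however, follows the two-renderer structure the abstract describes.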
Abstract:
A system and method of modifying a binaural signal using headtracking information. The system calculates a delay, a first filter response, and a second filter response, and applies these to the left and right components of the binaural signal according to the headtracking information. The system may also apply headtracking to parametric binaural signals. In this manner, headtracking may be applied to pre-rendered binaural audio.
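A rough sketch of the delay-plus-filter operation on a pre-rendered binaural pair is shown below. The Woodworth interaural-delay approximation, the one-pole low-pass standing in for the two filter responses, and the sign convention for yaw are all illustrative assumptions, not details from the abstract.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, an assumed average head radius

def headtrack_binaural(left, right, yaw_rad, fs):
    """Re-orient a pre-rendered binaural pair for a tracked head rotation by
    applying an interaural delay and a shadow filter to the ear turned away.
    Which ear is 'far' for a given yaw depends on source position; here a
    positive yaw is simply assumed to delay and filter the left channel."""
    # Woodworth approximation of the interaural time difference at this yaw
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(yaw_rad) + math.sin(abs(yaw_rad)))
    n = int(round(itd * fs))

    def delayed(x):
        return [0.0] * n + list(x[:len(x) - n]) if n else list(x)

    def shadowed(x, alpha=0.5):
        # one-pole low-pass standing in for the abstract's filter responses
        out, acc = [], 0.0
        for v in x:
            acc = alpha * v + (1.0 - alpha) * acc
            out.append(acc)
        return out

    if yaw_rad >= 0:
        return delayed(shadowed(left)), list(right)
    return list(left), delayed(shadowed(right))
```

The parametric-binaural case mentioned in the abstract would instead adjust the parameters before binaural synthesis, which this sample-domain sketch does not cover.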
Abstract:
Embodiments are directed to an interconnect for coupling components in an object-based rendering system comprising: a first network channel coupling a renderer to an array of individually addressable drivers projecting sound in a listening environment and transmitting audio signals and control data from the renderer to the array, and a second network channel coupling a microphone placed in the listening environment to a calibration component of the renderer and transmitting acoustic calibration information generated by the microphone to the calibration component. The interconnect is suitable for use in a system for rendering spatial audio content comprising channel-based and object-based audio components.
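The two-channel topology can be summarized with a small data-model sketch. The class and field names are assumptions made for illustration; the abstract specifies only the direction and payload of each channel.

```python
from dataclasses import dataclass

@dataclass
class DownstreamChannel:
    """First channel: renderer -> individually addressable driver array,
    carrying both audio signals and control data."""
    driver_addresses: list

    def send(self, audio_frames, control_data):
        return {"to": self.driver_addresses,
                "audio": audio_frames,
                "control": control_data}

@dataclass
class UpstreamChannel:
    """Second channel: listening-environment microphone -> the renderer's
    calibration component, carrying acoustic calibration information."""
    mic_id: str

    def report(self, acoustic_measurement):
        return {"from": self.mic_id, "calibration": acoustic_measurement}
```

The point of the split is that rendering traffic and calibration traffic flow in opposite directions between different endpoints, so they are modeled as separate channels.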
Abstract:
An audio processing method may involve receiving media input audio data corresponding to a media stream and headphone microphone input audio data, determining a media audio gain for at least one of a plurality of frequency bands of the media input audio data and determining a headphone microphone audio gain for at least one of a plurality of frequency bands of the headphone microphone input audio data. Determining the headphone microphone audio gain may involve determining a feedback risk control value, for at least one of the plurality of frequency bands, corresponding to a risk of headphone feedback between at least one external microphone of a headphone microphone system and at least one headphone speaker and determining a headphone microphone audio gain that will mitigate actual or potential headphone feedback in at least one of the plurality of frequency bands, based at least partly upon the feedback risk control value.
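One way to read the gain-determination step is a per-band attenuation that grows with the feedback risk control value. The proportional mapping below is an illustrative assumption; the abstract only requires that the resulting gain mitigate actual or potential feedback in the affected bands.

```python
def mic_band_gains_db(risk_values, base_gain_db=0.0, max_cut_db=60.0):
    """Per-band headphone-microphone gain: attenuate each frequency band in
    proportion to its feedback risk control value, where 0.0 means no risk
    and 1.0 means feedback is effectively certain. The linear mapping and
    the 60 dB maximum cut are assumptions for this sketch."""
    gains = []
    for risk in risk_values:
        r = min(max(risk, 0.0), 1.0)   # clamp the control value to [0, 1]
        gains.append(base_gain_db - max_cut_db * r)
    return gains
```

A practical implementation would smooth these gains over time to avoid audible pumping, which this static sketch omits.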
Abstract:
Media input audio data corresponding to a media stream and microphone input audio data from at least one microphone may be received. A first level of at least one of a plurality of frequency bands of the media input audio data, as well as a second level of at least one of a plurality of frequency bands of the microphone input audio data, may be determined. Media output audio data and microphone output audio data may be produced by adjusting levels of one or more of the first and second plurality of frequency bands based on the perceived loudness of the microphone input audio data, the microphone output audio data, the media output audio data, and the media input audio data. One or more processes may be modified upon receipt of a mode-switching indication.
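A minimal sketch of the band-level adjustment and the mode switch is given below. The specific rules (a 3 dB margin for the microphone signal, a 6 dB media duck in "conversation" mode) are assumptions chosen for illustration; band values are linear powers.

```python
def adjust_bands(media_bands, mic_bands, mode="normal"):
    """Adjust per-band levels of the media and microphone paths.
    In "normal" mode, each microphone band is raised just enough to sit
    about 3 dB above the corresponding media band so captured speech stays
    audible. A mode-switching indication ("conversation") instead ducks
    the media by about 6 dB and passes the microphone bands through."""
    if mode == "conversation":
        return [m * 0.25 for m in media_bands], list(mic_bands)
    mic_out = []
    for med, mic in zip(media_bands, mic_bands):
        target = med * 2.0          # ~3 dB above the media band (power)
        mic_out.append(max(mic, target))
    return list(media_bands), mic_out
```

Real systems would drive these adjustments from a perceptual loudness model rather than raw band power, but the control structure is the same.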
Abstract:
Non-media data relating to real-world objects or persons are collected from a scene while media data from the same scene are collected. The media data comprise audio data only or audiovisual data, whereas the non-media data comprise telemetry data and/or non-telemetry data. Based at least in part on the non-media data relating to the real-world objects or persons in the scene, emitter-listener relationships between a listener and some or all of the real-world objects or persons are determined. Audio objects comprising audio content portions and non-audio data portions are generated. At least one audio object is generated based at least in part on the emitter-listener relationships.
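The emitter-listener relationship can be folded into an audio object roughly as follows. The `AudioObject` structure and its metadata field names are illustrative assumptions; the abstract specifies only that each object carries an audio content portion and a non-audio data portion derived from those relationships.

```python
import math
from dataclasses import dataclass

@dataclass
class AudioObject:
    audio: list      # audio content portion (placeholder samples)
    metadata: dict   # non-audio data portion

def make_audio_object(samples, emitter_xyz, listener_xyz):
    """Build an audio object whose non-audio portion encodes the
    emitter-listener relationship (relative position and distance)
    derived from telemetry or other non-media data."""
    rel = [e - l for e, l in zip(emitter_xyz, listener_xyz)]
    dist = math.sqrt(sum(c * c for c in rel))
    return AudioObject(audio=samples,
                       metadata={"relative_position": rel, "distance": dist})
```

A downstream renderer can then spatialize the object from the stored relationship without re-deriving it from the raw scene data.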