Abstract:
Embodiments are disclosed for hybrid near/far-field speaker virtualization. In an embodiment, a method comprises: receiving a source signal including channel-based audio or audio objects; generating near-field gain(s) and far-field gain(s) based on the source signal and a blending mode; generating a far-field signal based, at least in part, on the source signal and the far-field gain(s); rendering, using a speaker virtualizer, the far-field signal for playback of far-field acoustic audio through far-field speakers into an audio reproduction environment; generating a near-field signal based at least in part on the source signal and the near-field gain(s); prior to providing the far-field signal to the far-field speakers, sending the near-field signal to a near-field playback device or an intermediate device coupled to the near-field playback device; providing the far-field signal to the far-field speakers; and providing the near-field signal to the near-field speakers to synchronously overlay the far-field acoustic audio.
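The gain generation and signal split described above can be sketched as follows. This is a minimal illustration, not the patented method: the equal-power crossfade law and the mapping from the blending mode to a single blend parameter are assumptions, and all names are hypothetical.

```python
import math


def blend_gains(blend: float) -> tuple[float, float]:
    """Map a blending parameter in [0, 1] to (near_gain, far_gain).

    Assumes an equal-power crossfade so the synchronously overlaid
    near-field and far-field playback keep roughly constant loudness.
    """
    near = math.sin(blend * math.pi / 2)
    far = math.cos(blend * math.pi / 2)
    return near, far


def split_source(samples: list[float], blend: float) -> tuple[list[float], list[float]]:
    """Apply the gains to a mono source signal, yielding the near-field
    signal (sent ahead to the near-field playback device) and the
    far-field signal (rendered through the speaker virtualizer)."""
    near_g, far_g = blend_gains(blend)
    near_signal = [s * near_g for s in samples]
    far_signal = [s * far_g for s in samples]
    return near_signal, far_signal
```

With `blend = 0.0` all energy goes to the far-field path; with `blend = 1.0` it goes to the near-field path; intermediate values distribute the source between both while keeping the summed power approximately constant.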
Abstract:
Control data templates are generated independent of a plurality of audio elements based on user input. The user input relates to parameter values and control inputs for operations. In response to receiving audio elements after the control data templates are generated, audio objects are generated to store audio sample data representing the audio elements. Control data is generated based on the parameter values and the control inputs for the operations in the control data templates. The control data specifies the operations to be performed while rendering the audio objects. The control data is then stored separately from the audio sample data in the audio objects. The audio objects can be communicated to downstream recipient devices for rendering and/or remixing.
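The separation of control data from audio sample data can be sketched as below. This is an illustrative data-model sketch under assumptions, not the disclosed implementation; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class ControlTemplate:
    """A control data template built from user input before any audio
    elements arrive: parameter values plus control inputs for one
    rendering operation (names are illustrative)."""
    operation: str   # e.g. "pan" or "gain_automation" (assumed examples)
    params: dict


@dataclass
class AudioObject:
    """Audio sample data is kept separate from the control data, so a
    downstream recipient can render or remix independently."""
    samples: list[float]
    control_data: list[ControlTemplate] = field(default_factory=list)


def bind_templates(samples: list[float], templates: list[ControlTemplate]) -> AudioObject:
    """When an audio element arrives, generate an audio object whose
    control data is derived from the pre-built templates; the sample
    data itself is never modified by the control data."""
    obj = AudioObject(samples=list(samples))
    obj.control_data = [ControlTemplate(t.operation, dict(t.params)) for t in templates]
    return obj
```

Because the control data is copied from the templates rather than baked into the samples, a downstream device can apply, replace, or ignore each operation at render time.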
Abstract:
An apparatus may include a housing adapted for at least partial insertion into a concha bowl of a human ear, at least one speaker residing in or on the housing, a control system residing in or on the housing and a positioning element attached to the housing. The control system may be configured for controlling the speaker and configured for radio frequency (RF) communication. The positioning element may be configured to fit at least partially inside a concha of the human ear and may be configured to retain the housing at least partially within the concha bowl. The positioning element may include one or more wires configured for communication with the control system. The one or more wires may be configured for receiving and/or transmitting RF radiation. In some examples, the positioning element may be, or may include, a concha lock. The positioning element may include a loop antenna.