Abstract:
A method of providing an audio signal comprising spatial information relating to a location of at least one virtual source (202) in a sound field with respect to a first user position comprises obtaining a first audio signal comprising a plurality of signal components, each of the signal components corresponding to a respective one of a plurality of virtual loudspeakers (200a-e) located in the sound field; obtaining an indication of user movement; determining a plurality of panned signal components by applying, in accordance with the indication of user movement, a panning function of a respective order to each of the signal components; and outputting a second audio signal comprising the panned signal components.
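The abstract above describes re-panning each virtual-loudspeaker component with "a panning function of a respective order" in response to user movement. The following is a rough sketch only, assuming a simple cosine-power amplitude-panning law and a yaw-only head rotation; the panning law, loudspeaker azimuths, and the names pan_gains and repan_sound_field are illustrative assumptions, not details taken from the disclosure.

```python
import numpy as np

def pan_gains(target_az, speaker_az, order=1):
    # Illustrative panning function: a normalized cosine-power law of the
    # given order between the rotated direction and each virtual loudspeaker.
    cos_diff = np.cos(np.radians(target_az - speaker_az))
    gains = np.clip(cos_diff, 0.0, None) ** order
    total = np.sum(gains)
    return gains / total if total > 0 else gains

def repan_sound_field(components, speaker_az, head_yaw_deg, orders=None):
    # Redistribute each virtual-loudspeaker signal across the array so the
    # rendered sound field counter-rotates with the listener's head yaw.
    n = len(speaker_az)
    orders = orders if orders is not None else [1] * n
    panned = np.zeros_like(components)
    for i, (signal, az) in enumerate(zip(components, speaker_az)):
        # Rotate this component's apparent direction opposite to head motion,
        # then spread it over the loudspeakers with its respective-order law.
        gains = pan_gains(az - head_yaw_deg, speaker_az, order=orders[i])
        panned += gains[:, None] * signal[None, :]
    return panned

# Usage sketch: five virtual loudspeakers, one audio block, 30-degree head turn.
speaker_az = np.array([-110.0, -30.0, 0.0, 30.0, 110.0])
components = np.random.randn(5, 1024) * 0.1   # placeholder signal components
second_signal = repan_sound_field(components, speaker_az, head_yaw_deg=30.0)
```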
Abstract:
Provided are methods and systems for updating a sound field in response to user movement. The methods and systems are less computationally expensive than existing approaches for updating a sound field, and are also suitable for use with arbitrary loudspeaker configurations. The methods and systems provide a dynamic binaural sound field rendering realized with the use of "virtual loudspeakers." Rather than being fed to physical loudspeakers, the loudspeaker signals are filtered with left and right HRIRs (Head-Related Impulse Responses) corresponding to the spatial locations of these loudspeakers. The summed left-ear and right-ear signals are then fed to the user's audio output device.
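A minimal sketch of the virtual-loudspeaker binauralization described above, assuming per-loudspeaker time-domain convolution with a left/right HRIR pair followed by summation at each ear; the array shapes, placeholder HRIR data, and the name render_binaural are illustrative assumptions rather than details from the disclosure.

```python
import numpy as np

def render_binaural(speaker_signals, hrirs_left, hrirs_right):
    # Filter each virtual-loudspeaker feed with the HRIR pair for that
    # loudspeaker's position, then sum the results per ear.
    out_len = speaker_signals.shape[1] + hrirs_left.shape[1] - 1
    left = np.zeros(out_len)
    right = np.zeros(out_len)
    for sig, hl, hr in zip(speaker_signals, hrirs_left, hrirs_right):
        left += np.convolve(sig, hl)    # left-ear contribution of this speaker
        right += np.convolve(sig, hr)   # right-ear contribution of this speaker
    return left, right

# Usage sketch with placeholder data: 5 virtual loudspeakers, 256-tap HRIRs.
speaker_signals = np.random.randn(5, 1024) * 0.1
hrirs_left = np.random.randn(5, 256) * 0.01
hrirs_right = np.random.randn(5, 256) * 0.01
left_ear, right_ear = render_binaural(speaker_signals, hrirs_left, hrirs_right)
```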
Abstract:
Provided are methods and systems for delivering three-dimensional, immersive spatial audio to a user over headphones, where the headphones include one or more virtual speaker conditions. The methods and systems recreate a natural-sounding sound field at the user's ears, including cues for elevation and depth perception. Among numerous other potential uses and applications, the methods and systems of the present disclosure may be implemented for virtual reality applications.