Abstract:
There is disclosed inter alia an apparatus having means for determining a plurality of spatial audio directional vectors; means for partitioning a vector space of the plurality of spatial audio directional vectors into a plurality of partitions; means for assigning a first spatial audio directional vector to a set of spatial audio directional vectors associated with a first centroid; means for assigning a second spatial audio directional vector to a set of spatial audio directional vectors associated with a second centroid; means for assigning an audio source direction to the set of spatial audio directional vectors associated with the first centroid; and means for assigning a further audio source direction to the set of spatial audio directional vectors associated with the second centroid.
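The centroid-based grouping described above can be pictured with a short sketch; the nearest-centroid assignment rule and the NumPy code below are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def assign_to_centroids(direction_vectors, centroids):
    """Assign each unit direction vector to the set of its nearest centroid.

    direction_vectors: (N, 3) array of spatial audio directional vectors.
    centroids: (K, 3) array of unit vectors, one per partition of the vector space.
    Returns K index lists, i.e. one set of directional vectors per centroid.
    """
    # Nearest centroid chosen by largest dot product (smallest angular distance).
    similarity = direction_vectors @ centroids.T            # (N, K)
    nearest = np.argmax(similarity, axis=1)                 # (N,)
    return [list(np.flatnonzero(nearest == k)) for k in range(len(centroids))]

# Example: two centroids (front and left); each resulting set could then be
# assigned a single audio source direction, e.g. the mean of its member vectors.
vectors = np.array([[1.0, 0.1, 0.0], [0.9, -0.1, 0.0], [0.1, 1.0, 0.0]])
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)
centroids = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(assign_to_centroids(vectors, centroids))              # [[0, 1], [2]]
```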
Abstract:
An apparatus for spatial audio signal encoding, the apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: determine, for two or more audio signals, at least one spatial audio parameter for providing spatial audio reproduction, the at least one spatial audio parameter comprising a direction parameter with an elevation component and an azimuth component; define a spherical grid generated by covering a sphere with smaller spheres, wherein the centres of the smaller spheres define points of the spherical grid; and convert the elevation and azimuth components of the direction parameter to an index value based on the defined spherical grid.
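A brief sketch of the elevation/azimuth-to-index conversion; the Fibonacci-spiral grid used here is only a stand-in for the sphere-covering grid defined in the abstract:

```python
import numpy as np

def make_spherical_grid(n_points=240):
    """Illustrative near-uniform spherical grid (Fibonacci spiral), standing in
    for the grid defined by the centres of the covering spheres."""
    i = np.arange(n_points)
    z = 1.0 - 2.0 * (i + 0.5) / n_points              # cosine of the colatitude
    phi = i * np.pi * (3.0 - np.sqrt(5.0))            # golden-angle azimuth steps
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def direction_to_index(elevation, azimuth, grid):
    """Quantise an (elevation, azimuth) direction parameter, in radians,
    to the index of the nearest grid point."""
    d = np.array([np.cos(elevation) * np.cos(azimuth),
                  np.cos(elevation) * np.sin(azimuth),
                  np.sin(elevation)])
    return int(np.argmax(grid @ d))                   # nearest point on the unit sphere

grid = make_spherical_grid()
index = direction_to_index(np.radians(10.0), np.radians(45.0), grid)
```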
Abstract:
Methods, systems, and computer program products for rendering an audio object having an apparent size are disclosed. An audio processing system receives audio panning data including a first grid mapping first virtual sound sources in a space and speaker positions to speaker gains. The first grid specifies first speaker gains of the first virtual sound sources in the space. The audio processing system determines a second grid of second virtual sound sources in the space, including mapping the first virtual sound sources to the second virtual sound sources of the second grid. The audio processing system selects at least one of the first grid or the second grid for rendering an audio object based on an apparent size of the audio object. The audio processing system renders the audio object based on the selected grid or grids.
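A minimal sketch of the grid-selection step, assuming a normalised apparent-size value and an illustrative threshold (neither is specified by the abstract):

```python
def select_grid(apparent_size, fine_grid, coarse_grid, size_threshold=0.25):
    """Pick the panning grid used to render an audio object.

    apparent_size: normalised object extent in [0, 1] (assumed scale).
    fine_grid / coarse_grid: mappings from virtual-source positions to speaker gains.
    size_threshold: illustrative assumption, not a value from the abstract.
    """
    # Small, point-like objects use the dense grid of first virtual sources;
    # large objects use the coarser grid of second virtual sources.
    return fine_grid if apparent_size < size_threshold else coarse_grid
```

The renderer would then spread the object over the virtual sources of the selected grid that fall within its extent and sum their speaker gains; a combination of both grids is equally possible, as the abstract allows selecting "at least one" of them.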
Abstract:
An arrangement (204a, 204b, 204c, 220) for cultivating a spherical harmonic digital representation of a sound scene (102, 103, 229, 230, 301, 401) is configured to: obtain the spherical harmonic digital representation (301, 401) of the sound scene; determine, through analysis (304, 404, 530, 532, 534, 535) of said spherical harmonic digital representation, a number of related spatial parameters (536, 538) indicative of at least dominant sound sources in the sound scene, their directions-of-arrival (DOA) and associated powers, wherein a time-frequency decomposition of said spherical harmonic digital representation is preferably utilized to divide the representation into a plurality of analyzed frequency bands (302, 402); and provide (360) said spherical harmonic digital representation, preferably as divided into said plurality of frequency bands, and said number of spatial parameters to spatial filtering (308, 414) in order to produce an output signal for audio rendering (231, 232, 310, 410) or for upmixing (312, 412) the representation to a higher order. A corresponding method is presented, as well as related arrangements and methods for audio playback or upmixing.
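A rough sketch of the analysis path, assuming a first-order (B-format, W/X/Y/Z) spherical harmonic signal and a simple pseudo-intensity DOA estimate per time-frequency tile; the actual analysis blocks (530, 532, 534, 535) are not detailed here:

```python
import numpy as np

def analyse_foa(sh_signal, n_fft=1024, hop=512):
    """Per-band DOA and power estimates from a first-order SH signal.

    sh_signal: (4, n_samples) array ordered W, X, Y, Z (an assumption; the
    arrangement may use higher orders and a different analysis).
    Returns DOA unit vectors and powers per time frame and frequency bin.
    """
    win = np.hanning(n_fft)
    frames = range(0, sh_signal.shape[1] - n_fft, hop)
    # Time-frequency decomposition of each spherical harmonic channel.
    spec = np.stack([
        np.stack([np.fft.rfft(sh_signal[c, t:t + n_fft] * win) for t in frames])
        for c in range(4)
    ])                                                   # (4, n_frames, n_bins)
    w, x, y, z = spec
    # Pseudo-intensity vector approximates the dominant source direction in
    # each tile (sign convention depends on the SH format used).
    intensity = np.real(np.conj(w)[None] * np.stack([x, y, z]))
    norm = np.linalg.norm(intensity, axis=0) + 1e-12
    doa = intensity / norm                               # unit DOA vectors
    power = np.abs(w) ** 2                               # omnidirectional power
    return doa, power
```

The DOA and power estimates would then steer the spatial filtering stage, or parameterise the upmix of the representation to a higher order.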
Abstract:
A method comprising: enabling user definition of a search parameter; causing searching of content to find content having the search parameter and to provide the found content having the search parameter as search results; and causing rendering of the search results, using virtual reality, at different positions in a three-dimensional space.
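One illustrative way to place search results at different positions in a three-dimensional space, here on a circle around the viewer (the abstract only requires distinct positions):

```python
import math

def result_positions(n_results, radius=2.0, height=1.6):
    """Lay out search results on a circle around the viewer.

    radius and height are illustrative assumptions (metres in a VR scene).
    Returns one (x, y, z) position per search result.
    """
    positions = []
    for i in range(n_results):
        angle = 2.0 * math.pi * i / max(n_results, 1)
        positions.append((radius * math.cos(angle), height, radius * math.sin(angle)))
    return positions
```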
Abstract:
The techniques disclosed herein can enable a system to coordinate the processing of object-based audio and channel-based audio generated by multiple applications. The system determines a spatialization technology to utilize based on contextual data. In some configurations, the contextual data can indicate the capabilities of one or more computing resources. In some configurations, the contextual data can also indicate preferences. The preferences, for example, can indicate a user preference for one type of spatialization technology, e.g., Dolby Atmos, over another type of spatialization technology, e.g., DTS:X. Based on the contextual data, the system can select a spatialization technology and a corresponding encoder to process the input signals and generate a spatially encoded stream that appropriately renders the audio of multiple applications to an available output device. The techniques disclosed herein also allow a system to dynamically change spatialization technologies during use.
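A schematic sketch of selecting a spatialization technology from contextual data; the field names, fallback, and selection rule are assumptions for illustration, not the system's actual logic:

```python
from dataclasses import dataclass, field

@dataclass
class ContextualData:
    # Illustrative fields: technologies supported by the available computing
    # resources / output devices, and a user preference.
    supported_technologies: list = field(default_factory=list)
    preferred_technology: str = ""

def select_spatialization(context: ContextualData) -> str:
    """Pick a spatialization technology (and thus its encoder);
    the user preference wins whenever the resources support it."""
    if context.preferred_technology in context.supported_technologies:
        return context.preferred_technology
    if context.supported_technologies:
        return context.supported_technologies[0]
    return "stereo-downmix"   # fallback when no spatialization technology is available

ctx = ContextualData(supported_technologies=["Dolby Atmos", "DTS:X"],
                     preferred_technology="DTS:X")
print(select_spatialization(ctx))   # DTS:X
```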
Abstract:
Accurate modeling of acoustic reverberation can be essential to generating and providing a realistic virtual reality or augmented reality experience for a participant. In an example, a reverberation signal for playback using headphones can be provided. The reverberation signal can correspond to a virtual sound source signal originating at a specified location in a local listener environment. Providing the reverberation signal can include, among other things, using information about a reference impulse response from a reference environment and using characteristic information about reverberation decay in a local environment of the participant. Providing the reverberation signal can further include using information about a relationship between a volume of the reference environment and a volume of the local environment of the participant.
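A toy sketch of how the volume relationship and local decay characteristics could enter the computation, assuming reverberant energy scales inversely with room volume and an exponential decay model (an illustrative model, not the disclosed method):

```python
import numpy as np

def adapt_reverb(reference_ir, volume_ref, volume_local,
                 t60_ref, t60_local, sr=48000):
    """Adapt a reference-room impulse response to a local listening environment.

    Illustrative assumptions: reverberant energy ~ 1 / room volume, and the
    decay envelope is re-shaped to the local reverberation time t60_local (s).
    """
    t = np.arange(len(reference_ir)) / sr
    # Remove the reference decay, then impose the local decay rate
    # (amplitude reaches -60 dB at t = t60).
    decay_ref = np.exp(-6.91 * t / t60_ref)
    decay_local = np.exp(-6.91 * t / t60_local)
    reshaped = reference_ir / np.maximum(decay_ref, 1e-9) * decay_local
    # Level correction derived from the volume relationship.
    gain = np.sqrt(volume_ref / volume_local)
    return gain * reshaped
```

The adapted reverberation signal would then be rendered over headphones together with the direct sound of the virtual source at its specified location.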
Abstract:
The invention relates to an apparatus (100) for processing soundfield data, the soundfield data defining a soundfield within a spatial reproduction region (101) comprising at least one bright zone (101a) and at least one quiet zone (101b). The apparatus (100) comprises an applicator (103) configured to apply a spatially continuously varying weighting function to the soundfield data in order to obtain weighted soundfield data defining a weighted soundfield, wherein the spatially continuously varying weighting function is configured to enhance the soundfield in the bright zone (101a) and/or the quiet zone (101b).
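A small sketch of a spatially continuously varying weighting function along a one-dimensional cut through the reproduction region, smoothly emphasising the bright zone and attenuating the quiet zone (the actual weighting function is not specified in the abstract):

```python
import numpy as np

def weighting(x, bright_centre, quiet_centre, width=0.3):
    """Smooth, continuously varying spatial weight: close to 1 in the bright
    zone, close to 0 in the quiet zone, with Gaussian transitions between them.
    All positions and the width are illustrative, in arbitrary units."""
    bright = np.exp(-((x - bright_centre) ** 2) / (2 * width ** 2))
    quiet = np.exp(-((x - quiet_centre) ** 2) / (2 * width ** 2))
    return np.clip(0.5 + 0.5 * bright - 0.5 * quiet, 0.0, 1.0)

x = np.linspace(-1.0, 1.0, 201)      # sample positions across the reproduction region
w = weighting(x, bright_centre=-0.5, quiet_centre=0.5)
# The weighted soundfield data would be w(x) * soundfield(x) at each spatial sample.
```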