Abstract:
Embodiments are described for a system for rendering object-based audio content through an array of individually addressable drivers, including at least one driver configured to project sound waves toward one or more surfaces within a listening environment for reflection to a listening area within the listening environment; a renderer configured to receive and process audio streams and one or more metadata sets that are associated with each of the audio streams and specify a playback location of the respective audio stream; and a playback system coupled to the renderer and configured to render the audio streams to a plurality of audio feeds corresponding to the array of audio drivers in accordance with the one or more metadata sets.
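As a rough, non-authoritative illustration of the rendering step above, the Python sketch below maps object audio streams with positional metadata onto per-driver feeds. The class names and the normalized inverse-distance gain law are assumptions made for illustration; they are not the patented renderer.

```python
# Illustrative sketch only: objects carrying playback-location metadata are
# mixed into one feed per individually addressable driver. The gain law
# (normalized inverse distance) is an assumption, not the claimed method.
import math
from dataclasses import dataclass

@dataclass
class Driver:
    driver_id: int
    position: tuple               # (x, y, z) in metres, listener-space
    upward_firing: bool = False   # e.g., aimed at the ceiling for reflected sound

@dataclass
class AudioObject:
    samples: list                 # mono audio stream
    playback_position: tuple      # metadata: where the object should appear

def render_to_feeds(objects, drivers):
    """Render object streams to one audio feed per driver, weighting each
    object's contribution by normalized inverse distance to the driver."""
    n = len(objects[0].samples)   # assume equal-length streams
    feeds = {d.driver_id: [0.0] * n for d in drivers}
    for obj in objects:
        weights = [1.0 / (math.dist(obj.playback_position, d.position) + 1e-3)
                   for d in drivers]
        norm = sum(weights)
        for d, w in zip(drivers, weights):
            gain = w / norm
            feed = feeds[d.driver_id]
            for i, s in enumerate(obj.samples):
                feed[i] += gain * s
    return feeds

# Example: two forward drivers plus one upward-firing driver for reflection.
drivers = [Driver(0, (-1.0, 2.0, 0.0)), Driver(1, (1.0, 2.0, 0.0)),
           Driver(2, (0.0, 2.0, 1.2), upward_firing=True)]
obj = AudioObject(samples=[0.5, -0.5, 0.25], playback_position=(0.8, 2.0, 1.0))
feeds = render_to_feeds([obj], drivers)
```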
Abstract:
Embodiments are directed to an interconnect for coupling components in an object-based rendering system comprising: a first network channel coupling a renderer to an array of individually addressable drivers projecting sound in a listening environment and transmitting audio signals and control data from the renderer to the array, and a second network channel coupling a microphone placed in the listening environment to a calibration component of the renderer and transmitting acoustic information generated by the microphone to the calibration component as calibration control signals. The interconnect is suitable for use in a system for rendering spatial audio content comprising channel-based and object-based audio components.
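The interconnect's two logical channels can be pictured as two message types flowing in opposite directions. The sketch below is a minimal model of that topology; the field names and payload shapes are invented for illustration and do not reflect an actual wire format.

```python
# Hedged sketch of the interconnect's two channels as plain data records;
# all field names and payload shapes are assumptions.
from dataclasses import dataclass

@dataclass
class DownstreamPacket:
    """First channel: renderer -> driver array (audio signals + control data)."""
    driver_id: int          # individually addressable driver target
    audio_frame: bytes      # PCM block for this driver
    control: dict           # e.g., {"gain_db": -3.0, "delay_ms": 5.0}

@dataclass
class UpstreamPacket:
    """Second channel: microphone -> calibration component of the renderer."""
    mic_id: int
    acoustic_frame: bytes   # captured room response used for calibration

# Example traffic on each channel:
down = DownstreamPacket(driver_id=3, audio_frame=b"\x00" * 256,
                        control={"gain_db": -3.0, "delay_ms": 5.0})
up = UpstreamPacket(mic_id=0, acoustic_frame=b"\x01" * 256)
```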
Abstract:
A system is configured to receive player multimedia comprising live coverage of physical expressions or actions of a player in an online activity, and activity multimedia of live participation in the online activity associated with one or more player accounts. In a near-continuous fashion, the system is configured to select in real time, for each time point, one or more player items (portions of the player multimedia) or one or more activity items (portions of the activity multimedia) corresponding to the time point to form a composite item. The system is further configured to transmit the composite item in real time to one or more viewer accounts, receive viewer data in response, and produce future composite items based on the viewer data.
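One way to read the per-time-point selection is as a scoring pass over the candidate items, with the score nudged by accumulated viewer feedback. The sketch below is a guess at such a heuristic; the item schema, the bias term, and the 0.1 step size are all invented for illustration.

```python
# Illustrative heuristic only: pick the best player/activity item at each
# time point; viewer reactions shift future picks. All names are assumptions.
def compose_item(t, player_items, activity_items, viewer_bias):
    """Select the highest-scoring item for time point t.
    viewer_bias > 0 favours player footage, < 0 favours activity footage."""
    candidates = [(it["score"] + viewer_bias, it)
                  for it in player_items if it["t"] == t]
    candidates += [(it["score"], it)
                   for it in activity_items if it["t"] == t]
    return max(candidates, key=lambda c: c[0])[1] if candidates else None

def update_bias(viewer_bias, reactions):
    """Nudge the bias toward whichever source viewers reacted to more."""
    return viewer_bias + 0.1 * (reactions.get("player", 0)
                                - reactions.get("activity", 0))
```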
Abstract:
Methods and audio processing units are described for generating an object-based audio program including conditional rendering metadata corresponding to at least one object channel of the program, where the conditional rendering metadata is indicative of at least one rendering constraint, based on playback speaker array configuration, that applies to each corresponding object channel. Methods are also described for rendering audio content determined by such a program, including by rendering content of at least one audio channel of the program in a manner compliant with each applicable rendering constraint in response to at least some of the conditional rendering metadata. Rendering of a selected mix of content of the program may provide an immersive experience.
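A plausible (assumed) shape for conditional rendering metadata is a list of (speaker configuration, constraint) pairs attached to an object channel, with the renderer applying only the constraints whose condition matches the actual playback array. The schema below is illustrative, not the program's actual bitstream syntax.

```python
# Assumed schema for conditional rendering metadata; all keys are invented.
conditional_metadata = {
    "object_channel": "crowd_ambience",
    "conditions": [
        {"speaker_config": "stereo", "constraint": {"max_gain_db": -6.0}},
        {"speaker_config": "7.1.4",  "constraint": {"zone": "overhead_only"}},
    ],
}

def applicable_constraints(metadata, playback_config):
    """Return every rendering constraint whose condition matches the
    playback speaker array configuration."""
    return [c["constraint"] for c in metadata["conditions"]
            if c["speaker_config"] == playback_config]

# e.g., applicable_constraints(conditional_metadata, "7.1.4")
# -> [{"zone": "overhead_only"}]
```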
Abstract:
Non-media data relating to real-world objects or persons are collected from a scene while media data from the same scene are collected. The media data comprise audio-only data or audiovisual data, whereas the non-media data comprise telemetry data and/or non-telemetry data. Based at least in part on the non-media data relating to the real-world objects or persons in the scene, emitter-listener relationships between a listener and some or all of the real-world objects or persons are determined. Audio objects comprising audio content portions and non-audio data portions are generated. At least one audio object is generated based at least in part on the emitter-listener relationships.
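As a hedged sketch of what an emitter-listener relationship derived from telemetry might look like, the code below reduces it to range and azimuth between 2-D positions and pairs the result with captured audio to form an audio object. The field names and the 2-D simplification are assumptions.

```python
# Minimal sketch: derive an emitter-listener relationship from telemetry
# positions and attach it to an audio object. All names are illustrative.
import math

def emitter_listener_relationship(listener_pos, emitter_pos):
    """Range and azimuth from listener to emitter; 2-D positions in metres."""
    dx = emitter_pos[0] - listener_pos[0]
    dy = emitter_pos[1] - listener_pos[1]
    return {"range_m": math.hypot(dx, dy),
            "azimuth_deg": math.degrees(math.atan2(dy, dx))}

def make_audio_object(audio_content, relationship):
    """Audio object = audio content portion + non-audio data portion."""
    return {"audio": audio_content, "metadata": relationship}

# e.g., make_audio_object(b"...", emitter_listener_relationship((0, 0), (3, 4)))
# -> {"audio": b"...", "metadata": {"range_m": 5.0, "azimuth_deg": 53.13...}}
```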