Abstract:
Embodiments of the present invention relate to video content assisted audio object extraction. A method of audio object extraction from channel-based audio content is disclosed. The method comprises extracting at least one video object from video content associated with the channel-based audio content, and determining information about the at least one video object. The method further comprises extracting from the channel-based audio content an audio object to be rendered as an upmixed audio signal based on the determined information. Corresponding system and computer program product are also disclosed.
Abstract:
A method is disclosed for audio object extraction from audio content having a plurality of channels. The method includes identifying a first set of projection spaces including a first subset for a first channel and a second subset for a second channel of the plurality of channels. The method may further include determining a first set of correlations between the first and second channels, each of the first set of correlations corresponding to one of the first subset of projection spaces and one of the second subset of projection spaces. Still further, the method may include extracting an audio object from an audio signal of the first channel at least in part based on a first correlation among the first set of correlations and the projection space from the first subset corresponding to the first correlation, the first correlation being greater than a first predefined threshold. Corresponding system and computer program products are also disclosed.
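The thresholded-correlation step above can be sketched as follows, with plain Pearson correlation standing in for correlation within a projection space; the signals and the 0.8 threshold are illustrative assumptions, not values from the disclosure.

```python
# Minimal sketch: if two channel signals correlate above a threshold,
# treat the shared component as a candidate audio object.
from math import sqrt

def correlation(x, y):
    """Pearson correlation of two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

left = [0.0, 1.0, 0.0, -1.0]
right = [0.1, 0.9, 0.1, -0.9]   # nearly the same content as `left`
r = correlation(left, right)
threshold = 0.8                  # hypothetical predefined threshold
is_object = r > threshold
print(round(r, 3), is_object)
```

Because the two channels carry nearly identical content, the correlation exceeds the threshold and the channel pair is flagged as containing an extractable object.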
Abstract:
Example embodiments disclosed herein relate to audio object clustering. A method for metadata-preserved audio object clustering is disclosed. The method comprises classifying a plurality of audio objects into a number of categories based on information to be preserved in metadata associated with the plurality of audio objects. The method further comprises assigning a predetermined number of clusters to the categories and allocating an audio object in each of the categories to at least one of the clusters according to the assigning. Corresponding system and computer program product are also disclosed.
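The classify-then-allocate idea can be sketched as below; the category key (the set of metadata flags to preserve) and the proportional budget split are hypothetical illustrations, not the patented allocation rule.

```python
# Minimal sketch of metadata-preserved clustering: objects with
# different metadata to preserve never share a cluster, and the fixed
# cluster budget is split across the resulting categories.
from collections import defaultdict

def cluster_budget_by_category(objects, total_clusters):
    """Group objects by their preserved metadata flags, then split the
    cluster budget across categories roughly by category size."""
    categories = defaultdict(list)
    for obj in objects:
        # The frozen flag set acts as the category key.
        categories[frozenset(obj["flags"])].append(obj)

    assignment = {}
    remaining = total_clusters
    cats = sorted(categories.items(), key=lambda kv: -len(kv[1]))
    for i, (key, members) in enumerate(cats):
        # Give every category at least one cluster; spread the rest
        # proportionally (simplified integer split).
        share = max(1, round(total_clusters * len(members) / len(objects)))
        share = min(share, remaining - (len(cats) - i - 1))
        assignment[key] = share
        remaining -= share
    return assignment

objects = (
    [{"flags": {"dialog"}}] * 2    # e.g. dialog must stay separable
    + [{"flags": {"music"}}] * 6
)
budget = cluster_budget_by_category(objects, 4)
print(budget[frozenset({"dialog"})], budget[frozenset({"music"})])
```

Objects within each category would then be clustered conventionally (e.g. by spatial proximity) inside that category's budget.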
Abstract:
Example embodiments disclosed herein relate to audio object processing. A method for processing audio content, which includes at least one audio object of a multi-channel format, is disclosed. The method includes generating metadata associated with the audio object, the metadata including at least one of an estimated trajectory of the audio object and an estimated perceptual size of the audio object, the perceptual size being a perceived area of a phantom of the audio object produced by at least two transducers. Corresponding system and computer program product are also disclosed.
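The generated metadata can be pictured as a small record per object; the field names below and the representation of a trajectory as timestamped positions are assumptions for illustration only.

```python
# Minimal sketch of the metadata payload for one audio object:
# an estimated trajectory (timestamped positions) and an estimated
# perceptual size (here a scalar in [0, 1]).
def make_object_metadata(trajectory, perceptual_size):
    """Bundle the estimated trajectory and perceptual size."""
    return {
        "trajectory": trajectory,          # [(time_s, (x, y, z)), ...]
        "perceptual_size": perceptual_size,
    }

md = make_object_metadata(
    trajectory=[(0.0, (0.0, 0.5, 0.0)), (1.0, (0.2, 0.5, 0.0))],
    perceptual_size=0.3,
)
print(len(md["trajectory"]), md["perceptual_size"])
```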
Abstract:
Embodiments of the present invention relate to audio object clustering by utilizing temporal variation of audio objects. There is provided a method of estimating temporal variation of an audio object for use in audio object clustering. The method comprises obtaining at least one segment of an audio track associated with the audio object, the at least one segment containing the audio object; estimating variation of the audio object over a time duration of the at least one segment based on at least one property of the audio object; and adjusting, at least partially based on the estimated variation of the audio object, a contribution of the audio object to the determination of a centroid in the audio object clustering. Corresponding system and computer program product are also disclosed.
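The adjusted-contribution step can be sketched as a weighted centroid, assuming a per-object variation estimate already normalized to [0, 1]; the weighting curve (1 - variation) is a hypothetical choice for illustration.

```python
# Minimal sketch: fast-varying objects contribute less than stable
# ones when computing a cluster centroid.
def weighted_centroid(objects):
    """Centroid of object positions, weighted down by variation."""
    total_w = cx = cy = 0.0
    for obj in objects:
        w = max(0.0, 1.0 - obj["variation"])  # stable objects dominate
        total_w += w
        cx += w * obj["pos"][0]
        cy += w * obj["pos"][1]
    return (cx / total_w, cy / total_w)

objs = [
    {"pos": (0.0, 0.0), "variation": 0.0},  # stable object
    {"pos": (1.0, 1.0), "variation": 0.5},  # moderately varying object
]
print(weighted_centroid(objs))
```

With equal contributions the centroid would sit at (0.5, 0.5); down-weighting the varying object pulls it toward the stable one, at (1/3, 1/3).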
Abstract:
Disclosed is a method for determining one or more dialog quality metrics of a mixed audio signal comprising a dialog component and a noise component, the method comprising separating an estimated dialog component from the mixed audio signal by means of a dialog separator using a dialog separating model determined by training the dialog separator based on the one or more quality metrics; providing the estimated dialog component from the dialog separator to a quality metrics estimator; and determining the one or more quality metrics by means of the quality metrics estimator based on the mixed signal and the estimated dialog component. Further disclosed is a method for training a dialog separator, a system comprising circuitry configured to perform the method, and a non-transitory computer-readable storage medium.
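The two-stage pipeline can be sketched with stubs: a separator produces an estimated dialog component, and a metrics estimator scores it against the mixed signal. Both stubs below are hypothetical placeholders; in the disclosure the separator is a trained model and the metric is a learned or standardized quality measure.

```python
# Minimal sketch of the separator -> quality-estimator pipeline.
def separate_dialog(mixed):
    """Stub separator: pretend every other sample is dialog."""
    return [s if i % 2 == 0 else 0.0 for i, s in enumerate(mixed)]

def estimate_quality(mixed, dialog):
    """Stub metric: fraction of mixed-signal energy kept as dialog."""
    e_mix = sum(s * s for s in mixed)
    e_dlg = sum(s * s for s in dialog)
    return e_dlg / e_mix if e_mix else 0.0

mixed = [1.0, 0.5, -1.0, 0.5]
dialog = separate_dialog(mixed)
print(estimate_quality(mixed, dialog))
```

The key structural point survives the simplification: the metric is computed from both the mixed signal and the estimated dialog component, so improving the separator directly changes the metric it is trained against.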
Abstract:
A volume leveler controller and controlling method are disclosed. In one embodiment, a volume leveler controller includes an audio content classifier for identifying the content type of an audio signal in real time; and an adjusting unit for adjusting a volume leveler in a continuous manner based on the content type as identified. The adjusting unit may be configured to positively correlate the dynamic gain of the volume leveler with informative content types of the audio signal, and negatively correlate the dynamic gain of the volume leveler with interfering content types of the audio signal.
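The continuous adjustment can be sketched as a gain blend driven by classifier confidences; the linear mapping below is a hypothetical illustration, not the patented control law.

```python
# Minimal sketch: dynamic gain scales up with confidence that content
# is informative (e.g. speech) and down with confidence that it is
# interfering (e.g. background noise).
def dynamic_gain(base_gain, informative_conf, interfering_conf):
    """Blend the leveler's gain by classifier confidences in [0, 1]."""
    factor = informative_conf - interfering_conf
    # Map factor in [-1, 1] to a multiplier in [0, 2] (clamped at 0).
    return base_gain * max(0.0, 1.0 + factor)

print(dynamic_gain(0.5, informative_conf=0.8, interfering_conf=0.1))
```

Because the confidences vary smoothly over time, the resulting gain also varies smoothly, matching the "continuous manner" of adjustment described above.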
Abstract:
Diffuse or spatially large audio objects may be identified for special processing. A decorrelation process may be performed on audio signals corresponding to the large audio objects to produce decorrelated large audio object audio signals. These decorrelated large audio object audio signals may be associated with object locations, which may be stationary or time-varying locations. For example, the decorrelated large audio object audio signals may be rendered to virtual or actual speaker locations. The output of such a rendering process may be input to a scene simplification process. The decorrelation, associating and/or scene simplification processes may be performed prior to a process of encoding the audio data.
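One simple way to picture the decorrelation step is feeding differently delayed copies of a large object's signal to several virtual speaker positions; delay-based decorrelation is an assumption chosen for brevity here, and practical systems often use all-pass filter banks instead.

```python
# Minimal sketch: produce one decorrelated (here, delayed) copy of a
# mono large-object signal per virtual speaker location.
def decorrelate(signal, delays):
    """Return one delayed copy of the signal per virtual speaker."""
    outputs = []
    for d in delays:
        # Prepend d zeros and trim to the original length.
        outputs.append([0.0] * d + signal[: len(signal) - d])
    return outputs

mono = [1.0, 0.0, 0.0, 0.0]            # a unit impulse
copies = decorrelate(mono, delays=[0, 1, 2])
print(copies)
```

Each copy could then be associated with a stationary or time-varying location and passed on to rendering and scene simplification, as the abstract describes.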
Abstract:
In an embodiment, a method comprises: transforming one or more frames of a two-channel time domain audio signal into a time-frequency domain representation including a plurality of time-frequency tiles, wherein the frequency domain of the time-frequency domain representation includes a plurality of frequency bins grouped into subbands. For each time-frequency tile, the method comprises: calculating spatial parameters and a level for the time-frequency tile; modifying the spatial parameters using shift and squeeze parameters; obtaining a softmask value for each frequency bin using the modified spatial parameters, the level and subband information; and applying the softmask values to the time-frequency tile to generate a modified time-frequency tile of an estimated audio source. In an embodiment, a plurality of frames of the time-frequency tiles are assembled into a plurality of chunks, wherein each chunk includes a plurality of subbands, and the method described above is performed on each subband of each chunk.
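The final masking step can be sketched in isolation, assuming the softmask values in [0, 1] were already derived from the modified spatial parameters, level, and subband information; the mask derivation itself is omitted here.

```python
# Minimal sketch: scale each frequency bin of a time-frequency tile by
# its softmask value to estimate one audio source's contribution.
def apply_softmask(tile, mask):
    """Element-wise product of complex STFT bins and mask values."""
    return [bin_val * m for bin_val, m in zip(tile, mask)]

# One tile: three complex frequency bins and their mask values.
tile = [complex(1, 1), complex(2, 0), complex(0, -1)]
mask = [0.0, 0.5, 1.0]
print(apply_softmask(tile, mask))
```

A mask of 0 removes a bin entirely, 1 keeps it unchanged, and intermediate values attenuate it, which is what makes the mask "soft" rather than binary.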