Abstract:
From a plurality of received voice signals, a signal interval in which there is a talker collision between at least a first and a second voice signal is detected. In response to a positive detection result, a processor processes at least one of the voice signals with the aim of making it perceptually distinguishable. A mixer mixes the voice signals to supply an output signal, wherein the processed signal(s) replace the corresponding received signal(s). In example embodiments, signal content is shifted away from the talker collision in frequency or in time. The invention may be useful in a conferencing system.
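The detect-process-mix pipeline above can be sketched in a few lines. This is a minimal illustration only, not the patented method: the frame size, activity threshold, and the choice of a fixed FFT-bin shift are all assumptions made for the example.

```python
import numpy as np

FRAME = 256      # samples per analysis frame (assumed)
THRESH = 1e-4    # per-frame energy threshold for "talker active" (assumed)

def frame_energy(x, frame=FRAME):
    n = len(x) // frame
    return np.array([np.mean(x[i * frame:(i + 1) * frame] ** 2) for i in range(n)])

def detect_collision(a, b, thresh=THRESH):
    """True for each frame in which both talkers are active at once."""
    return (frame_energy(a) > thresh) & (frame_energy(b) > thresh)

def shift_up(x, bins):
    """Move spectral content up by `bins` FFT bins, away from the collision."""
    X = np.fft.rfft(x)
    Y = np.zeros_like(X)
    Y[bins:] = X[:len(X) - bins]
    return np.fft.irfft(Y, n=len(x))

def mix(a, b, shift_bins=40):
    """If a collision is detected, replace `b` with a frequency-shifted
    version before mixing, so the two talkers are easier to tell apart."""
    if detect_collision(a, b).any():
        b = shift_up(b, shift_bins)
    return a + b
```

A time-shift variant would delay one talker's frames past the collision interval instead of moving them in frequency; the detection step is the same.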
Abstract:
A system and method of matching reverberation in teleconferencing environments. When the two ends of a conversation are in environments with differing reverberation, the method filters at least one of the signals so that when both are output at the near end (e.g., the audio signal from the far end and the sidetone from the near end), the reverberations match. In this manner, the user does not perceive an annoying difference in reverberation, and the user experience is improved.
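One way to picture the matching step: convolve the drier signal with a synthetic tail whose decay time equals that of the more reverberant room. This is a sketch under stated assumptions, not the patented filter; the exponentially decaying noise model and the RT60 parameter are illustrative stand-ins for whatever reverberation estimate the system actually uses.

```python
import numpy as np

def synthetic_tail(rt60, fs, seed=0):
    """Exponentially decaying noise burst approximating a room tail.
    The envelope drops 60 dB after `rt60` seconds."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(rt60 * fs)) / fs
    env = 10 ** (-3 * t / rt60)          # -60 dB at t = rt60
    return rng.standard_normal(len(t)) * env

def match_reverb(dry, rt60_target, fs):
    """Filter the drier signal so its reverberation approximates the
    other room's, before both signals are played at the near end."""
    ir = synthetic_tail(rt60_target, fs)
    ir /= np.sqrt(np.sum(ir ** 2))       # unit-energy impulse response
    return np.convolve(dry, ir)[:len(dry)]
```

In practice the filter would be derived from measured or estimated reverberation at each end rather than a fixed synthetic tail.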
Abstract:
The present document relates to methods and systems for setting up and managing two-dimensional or three-dimensional scenes for audio conferences. A conference controller (111, 175) configured to place a plurality of upstream audio signals (123, 173) associated with a plurality of conference participants within a 2D or 3D conference scene to be rendered to a listener (211) is described. The conference controller (111, 175) is configured to set up an X-point conference scene with X different spatial talker locations (212) within the conference scene; assign the plurality of upstream audio signals (123, 173) to respective ones of the talker locations (212); determine a degree of activity of the plurality of upstream audio signals (123, 173); determine a dominant one of the plurality of upstream audio signals (123, 173); and emphasize the dominant upstream audio signal (123, 173).
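The controller's steps (set up X locations, measure activity, pick and emphasize the dominant talker) can be sketched as follows. The energy-based activity measure, the symmetric azimuth spread, and the 3 dB emphasis gain are all assumptions for illustration, not values from the document.

```python
import numpy as np

EMPHASIS_DB = 3.0   # gain applied to the dominant talker (assumed)

def talker_locations(x_points, spread_deg=60.0):
    """X spatial talker locations spread symmetrically in front of the listener."""
    return np.linspace(-spread_deg / 2, spread_deg / 2, x_points)

def degree_of_activity(signals):
    """Mean energy per upstream signal as a simple activity measure."""
    return np.array([np.mean(np.asarray(s, dtype=float) ** 2) for s in signals])

def emphasize_dominant(signals, emphasis_db=EMPHASIS_DB):
    """Find the most active upstream signal and boost it by `emphasis_db`."""
    dominant = int(np.argmax(degree_of_activity(signals)))
    gain = 10 ** (emphasis_db / 20)
    out = [np.asarray(s, dtype=float).copy() for s in signals]
    out[dominant] *= gain
    return dominant, out
```

A real controller would also smooth the activity measure over time so the dominant talker does not flip on every frame.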
Abstract:
The present document relates to methods and systems for setting up and managing two-dimensional or three-dimensional scenes for audio conferences. A conference controller (111, 175) configured to place L upstream audio signals (123, 173) within a 2D or 3D conference scene to be rendered to a listener (211) is described. The conference controller (111, 175) is configured to set up an X-point conference scene; assign L upstream audio signals (123, 173) to X talker locations (212); determine a maximum number N of downstream audio signals (124, 174) to be transmitted to the listener (211); determine N downstream audio signals (124, 174) from the L assigned upstream audio signals (123, 173); determine N updated talker locations for the N downstream audio signals (124, 174); and generate metadata identifying the updated talker locations and enabling an audio processing unit (121, 171) to generate a spatialized audio signal.
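The L-to-N reduction with updated locations and metadata can be sketched as below. This is an illustrative selection strategy only: choosing the N most active signals and re-spreading their azimuths evenly are assumptions for the example, as is the metadata layout.

```python
import numpy as np

def select_downstream(upstream, locations, n_max):
    """Keep the N most active of the L assigned upstream signals, and
    re-spread their talker locations so the reduced scene stays balanced.
    Returns the kept signals plus metadata an audio processing unit could
    use to render the spatialized output."""
    activity = [np.mean(np.asarray(s, dtype=float) ** 2) for s in upstream]
    order = np.argsort(activity)[::-1][:n_max]   # indices of the N most active
    kept = [upstream[i] for i in order]
    # updated locations: spread the N streams over the original scene width
    updated = np.linspace(min(locations), max(locations), len(kept))
    metadata = [{"stream": int(i), "azimuth_deg": float(a)}
                for i, a in zip(order, updated)]
    return kept, metadata
```

Transmitting only N streams plus this small metadata record saves downstream bandwidth while letting the endpoint re-create a spatial scene.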