Abstract:
An audio or video conference system operates to receive sound information, sample it, and transform each sample into a sound image representing one or more sound characteristics. Each sound image is applied to the input of a neural network that has been trained, using training sound images, to identify different classes of sound; the output of the neural network is the identity of the sound class associated with the applied sound image. The identity of the sound class can be used to determine how the sample of sound is processed prior to sending it to a remote communication system.
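The pipeline this abstract describes can be sketched as follows. All names here are illustrative assumptions, the "sound image" is reduced to a handful of DFT magnitudes, and a nearest-centroid classifier stands in for the trained neural network; the routing rule (pass speech, mute other classes) is one plausible use of the class identity.

```python
import cmath

def sound_image(samples, n_bins=8):
    """Transform one block of samples into a 'sound image': magnitudes of
    the first n_bins DFT coefficients (a toy stand-in for a spectrogram)."""
    n = len(samples)
    image = []
    for k in range(n_bins):
        coeff = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        image.append(abs(coeff) / n)
    return image

def classify(image, centroids):
    """Stand-in for the trained neural network: return the class label whose
    centroid image is nearest (squared Euclidean distance) to the input."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(image, centroids[label]))

def process_block(samples, centroids):
    """Route the sample block according to its sound class before sending."""
    label = classify(sound_image(samples), centroids)
    if label == "speech":
        return label, samples            # pass speech through unchanged
    return label, [0.0] * len(samples)   # mute non-speech before transmission
```

In practice the centroids would be replaced by network weights learned from the training sound images; the control flow (image, classify, then class-dependent processing) is the part the abstract specifies.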
Abstract:
A sound collecting system is provided with a microphone array having a plurality of microphones, a first echo canceller that receives a sound signal from the microphones and removes at least some of an acoustic echo component from the sound signal, a beam forming unit that forms directivity by processing the partially echo-removed sound signals collected from the microphone array, and a second echo canceller, disposed after the beam forming unit, that operates to remove the residual acoustic echo in the sound signal.
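The two-stage structure (per-microphone echo cancellation, then beamforming, then residual echo cancellation) can be sketched as below. This is a deliberately simplified model: the echo path is reduced to a single gain per microphone, the beamformer is a plain average with no steering delays, and the fixed gain estimates stand in for adaptive filters.

```python
def first_echo_canceller(mic_signal, far_end, gain_estimate):
    """Per-microphone AEC: subtract an estimated echo (the far-end signal
    scaled by an estimated echo-path gain) from the captured signal."""
    return [m - gain_estimate * f for m, f in zip(mic_signal, far_end)]

def delay_and_sum(mic_signals):
    """Toy beamformer: average the partially echo-cancelled microphone
    signals; a real beamformer would first apply per-microphone delays."""
    n_mics = len(mic_signals)
    return [sum(s[i] for s in mic_signals) / n_mics
            for i in range(len(mic_signals[0]))]

def second_echo_canceller(beam_signal, far_end, residual_gain):
    """Post-beamformer AEC: remove the residual echo left over from the
    first stage and reshaped by the beamformer."""
    return [b - residual_gain * f for b, f in zip(beam_signal, far_end)]
```

The point of the second stage is visible even in this sketch: if the first canceller under-estimates each microphone's echo path, the beamformed output still carries a residual proportional to the far-end signal, which the second canceller can remove.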
Abstract:
Systems and methods for adaptive OTA (over-the-air) synchronization of RF (radio frequency) base stations are described herein. Using these systems and methods, base stations in wireless audio systems can automatically identify externally sourced base station clocks and merge overlapping clock system domains, thereby eliminating the need for complex base station management and network configuration.
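One way the automatic merge might work is sketched below. Every detail here is an assumption for illustration: each base station broadcasts its clock-domain id and whether that domain is driven by an externally sourced clock, and on hearing an overlapping station's beacon it adopts the other domain when an externally sourced clock beats an internal one, with the lower domain id as a deterministic tiebreak.

```python
class BaseStation:
    """Toy model of an RF base station advertising its clock domain
    over the air; the merge rule is illustrative, not from the source."""

    def __init__(self, name, domain_id, external_clock=False):
        self.name = name
        # The clock domain this station currently follows, and whether that
        # domain is driven by an externally sourced clock (e.g. word clock).
        self.domain_id = domain_id
        self.domain_has_external_source = external_clock

    def beacon(self):
        """Clock information the station broadcasts over the air."""
        return (self.domain_has_external_source, self.domain_id)

    def hear(self, beacon):
        """Merge overlapping clock domains: an externally sourced domain
        wins over an internal one; ties break toward the lower domain id."""
        ext, dom = beacon
        if (ext, -dom) > (self.domain_has_external_source, -self.domain_id):
            self.domain_has_external_source = ext
            self.domain_id = dom
```

Because every station applies the same rule locally, overlapping domains converge on one winner without any central management or manual network configuration, which is the behavior the abstract claims.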
Abstract:
A video conferencing system has a microphone array that operates to receive acoustic signals corresponding to voice activity and to determine whether the signals originate from within a sound field of interest or from outside it; if the signals are from outside the sound field of interest, the system attenuates them by not steering a microphone array beam toward them.
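The in-field/out-of-field decision can be sketched with a two-microphone direction-of-arrival estimate. The function names, the ±45° field of interest, and the residual out-of-field gain are all illustrative assumptions; the abstract only specifies that out-of-field signals are attenuated by not steering a beam toward them.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def doa_from_tdoa(tdoa_s, mic_spacing_m):
    """Estimate direction of arrival (degrees from broadside) for a
    two-microphone array from the inter-mic time difference of arrival."""
    s = max(-1.0, min(1.0, SPEED_OF_SOUND * tdoa_s / mic_spacing_m))
    return math.degrees(math.asin(s))

def steer_or_attenuate(tdoa_s, mic_spacing_m,
                       field_of_interest_deg=(-45.0, 45.0),
                       out_of_field_gain=0.1):
    """If voice activity arrives from inside the sound field of interest,
    steer a beam toward it (full gain); otherwise do not steer, which
    leaves the out-of-field talker attenuated."""
    angle = doa_from_tdoa(tdoa_s, mic_spacing_m)
    lo, hi = field_of_interest_deg
    if lo <= angle <= hi:
        return angle, 1.0
    return angle, out_of_field_gain
```

A real system would estimate the TDOA by cross-correlating the microphone signals and would use more than two elements, but the gating logic (estimate direction, compare against the field of interest, steer or withhold the beam) is the behavior described.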