Abstract:
Systems and methods are described for determining the orientation of an external audio device in a video conference, which may be used to provide a congruent multimodal representation for a video conference. A camera of a video conferencing system may be used to detect a potential location of an external audio device within a room in which the video conferencing system is providing a video conference. Within the detected potential location, a visual pattern associated with the external audio device may be identified. Using the identified visual pattern, the video conferencing system may estimate an orientation of the external audio device, the orientation being used by the video conferencing system to provide spatial audio-video congruence to a far-end audience.
Abstract:
In an audio conferencing environment, including multiple users participating by means of a series of associated audio input devices for the provision of audio input, and a series of audio output devices for the output of audio output streams to the multiple users, with the audio input and output devices being interconnected to a mixing control server for the control and mixing of the audio inputs from each audio input device to present a series of audio streams to the audio output devices, a method of reducing the effects of cross talk pickup of at least a first audio conversation by multiple audio input devices, the method including the steps of: (a) monitoring the series of audio input devices for the presence of a duplicate audio conversation input from at least two input audio sources in an audio output stream; and (b) where a duplicate audio conversation input is detected, suppressing the presence of the duplicate audio conversation input in the audio output stream.
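Step (a) above can be sketched in code. The abstract does not specify how duplicate pickup is detected; the following sketch assumes a cross-correlation criterion (strongly correlated channel pairs at some small lag are flagged as duplicates), and the function name, threshold, and lag window are all illustrative assumptions, not part of the described method:

```python
import numpy as np

def find_duplicate_pickup(channels, threshold=0.9, max_lag=160):
    """Flag channel pairs whose signals are strongly correlated at some
    small lag, suggesting the same conversation was picked up twice.
    (Illustrative sketch; the detection criterion is an assumption.)"""
    dupes = []
    for i in range(len(channels)):
        for j in range(i + 1, len(channels)):
            a = channels[i] - np.mean(channels[i])
            b = channels[j] - np.mean(channels[j])
            norm = np.linalg.norm(a) * np.linalg.norm(b)
            if norm == 0:
                continue
            # Full cross-correlation, restricted to lags within +/- max_lag.
            corr = np.correlate(a, b, mode="full")
            lags = np.arange(-len(b) + 1, len(a))
            window = np.abs(lags) <= max_lag
            peak = np.max(np.abs(corr[window])) / norm
            if peak >= threshold:
                dupes.append((i, j))
    return dupes
```

A flagged pair would then feed step (b), where the mixer suppresses one of the duplicate contributions from the output stream.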
Abstract:
A method, an apparatus, and a computer-readable medium configured with instructions that when executed carry out the method for determining a measure of harmonicity. In one embodiment the method includes selecting candidate fundamental frequencies within a range, and, for each candidate, determining a mask or retrieving a pre-calculated mask that has a positive value for each frequency that contributes to harmonicity, and a negative value for each frequency that contributes to inharmonicity. A candidate harmonicity measure is calculated for each candidate fundamental by summing the product of the mask and the magnitude measure of the spectrum. The harmonicity measure is selected as the maximum of the candidate harmonicity measures.
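The candidate-mask procedure described above can be sketched as follows. The mask construction here (+1 within a fixed bandwidth of each harmonic of the candidate fundamental, -1 elsewhere) is an illustrative assumption; the abstract only requires positive values at harmonicity-contributing frequencies and negative values at inharmonicity-contributing ones:

```python
import numpy as np

def harmonicity_measure(mag_spectrum, freqs, f0_candidates, bandwidth=20.0):
    """For each candidate fundamental f0, build a mask that is +1 near
    each harmonic of f0 and -1 elsewhere, sum the product of the mask
    and the magnitude spectrum, and return the maximum over candidates
    along with the winning f0. (Sketch; mask shape is an assumption.)"""
    best = -np.inf
    best_f0 = None
    for f0 in f0_candidates:
        # Distance of each frequency bin from the nearest harmonic of f0.
        nearest_harmonic = np.round(freqs / f0) * f0
        near = np.abs(freqs - nearest_harmonic) <= bandwidth
        mask = np.where(near & (freqs > 0), 1.0, -1.0)
        score = np.sum(mask * mag_spectrum)
        if score > best:
            best, best_f0 = score, f0
    return best, best_f0
```

With a spectrum whose peaks sit at multiples of the true fundamental, the matching candidate's mask aligns +1 with every peak, so its summed score dominates the other candidates.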
Abstract:
In some embodiments, a method for adaptive control of gain applied to an audio signal, including steps of analyzing segments of the signal to identify audio objects (e.g., voices of participants in a voice conference); storing information regarding each distinct identified object; using at least some of the information to determine at least one of a target gain, or a gain change rate for reaching a target gain, for each identified object; and applying gain to segments of the signal indicative of an identified object such that the gain changes (typically, at the gain change rate for the object) from an initial gain to the target gain for the object. The information stored may include a scene description. Aspects of the invention include a system configured (e.g., programmed) to perform any embodiment of the inventive method.
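The per-object gain ramp can be sketched in a few lines. This is a minimal illustration of the "change from an initial gain toward a target gain at a bounded rate" behavior; the function name, dB units, and per-segment rate are assumptions for the sake of the example:

```python
def apply_ramped_gain(segments, initial_gain_db, target_gain_db, rate_db_per_seg):
    """Ramp gain from an initial value toward a per-object target,
    moving at most `rate_db_per_seg` dB per segment. Each entry of
    `segments` is a scalar standing in for one block of audio.
    Returns the gained segments and the final gain in dB. (Sketch.)"""
    out = []
    gain_db = initial_gain_db
    for s in segments:
        # Step toward the target, but no faster than the allowed rate.
        step = max(-rate_db_per_seg, min(rate_db_per_seg, target_gain_db - gain_db))
        gain_db += step
        out.append(s * 10.0 ** (gain_db / 20.0))
    return out, gain_db
```

For example, ramping from 0 dB to a 6 dB target at 2 dB per segment reaches the target after three segments and then holds it, rather than jumping discontinuously.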
Abstract:
An audio signal with a temporal sequence of blocks or frames is received or accessed. Features are determined that aggregately characterize the sequential audio blocks/frames that have been processed recently, relative to the current time. The feature determination exceeds a specificity criterion and is delayed, relative to the recently processed audio blocks/frames. Voice activity is detected in the audio signal. The voice activity detection (VAD) is based on a decision that exceeds a preset sensitivity threshold, is computed over a brief time period relative to the block/frame duration, and relates to features of the current block/frame. The VAD and the recent feature determination are combined with state-related information, which is based on a history of previous feature determinations compiled from multiple features, determined over a time prior to the recent feature determination period. Decisions to commence or terminate output of the audio signal, or related gains, are output based on the combination.
Abstract:
Methods and systems for performing at least one audio activity (e.g., conducting a phone call or playing music or other audio content) in an environment including by determining an estimated location of a user in the environment in response to sound uttered by the user (e.g., a voice command), and controlling the audio activity in response to determining the estimated user location. The environment may have zones which are indicated by a zone map and estimation of the user location may include estimating in which of the zones the user is located. The audio activity may be performed using microphones and loudspeakers which are implemented in or coupled to smart audio devices.
Abstract:
An audio processing method may involve receiving output signals from each microphone of a plurality of microphones in an audio environment, the output signals corresponding to a current utterance of a person, and determining, based on the output signals, one or more aspects of context information relating to the person, including an estimated current proximity of the person to one or more microphone locations. The method may involve selecting two or more loudspeaker-equipped audio devices based, at least in part, on the one or more aspects of the context information, determining one or more types of audio processing changes to apply to audio data being rendered to loudspeaker feed signals for the audio devices, and causing the one or more types of audio processing changes to be applied. In some examples, the audio processing changes have the effect of increasing a speech-to-echo ratio at one or more microphones.
Abstract:
An audio session management method may involve: determining, by an audio session manager, one or more first media engine capabilities of a first media engine of a first smart audio device, the first media engine being configured for managing one or more audio media streams received by the first smart audio device and for performing first smart audio device signal processing for the one or more audio media streams according to a first media engine sample clock; receiving, by the audio session manager and via a first application communication link, first application control signals from a first application; and controlling the first smart audio device according to the first media engine capabilities, by the audio session manager, via first audio session management control signals transmitted to the first smart audio device via a first smart audio device communication link and without reference to the first media engine sample clock.