Abstract:
Some implementations involve receiving, from a first subband domain acoustic echo canceller (AEC) of a first audio device in an audio environment, first adaptive filter management data from each of a plurality of first adaptive filter management modules, each first adaptive filter management module corresponding to a subband of the first subband domain AEC, each first adaptive filter management module being configured to control a first plurality of adaptive filters. The first plurality of adaptive filters may include at least a first adaptive filter type and a second adaptive filter type. Some implementations involve extracting, from the first adaptive filter management data, a first plurality of extracted features corresponding to a plurality of subbands of the first subband domain AEC and estimating a current local acoustic state based, at least in part, on the first plurality of extracted features.
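For illustration, a minimal sketch of this kind of state estimation, assuming a common two-filter arrangement per subband (a slowly adapting "main" filter and a fast-adapting "shadow" filter); the management data fields, feature names, and thresholds are all hypothetical, not taken from the abstract:

```python
# Hypothetical two-filter-per-subband management report. "main" is a
# slowly adapting, robust filter; "shadow" is a fast, aggressive one.
from dataclasses import dataclass

@dataclass
class SubbandFilterReport:
    subband: int
    selected: str      # which filter type won this subband: "main" or "shadow"
    erle_db: float     # echo return loss enhancement achieved in the subband

def extract_features(reports):
    """Aggregate per-subband management data into a few scalar features."""
    n = len(reports)
    shadow_fraction = sum(r.selected == "shadow" for r in reports) / n
    mean_erle = sum(r.erle_db for r in reports) / n
    return {"shadow_fraction": shadow_fraction, "mean_erle_db": mean_erle}

def estimate_acoustic_state(features):
    """Map the extracted features to a coarse local acoustic state."""
    if features["mean_erle_db"] < 3.0:
        # Poor convergence across subbands: the echo path likely changed,
        # e.g. the device or a listener just moved.
        return "echo_path_change"
    if features["shadow_fraction"] > 0.5:
        return "adapting"      # fast filters still winning: transient state
    return "converged"         # robust filters dominate: stable state

reports = [SubbandFilterReport(k, "main", 12.0) for k in range(32)]
print(estimate_acoustic_state(extract_features(reports)))  # -> converged
```

The design choice in this sketch is that per-subband selection statistics are cheap to report and, once aggregated across subbands, already separate stable operation from echo-path changes.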
Abstract:
Methods and systems for performing at least one audio activity (e.g., conducting a phone call or playing music or other audio content) in an environment, including by determining an estimated location of a user in the environment in response to sound uttered by the user (e.g., a voice command), and controlling the audio activity in response to the determined user location. The environment may have zones indicated by a zone map, and estimating the user location may include estimating in which of the zones the user is located. The audio activity may be performed using microphones and loudspeakers implemented in, or coupled to, smart audio devices.
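A minimal sketch of zone-based location estimation, assuming each zone in the zone map is summarized by a centroid of per-device wakeword levels; the zone names, devices, and dB values are hypothetical:

```python
import math

# Hypothetical zone map: each zone is a centroid of wakeword levels (dB)
# observed at three smart audio devices A, B, C.
ZONE_CENTROIDS = {
    "couch":   [-20.0, -35.0, -40.0],
    "kitchen": [-38.0, -22.0, -30.0],
    "desk":    [-42.0, -33.0, -18.0],
}

def estimate_zone(levels_db):
    """Return the zone whose centroid is nearest to the observed levels."""
    return min(ZONE_CENTROIDS,
               key=lambda z: math.dist(levels_db, ZONE_CENTROIDS[z]))

def control_activity(zone):
    """E.g., move a call's rendering and capture toward the user's zone."""
    print(f"Routing call audio to loudspeakers and microphones near: {zone}")

control_activity(estimate_zone([-21.5, -34.0, -41.0]))  # -> couch
```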
Abstract:
A method for interactive, user-guided manipulation of multichannel audio content, the method including the steps of: providing a content preview facility for replay and review of multichannel audio content by a user; providing a user interface for user selection of a segment of the multichannel audio content containing unsatisfactory audio; processing the audio content to identify associated audio object activity in spatial or signal space regions, creating a timeline of activity in which one or more spatial or signal space regions are active at any given time; matching the user's gesture input against at least one of the active spatial or signal space regions; signal processing the audio emanating from the selected active spatial or signal space region using a number of differing techniques to determine at least one processed alternative; and providing the user with an interactive playback facility to listen to the processed alternative.
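A minimal sketch of the gesture-matching step, assuming a hypothetical data model in which activity is a list of (start time, end time, azimuth) regions and a gesture supplies a time and a pointing direction:

```python
def active_regions(timeline, t):
    """Regions of the activity timeline that are active at time t."""
    return [r for r in timeline if r[0] <= t <= r[1]]

def match_gesture(timeline, gesture_t, gesture_azimuth, tol_deg=30.0):
    """Pick the active region whose direction best matches the gesture."""
    candidates = [r for r in active_regions(timeline, gesture_t)
                  if abs(r[2] - gesture_azimuth) <= tol_deg]
    return min(candidates, key=lambda r: abs(r[2] - gesture_azimuth),
               default=None)

def processed_alternatives(region):
    """Offer differing treatments of the selected region for preview."""
    return [("attenuate_6dB", region), ("suppress", region), ("widen", region)]

timeline = [(0.0, 4.2, -60.0), (1.5, 9.0, 10.0), (5.0, 9.0, 75.0)]
sel = match_gesture(timeline, gesture_t=2.0, gesture_azimuth=5.0)
print(processed_alternatives(sel))  # alternatives for the region at 10 deg
```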
Abstract:
A method in a soundfield-capturing endpoint, and the capturing endpoint itself, which comprises a microphone array that captures a soundfield and an input processor that pre-processes the captured signals and performs auditory scene analysis to detect local sound objects and their positions, de-clutters the sound objects, and integrates them with auxiliary audio signals to form a de-cluttered local auditory scene having a measure of plausibility and perceptual continuity. The input processor also codes the resulting de-cluttered auditory scene to form coded scene data comprising mono audio and additional scene data to send to other endpoints. The endpoint includes an output processor that generates signals for a display unit, which displays a summary of the de-cluttered local auditory scene and/or a summary of activity in the communication system derived from received data; the display includes a shaped ribbon display element whose extent carries locations representing the positions and other properties of different sound objects.
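A minimal sketch of de-cluttering and ribbon mapping, assuming each detected sound object carries an azimuth and a salience score; the thresholds and the 0..1 ribbon coordinate are assumptions, not details from the abstract:

```python
def declutter(objects, max_objects=4, min_salience=0.2):
    """Keep only the most salient sound objects in the scene."""
    kept = [o for o in objects if o["salience"] >= min_salience]
    kept.sort(key=lambda o: o["salience"], reverse=True)
    return kept[:max_objects]

def ribbon_positions(objects, az_min=-90.0, az_max=90.0):
    """Map each object's azimuth to a 0..1 location along the ribbon extent."""
    span = az_max - az_min
    return [{"pos": (o["azimuth"] - az_min) / span, "salience": o["salience"]}
            for o in objects]

scene = [{"azimuth": -45.0, "salience": 0.9},
         {"azimuth": 30.0, "salience": 0.7},
         {"azimuth": 80.0, "salience": 0.05}]   # clutter: dropped below threshold
print(ribbon_positions(declutter(scene)))
```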
Abstract:
In one embodiment, a sound field is mapped by extracting spatial angle information, diffusivity information, and optionally, sound level information. The extracted information is mapped for representation in the form of a Riemann sphere, wherein spatial angle varies longitudinally, diffusivity varies latitudinally, and level varies radially along the sphere. A more generalized mapping employs mapping the spatial angle and diffusivity information onto a representative region exhibiting variations in direction of arrival that correspond to the extracted spatial information and variations in distance that correspond to the extracted diffusivity information.
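The longitudinal/latitudinal/radial mapping can be written out directly. A minimal sketch, assuming the conventions that zero diffusivity lies on the equator, full diffusivity at a pole, and level scales the radius:

```python
import math

def map_to_sphere(spatial_angle, diffusivity, level=1.0):
    """Map (spatial angle, diffusivity, level) to Cartesian coordinates
    on a sphere: angle -> longitude, diffusivity -> latitude, level -> radius."""
    lon = spatial_angle                  # direction of arrival (radians)
    lat = diffusivity * (math.pi / 2)    # 0 = equator (direct), pi/2 = pole (diffuse)
    r = level                            # level varies radially
    x = r * math.cos(lat) * math.cos(lon)
    y = r * math.cos(lat) * math.sin(lon)
    z = r * math.sin(lat)
    return x, y, z

print(map_to_sphere(spatial_angle=math.pi / 4, diffusivity=0.3, level=0.8))
```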
Abstract:
An audio signal comprising a temporal sequence of blocks or frames is received or accessed. Features are determined that aggregately characterize the audio blocks/frames processed recently, relative to the current time; this feature determination exceeds a specificity criterion and is delayed relative to the recently processed blocks/frames. Voice activity detection (VAD) is performed on the audio signal: the VAD is based on a decision that exceeds a preset sensitivity threshold, is computed over a brief time period relative to the block/frame duration, and relates to features of the current block/frame. The VAD and the recent feature determination are combined with state-related information, which is based on a history of previous feature determinations compiled from multiple features determined over a time prior to the recent feature determination period. Decisions to commence or terminate the audio signal, or related gains, are output based on the combination.
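A minimal sketch of combining the fast VAD decision with state-related history, here reduced to a hangover counter; the thresholds and feature names are hypothetical:

```python
# Hypothetical gate: a fast per-frame VAD flag is combined with a delayed
# aggregate feature (a long-term SNR estimate) and simple state history
# (a hangover counter) to open or close the signal path.
class GateController:
    def __init__(self, hangover_frames=2):
        self.hangover = hangover_frames
        self.count = 0    # state info: frames left before the gate may close

    def update(self, vad_flag, aggregate_snr_db):
        # Combine the quick VAD decision with the slower aggregate feature.
        speech_likely = bool(vad_flag) and aggregate_snr_db > 6.0
        if speech_likely:
            self.count = self.hangover   # refresh state on confident speech
        elif self.count > 0:
            self.count -= 1              # history keeps the gate open briefly
        return 1.0 if (speech_likely or self.count > 0) else 0.0

gate = GateController()
for vad, snr in [(1, 10.0), (1, 9.0), (0, 2.0), (0, 1.0)]:
    print(gate.update(vad, snr))        # -> 1.0, 1.0, 1.0, 0.0
```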
Abstract:
A method for measuring the level of speech determined by an audio signal, in a manner which corrects for and reduces the effect of modification of the signal by the addition of noise and/or by amplitude compression, and a system configured to perform any embodiment of the method. In some embodiments, the method includes steps of: generating frequency-banded, frequency-domain data indicative of an input speech signal; determining from the data a Gaussian parametric spectral model of the speech signal; determining from the parametric spectral model an estimated mean speech level and a standard deviation value for each frequency band of the data; and generating speech level data indicative of a bias-corrected mean speech level for each frequency band, including using at least one correction value to correct the estimated mean speech level for the frequency band, where each correction value has been predetermined using a reference speech model.
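A minimal sketch of the per-band bias correction, assuming (hypothetically) that noise addition and amplitude compression shrink the observed spread of band levels, and that the predetermined correction grows with that shortfall; the reference numbers are made up:

```python
import statistics

REF_STD_DB = 8.0   # band-level std expected for unmodified speech (assumed)
SLOPE_DB = 1.2     # correction per dB of std shortfall (assumed)

def band_speech_level(levels_db):
    """Bias-corrected mean speech level for one frequency band, given a
    sample of banded speech levels in dB."""
    mean = statistics.fmean(levels_db)   # Gaussian model: fitted mean
    std = statistics.stdev(levels_db)    # Gaussian model: fitted std
    # Compression/noise narrow the level distribution; the predetermined
    # correction (from a reference speech model) compensates the mean bias.
    correction = max(0.0, REF_STD_DB - std) * SLOPE_DB
    return mean + correction

print(band_speech_level([-30.0, -26.0, -34.0, -28.0, -31.0]))
```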
Abstract:
An apparatus and method of transmission control for an audio device. The audio device uses sources other than the microphone to determine a nuisance level, and uses this both to calculate a gain and to make the transmit decision. Using the gain results in more nuanced nuisance mitigation than using the transmit decision on its own.
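A minimal sketch of the soft-versus-hard distinction the abstract draws, with a hypothetical nuisance estimate in [0, 1]:

```python
def nuisance_gain(nuisance, floor=0.1):
    """Map a nuisance estimate in [0, 1] to a transmit gain."""
    return max(floor, 1.0 - nuisance)

def transmit(frame, nuisance, threshold=0.9):
    """Use the nuisance level both for the hard transmit decision and,
    below the threshold, for a graded attenuation of the frame."""
    if nuisance >= threshold:
        return None                      # hard decision: do not transmit
    g = nuisance_gain(nuisance)
    return [g * x for x in frame]        # soft decision: attenuate instead

print(transmit([0.5, -0.2], nuisance=0.4))   # attenuated frame
print(transmit([0.5, -0.2], nuisance=0.95))  # suppressed entirely
```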
Abstract:
Some disclosed teleconferencing methods may involve detecting a howl state during a teleconference. The teleconference may involve two or more teleconference client locations and a teleconference server. The teleconference server may be configured for providing full-duplex audio connectivity between the teleconference client locations. The howl state may be a state of acoustic feedback involving two or more teleconference devices in a teleconference client location. Detecting the howl state may involve an analysis of both spectral and temporal characteristics of teleconference audio data. Some disclosed teleconferencing methods may involve determining which client location is causing the howl state. Some such methods may involve mitigating the howl state and/or sending a howl state detection message.
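A minimal sketch of a detector combining spectral and temporal tests, as the abstract describes; the peak-ratio and persistence thresholds are assumptions:

```python
import numpy as np

def is_howl(frames, peak_ratio_db=20.0, min_frames=10):
    """frames: magnitude spectra (one NumPy array per recent audio block)."""
    hits = 0
    for spec in frames:
        # Spectral test: one bin dominating the spectral mean by a large margin.
        dominance_db = 20 * np.log10(spec.max() / (spec.mean() + 1e-12))
        hits = hits + 1 if dominance_db > peak_ratio_db else 0
        # Temporal test: the dominance must persist across many blocks.
        if hits >= min_frames:
            return True
    return False

tone = np.zeros(256)
tone[40] = 1.0
print(is_howl([tone + 0.001] * 12))  # sustained narrowband peak -> True
```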
Abstract:
A method of processing a series of microphone inputs of an audio conference, the method including the steps of: (a) conducting a spatial analysis and feature extraction of the audio conference based on current audio activity; (b) aggregating historical information to obtain information about the approximate relative location of recent sound objects relative to the series of microphone inputs; (c) utilizing the relative location or distance of the sound objects from the series of microphone inputs to determine if beam forming should be utilized to enhance the audio reception from recent sound objects.
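A minimal sketch of step (c), assuming a hypothetical near-field cutoff below which beamforming gains little:

```python
def should_beamform(recent_objects, near_field_m=1.0):
    """Beamform toward far talkers; skip it for near-field speech, where
    direct sound already dominates at the microphones."""
    active = [o for o in recent_objects if o["recent"]]
    return any(o["distance_m"] > near_field_m for o in active)

# Aggregated history of sound objects relative to the microphone inputs.
history = [{"azimuth": 20.0, "distance_m": 2.5, "recent": True},
           {"azimuth": -70.0, "distance_m": 0.4, "recent": False}]
print(should_beamform(history))   # -> True: a recent far talker exists
```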