Abstract:
A speech recognition system for resolving impaired utterances can have a speech recognition engine configured to receive a plurality of representations of an utterance and concurrently to determine a plurality of highest-likelihood transcription candidates corresponding to each respective representation of the utterance. The recognition system can also have a selector configured to determine a most-likely accurate transcription from among the transcription candidates. As but one example, the plurality of representations of the utterance can be acquired by a microphone array, and beamforming techniques can generate independent streams of the utterance across various look directions using output from the microphone array.
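A minimal sketch (not the claimed implementation) of the selection step described above: a recognizer is run over each beamformed look direction of the same utterance, and the candidate with the highest likelihood across all streams is kept. The function `recognize_beam` is a hypothetical stand-in for any ASR engine that returns scored candidates for one audio stream.

```python
from typing import Callable, List, Sequence, Tuple

Candidate = Tuple[str, float]  # (transcript, log-likelihood)


def select_transcription(
    beam_streams: Sequence[object],
    recognize_beam: Callable[[object], List[Candidate]],
    n_best: int = 3,
) -> str:
    """Pick the most likely transcription across all look directions."""
    all_candidates: List[Candidate] = []
    for stream in beam_streams:
        # Each stream is one beamformed representation (look direction) of the utterance.
        all_candidates.extend(recognize_beam(stream)[:n_best])
    if not all_candidates:
        raise ValueError("no transcription candidates produced")
    # Selector: the highest-likelihood candidate is the most-likely accurate transcription.
    best_text, _ = max(all_candidates, key=lambda c: c[1])
    return best_text
```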
Abstract:
Systems and methods for controlling echo in audio communications between a near-end system and a far-end system are described. The system and method may intelligently assign a plurality of microphone beams to a limited number of echo cancellers for processing. The microphone beams may be classified based on generated statistics to determine beams of interest (e.g., beams with a high ratio of local-voice to echo). Based on this ranking/classification of microphone beams, beams of greater interest may be assigned to echo cancellers while less important beams may temporarily remain unprocessed until these beams become of higher importance/interest. Accordingly, a limited number of echo cancellers may be used to intelligently process a larger number of microphone beams based on interest in the beams and properties of echo cancellation performed for each beam.
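A minimal sketch of the assignment step, under stated assumptions: each beam is ranked by an illustrative local-voice-to-echo statistic, and only the top-ranked beams occupy the limited pool of echo cancellers; the rest wait until their ranking rises. The field names are assumptions, not the patented classification.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class BeamStats:
    beam_id: int
    voice_power: float  # estimated near-end (local) voice power in the beam
    echo_power: float   # estimated far-end echo power in the beam


def assign_beams_to_aecs(stats: List[BeamStats], num_aecs: int) -> List[int]:
    """Return the beam ids that should occupy the available echo cancellers."""
    def interest(s: BeamStats) -> float:
        # Beams with a high ratio of local voice to echo are "of interest".
        return s.voice_power / (s.echo_power + 1e-12)

    ranked = sorted(stats, key=interest, reverse=True)
    return [s.beam_id for s in ranked[:num_aecs]]
```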
Abstract:
A system for improving sound quality includes a loudspeaker, a microphone, an accelerometer, acoustic echo cancellers (AECs), and a double-talk detector (DTD). The loudspeaker outputs a loudspeaker signal that includes a downlink audio signal from a far-end speaker. The microphone generates a microphone uplink signal and receives at least one of the near-end speaker, ambient noise, and loudspeaker signals. The accelerometer generates an accelerometer uplink signal and receives at least one of the near-end speaker, ambient noise, and loudspeaker signals. A first AEC receives the downlink audio, microphone uplink, and double-talk control signals, and generates an AEC-microphone linear echo estimate and a corrected AEC-microphone uplink signal. A second AEC receives the downlink audio, accelerometer uplink, and double-talk control signals, and generates an AEC-accelerometer linear echo estimate and a corrected AEC-accelerometer uplink signal. The DTD receives the downlink audio signal, the uplink signals, the corrected uplink signals, and the linear echo estimates, and generates the double-talk control signal. An uplink audio signal including at least one of the corrected microphone uplink signal and the corrected accelerometer uplink signal is generated. Other embodiments are described.
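An illustrative sketch only, since the abstract does not specify the adaptive filter: a single-channel NLMS echo canceller whose adaptation is frozen by a double-talk control signal, mirroring how each AEC above consumes the downlink audio, an uplink signal, and the DTD output to produce a linear echo estimate and a corrected uplink signal.

```python
import numpy as np


def nlms_aec(downlink: np.ndarray, uplink: np.ndarray,
             double_talk: np.ndarray, taps: int = 128,
             mu: float = 0.5, eps: float = 1e-8):
    """Return (linear_echo_estimate, corrected_uplink) for one uplink channel."""
    w = np.zeros(taps)                 # adaptive estimate of the echo path
    echo_est = np.zeros(len(uplink))
    corrected = np.zeros(len(uplink))
    x_buf = np.zeros(taps)             # most recent downlink samples
    for n in range(len(uplink)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = downlink[n]
        echo_est[n] = w @ x_buf                   # linear echo estimate
        corrected[n] = uplink[n] - echo_est[n]    # echo-corrected uplink sample
        if not double_talk[n]:                    # freeze adaptation during double talk
            w += mu * corrected[n] * x_buf / (x_buf @ x_buf + eps)
    return echo_est, corrected
```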
Abstract:
A method performed by a local device that is communicatively coupled with several remote devices includes: receiving, from each remote device with which the local device is engaged in a communication session, an input audio stream; receiving, for each remote device, a set of parameters; determining, for each input audio stream, based on the set of parameters, whether the input audio stream is to be 1) rendered individually or 2) rendered as a mix of input audio streams; for each input audio stream that is determined to be rendered individually, spatially rendering the input audio stream as an individual virtual sound source that contains only that input audio stream; and for input audio streams that are determined to be rendered as the mix of input audio streams, spatially rendering the mix of input audio streams as a single virtual sound source that contains the mix of input audio streams.
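A minimal sketch of the per-stream decision described above. The parameter names ("active_speaker", "priority") are assumptions for illustration; the abstract says only that the decision is based on the set of parameters received for each remote device.

```python
from typing import Dict, List, Tuple

AudioStream = List[float]  # placeholder for one remote device's input audio


def route_streams(
    streams: Dict[str, AudioStream],
    params: Dict[str, dict],
) -> Tuple[Dict[str, AudioStream], List[AudioStream]]:
    """Split input streams into individually rendered sources and one mix."""
    individual: Dict[str, AudioStream] = {}
    to_mix: List[AudioStream] = []
    for device_id, stream in streams.items():
        p = params.get(device_id, {})
        # Illustrative decision rule based on the received parameters.
        if p.get("active_speaker") or p.get("priority", 0) > 0:
            individual[device_id] = stream   # rendered as its own virtual sound source
        else:
            to_mix.append(stream)            # rendered inside the single mixed source
    return individual, to_mix
```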
Abstract:
A method performed by a processor of an electronic device. The method presents a computer-generated reality (CGR) setting including a first user and several other users. The method obtains, from a microphone, an audio signal that contains speech of the first user. The method obtains, from a sensor, sensor data that represents a physical characteristic of the first user. The method determines, based on the sensor data, whether to initiate a private conversation between the first user and a second user of the other users, and in accordance with a determination to initiate the private conversation, initiates the private conversation by providing the audio signal to the second user.
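A hedged sketch of the decision step: the abstract requires only that a "physical characteristic" from a sensor drive the determination. Here, purely as an assumption, that characteristic is modeled as head orientation toward the second user held for a dwell time, after which the first user's microphone signal is provided only to the second user.

```python
from dataclasses import dataclass


@dataclass
class SensorData:
    facing_user_id: str   # which user's avatar the first user is oriented toward
    dwell_seconds: float  # how long that orientation has been held


def should_start_private(sensor: SensorData, second_user_id: str,
                         dwell_threshold: float = 2.0) -> bool:
    """Illustrative rule for deciding to initiate the private conversation."""
    return (sensor.facing_user_id == second_user_id
            and sensor.dwell_seconds >= dwell_threshold)


def maybe_route_audio(audio_signal, sensor: SensorData, second_user_id: str, send):
    # `send` is a hypothetical callable that delivers the microphone signal
    # to the chosen user only, initiating the private conversation.
    if should_start_private(sensor, second_user_id):
        send(second_user_id, audio_signal)
```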
Abstract:
A first device obtains, from a microphone array, several audio signals and processes the audio signals to produce a speech signal and one or more ambient signals. The first device processes the ambient signals to produce a sound-object sonic descriptor that has metadata describing a sound object within an acoustic environment. The first device transmits, over a communication data link, the speech signal and the descriptor to a second electronic device that is configured to spatially reproduce the sound object using the descriptor mixed with the speech signal, to produce several mixed signals to drive several speakers.
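A rough sketch under stated assumptions of how the descriptor might be packaged for the data link. The metadata fields (direction, distance, label, level) are illustrative; the abstract only requires metadata that lets the second device spatially reproduce the sound object alongside the speech signal.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class SoundObjectDescriptor:
    azimuth_deg: float   # direction of the sound object in the acoustic environment
    distance_m: float    # estimated distance of the sound object
    label: str           # e.g. "dog_bark", "traffic" (illustrative categories)
    level_db: float      # playback level relative to the speech signal


def build_payload(speech_frame: bytes, descriptor: SoundObjectDescriptor) -> bytes:
    """Bundle one speech frame with the sound-object metadata for transmission."""
    header = json.dumps(asdict(descriptor)).encode("utf-8")
    return len(header).to_bytes(4, "big") + header + speech_frame
```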
Abstract:
Several embodiments of a digital speech signal enhancer are described that use an artificial neural network that produces clean speech coding parameters based on noisy speech coding parameters as its input features. A vocoder parameter generator produces the noisy speech coding parameters from a noisy speech signal. A vocoder model generator processes the clean speech coding parameters into estimated clean speech spectral magnitudes. In one embodiment, a magnitude modifier modifies an original frequency spectrum of the noisy speech signal using the estimated clean speech spectral magnitudes, to produce an enhanced frequency spectrum, and a synthesis block converts the enhanced frequency spectrum into the time domain, as an output speech sequence. Other embodiments are also described.
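A sketch of the magnitude-modifier step only, assuming the neural network and vocoder model generator have already produced per-frame estimated clean magnitudes: the original (noisy) phase is kept, the magnitudes are replaced, and the resulting enhanced spectrum is then handed to an inverse-STFT synthesis block (not shown).

```python
import numpy as np


def enhance_with_clean_magnitudes(noisy_stft: np.ndarray,
                                  clean_magnitudes: np.ndarray) -> np.ndarray:
    """noisy_stft: complex array (frames, bins); clean_magnitudes: real array (frames, bins).

    Returns the enhanced frequency spectrum: estimated clean magnitudes
    combined with the original phase of the noisy speech signal.
    """
    phase = np.angle(noisy_stft)
    return clean_magnitudes * np.exp(1j * phase)
```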
Abstract:
An audio system includes one or more loudspeaker cabinets, each having loudspeakers. The system outputs an omnidirectional sound pattern so that sensing logic can determine the acoustic environment of the loudspeaker cabinets. The sensing logic may include an echo canceller. A playback mode processor adjusts an audio program according to a playback mode determined from the acoustic environment of the audio system. The system may produce a directional pattern superimposed on an omnidirectional pattern if the loudspeaker cabinets are in free space. If the cabinets are not in free space, the system may aim ambient content toward a wall and direct content away from the wall. The sensing logic automatically determines the acoustic environment upon initial power-up and when position changes of the loudspeaker cabinets are detected. Accelerometers may detect position changes of the loudspeaker cabinets.
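A minimal sketch of the playback-mode decision, under assumptions: the sensing logic is taken to provide a free-space flag and, when near a boundary, an estimated wall direction. The beam-pattern fields are illustrative, not the patented rendering.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class PlaybackMode:
    direct_beam_deg: Optional[float]   # direction toward which "direct" content is aimed
    ambient_beam_deg: Optional[float]  # direction toward which "ambient" content is aimed
    omni: bool                         # superimpose an omnidirectional pattern


def choose_playback_mode(in_free_space: bool,
                         wall_azimuth_deg: Optional[float] = None) -> PlaybackMode:
    """Pick a playback mode from the sensed acoustic environment."""
    if in_free_space or wall_azimuth_deg is None:
        # Free space: directional pattern superimposed on an omnidirectional one.
        return PlaybackMode(direct_beam_deg=0.0, ambient_beam_deg=None, omni=True)
    # Near a wall: aim ambient content toward the wall, direct content away from it.
    return PlaybackMode(direct_beam_deg=(wall_azimuth_deg + 180.0) % 360.0,
                        ambient_beam_deg=wall_azimuth_deg, omni=False)
```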