Abstract:
Speech processing methods may rely on voice activity detection (VAD) that separates speech from noise. Example embodiments of a computationally low-complexity VAD feature that is robust against various types of noise are introduced. By considering an alternating excitation structure of low and high frequencies, speech is detected with high confidence. The feature can cope even with the limited spectral resolution that may be typical for a communication system, such as an in-car communication (ICC) system. Simulation results confirm the robustness of the feature and show an increase in performance relative to established VAD features.
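The abstract does not give the computation itself; the following is a minimal sketch of how an alternating low/high-band excitation cue could be derived from short frames, with all function names, the band split, and the scoring entirely assumed for illustration:

```python
import numpy as np

def band_dominance(frame, fs=16000, split_hz=2500.0):
    """Hypothetical per-frame cue: +1 if the low band dominates, -1 otherwise.

    Voiced excitation concentrates energy at low frequencies, while plosive/
    fricative excitation concentrates it at high frequencies.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    low = spectrum[freqs < split_hz].sum()
    high = spectrum[freqs >= split_hz].sum()
    return 1 if low > high else -1

def alternation_score(frames, fs=16000):
    """Fraction of frame-to-frame switches between low- and high-band
    dominance; speech tends to alternate, stationary noise does not."""
    labels = [band_dominance(f, fs) for f in frames]
    switches = sum(1 for a, b in zip(labels, labels[1:]) if a != b)
    return switches / max(len(labels) - 1, 1)
```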
Abstract:
Method and apparatus to determine a speaker activity detection measure from energy-based characteristics of signals from a plurality of speaker-dedicated microphones, detect acoustic events using power spectra of the microphone signals, and determine a robust speaker activity detection measure from the speaker activity detection measure and the detected acoustic events.
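As a rough illustration only (the measures, thresholds, and the event test below are assumptions, not the claimed method), per-speaker energies and a simple acoustic-event check could be combined as follows:

```python
import numpy as np

def speaker_activity_measure(mic_frames):
    """Energy share per speaker-dedicated microphone for one frame
    (mic_frames has shape (num_mics, frame_len))."""
    energies = np.sum(np.asarray(mic_frames, dtype=float) ** 2, axis=1)
    return energies / (energies.sum() + 1e-12)

def robust_speaker_activity(mic_frames, noise_floor, jump=20.0):
    """Suppress the measure when the power spectra of all microphones jump
    far above their noise floors at once, taken here as a crude cue for a
    broadband acoustic event (e.g. a door slam) rather than speech."""
    frames = np.asarray(mic_frames, dtype=float)
    power_spectra = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    broadband = power_spectra.mean(axis=1)
    event = np.all(broadband > jump * np.asarray(noise_floor, dtype=float))
    measure = speaker_activity_measure(frames)
    return np.zeros_like(measure) if event else measure
```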
Abstract:
A system for detection of at least one designated wake-up word for at least one speech-enabled application. The system comprises at least one microphone; and at least one computer hardware processor configured to perform: receiving an acoustic signal generated by the at least one microphone at least in part as a result of receiving an utterance spoken by a speaker; obtaining information indicative of the speaker's identity; interpreting the acoustic signal at least in part by determining, using the information indicative of the speaker's identity and automated speech recognition, whether the utterance spoken by the speaker includes the at least one designated wake-up word; and interacting with the speaker based, at least in part, on results of the interpreting.
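A toy sketch of that decision flow follows; the speaker-profile table, the stand-in ASR, and the identification function are placeholders for illustration, not the claimed system:

```python
# Hypothetical per-speaker wake-word table; names are invented.
WAKE_WORDS = {"driver_profile": "hello car", "guest_profile": "hey assistant"}

def identify_speaker(acoustic_signal):
    """Placeholder for obtaining information indicative of the speaker's identity."""
    return acoustic_signal.get("speaker_id", "guest_profile")

def transcribe(acoustic_signal, speaker_id):
    """Placeholder for a speaker-adapted automated speech recognition call."""
    return acoustic_signal.get("transcript", "")

def handle_utterance(acoustic_signal):
    speaker_id = identify_speaker(acoustic_signal)
    text = transcribe(acoustic_signal, speaker_id).lower()
    designated = WAKE_WORDS.get(speaker_id, "")
    if designated and designated in text:
        return f"wake-up word detected for {speaker_id}"
    return "no wake-up word"

print(handle_utterance({"speaker_id": "driver_profile",
                        "transcript": "Hello car, play some music"}))
```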
Abstract:
Methods and systems for deessing of speech signals are described. A deesser of a speech processing system includes an analyzer configured to receive a full spectral envelope for each time frame of a speech signal presented to the speech processing system, and to analyze the full spectral envelope to identify frequency content for deessing. The deesser also includes a compressor configured to receive results from the analyzer and to spectrally weight the speech signal as a function of results of the analyzer. The analyzer can be configured to calculate a psychoacoustic measure from the full spectral envelope, and may be further configured to detect sibilant sounds of the speech signal using the psychoacoustic measure. The psychoacoustic measure can include, for example, a measure of sharpness, and the analyzer may be further configured to calculate deesser weights based on the measure of sharpness. An example application includes in-car communications.
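For illustration only, a sharpness-like measure and the resulting weights could be sketched as below; true psychoacoustic sharpness is computed from specific loudness in Bark bands, so the high-frequency-weighted centroid and all thresholds here are simplifying assumptions:

```python
import numpy as np

def sharpness_like(envelope, freqs):
    """Crude stand-in for a sharpness measure: a centroid of the spectral
    envelope with extra weight on content above ~3 kHz."""
    envelope = np.asarray(envelope, dtype=float)
    freqs = np.asarray(freqs, dtype=float)
    weights = np.where(freqs > 3000.0, (freqs / 3000.0) ** 2, 1.0)
    return np.sum(weights * envelope * freqs) / (np.sum(envelope * freqs) + 1e-12)

def deesser_weights(envelope, freqs, threshold=1.5, max_atten_db=12.0):
    """Map the measure to spectral weights: the sharper the frame (i.e. the
    more sibilant it looks), the more the sibilant band is attenuated."""
    freqs = np.asarray(freqs, dtype=float)
    excess = max(sharpness_like(envelope, freqs) - threshold, 0.0)
    atten_db = min(max_atten_db, 6.0 * excess)
    gains = np.ones(len(freqs))
    sibilant = (freqs >= 4000.0) & (freqs <= 10000.0)
    gains[sibilant] = 10.0 ** (-atten_db / 20.0)
    return gains
```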
Abstract:
Systems and methods are introduced to perform noise suppression of an audio signal. The audio signal includes foreground speech components and background noise. The foreground speech components correspond to speech from a user speaking into an audio receiving device. The background noise includes babble noise that includes speech from one or more interfering speakers. A soft speech detector determines, dynamically, a speech detection result indicating a likelihood of the presence of the foreground speech components in the audio signal. The speech detection result is employed to control, dynamically, an amount of attenuation of the noise suppression to reduce the babble noise in the audio signal. Further processing achieves a more stationary background and reduction of musical tones in the audio signal.
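A minimal sketch of the control idea, assuming a logistic SNR-based soft detector and a Wiener-style gain whose floor is interpolated between a speech setting and a deeper babble setting; the constants and function names are illustrative, not taken from the patent:

```python
import numpy as np

def soft_speech_probability(frame_power, noise_power, slope=0.5):
    """Hypothetical soft detector: map the frame SNR to a value in (0, 1)."""
    snr_db = 10.0 * np.log10(frame_power / (noise_power + 1e-12) + 1e-12)
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - 6.0)))  # logistic around 6 dB

def spectral_gain(noisy_power, noise_power, speech_prob,
                  min_atten_db_speech=10.0, min_atten_db_noise=20.0):
    """Wiener-style gain whose floor is relaxed when foreground speech is
    likely and lowered when only babble/background is present."""
    gain = np.maximum(1.0 - noise_power / (noisy_power + 1e-12), 0.0)
    floor_db = (speech_prob * min_atten_db_speech
                + (1.0 - speech_prob) * min_atten_db_noise)
    floor = 10.0 ** (-floor_db / 20.0)
    return np.maximum(gain, floor)
```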
Abstract:
A method for speech detection adaptation is provided. Embodiments may include receiving, at a processor, a speech signal corresponding to a particular environment and estimating one or more acoustical parameters that characterize the environment. In some embodiments, the one or more acoustical parameters are not configured to identify a known scenario. Embodiments may include dynamically controlling a speech detector based upon, at least in part, the one or more acoustical parameters, wherein dynamically controlling includes configuring feature parameters and detector parameters.
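Purely as an assumed example of such dynamic control (the parameter tracked and the way it steers the detector are guesses, not the claimed method), a noise-floor-driven threshold could look like this:

```python
import numpy as np

class AdaptiveSpeechDetector:
    """Hypothetical sketch: track a noise-floor estimate of the environment
    and derive the detection threshold from it instead of assuming a
    known scenario (car, office, street, ...)."""

    def __init__(self, alpha=0.95, margin_db=6.0):
        self.alpha = alpha          # smoothing of the noise-floor estimate
        self.margin_db = margin_db  # detector parameter derived from acoustics
        self.noise_power = 1e-6

    def process(self, frame):
        power = float(np.mean(np.asarray(frame, dtype=float) ** 2))
        # Update the environment estimate only on low-energy frames
        if power < 2.0 * self.noise_power:
            self.noise_power = self.alpha * self.noise_power + (1 - self.alpha) * power
        threshold = self.noise_power * 10.0 ** (self.margin_db / 10.0)
        return power > threshold  # speech decision for this frame
```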
Abstract:
Methods and apparatus to process microphone signals by a speech enhancement module to generate an audio stream signal including first and second metadata for use by a speech recognition module. In an embodiment, speech recognition is performed using endpointing information including transitioning from a silence state to a maybe-speech state, in which data is buffered, based on the first metadata and transitioning to a speech state, in which speech recognition is performed, based upon the second metadata.
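An assumed sketch of that endpointing state machine; the two metadata flags are treated as simple booleans and the return values are placeholders for illustration:

```python
class Endpointer:
    """Hypothetical silence / maybe-speech / speech endpointer driven by two
    metadata flags supplied by the speech enhancement module."""

    def __init__(self):
        self.state = "silence"
        self.buffer = []

    def process(self, audio_chunk, first_metadata, second_metadata):
        if self.state == "silence" and first_metadata:
            self.state = "maybe_speech"              # start buffering
        if self.state == "maybe_speech":
            self.buffer.append(audio_chunk)
            if second_metadata:
                self.state = "speech"                # hand the buffer to ASR
                buffered, self.buffer = self.buffer, []
                return ("recognize", buffered)
            if not first_metadata:
                self.state, self.buffer = "silence", []   # false alarm
        elif self.state == "speech":
            if not first_metadata:
                self.state = "silence"               # end of utterance
                return ("finalize", [])
            return ("recognize", [audio_chunk])
        return ("idle", [])
```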
Abstract:
An in-car communication (ICC) system has multiple acoustic zones having varying acoustic environments. At least one input microphone within at least one acoustic zone develops a corresponding microphone signal from one or more system users. At least one loudspeaker within at least one acoustic zone provides acoustic audio to the system users. A wind noise module makes a determination of when wind noise is present in the microphone signal and modifies the microphone signal based on the determination.
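A toy detector and modification step, assuming wind noise shows up as a dominant low-frequency power share and that a simple high-pass filter is an acceptable stand-in for the modification; thresholds and cut-offs are invented for the example:

```python
import numpy as np
from scipy.signal import butter, lfilter

def wind_noise_present(frame, fs=16000, cutoff_hz=300.0, ratio_threshold=0.7):
    """Hypothetical detector: flag the frame when the low-band share of
    power is large, since wind noise concentrates below a few hundred Hz."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    low_ratio = spectrum[freqs < cutoff_hz].sum() / (spectrum.sum() + 1e-12)
    return low_ratio > ratio_threshold

def suppress_wind(frame, fs=16000, cutoff_hz=300.0):
    """Modify the microphone signal only when wind noise is detected,
    here with a simple high-pass filter as a stand-in."""
    if not wind_noise_present(frame, fs, cutoff_hz):
        return frame
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="highpass")
    return lfilter(b, a, frame)
```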
Abstract:
A low-complexity method and apparatus for detection of voiced speech and pitch estimation is disclosed that is capable of dealing with special constraints given by applications where low latency is required, such as in-car communication (ICC) systems. An example embodiment employs very short frames that may capture only a single excitation impulse of voiced speech in an audio signal. A distance between multiple such impulses, corresponding to a pitch period, may be determined by evaluating phase differences between low-resolution spectra of the very short frames. An example embodiment may perform pitch estimation directly in a frequency domain based on the phase differences and reduce computational complexity by obviating transformation to a time domain to perform the pitch estimation. In the event the phase differences are determined to be substantially linear, an example embodiment enhances voice quality of the voiced speech by applying speech enhancement to the audio signal.
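The following is a rough sketch of the phase-difference idea under the stated assumption that each very short frame captures roughly one excitation impulse; the frame handling, linearity test, and constants are assumptions for illustration:

```python
import numpy as np

def phase_based_pitch(frame_a, frame_b, hop, fs=16000, linearity_tol=0.5):
    """Estimate the pitch period from phase differences between the
    low-resolution spectra of two very short frames taken `hop` samples apart.

    If frame_b is roughly frame_a delayed by tau samples, the cross-spectrum
    phase is linear in frequency with slope -2*pi*tau/N; the impulse spacing
    (pitch period) is then hop + tau.
    """
    n = len(frame_a)
    window = np.hanning(n)
    spec_a = np.fft.rfft(frame_a * window)
    spec_b = np.fft.rfft(frame_b * window)
    phase = np.unwrap(np.angle(spec_b * np.conj(spec_a)))
    bins = np.arange(len(phase))
    slope, intercept = np.polyfit(bins, phase, 1)
    if np.std(phase - (slope * bins + intercept)) > linearity_tol:
        return None                     # phases not substantially linear: unvoiced
    tau = -slope * n / (2.0 * np.pi)    # delay in samples within the frame
    period = hop + tau                  # samples between excitation impulses
    return fs / period if period > 0 else None
```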