Abstract:
A system, article, and method of automatic speech recognition using parallel processing weighted finite state transducer-based speech decoding.
Abstract:
In a system of environment-sensitive automatic speech recognition, a method includes obtaining audio data that includes human speech, determining at least one characteristic of the environment in which the audio data was obtained, and modifying at least one parameter used to perform speech recognition depending on that characteristic.
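As a rough illustration of the idea in this abstract, the sketch below adjusts recognition parameters based on sensed environment characteristics. The class names, parameters (beam width, acoustic scale), and threshold values are illustrative assumptions, not the patented method.

```python
# Minimal sketch of environment-sensitive parameter adjustment, assuming a
# decoder whose beam width and acoustic scale can be tuned per utterance.
# All names and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class EnvProfile:
    noise_level_db: float      # estimated background noise level
    reverberation: bool        # whether strong reverberation was detected

@dataclass
class DecoderParams:
    beam_width: float
    acoustic_scale: float

def adjust_decoder_params(base: DecoderParams, env: EnvProfile) -> DecoderParams:
    """Modify recognition parameters based on the sensed environment."""
    beam = base.beam_width
    scale = base.acoustic_scale
    # Widen the search beam in noisy conditions so more hypotheses survive.
    if env.noise_level_db > 60.0:
        beam *= 1.5
    # De-emphasize acoustic scores when reverberation degrades them.
    if env.reverberation:
        scale *= 0.8
    return DecoderParams(beam_width=beam, acoustic_scale=scale)

if __name__ == "__main__":
    base = DecoderParams(beam_width=12.0, acoustic_scale=1.0)
    noisy_env = EnvProfile(noise_level_db=72.0, reverberation=True)
    print(adjust_decoder_params(base, noisy_env))
```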
Abstract:
Techniques related to implementing neural networks for speech recognition systems are discussed. Such techniques may include implementing frame skipping with approximated skip frames and/or distances on demand, such that only those outputs needed by a speech decoder are provided, either via the neural network or via approximation techniques.
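The following sketch shows one way on-demand frame skipping could be organized: the acoustic network is evaluated only on anchor frames that the decoder actually requests, and skipped frames reuse the nearest anchor's outputs as an approximation. The stub network, class name, and skip policy are assumptions for illustration only.

```python
# Minimal sketch of on-demand frame skipping for an acoustic model, assuming
# the decoder requests per-frame score vectors lazily. The neural network is
# stood in for by a stub (dnn_forward); skipped frames copy the nearest
# evaluated frame. All names are hypothetical.

import numpy as np

def dnn_forward(feature_vec: np.ndarray) -> np.ndarray:
    """Stub for the acoustic neural network; returns fake posteriors."""
    rng = np.random.default_rng(int(feature_vec.sum() * 1000) % 2**32)
    return rng.random(10)

class SkippingAcousticModel:
    def __init__(self, features: np.ndarray, skip: int = 2):
        self.features = features   # shape: (num_frames, feat_dim)
        self.skip = skip           # evaluate the DNN every `skip` frames
        self.cache = {}            # frame index -> posteriors

    def scores(self, frame: int) -> np.ndarray:
        """Return posteriors for `frame`, computing or approximating on demand."""
        anchor = (frame // self.skip) * self.skip   # nearest evaluated frame
        if anchor not in self.cache:
            self.cache[anchor] = dnn_forward(self.features[anchor])
        # Skipped frames approximate their scores by reusing the anchor frame.
        return self.cache[anchor]

if __name__ == "__main__":
    feats = np.random.rand(100, 40)
    am = SkippingAcousticModel(feats, skip=3)
    # Only frames the decoder actually asks about trigger a DNN evaluation.
    for t in (0, 1, 2, 3, 7):
        print(t, am.scores(t)[:3])
```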
Abstract:
Techniques related to implementing neural networks for speech recognition systems are discussed. Such techniques may include processing a node of the neural network by determining a score for the node as a product of weights and inputs such that the weights are fixed point integer values, applying a correction to the score based on a correction value associated with at least one of the weights, and generating an output from the node based on the corrected score.
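The sketch below illustrates the general shape of such a fixed-point node: weights are quantized to 8-bit integers, the rounding error of each weight is kept as a correction value, and the integer dot product is corrected before the nonlinearity. The 8-bit scheme, scale factor, and function names are assumptions, not the specific technique claimed.

```python
# Minimal sketch of a fixed-point node with correction values, assuming 8-bit
# weight quantization where the rounding error of each weight is kept as a
# small correction term. Names and the quantization scheme are assumptions.

import numpy as np

def quantize_weights(weights: np.ndarray, scale: float = 127.0):
    """Quantize float weights to int8 and keep per-weight corrections."""
    scaled = weights * scale
    q = np.round(scaled).astype(np.int8)
    corrections = scaled - q            # what rounding threw away
    return q, corrections

def node_output(inputs: np.ndarray, q_weights: np.ndarray,
                corrections: np.ndarray, scale: float = 127.0) -> float:
    # Fixed-point score: integer weights times quantized inputs.
    q_inputs = np.round(inputs * scale).astype(np.int32)
    score = int(np.dot(q_weights.astype(np.int32), q_inputs))
    # Correction restores part of the precision lost to weight quantization.
    correction = float(np.dot(corrections, q_inputs))
    corrected = (score + correction) / (scale * scale)
    # Node output after a nonlinearity (sigmoid used here for illustration).
    return 1.0 / (1.0 + np.exp(-corrected))

if __name__ == "__main__":
    w = np.array([0.31, -0.72, 0.05, 0.9])
    x = np.array([0.2, 0.5, -0.1, 0.7])
    qw, corr = quantize_weights(w)
    print("corrected output :", node_output(x, qw, corr))
    print("float reference  :", 1.0 / (1.0 + np.exp(-float(np.dot(w, x)))))
```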
Abstract:
A language model is modified for a local speech recognition system using remote speech recognition sources. In one example, a speech utterance is received. The speech utterance is sent to at least one remote speech recognition system. Text results corresponding to the utterance are received from the remote speech recognition system. A local text result is generated using the local vocabulary. The received text results and the generated local result are compared to determine which words are out of the local vocabulary, and the local vocabulary is updated with those out-of-vocabulary words.
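A minimal sketch of the comparison step follows, assuming both recognizers return plain text and the local recognizer emits an unknown-word token for words outside its vocabulary. The remote and local recognizers are stubbed, and all function names are hypothetical.

```python
# Minimal sketch of updating a local vocabulary from remote recognizer output.
# recognize_remote and recognize_local are stand-ins, not real APIs.

def recognize_remote(audio: bytes) -> str:
    """Stand-in for a cloud recognizer with a large vocabulary."""
    return "please play the new sigur ros album"

def recognize_local(audio: bytes, vocabulary: set) -> str:
    """Stand-in for the on-device recognizer; unknown words come out as <unk>."""
    heard = "please play the new sigur ros album"
    return " ".join(w if w in vocabulary else "<unk>" for w in heard.split())

def update_vocabulary(audio: bytes, vocabulary: set) -> tuple:
    remote_text = recognize_remote(audio)
    local_text = recognize_local(audio, vocabulary)
    # Compare the transcripts: words the remote system produced where the
    # local system failed are treated as out-of-vocabulary.
    oov = {r for r, l in zip(remote_text.split(), local_text.split())
           if l == "<unk>" and r not in vocabulary}
    return vocabulary | oov, oov

if __name__ == "__main__":
    vocab = {"please", "play", "the", "new", "album"}
    vocab, oov = update_vocabulary(b"...", vocab)
    print("out-of-vocabulary words added:", oov)
```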
Abstract:
Technologies for improved keyword spotting are disclosed. A compute device may capture speech data from a user of the compute device and perform automatic speech recognition on the captured speech data. The automatic speech recognition algorithm is configured both to spot keywords and to provide a full transcription of the captured speech data. The algorithm may preferentially match the keywords over similar words. The recognized keywords may be used to improve parsing of the transcribed speech data or to help an assistive agent hold a dialog with the user of the compute device.
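One simple way to picture "preferentially matching keywords" is rescoring an n-best list so hypotheses containing keywords beat acoustically similar alternatives; the spotted keywords can then be handed to the parser or dialog agent. The keyword set, boost value, and helper names below are assumptions for illustration.

```python
# Minimal sketch of keyword-preferential rescoring over an n-best list of
# (transcript, score) pairs from an ASR engine. Values are illustrative.

KEYWORDS = {"timer", "reminder", "navigate"}
KEYWORD_BOOST = 2.0   # score bonus per spotted keyword

def rescore(nbest):
    """Prefer hypotheses containing keywords over similar-sounding words."""
    rescored = []
    for text, score in nbest:
        hits = [w for w in text.split() if w in KEYWORDS]
        rescored.append((text, score + KEYWORD_BOOST * len(hits), hits))
    # Highest adjusted score wins; its keyword hits feed parsing / the agent.
    return max(rescored, key=lambda item: item[1])

if __name__ == "__main__":
    nbest = [("set a time for ten minutes", 10.1),
             ("set a timer for ten minutes", 9.8)]
    best_text, best_score, keywords = rescore(nbest)
    print(best_text, best_score, keywords)
```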
Abstract:
Embodiments of a system and method for adapting a phase-difference-based noise reduction system are generally described herein. In some embodiments, spatial information associated with first and second audio signals is determined, wherein the first and second audio signals include target audio inside a beam and noise from outside the beam. A signal-to-noise ratio (SNR) associated with the audio signals is estimated. A mapping of phase differences to gain factors is adapted to determine attenuation factors for attenuating frequency bins associated with noise outside the beam. Spectral subtraction is performed to remove estimated noise from the single-channel signal, based on a weighting that affects frequencies associated with the target signal less. Frequency-dependent attenuation factors are applied to attenuate frequency bins outside the beam to produce a target signal with reduced noise.
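The sketch below shows the core of such an approach in simplified form: per-bin phase differences between two microphone channels are mapped to gains, and the mapping is made more aggressive when the estimated SNR is low. The sigmoid mapping, beam width, and steepness values are assumptions, not the claimed adaptation scheme, and the spectral-subtraction step is omitted.

```python
# Minimal sketch of adapting a phase-difference-to-gain mapping, assuming
# two-microphone STFT frames and a sigmoid mapping whose steepness depends
# on the estimated SNR. Parameter values are illustrative assumptions.

import numpy as np

def phase_difference_gains(frame_left: np.ndarray, frame_right: np.ndarray,
                           snr_db: float, beam_width_rad: float = 0.3):
    """Return per-frequency-bin gains that attenuate bins outside the beam."""
    # Spatial information: per-bin phase difference between the two channels.
    phase_diff = np.angle(frame_left * np.conj(frame_right))
    # Adapt the mapping: at low SNR, attenuate more aggressively (steeper curve).
    steepness = 4.0 if snr_db < 5.0 else 2.0
    # Sigmoid mapping of |phase difference| to a gain between 0 and 1.
    return 1.0 / (1.0 + np.exp(steepness * (np.abs(phase_diff) - beam_width_rad)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.standard_normal(257) + 1j * rng.standard_normal(257)
    right = left * np.exp(1j * rng.uniform(-0.5, 0.5, 257))  # small phase offsets
    gains = phase_difference_gains(left, right, snr_db=3.0)
    attenuated = left * gains   # frequency-dependent attenuation of the bins
    print(np.abs(attenuated[:5]))
```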