Abstract:
Apparatus for simultaneously decompressing and interpolating compressed audio data. The compressed audio data is stored in differential log format, meaning that the difference between each pair of consecutive data points is taken and the log of that difference is calculated to form each compressed data point. To efficiently decompress and interpolate the compressed data, advantage is taken of the fact that adding logs is equivalent to multiplying the corresponding linear values. Thus the log of an interpolation factor is added to each compressed data point prior to taking the inverse log of the sum. An integrator block completes the interpolation and decompression of the data.
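A minimal sketch of the log-domain trick described above, in Python/NumPy. The abstract does not specify how signs or zero differences are stored; the sign array, the small epsilon, and the single scalar interpolation factor below are assumptions for illustration, not the patented format.

    import numpy as np

    def compress(x):
        """Differential log format: log of each consecutive difference (sign kept separately, an assumption)."""
        d = np.diff(x, prepend=x[0])          # consecutive differences
        sign = np.sign(d)
        log_mag = np.log(np.abs(d) + 1e-12)   # log of the difference magnitude
        return sign, log_mag

    def decompress_interpolate(sign, log_mag, factor):
        """Add log(factor) in the compressed domain, then take the inverse log and integrate."""
        # Adding logs is equivalent to multiplying linear values, so a single
        # addition scales every decoded difference by the interpolation factor.
        scaled = np.exp(log_mag + np.log(factor)) * sign
        return np.cumsum(scaled)              # the integrator block restores the waveform

    x = np.sin(2 * np.pi * 440 * np.arange(256) / 16000)
    s, lm = compress(x)
    y = decompress_interpolate(s, lm, factor=0.5)   # half-weighted reconstruction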
Abstract:
In this invention, noise in a binaural hearing aid is reduced by analyzing the left and right digital audio signals to produce left and right signal frequency domain vectors and thereafter using digital signal encoding techniques to produce a noise reduction gain vector. The gain vector can then be multiplied against the left and right signal vectors to produce noise-reduced left and right signal vectors. The cues used in the digital encoding techniques include directionality, short-term amplitude deviation from the long-term average, and pitch. In addition, a multidimensional gain function based on the directionality estimate and the amplitude deviation estimate is used that is more effective in noise reduction than simply summing the noise reduction results of directionality alone and amplitude deviation alone. As further features of the invention, the noise reduction is scaled based on pitch estimates and based on voice detection.
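A rough sketch of the per-bin gain idea for one frame. The specific directionality estimator, deviation estimator, and two-dimensional gain surface below are illustrative placeholders, not the patented functions; only the overall flow (frequency-domain vectors, a joint gain, the same gain applied to both ears) follows the abstract.

    import numpy as np

    def noise_reduction_frame(left, right, long_term_mag):
        L = np.fft.rfft(left * np.hanning(len(left)))
        R = np.fft.rfft(right * np.hanning(len(right)))

        # Directionality cue: a small interaural phase difference suggests a
        # frontal (desired) source; a large difference suggests diffuse noise.
        phase_diff = np.abs(np.angle(L * np.conj(R)))
        directionality = 1.0 - np.clip(phase_diff / np.pi, 0.0, 1.0)

        # Amplitude cue: short-term magnitude above the long-term average
        # suggests speech-like activity rather than stationary noise.
        short_term = 0.5 * (np.abs(L) + np.abs(R))
        deviation = np.clip(short_term / (long_term_mag + 1e-9) - 1.0, 0.0, 1.0)

        # Multidimensional gain: a joint function of both cues rather than a
        # sum of two independently computed gains.
        gain = np.sqrt(directionality * deviation)

        # The same gain vector is applied to both ears, preserving binaural cues.
        return np.fft.irfft(L * gain), np.fft.irfft(R * gain)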
Abstract:
This invention relates to a hearing enhancement system having an ear device for each of the wearer's ears, each ear device has a sound transducer, or microphone, and a sound reproducer, or speaker, and associated electronics for the microphone and speaker. Further, the electronic enhancement of the audio signals is performed at a remote digital signal processor (DSP) likely located in a body pack worn somewhere on the body by the user. There is a down-link from each ear device to the (DSP) and an up-link from the DSP to each ear device. The DSP digitally interactively processes the audio signals for each ear based on both of the audio signals received from each ear device. In other words, the enhancement of the audio signal for the left ear is based on the both the right and left audio signals received by the DSP.In addition digital filters implemented at the DSP have a linear phase response so that time relationships at different frequencies are preserved. The digital filters have a magnitude and phase response to compensate for phase distortions due to analog filters in the signal path and due to the resonances and nulls of the ear canal.
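To illustrate the linear-phase point only: a symmetric FIR filter has exactly linear phase, so all frequencies are delayed equally. The sketch below uses SciPy's firwin2; the 16 kHz sample rate, filter length, and the mild boost around an assumed 2.7 kHz canal-resonance region are invented for illustration and are not the compensation curve of the invention.

    import numpy as np
    from scipy.signal import firwin2, freqz

    fs = 16000
    freqs = [0, 500, 2700, 4000, 8000]          # Hz, must span 0..fs/2
    gains_db = [0, 0, 6, 3, 0]                  # illustrative compensation gains
    gains = 10 ** (np.array(gains_db) / 20.0)

    taps = firwin2(129, freqs, gains, fs=fs)    # odd length, symmetric taps -> linear phase
    assert np.allclose(taps, taps[::-1])        # symmetry check

    w, h = freqz(taps, fs=fs)
    group_delay_samples = (len(taps) - 1) / 2   # constant 64-sample delay at every frequency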
Abstract:
A system for vaporizing and optionally decomposing a reagent, such as aqueous ammonia or urea, which is useful for NOx reduction, includes a cyclonic decomposition duct, wherein the duct at its inlet end is connected to an air inlet port and a reagent injection lance. The air inlet port is in a tangential orientation to the central axis of the duct. The system further includes a metering valve for controlling the reagent injection rate. A method for vaporizing and optionally decomposing a reagent includes providing a cyclonic decomposition duct which is connected to an air inlet port and an injection lance, introducing hot gas through the air inlet port in a tangential orientation to the central axis of the duct, injecting the reagent axially through the injection lance into the duct, and adjusting the reagent injection rate through a metering valve.
Abstract:
The present synthesizer generates an underlying spectrum, pitch, and loudness for a sound to be synthesized, and then combines the underlying spectrum, pitch, and loudness with stored Spectral, Pitch, and Loudness Fluctuations and noise elements. The input to the synthesizer is typically a MIDI stream. A MIDI preprocess block processes the MIDI input and generates the signals needed by the synthesizer to generate output sound phrases. The synthesizer comprises a harmonic synthesizer block (which generates an output representing the tonal audio portion of the output sound), an Underlying Spectrum, Pitch, and Loudness block (which takes pitch and loudness and uses stored algorithms to generate the slowly varying portion of the output sound), and a Spectral, Pitch, and Loudness Fluctuation portion (which generates the quickly varying portion of the output sound by selecting and combining Spectral, Pitch, and Loudness Fluctuation segments stored in a database). A specialized analysis process is used to derive the formulas used by the Underlying Spectrum, Pitch, and Loudness block and to generate and store the Spectral, Pitch, and Loudness Fluctuation segments stored in the database.
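A minimal sketch of the split between a slowly varying underlying layer and stored fast fluctuations, rendered by a small harmonic oscillator bank. The underlying-spectrum formula, frame size, harmonic count, and the random stand-in for a stored fluctuation segment are assumptions; the real data would come from the analysis process described above.

    import numpy as np

    fs, frame, n_harm = 44100, 64, 16            # samples per control frame, harmonics

    def underlying_spectrum(pitch_hz, loudness):
        """Slowly varying harmonic amplitudes as a function of pitch and loudness (illustrative formula)."""
        k = np.arange(1, n_harm + 1)
        return loudness / k * np.exp(-k * pitch_hz / 8000.0)

    def synthesize(pitches, loudnesses, fluct_segment):
        phases = np.zeros(n_harm)
        out = []
        for f0, loud, fluct in zip(pitches, loudnesses, fluct_segment):
            amps = underlying_spectrum(f0, loud) * (1.0 + fluct)   # add the quick fluctuations
            t = np.arange(frame) / fs
            k = np.arange(1, n_harm + 1)
            # Sum of harmonics for this frame, keeping phase continuity per harmonic.
            frame_out = np.sum(amps[:, None] *
                               np.sin(phases[:, None] + 2*np.pi*f0*k[:, None]*t), axis=0)
            phases = (phases + 2*np.pi*f0*k*frame/fs) % (2*np.pi)
            out.append(frame_out)
        return np.concatenate(out)

    n_frames = 200
    pitches = np.full(n_frames, 220.0)
    loudnesses = np.linspace(0.2, 1.0, n_frames)
    fluct = 0.05 * np.random.randn(n_frames, n_harm)   # stand-in for stored fluctuation segments
    audio = synthesize(pitches, loudnesses, fluct)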
Abstract:
A parametric signal modeling musical tone synthesizer utilizes a multidimensional filter coefficient space consisting of many sets of filter coefficients to model an instrument. These sets are smoothly interpolated over pitch, intensity, and time. The filter excitation for a particular note is derived from a collection of single-period excitations, which form a multidimensional excitation space that is also smoothly interpolated over pitch, intensity, and time. The synthesizer includes effective modeling of the attacks of tones, and the noise component of a tone is modeled separately from the pitched component. The input control signals may include initial pitch and intensity, or the intensity may be time-varying. A variety of instruments may be specified.
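A sketch of smooth interpolation along one axis (intensity) of a coefficient space and an excitation space, under the assumption of FIR filters and linear blending between two corner points. Real instrument data would populate the corner points; the window-function "coefficients" and random excitations below are illustrative only.

    import numpy as np
    from scipy.signal import lfilter

    n_taps, period = 32, 100

    # Two corner points of the coefficient space (e.g. soft and loud playing).
    coeffs_soft = np.hanning(n_taps)
    coeffs_loud = np.bartlett(n_taps)

    # Two single-period excitations for the same corners.
    exc_soft = 0.3 * np.random.randn(period)
    exc_loud = np.random.randn(period)

    def interpolate_point(alpha):
        """alpha in [0, 1]: blend filter coefficients and excitation smoothly."""
        c = (1 - alpha) * coeffs_soft + alpha * coeffs_loud
        e = (1 - alpha) * exc_soft + alpha * exc_loud
        return c, e

    def render_note(alpha_track, n_periods):
        """Resynthesize a note whose intensity varies over time."""
        out = []
        for i in range(n_periods):
            c, e = interpolate_point(alpha_track[i])
            out.append(lfilter(c, [1.0], e))   # filter one period of excitation
        return np.concatenate(out)

    note = render_note(np.linspace(0.0, 1.0, 50), 50)   # crescendo over 50 periods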
Abstract:
The present synthesizer includes functionality for changing over from a current note to following notes in a way that results in natural and expressive combinations and transitions. The method of the present invention incorporates a delay (actual, functional, or look-ahead) between receiving control data inputs and generating an output sound. This period of delay is used to modify how notes will be played according to control data inputs for later notes. The input to the synthesizer is typically a time-varying MIDI stream, which may be provided by a musician or by a MIDI sequencer from stored data. An actual delay occurs when the synthesizer receives a MIDI stream and buffers it while looking ahead for changeovers between notes. A functional delay occurs in a system in which the synthesizer has knowledge of note changeovers ahead of time. A look-ahead delay occurs when the synthesizer queries the sequencer for information about the stored sequence ahead of when the synthesizer needs to generate the output for the sequence.
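A toy sketch of the "actual delay" case only: incoming note events are buffered for a short look-ahead window so the transition out of the current note can be chosen once the next note is already known. The 50 ms window and the (time, note, velocity) event format are assumptions for illustration.

    import collections

    LOOKAHEAD = 0.050   # 50 ms look-ahead window (illustrative value)

    class LookaheadPlayer:
        def __init__(self):
            self.buffer = collections.deque()

        def feed(self, event):
            """event = (time_sec, note_number, velocity)"""
            self.buffer.append(event)

        def render_until(self, now):
            """Release events older than the look-ahead window, with knowledge
            of whatever later events are already buffered."""
            out = []
            while self.buffer and self.buffer[0][0] <= now - LOOKAHEAD:
                current = self.buffer.popleft()
                next_note = self.buffer[0] if self.buffer else None
                # The transition rendering (e.g. slurred vs. detached) can depend on next_note.
                out.append((current, next_note))
            return out

    player = LookaheadPlayer()
    player.feed((0.000, 60, 100))
    player.feed((0.030, 62, 90))
    decisions = player.render_until(0.100)   # both events are now past the window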
Abstract:
A digital hearing aid according to the present invention is capable of measuring its own performance. The hearing aid includes a test signal generator for feeding a test signal into the hearing aid amplifier. The response to the test signal is acquired at a specific point in the hearing aid, depending upon what aspect of performance is to be measured. Various elements of the hearing aid and/or the hearing aid feedback may be bypassed. The hearing aid further includes the capability of initializing hearing aid parameters based upon the performance measurements. The measurement and initialization capability may be entirely integral to the hearing aid, or an external processor may be used to download the measurement program and the run time program, and assist in computing the parameters.
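A sketch of the measurement idea: inject a known test signal at a chosen point, capture the response at another point, and estimate the transfer function of whatever stages lie between them. The chirp test signal, 16 kHz rate, and the toy filter standing in for the selected (possibly partially bypassed) portion of the signal path are illustrative assumptions.

    import numpy as np
    from scipy.signal import chirp, fftconvolve

    fs = 16000
    t = np.arange(int(0.5 * fs)) / fs
    test_signal = chirp(t, f0=100, f1=7000, t1=t[-1], method='logarithmic')

    def device_under_test(x):
        # Stand-in for the selected portion of the hearing aid signal path.
        taps = np.hanning(31) / np.sum(np.hanning(31))
        return fftconvolve(x, taps, mode='full')[:len(x)]

    response = device_under_test(test_signal)

    # Transfer function estimate from input/output spectra; parameters could
    # then be initialized from this measured response.
    X = np.fft.rfft(test_signal)
    Y = np.fft.rfft(response)
    H = Y / (X + 1e-12)
    freqs = np.fft.rfftfreq(len(test_signal), 1 / fs)
    magnitude_db = 20 * np.log10(np.abs(H) + 1e-12)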
Abstract:
The present invention describes a device and methods for synthesizing a musical audio signal. The invention includes a device for storing a collection of sound segments taken from idiomatic musical performances. Some of these sound segments include transitions between musical notes such as the slur from the end of one note to the beginning of the next. Much of the complexity and expressivity in musical phrasing is associated with the complex behavior of these transition segments. The invention further includes a device for generating a sequence of sound segments in response to an input control sequence—e.g. a MIDI sequence. The sound segments are associated with musical gesture types. The gesture types include attack, release, transition, and sustain. The sound segments are further associated with musical gesture subtypes. Large upward slur, small upward slur, large downward slur, and small downward slur are examples of subtypes of the transition gesture type. Event patterns in the input control sequence lead to the generation of a sequence of musical gesture types and subtypes, which in turn leads to the selection of a sequence of sound segments. The sound segments are combined to form an audio signal and played out by a sound segment player. The sound segment player pitch-shifts and intensity-shifts the sound segments in response to the input control sequence.
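A small sketch of mapping note-event patterns to gesture types and subtypes and then to stored segments. The four-semitone boundary between small and large slurs, the overlap test for detecting a slur, and the segment-library file names are all illustrative assumptions.

    SMALL_INTERVAL = 4   # semitones; assumed boundary between small and large slurs

    def classify_transition(prev_note, next_note, overlap):
        """Return (gesture_type, subtype) for a changeover between two notes."""
        if prev_note is None:
            return ("attack", "default")
        if next_note is None:
            return ("release", "default")
        if not overlap:
            return ("transition", "detached")
        interval = next_note - prev_note
        size = "small" if abs(interval) <= SMALL_INTERVAL else "large"
        direction = "upward" if interval > 0 else "downward"
        return ("transition", f"{size} {direction} slur")

    # Segment library keyed by (type, subtype); values would be stored recordings
    # taken from idiomatic performances.
    segment_library = {
        ("transition", "small upward slur"): "slur_up_small.wav",
        ("transition", "large downward slur"): "slur_down_large.wav",
    }

    gesture = classify_transition(60, 64, overlap=True)
    segment = segment_library.get(gesture)      # -> "slur_up_small.wav"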
Abstract:
Tonal audio signals can be modeled as a sum of sinusoids with time-varying frequencies, amplitudes, and phases. An efficient encoder and synthesizer of tonal audio signals is disclosed. The encoder determines time-varying frequencies, amplitudes, and, optionally, phases for a restricted number of dominant sinusoid components of the tonal audio signal to form a dominant sinusoid parameter sequence. These components are removed from the tonal audio signal to form a residual tonal signal. The residual tonal signal is encoded using a residual tonal signal encoder (RTSE). In one embodiment, the RTSE generates a vector quantization codebook (VQC) and residual codebook sequence (RCS). The VQC may contain time-domain residual waveforms selected from the residual tonal signal, synthetic time-domain residual waveforms with magnitude spectra related to the residual tonal signal, magnitude spectrum encoding vectors, or a combination of time-domain waveforms and magnitude spectrum encoding vectors. The tonal audio signal synthesizer uses a sinusoidal oscillator bank to synthesize a set of dominant sinusoid components from the dominant sinusoid parameter sequence generated during encoding. In one embodiment, a residual tonal signal is synthesized using a VQC and RCS generated by the RTSE during encoding. If the VQC includes time-domain waveforms, an interpolating residual waveform oscillator may be used to synthesize the residual tonal signal. The synthesized dominant sinusoids and synthesized residual tonal signal are summed to form the synthesized tonal audio signal.
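A condensed sketch of one analysis/synthesis frame: pick a restricted number of dominant spectral peaks, resynthesize them with an oscillator bank, and subtract them to leave a residual tonal signal. The frame size, window, peak count, and simple bin-picking are assumptions; a real encoder would refine the peak estimates, and the residual would be encoded and decoded through the VQC/RCS rather than reused directly.

    import numpy as np

    fs, N, n_dominant = 44100, 2048, 8
    t = np.arange(N) / fs
    x = sum(a * np.sin(2*np.pi*f*t) for a, f in [(1.0, 220), (0.5, 440), (0.3, 662)])
    x += 0.02 * np.random.randn(N)              # toy tonal signal plus noise

    win = np.hanning(N)
    X = np.fft.rfft(x * win)
    mag = np.abs(X)

    # Dominant sinusoid parameters: frequency, amplitude, phase of the largest bins
    # (a real encoder would do proper peak picking and interpolation).
    peaks = np.argsort(mag)[-n_dominant:]
    freqs = peaks * fs / N
    amps = 2 * mag[peaks] / np.sum(win)
    phases = np.angle(X[peaks])

    # Sinusoidal oscillator bank synthesis of the dominant components.
    dominant = sum(a * np.cos(2*np.pi*f*t + p) for f, a, p in zip(freqs, amps, phases))

    # Residual tonal signal: original minus the synthesized dominant components.
    residual = x - dominant

    # Final synthesis: dominant sinusoids plus a residual (here reused directly
    # instead of being regenerated from a codebook).
    y = dominant + residual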