Abstract:
An earpiece of a headset uses a first signal and a second signal, received from an in-ear microphone and an outside microphone respectively, to enhance microphone signals. The in-ear microphone is positioned at a proximal side of the earpiece with respect to an ear canal of a user, and the outside microphone is positioned at a distal side of the earpiece with respect to the ear canal. A processing unit includes a filter, which digitally filters in-ear noise out of the first signal, using the second signal as a reference, to produce a de-noised signal, thereby enhancing the microphone signals.
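The reference-based digital filtering this abstract describes is commonly realized with an adaptive filter. Below is a minimal sketch using a basic least-mean-squares (LMS) filter, not the patented implementation; the function name, tap count, and step size `mu` are illustrative assumptions.

```python
# Hedged sketch: LMS adaptive noise cancellation. The in-ear signal d[n]
# contains the wanted signal plus leaked noise; the outside-microphone
# signal x[n] serves as the noise reference. All parameters are illustrative.

def lms_denoise(d, x, taps=8, mu=0.01):
    """Return the de-noised (error) signal e[n] = d[n] - w . x_vec[n]."""
    w = [0.0] * taps
    out = []
    for n in range(len(d)):
        # Most recent `taps` reference samples (zero-padded at the start).
        x_vec = [x[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x_vec))        # noise estimate
        e = d[n] - y                                        # de-noised sample
        w = [wi + mu * e * xi for wi, xi in zip(w, x_vec)]  # LMS update
        out.append(e)
    return out
```

After convergence, the residual `e[n]` carries what the reference cannot explain, i.e. the de-noised signal.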
Abstract:
A portable electronic device including an ultrasound transmitter, an ultrasound receiver, and a processing unit is provided. The ultrasound transmitter sends ultrasonic signals, while the ultrasound receiver receives reflected ultrasonic signals from an object. The ultrasound transmitter and the ultrasound receiver are disposed to form a reference axis. The processing unit processes the reflected ultrasonic signals to obtain a time-frequency distribution thereof, and determines a 1D gesture corresponding to projection loci of movements of the object on the reference axis according to the time-frequency distribution.
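The time-frequency analysis described here relies on the Doppler effect: motion toward the device shifts the reflected ultrasound above the transmitted carrier, and motion away shifts it below. A minimal sketch, assuming a naive DFT per frame and a pure-tone echo; the frame length, sampling rate, and gesture labels are illustrative, not the claimed processing.

```python
import cmath
import math

def dominant_freq(frame, fs):
    """Return the dominant frequency (Hz) of a frame via a naive DFT."""
    N = len(frame)
    best_k, best_mag = 0, -1.0
    for k in range(1, N // 2):
        s = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * fs / N

def classify_1d_gesture(echo, fs, carrier, frame_len=256):
    """'approach' if the Doppler shift raises the echo frequency, else 'retreat'."""
    shifts = []
    for start in range(0, len(echo) - frame_len + 1, frame_len):
        f = dominant_freq(echo[start:start + frame_len], fs)
        shifts.append(f - carrier)       # per-frame shift along the reference axis
    mean_shift = sum(shifts) / len(shifts)
    return "approach" if mean_shift > 0 else "retreat"
```

A sequence of such per-frame shifts is a crude time-frequency distribution from which a 1D projection locus could be read.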
Abstract:
An audio refocusing method includes receiving an indication signal indicating which sound source in a recorded signal is to be refocused on; determining a direction of the sound source or a location of the sound source; and enhancing sound generated by the sound source in the recorded signal according to the direction or the location of the sound source to generate a processed signal.
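One standard way to enhance sound from a known direction, when the recording has multiple channels, is delay-and-sum beamforming. The sketch below assumes that the direction has already been converted into per-channel integer sample delays; that mapping (and whether the patented method uses beamforming at all) is an assumption for illustration.

```python
def refocus_delay_and_sum(channels, delays):
    """Delay-and-sum: align each channel by its integer sample delay and average.

    Samples arriving from the indicated direction add coherently, so that
    source is enhanced relative to sound from other directions.
    """
    n = len(channels[0])
    out = []
    for i in range(n):
        acc = 0.0
        for ch, d in zip(channels, delays):
            j = i - d                         # shift channel by its delay
            acc += ch[j] if 0 <= j < n else 0.0
        out.append(acc / len(channels))
    return out
```

In practice the delays would come from the source direction via `d = fs * path_difference / speed_of_sound` (a fractional-delay filter would be needed for non-integer values).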
Abstract:
A signal loss compensation method compensates an input signal comprising (K+Y) lost signal units and normal signal units. The method comprises: compensating the 1st to (K−1)th lost signal units with a first signal loss concealment algorithm to generate a first compensation signal and accordingly generating a first synthetic signal; compensating the (K+X+1)th to (K+Y)th lost signal units with a second signal loss concealment algorithm to generate a second compensation signal and accordingly generating a second synthetic signal; compensating the Kth to (K+X)th lost signal units with either the first or the second signal loss concealment algorithm to generate a third synthetic signal; and generating an output signal according to the first synthetic signal, the second synthetic signal, the third synthetic signal, and the normal signal units. K and Y are positive integers, X is a natural number, and Y is larger than X.
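The three-region split can be made concrete with a toy example. In the sketch below, the first concealment algorithm is stood in for by forward repetition of the last good sample, the second by backward repetition of the next good sample, and the middle region (where either algorithm is allowed) by a blend; all three stand-ins are assumptions, since the abstract does not specify the algorithms themselves.

```python
def conceal_gap(samples, k, x):
    """Fill one contiguous run of None values (the K+Y lost units).

    Units 1..k-1   -> first concealment algorithm (forward repetition here),
    units k..k+x   -> either algorithm is permitted (a blend here),
    units k+x+1..  -> second concealment algorithm (backward repetition here).
    """
    out = list(samples)
    start = out.index(None)
    end = start
    while end < len(out) and out[end] is None:
        end += 1
    prev, nxt = out[start - 1], out[end]   # assumes an interior gap
    for i in range(start, end):
        unit = i - start + 1               # 1-based position within the gap
        if unit <= k - 1:
            out[i] = prev
        elif unit <= k + x:
            out[i] = 0.5 * (prev + nxt)
        else:
            out[i] = nxt
    return out
```

Real concealment algorithms would extrapolate pitch-synchronously rather than repeat a single sample; the region boundaries are the point of the sketch.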
Abstract:
A method for performing active noise control upon a target zone includes: using an adaptive filtering circuit to receive at least one microphone signal obtained from a microphone; and dynamically compensating at least one coefficient of the adaptive filtering circuit, according to an energy distribution of the at least one microphone signal, to adjust a frequency response of the adaptive filtering circuit, so that the adaptive filtering circuit generates a resultant anti-noise signal for the target zone based on the dynamically adjusted frequency response.
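One possible reading of "compensating a coefficient according to an energy distribution" is sketched below: the microphone signal's low-band versus high-band energy split sets a blending coefficient between two fixed anti-noise shapes. The two-tap band split, the blending rule, and the shapes are all illustrative assumptions, not the claimed circuit.

```python
def band_energies(x):
    """Crude two-band split: moving average (low band) vs first difference (high band)."""
    low = [(x[i] + x[i - 1]) / 2 for i in range(1, len(x))]
    high = [(x[i] - x[i - 1]) / 2 for i in range(1, len(x))]
    return sum(v * v for v in low), sum(v * v for v in high)

def anti_noise(x):
    """Weight two fixed anti-noise shapes by the measured energy distribution,
    then emit the phase-inverted, reshaped signal."""
    e_low, e_high = band_energies(x)
    a = e_low / (e_low + e_high + 1e-12)      # dynamically compensated coefficient
    out = [0.0]
    for i in range(1, len(x)):
        lp = (x[i] + x[i - 1]) / 2            # low-band emphasis
        hp = (x[i] - x[i - 1]) / 2            # high-band emphasis
        out.append(-(a * lp + (1 - a) * hp))  # anti-phase resultant signal
    return out
```

For low-frequency noise the coefficient `a` approaches 1, so the anti-noise signal tracks the inverted low-band shape.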
Abstract:
An audio synchronization method includes: receiving a first audio signal from a first recording device; receiving a second audio signal from a second recording device; performing a correlation operation upon the first audio signal and the second audio signal to align a first pattern of the first audio signal with a corresponding first pattern of the second audio signal; after the first patterns are aligned, calculating a difference between a second pattern of the first audio signal and a corresponding second pattern of the second audio signal; and obtaining, according to that difference, a starting-time difference between the first audio signal and the second audio signal for audio synchronization.
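The correlation operation at the heart of the method can be sketched as a brute-force lag search that maximizes the cross-correlation between the two recordings; pattern extraction and the second-pattern refinement step are omitted, and the function name and search window are illustrative.

```python
def best_lag(a, b, max_lag):
    """Return the lag of b relative to a that maximizes their cross-correlation.

    A positive result means b is delayed: b[i + lag] best matches a[i].
    """
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i in range(len(a)):
            j = i + lag
            if 0 <= j < len(b):
                score += a[i] * b[j]
        if score > best_score:
            best, best_score = lag, score
    return best
```

Dividing the resulting lag by the sampling rate gives a starting-time difference in seconds; production code would use an FFT-based correlation instead of this O(n·lags) loop.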
Abstract:
An active noise control system and an associated auto-selection method for modeling a secondary path of the active noise control system are provided. The method includes the steps of: receiving a reference signal; filtering the reference signal with a secondary-path estimation filter to obtain a filtered reference signal, wherein the secondary-path estimation filter is determined from a plurality of candidate secondary-path estimation filters; filtering the reference signal with an adaptive filter to provide a compensation signal; sensing a residual noise signal at a listening position of the active noise control system; and adapting filter coefficients of the adaptive filter according to the residual noise signal and the filtered reference signal.
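Adapting the filter against a reference that has been passed through a secondary-path estimate is the classic filtered-x LMS (FxLMS) structure. A minimal sketch follows, with the secondary path and its selected estimation filter reduced to scalar gains `s` and `s_hat`; the candidate-selection step and realistic path responses are omitted.

```python
def fxlms(x, d, s, s_hat, taps=4, mu=0.01):
    """Filtered-x LMS: adapt w so the compensation signal, after the
    secondary path s, cancels the disturbance d at the listening position.
    s and s_hat are scalar stand-ins for the path and its estimation filter.
    """
    w = [0.0] * taps
    residuals = []
    for n in range(len(x)):
        xv = [x[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, xv))        # compensation signal
        e = d[n] - s * y                                 # sensed residual noise
        fx = [s_hat * xi for xi in xv]                   # reference filtered by s_hat
        w = [wi + mu * e * fi for wi, fi in zip(w, fx)]  # coefficient adaptation
        residuals.append(e)
    return residuals
```

The auto-selection step in the abstract would choose `s_hat` from several candidates; a mismatched `s_hat` slows or destabilizes exactly this update, which is why the selection matters.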
Abstract:
The invention provides a system for speech keyword detection and an associated method. The system includes a speech keyword detector, an activity predictor and a decision maker. The activity predictor obtains sensor data provided by a plurality of sensors, and processes the sensor data to provide an activity prediction result indicating a probability of whether a user is about to give a voice keyword. The decision maker processes the activity prediction result and a preliminary keyword detection result of the speech keyword detector to provide a keyword detection result.
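One simple way a decision maker can combine the two inputs is to let the activity probability modulate the detection threshold: when the sensors suggest the user is about to speak, a weaker preliminary score suffices. This is a toy fusion rule; the 0.3 weighting and the threshold form are arbitrary illustrative choices, not the claimed decision logic.

```python
def fuse_decision(keyword_score, activity_prob, base_threshold=0.5):
    """Combine the preliminary keyword score with the activity prediction.

    activity_prob > 0.5 (user likely about to speak) lowers the threshold;
    activity_prob < 0.5 raises it, suppressing false triggers.
    """
    threshold = base_threshold + 0.3 * (0.5 - activity_prob)
    return keyword_score >= threshold
```

The same borderline score of 0.45 is accepted when motion sensors indicate the phone is being raised to the mouth (high activity probability) but rejected when it lies still in a pocket.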
Abstract:
A voice wakeup method is applied to wake up an electronic apparatus. The voice wakeup method includes executing a speaker identification function to analyze user voice and acquire a predefined identification of the user voice, executing a voiceprint extraction function to acquire a voiceprint segment of the user voice, executing an on-device training function via the voiceprint segment to generate an updated parameter, and utilizing the updated parameter to calibrate a speaker verification model, so that the speaker verification model is used to analyze a wakeup sentence and decide whether to wake up the electronic apparatus.
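The on-device training and calibration loop can be illustrated with voiceprints as plain vectors: the "updated parameter" is taken here to be an exponentially averaged enrollment vector, and verification is a cosine-similarity threshold. Both choices, and all numbers, are assumptions for illustration, since the abstract does not specify the model.

```python
import math

def update_model(model, voiceprint, rate=0.1):
    """On-device training sketch: exponential moving average of the
    enrolled voiceprint (the 'updated parameter' calibrating the model)."""
    return [(1 - rate) * m + rate * v for m, v in zip(model, voiceprint)]

def verify(model, utterance_print, threshold=0.8):
    """Cosine similarity against the calibrated model decides wakeup."""
    dot = sum(m * u for m, u in zip(model, utterance_print))
    norm = math.sqrt(sum(m * m for m in model)) * math.sqrt(sum(u * u for u in utterance_print))
    return dot / (norm + 1e-12) >= threshold
```

Each accepted wakeup sentence could feed `update_model`, gradually adapting the speaker verification model to the enrolled user's voice.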