Abstract:
Spatial auditory cues are produced while a user searches a database for stored information. The spatial auditory cues assist the user in quickly locating stored information by producing sounds that are perceived at specific physical locations in space around the user as the search proceeds. Each location may be associated with different information. Thus, using the techniques disclosed herein, a user can more easily recall stored information by remembering the locations of sound produced by particular spatial auditory cues. The spatial auditory cues may be used in conjunction with a visual search interface. A method of producing auditory cues includes receiving a search action at a user interface included in a device, translating the search action into a spatial auditory cue corresponding to a specific location within a space, and rendering the spatial auditory cue as an audio output signal.
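As a rough illustration of the rendering step, a spatial auditory cue could be produced by mapping each search result's index to an azimuth around the listener and applying constant-power stereo panning. The function names, the 180-degree arc, the tone, and the sample rate below are assumptions made for illustration; a full implementation would more likely use HRTF-based spatialization.

    import math

    def search_index_to_azimuth(index, total_items):
        # Hypothetical mapping: spread search results across a 180-degree arc
        # in front of the listener, from hard left (-90) to hard right (+90).
        return -90.0 + 180.0 * index / max(total_items - 1, 1)

    def render_cue(tone, azimuth_deg):
        # Constant-power panning as a crude stand-in for full spatial rendering.
        pan = (azimuth_deg + 90.0) / 180.0          # 0 = hard left, 1 = hard right
        left_gain = math.cos(pan * math.pi / 2)
        right_gain = math.sin(pan * math.pi / 2)
        return [(s * left_gain, s * right_gain) for s in tone]

    # Example: the cue for the 3rd of 10 results is heard toward the listener's left.
    tone = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(400)]   # 50 ms at 8 kHz
    stereo_cue = render_cue(tone, search_index_to_azimuth(2, 10))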
Abstract:
Mid-side (M-S) encoded audio is reproduced by a device that includes a multi-channel digital-to-analog converter (DAC). The DAC has a first channel input receiving a digitized mid audio signal, a first channel output providing an analog mid audio signal, a second channel input receiving a digitized side audio signal and a second channel output providing an analog side audio signal. The DAC may also include a third channel for receiving a digitized second side audio signal. The second side audio signal is phase inverted. The device may be a handheld wireless communication device, such as a cellular phone, and may also include transducers for outputting M-S encoded sound in response to the analog mid and side audio signals.
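For reference, conventional mid-side encoding and decoding can be sketched as follows (numpy assumed); the third, phase-inverted side signal mentioned above is simply the negated side channel. This is a generic M-S sketch, not the patented DAC circuitry itself.

    import numpy as np

    def ms_encode(left, right):
        # Standard mid-side encoding; the mid and side signals would feed
        # separate DAC channel inputs.
        mid = 0.5 * (left + right)
        side = 0.5 * (left - right)
        return mid, side

    def ms_decode(mid, side):
        # Recover left/right for conventional stereo playback.
        return mid + side, mid - side

    left = np.random.randn(1024)
    right = np.random.randn(1024)
    mid, side = ms_encode(left, right)
    side_inverted = -side                      # phase-inverted second side signal
    assert np.allclose(ms_decode(mid, side)[0], left)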
Abstract:
A mobile audio device (for example, a cellular telephone, personal digital audio player, or MP3 player) performs Audio Dynamic Range Control (ADRC) (125) and Automatic Volume Control (AVC) (126) to increase the volume of sound (127) emitted from a speaker of the mobile audio device so that faint passages of the audio will be more audible. This amplification of faint passages occurs without overly amplifying other, louder passages and without substantial distortion due to clipping. Multi-Microphone Active Noise Cancellation (MMANC) (133) functionality is used, for example, to remove background noise from audio information picked up by microphones of the mobile audio device. The noise-canceled audio may then be communicated from the device. The MMANC functionality generates a noise reference signal as an intermediate signal. The intermediate signal is conditioned and then used as a reference by the AVC process. The gain applied during the AVC process is a function of the noise reference signal.
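A minimal sketch of how an ADRC stage and a noise-dependent AVC gain might interact is shown below (numpy assumed). The thresholds, ratio, and gain curve are purely illustrative assumptions; the abstract does not specify them.

    import numpy as np

    def avc_gain_db(noise_reference, max_boost_db=12.0, noise_floor_db=-60.0):
        # Hypothetical rule: louder background noise (taken from the MMANC noise
        # reference) yields more playback gain, capped to limit clipping.
        noise_db = 10 * np.log10(np.mean(noise_reference ** 2) + 1e-12)
        return float(np.clip((noise_db - noise_floor_db) * 0.5, 0.0, max_boost_db))

    def adrc(frame, threshold_db=-30.0, ratio=3.0):
        # Simple upward compression of faint passages: frames below the threshold
        # are boosted toward it, while louder frames pass through unchanged.
        level_db = 10 * np.log10(np.mean(frame ** 2) + 1e-12)
        gain_db = (threshold_db - level_db) * (1.0 - 1.0 / ratio) if level_db < threshold_db else 0.0
        return frame * (10 ** (gain_db / 20.0))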
Abstract:
Psychoacoustic Bass Enhancement (PBE) is integrated with one or more other audio processing techniques, such as active noise cancellation (ANC), and/or receive voice enhancement (RVE), leveraging each technique to achieve improved audio output. This approach can be advantageous for improving the performance of headset speakers, which often lack adequate low-frequency response to effectively support ANC.
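PBE is commonly realized by synthesizing harmonics of the low band that a small speaker cannot reproduce, so the listener perceives the missing fundamental. The sketch below (numpy and scipy assumed, with illustrative cutoff and gain values) shows only that basic idea, not the integrated PBE/ANC/RVE system described here.

    import numpy as np
    from scipy.signal import butter, lfilter

    def pbe(x, fs=48000, cutoff=200.0, harmonic_gain=0.5):
        # Minimal PBE sketch: isolate the low band the speaker cannot reproduce,
        # generate harmonics of it with a nonlinearity, then add those harmonics
        # to the high-passed signal so the bass is still perceived.
        b_lo, a_lo = butter(2, cutoff / (fs / 2), btype='low')
        b_hi, a_hi = butter(2, cutoff / (fs / 2), btype='high')
        low = lfilter(b_lo, a_lo, x)
        harmonics = np.abs(low)            # rectification generates harmonics of the low band
        harmonics -= np.mean(harmonics)    # remove the DC offset introduced by rectification
        return lfilter(b_hi, a_hi, x) + harmonic_gain * harmonics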
Abstract:
A method for providing an interface to a processing engine that utilizes intelligent audio mixing techniques may include receiving a request to change a perceptual location of an audio source within an audio mixture from a current perceptual location relative to a listener to a new perceptual location relative to the listener. The audio mixture may include at least two audio sources. The method may also include generating one or more control signals that are configured to cause the processing engine to change the perceptual location of the audio source from the current perceptual location to the new perceptual location via separate foreground processing and background processing. The method may also include providing the one or more control signals to the processing engine.
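A hypothetical sketch of such an interface follows; the field names, the split into one foreground and one background control message, and the distance-to-gain rule are assumptions made for illustration rather than the actual control-signal format.

    from dataclasses import dataclass

    @dataclass
    class RelocationRequest:
        source_id: str
        current_azimuth: float   # degrees, relative to the listener
        new_azimuth: float
        new_distance: float      # hypothetical distance parameter

    def make_control_signals(request):
        # Hypothetical control-signal layout: one message steers the foreground
        # (direct-path) processing, the other the background (diffuse) processing.
        foreground = {"source": request.source_id,
                      "pan_azimuth": request.new_azimuth,
                      "gain": 1.0 / max(request.new_distance, 1.0)}
        background = {"source": request.source_id,
                      "reverb_send": min(request.new_distance / 10.0, 1.0)}
        return foreground, background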
Abstract:
An electronic device for generating a masking signal is described. The electronic device includes a plurality of microphones and a speaker. The electronic device also includes a processor and executable instructions stored in memory that is in electronic communication with the processor. The electronic device obtains a plurality of audio signals from the plurality of microphones. The electronic device also obtains an ambience signal based on the plurality of audio signals. The electronic device further determines an ambience feature based on the ambience signal. Additionally, the electronic device obtains a voice signal based on the plurality of audio signals. The electronic device also determines a voice feature based on the voice signal. The electronic device additionally generates a masking signal based on the voice feature and the ambience feature. The electronic device further outputs the masking signal using the speaker.
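One way such a masker could be derived is sketched below with numpy: per-band levels serve as the voice and ambience features, and shaped noise is emitted only in bands where the voice stands out above the ambience. The band edges and shaping rule are illustrative assumptions, not details from the abstract.

    import numpy as np

    def band_levels(x, fs, edges=(300, 1000, 3000)):
        # Crude per-band RMS levels via FFT binning; stands in for the features.
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        bands = np.split(spectrum, np.searchsorted(freqs, edges))
        return np.array([np.sqrt(np.mean(b) + 1e-12) for b in bands])

    def masking_signal(voice, ambience, fs, length, edges=(300, 1000, 3000)):
        # Hypothetical rule: emit noise only in bands where the voice exceeds
        # the ambience, shaping white noise by the per-band deficit.
        deficit = np.maximum(band_levels(voice, fs, edges) - band_levels(ambience, fs, edges), 0.0)
        spec = np.fft.rfft(np.random.randn(length))
        freqs = np.fft.rfftfreq(length, 1.0 / fs)
        spec *= deficit[np.searchsorted(edges, freqs)]   # which band each bin falls into
        return np.fft.irfft(spec, n=length)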
Abstract:
A method for blind source separation based spatial filtering on an electronic device includes obtaining a first source audio signal and a second source audio signal. The method also includes applying a blind source separation filter set to the first source audio signal and to the second source audio signal to produce a spatially filtered first audio signal and a spatially filtered second audio signal. The method further includes playing the spatially filtered first audio signal over a first speaker to produce an acoustic spatially filtered first audio signal and playing the spatially filtered second audio signal over a second speaker to produce an acoustic spatially filtered second audio signal. The acoustic spatially filtered first audio signal and the acoustic spatially filtered second audio signal produce an isolated acoustic first source audio signal at a first position and an isolated acoustic second source audio signal at a second position.
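Loosely, the playback side might look like the following numpy sketch, in which a 2x2 set of FIR filters (a stand-in for a learned blind source separation filter set) is applied to the two source signals to form the two speaker feeds; the toy filter taps are invented for illustration.

    import numpy as np

    def apply_bss_filter_set(src1, src2, W):
        # W is a 2x2 matrix of impulse responses obtained from blind source
        # separation; each speaker feed is a filtered mix of both sources.
        n = len(src1)
        spk1 = np.convolve(src1, W[0][0])[:n] + np.convolve(src2, W[0][1])[:n]
        spk2 = np.convolve(src1, W[1][0])[:n] + np.convolve(src2, W[1][1])[:n]
        return spk1, spk2

    # Toy filter set that would route each source toward one listening position.
    W = [[np.array([1.0]), np.array([-0.3])],
         [np.array([-0.3]), np.array([1.0])]]
    spk1, spk2 = apply_bss_filter_set(np.random.randn(1000), np.random.randn(1000), W)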
Abstract:
Systems, methods, apparatus, and machine-readable media for orientation-sensitive selection and/or preservation of a recording direction using a multi-microphone setup are described.
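The abstract gives no algorithmic detail, but one plausible reading, sketched below with invented function names and microphone geometry, is to compensate the desired recording direction for the device rotation reported by an orientation sensor and then select the microphone pair best aligned with that compensated direction.

    def select_recording_direction(target_azimuth_world, device_rotation_deg):
        # Hypothetical: keep the recording direction fixed in world coordinates by
        # compensating for the device rotation reported by its orientation sensor.
        return (target_azimuth_world - device_rotation_deg) % 360.0

    def pick_mic_pair(beam_azimuth_device, mic_pairs):
        # Choose the microphone pair whose axis is angularly closest to the beam.
        return min(mic_pairs,
                   key=lambda p: abs(((p["axis"] - beam_azimuth_device + 180) % 360) - 180))

    pairs = [{"name": "front-back", "axis": 0.0}, {"name": "left-right", "axis": 90.0}]
    beam = select_recording_direction(target_azimuth_world=45.0, device_rotation_deg=30.0)
    best = pick_mic_pair(beam, pairs)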
Abstract:
The original loudness level of an audio signal is preserved on a mobile device while keeping sound quality as high as possible and protecting the loudspeaker used in the mobile device. The loudness of an audio (e.g., speech) signal may be maximized while controlling the excursion of the loudspeaker diaphragm (in a mobile device) so that it stays within the allowed range. In an implementation, the peak excursion is predicted (e.g., estimated) using the input signal and an excursion transfer function. The signal may then be modified to limit the excursion and maximize loudness.
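A minimal sketch of the excursion-limiting idea with numpy, assuming the excursion transfer function is available as an impulse response and that the mechanical limit x_max is known; the frame-based gain rule is illustrative, and a production limiter would also smooth the gain over time.

    import numpy as np

    def predict_peak_excursion(frame, excursion_ir):
        # Convolve the candidate output frame with the excursion transfer function
        # (impulse response) to estimate peak diaphragm displacement.
        return np.max(np.abs(np.convolve(frame, excursion_ir, mode='same')))

    def limit_for_excursion(frame, excursion_ir, x_max_mm=0.4):
        # Scale the frame down just enough to keep the predicted peak excursion
        # within the allowed range, preserving as much loudness as possible.
        peak = predict_peak_excursion(frame, excursion_ir)
        gain = min(1.0, x_max_mm / peak) if peak > 0 else 1.0
        return gain * frame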