Abstract:
Embodiments of a client device and a method for audio or video conferencing are described. An embodiment includes an offset detecting unit, a configuring unit, an estimator and an output unit. The offset detecting unit detects an offset of speech input to the client device. The configuring unit determines a voice latency from the client device to every far end. Based on the voice latency, the estimator estimates the time when a user at each far end perceives the offset. The output unit outputs a perceivable signal indicating that a user at the far end has perceived the offset, based on the time estimated for that far end. The perceivable signal helps the parties avoid speech collisions.
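As a concrete illustration of the estimation step, a minimal Python sketch is given below. It assumes the offset timestamp and the per-far-end one-way voice latencies are already available; all names and values are illustrative, not taken from the embodiments.

```python
import time

def estimate_perception_times(offset_time, latencies_ms):
    """For each far end, estimate when its user perceives the speech offset.

    offset_time  -- local timestamp (seconds) of the detected speech offset
    latencies_ms -- dict mapping a far-end id to its one-way voice latency (ms)
    """
    return {end: offset_time + ms / 1000.0 for end, ms in latencies_ms.items()}

# Hypothetical usage: the output unit could emit its perceivable signal once
# the latest far end is estimated to have perceived the offset, so the local
# user knows a reply may now arrive and speech collisions are avoided.
offset_time = time.time()
perceive_at = estimate_perception_times(offset_time, {"siteA": 180.0, "siteB": 320.0})
signal_at = max(perceive_at.values())
```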
Abstract:
A method of speaker identification for audio content in a format based on multiple channels is disclosed. The method comprises: extracting, from a first audio clip in the format, a plurality of spatial acoustic features across the multiple channels and location information, the first audio clip containing voices from a speaker (S201); constructing a first model for the speaker based on the spatial acoustic features and the location information, the first model indicating a characteristic of the voices from the speaker (S202); and identifying whether the audio content contains voices from the speaker based on the first model (S203). A corresponding system and computer program product are also disclosed.
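The abstract does not fix a particular model; the sketch below shows one plausible realization in Python using a Gaussian mixture over per-frame feature vectors (scikit-learn's GaussianMixture). The feature contents and the decision threshold are assumptions for illustration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def build_speaker_model(features):
    """Fit a mixture model to per-frame features for one speaker (S202).

    features -- array of shape (n_frames, n_dims), e.g. spatial acoustic
                features across channels concatenated with location info
    """
    model = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
    model.fit(features)
    return model

def contains_speaker(model, features, threshold=-12.0):
    """Identify whether content contains the speaker's voice (S203) by
    thresholding the average log-likelihood under the speaker model."""
    return model.score(features) > threshold

# Toy run with random stand-in features (S201 would extract real ones).
rng = np.random.default_rng(0)
model = build_speaker_model(rng.normal(size=(500, 6)))
print(contains_speaker(model, rng.normal(size=(100, 6))))
```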
Abstract:
The present document relates to audio signal processing in general and, in particular, to the concealment of artifacts that result from the loss of audio packets during transmission over a packet-switched network. A method (200) for concealing one or more consecutive lost packets is described. A lost packet is a packet which is deemed lost by a transform-based audio decoder. Each of the one or more lost packets comprises a set of transform coefficients, which the transform-based audio decoder uses to generate a corresponding frame of a time domain audio signal. The method (200) comprises determining (205), for a current lost packet of the one or more lost packets, the number of preceding lost packets from the one or more lost packets; the determined number is referred to as the loss position. Furthermore, the method comprises determining a packet loss concealment (PLC) scheme based on the loss position of the current packet, and determining (204, 207, 208) an estimate of the current frame of the audio signal using the determined PLC scheme; the current frame corresponds to the current lost packet.
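A minimal sketch of the loss-position logic in Python follows. The scheme names and the burst-length cut-offs are illustrative assumptions; the document only specifies that the PLC scheme is selected from the loss position.

```python
def loss_position(is_lost, current_index):
    """Number of lost packets immediately preceding the current lost
    packet (0 for the first packet of a loss burst)."""
    position = 0
    i = current_index - 1
    while i >= 0 and is_lost[i]:
        position += 1
        i -= 1
    return position

def select_plc_scheme(position):
    """Map the loss position to a concealment scheme (names are made up)."""
    if position == 0:
        return "reuse_previous_coefficients"   # first loss in a burst
    if position < 3:
        return "extrapolate_and_attenuate"     # short burst
    return "fade_to_silence"                   # long burst

is_lost = [False, True, True, True, False]
for i, lost in enumerate(is_lost):
    if lost:
        print(i, select_plc_scheme(loss_position(is_lost, i)))
```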
Abstract:
Various disclosed implementations involve searching the results of an automatic speech recognition (ASR) process, such as an ASR process that has been performed on a recording of a monologue or of a conference, such as a teleconference or a video conference. An initial search query, including at least one search word, may be received. The initial search query may be analyzed according to phonetic similarity and semantic similarity. An expanded search query may be determined according to the phonetic similarity, the semantic similarity, or both. A search of the speech recognition results data may be performed according to the expanded search query. Some aspects of this disclosure involve playing back audio data that corresponds with such search results.
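The sketch below illustrates one way such query expansion could look in Python, using a classic Soundex code as a stand-in for phonetic similarity and a toy synonym table as a stand-in for semantic similarity; neither is claimed to be the disclosed implementation.

```python
def soundex(word):
    """Classic four-character Soundex code (stand-in phonetic encoder)."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out, last = word[0].upper(), codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != last:
            out += code
        last = code
    return (out + "000")[:4]

SYNONYMS = {"meeting": ["conference", "call"]}  # toy semantic relations

def expand_query(words, asr_vocabulary):
    """Expand a query with semantically and phonetically similar words."""
    expanded = set(words)
    for w in words:
        expanded.update(SYNONYMS.get(w, []))                                    # semantic
        expanded.update(v for v in asr_vocabulary if soundex(v) == soundex(w))  # phonetic
    return expanded

print(expand_query(["meeting"], ["meting", "mating", "budget"]))
```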
Abstract:
Various disclosed implementations involve processing and/or playback of a recording of a conference involving a plurality of conference participants. Some implementations disclosed herein involve receiving audio data corresponding to a recording of at least one conference involving a plurality of conference participants. The audio data may include conference participant speech data from multiple endpoints, recorded separately, and/or conference participant speech data from a single endpoint corresponding to multiple conference participants and including spatial information for each of those conference participants. A search of the audio data may be based on one or more search parameters, and may be a concurrent search for multiple features of the audio data. Instances of conference participant speech may be rendered to at least two different virtual conference participant positions of a virtual acoustic space.
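For the rendering step, the sketch below places mono speech segments at different virtual azimuths using constant-power stereo panning, a simple stand-in for rendering into a virtual acoustic space; the segment layout and angles are illustrative.

```python
import numpy as np

def render_to_virtual_positions(segments, azimuths_deg):
    """Mix mono speech segments into a stereo scene, one virtual
    position per participant, via constant-power panning.

    segments     -- dict: participant id -> (start_sample, mono samples)
    azimuths_deg -- dict: participant id -> azimuth, -90 (left) .. +90 (right)
    """
    length = max(start + len(x) for start, x in segments.values())
    scene = np.zeros((length, 2))
    for pid, (start, x) in segments.items():
        pan = (azimuths_deg[pid] + 90.0) / 180.0          # 0 .. 1
        scene[start:start + len(x), 0] += np.cos(pan * np.pi / 2) * x
        scene[start:start + len(x), 1] += np.sin(pan * np.pi / 2) * x
    return scene

# Two participants rendered 40 degrees left and 40 degrees right.
seg = {"alice": (0, np.ones(100)), "bob": (50, np.ones(100))}
scene = render_to_virtual_positions(seg, {"alice": -40.0, "bob": 40.0})
```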
Abstract:
Embodiments are described for harmonicity estimation, audio classification, pitch determination and noise estimation. Measuring the harmonicity of an audio signal includes calculating a log amplitude spectrum of the audio signal. A first spectrum is derived by calculating each of its components as a sum of components of the log amplitude spectrum at frequencies that, on a linear frequency scale, are odd multiples of that component's frequency. A second spectrum is derived by calculating each of its components as a sum of components of the log amplitude spectrum at frequencies that, on a linear frequency scale, are even multiples of that component's frequency. A difference spectrum is derived by subtracting the first spectrum from the second spectrum. A measure of harmonicity is generated as a monotonically increasing function of the maximum component of the difference spectrum within a predetermined frequency range.
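The procedure translates almost directly into code. The Python sketch below follows the abstract step by step; the sigmoid used as the monotonically increasing function and the 70-400 Hz search range are assumptions for illustration.

```python
import numpy as np

def harmonicity(frame, sr, fmin=70.0, fmax=400.0):
    """Harmonicity measure from log-spectrum sums over odd vs. even multiples."""
    spec = np.log(np.abs(np.fft.rfft(frame)) + 1e-12)  # log amplitude spectrum
    n = len(spec)
    first = np.zeros(n)   # sums over odd multiples of each bin's frequency
    second = np.zeros(n)  # sums over even multiples of each bin's frequency
    for k in range(1, n):
        first[k] = spec[np.arange(k, n, 2 * k)].sum()       # k, 3k, 5k, ...
        second[k] = spec[np.arange(2 * k, n, 2 * k)].sum()  # 2k, 4k, 6k, ...
    diff = second - first
    lo = int(fmin * len(frame) / sr)
    hi = int(fmax * len(frame) / sr) + 1
    peak = diff[lo:hi].max()               # maximum within the search range
    return 1.0 / (1.0 + np.exp(-peak))     # monotonically increasing mapping

sr, n = 8000, 1024
t = np.arange(n) / sr
print(harmonicity(np.sin(2 * np.pi * 250 * t), sr))  # harmonic input -> near 1
```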
Abstract:
This disclosure falls into the field of voice communication systems; more specifically, it relates to voice quality estimation in a packet-based voice communication system. In particular, the disclosure provides a method and device for reducing the prediction error of a voice quality estimate by considering the content of lost packets. Furthermore, this disclosure provides a method and device which use a voice quality estimating algorithm to calculate the voice quality estimate based on an input that is switchable between a first and a second input mode.
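The sketch below illustrates, in Python, why considering the content of lost packets can reduce the prediction error: losses during active speech are weighted more heavily than losses during silence. All constants are invented for illustration and do not come from the disclosure.

```python
def estimate_voice_quality(loss_events, mode="content_aware"):
    """Toy MOS-like estimate from a list of (packet_lost, contains_speech).

    In the first input mode every lost packet is penalized equally; in the
    second, content-aware mode the penalty depends on what the packet held.
    """
    base_mos, penalty = 4.4, 0.0
    for lost, contains_speech in loss_events:
        if not lost:
            continue
        if mode == "content_aware":
            penalty += 0.20 if contains_speech else 0.02
        else:
            penalty += 0.10
    return max(1.0, base_mos - penalty)

events = [(True, True), (True, False), (False, True)]
print(estimate_voice_quality(events, "content_aware"))  # 4.18
print(estimate_voice_quality(events, "plain"))          # 4.20
```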
Abstract:
The present application relates to a packet loss concealment apparatus and method, and to an audio processing system. According to an embodiment, the packet loss concealment apparatus conceals packet losses in a stream of audio packets, each audio packet comprising at least one audio frame in a transmission format that comprises at least one monaural component and at least one spatial component. The packet loss concealment apparatus may comprise a first concealment unit for creating the at least one monaural component for a lost frame in a lost packet and a second concealment unit for creating the at least one spatial component for the lost frame. According to the embodiment, spatial artifacts such as an incorrect angle and incorrect diffuseness may be avoided as far as possible in PLC for multi-channel spatial or sound-field encoded audio signals.
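A minimal Python sketch of the two concealment units is shown below. The frame structure, the attenuation factor, and the hold-last-value spatial strategy are illustrative assumptions.

```python
import copy

def conceal_lost_frame(last_good_frame):
    """Build a replacement frame from the last correctly received frame.

    last_good_frame -- dict with a 'monaural' component (e.g. downmix
                       samples or transform coefficients) and a 'spatial'
                       component (e.g. direction and diffuseness parameters)
    """
    concealed = {}
    # First concealment unit: repeat the monaural part with attenuation.
    concealed["monaural"] = [0.7 * x for x in last_good_frame["monaural"]]
    # Second concealment unit: hold the spatial parameters, avoiding
    # artifacts such as a jumping angle or wrong diffuseness.
    concealed["spatial"] = copy.deepcopy(last_good_frame["spatial"])
    return concealed

last_good = {"monaural": [0.10, -0.20, 0.05],
             "spatial": {"azimuth_deg": 30.0, "diffuseness": 0.2}}
print(conceal_lost_frame(last_good))
```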
Abstract:
An audio processing apparatus and an audio processing method are described. In one embodiment, the audio processing apparatus includes an audio masker separator for separating, from a first audio signal, audio material comprising a sound other than stationary noise and semantically meaningful utterances, as an audio masker candidate. The apparatus also includes a first context analyzer for obtaining statistics regarding contextual information of detected audio masker candidates, and a masker library builder for building a masker library, or updating an existing one, by adding, based on the statistics, at least one audio masker candidate as an audio masker to the library. Audio maskers in the masker library are intended to be inserted at a target position in a second audio signal to conceal defects in that signal.
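As a sketch of the masker library builder, the Python snippet below promotes a candidate into the library once its context label has been observed often enough; the labels, the count-based statistic, and the threshold are all illustrative assumptions.

```python
from collections import Counter

def build_masker_library(candidates, min_count=3):
    """Build a masker library from (context_label, clip) candidate pairs.

    Each clip is a separated sound that is neither stationary noise nor a
    semantically meaningful utterance (e.g. a cough or keyboard click).
    """
    stats = Counter(label for label, _ in candidates)  # contextual statistics
    library = {}
    for label, clip in candidates:
        if stats[label] >= min_count and label not in library:
            library[label] = clip   # clip can later mask defects in a signal
    return library

cands = [("cough", b"c1"), ("cough", b"c2"), ("cough", b"c3"), ("click", b"k1")]
print(sorted(build_masker_library(cands)))  # ['cough']; 'click' is too rare
```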
Abstract:
Example embodiments disclosed herein relate to spatial congruency adjustment. A method for adjusting spatial congruency in a video conference is disclosed. The method includes: detecting the spatial congruency between a visual scene captured by a video endpoint device and an auditory scene captured by an audio endpoint device that is positioned in relation to the video endpoint device, the spatial congruency being a degree of alignment between the auditory scene and the visual scene; comparing the detected spatial congruency with a predefined threshold; and, in response to the detected spatial congruency being below the threshold, adjusting the spatial congruency. A corresponding system and computer program products are also disclosed.
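A minimal Python sketch of the detect-compare-adjust loop follows. It measures (in)congruency as the mean azimuth mismatch between where talkers are heard and where they are seen, and rotates the auditory scene when the mismatch is too large; representing congruency by angular error and correcting by a global rotation are assumptions of this sketch.

```python
def adjust_spatial_congruency(audio_azimuths, video_azimuths, threshold_deg=10.0):
    """Detect auditory/visual misalignment and adjust the auditory scene.

    audio_azimuths -- per-talker azimuths in the captured auditory scene
    video_azimuths -- per-talker azimuths in the captured visual scene
    """
    errors = [a - v for a, v in zip(audio_azimuths, video_azimuths)]
    mean_error = sum(errors) / len(errors)
    if abs(mean_error) <= threshold_deg:   # scenes sufficiently congruent
        return audio_azimuths
    # Adjustment step: rotate every auditory source by the mean mismatch.
    return [a - mean_error for a in audio_azimuths]

audio = [25.0, -10.0]   # where the talkers are heard
video = [10.0, -25.0]   # where the talkers are seen
print(adjust_spatial_congruency(audio, video))  # [10.0, -25.0]
```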