Abstract:
A method for processing an audio signal executed by an audio echo suppression apparatus, the method including: receiving, at the audio echo suppression apparatus, the audio signal; generating a subband signal from the audio signal; delaying, at the audio echo suppression apparatus, the subband signal with a plurality of different delay values to form a plurality of time lag signals; multiplying, at the audio echo suppression apparatus, the plurality of time lag signals with first respective filter coefficients to generate a first signal; calculating, at the audio echo suppression apparatus, a complex product between pairs of the plurality of time lag signals to generate complex product signals; multiplying, at the audio echo suppression apparatus, each of a real part and imaginary part of the complex product signals with second respective filter coefficients, and taking a sum thereof, to generate a second signal; and estimating an echo subband signal from the first signal and the second signal.
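A minimal Python/NumPy sketch of the estimation steps described above, for a single subband. The number of delays, the filter-coefficient shapes, the conjugation used in the pairwise complex products and the final combination of the two signals are illustrative assumptions, not details given in the abstract.

```python
import numpy as np

def estimate_echo_subband(x_hist, delays, h1, h2_re, h2_im):
    """Estimate one echo subband sample from a complex subband history
    x_hist (most recent sample at index 0)."""
    # Time-lag signals: the subband signal delayed with different delay values.
    lags = np.array([x_hist[d] for d in delays])

    # First signal: time-lag signals multiplied with the first filter
    # coefficients and summed (linear echo path).
    first = np.sum(h1 * lags)

    # Complex products between pairs of time-lag signals (here every
    # unordered pair, with one factor conjugated -- an assumption).
    prods = np.array([lags[i] * np.conj(lags[j])
                      for i in range(len(lags))
                      for j in range(i + 1, len(lags))])

    # Second signal: real and imaginary parts multiplied with the second
    # filter coefficients and summed.
    second = np.sum(h2_re * prods.real) + np.sum(h2_im * prods.imag)

    # Echo subband estimate formed from the first and second signals
    # (here simply their sum).
    return first + second
```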
Abstract:
A method and system for determining a direction between a detection point, e.g. at a camera in video conference equipment, and an acoustic source, e.g. an active speaker participating in a video conference. The method comprises receiving acoustic signals originating from the acoustic source at a first and a second pair of microphone elements, arranged symmetrically about the detection point; calculating a first cross correlation of signals from the first pair of microphone elements; and calculating a second cross correlation of signals from the second pair of microphone elements. The direction is then calculated based on both the first and second cross correlation signals, e.g. by convolution. Further symmetrically arranged pairs of microphone elements may also be used.
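A hedged NumPy sketch of the correlate-and-combine step. The array geometry, the mapping from the combined peak to a physical angle and the far-field approximation are illustrative choices, not details taken from the abstract.

```python
import numpy as np

def estimate_direction(a1, a2, b1, b2, fs, spacing, c=343.0):
    """Estimate the direction from the detection point to the source using
    two microphone pairs (a1, a2) and (b1, b2) arranged symmetrically
    about the detection point. fs: sample rate [Hz], spacing: distance
    between the elements of a pair [m], c: speed of sound [m/s]."""
    a1, a2, b1, b2 = (np.asarray(s, dtype=float) for s in (a1, a2, b1, b2))
    n = len(a1)

    # Cross correlation of the signals within each pair (zero lag at n - 1).
    cc1 = np.correlate(a1, a2, mode="full")
    cc2 = np.correlate(b1, b2, mode="full")

    # Combine the two cross-correlation signals, e.g. by convolution; the
    # peak of the full convolution lies near the sum of the two pair lags.
    combined = np.convolve(cc1, cc2, mode="full")
    zero_lag = 2 * (n - 1)
    lag_sum = np.argmax(combined) - zero_lag

    # With symmetric pairs the two lags are roughly equal, so use their mean,
    # then map the delay to an angle with the usual far-field approximation.
    delay = (lag_sum / 2.0) / fs
    sin_theta = np.clip(delay * c / spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))
```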
Abstract:
The present invention discloses a method for removing the distracting illuminative flickering caused by discrepancies between the line frequency, the screen updating rate and the camera exposure time in video conferencing and video recording, by primarily adjusting the screen updating rate and secondarily the camera exposure time to achieve a flicker-free experience of the video captured by the camera.
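A small Python sketch of the two-step adjustment, assuming the equipment can choose from discrete candidate screen updating rates and exposure times; the candidate lists and the tolerance are hypothetical.

```python
def flicker_free_settings(exposure_s, refresh_rates_hz, exposures_s, tol=1e-6):
    """Primarily pick a screen updating rate whose period fits a whole number
    of times into the current camera exposure; only if none fits, pick a new
    exposure time instead."""
    def whole_cycles(exposure, rate):
        cycles = exposure * rate
        return cycles >= 1 and abs(cycles - round(cycles)) < tol

    for rate in refresh_rates_hz:                 # primary: screen updating rate
        if whole_cycles(exposure_s, rate):
            return {"refresh_rate": rate, "exposure": exposure_s}
    for exp in exposures_s:                       # secondary: camera exposure time
        for rate in refresh_rates_hz:
            if whole_cycles(exp, rate):
                return {"refresh_rate": rate, "exposure": exp}
    return None

# Example: a 20 ms exposure covers exactly one 50 Hz refresh period,
# so the 50 Hz updating rate is selected and the exposure is kept.
print(flicker_free_settings(0.020, [60.0, 50.0], [0.010, 0.030]))
```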
Abstract:
The present invention introduces a novel system and novel methods for allowing an individual using an IP communication device to contact an individual in an external organization using a single identifier, such as a telephone number, without the caller having any knowledge of the IP capabilities of the called individual. This is achieved by communicating with a registry device, e.g. a Global Address Database (GAD). The GAD contains contact information, such as IP capabilities, for registered numbers. When trying to make a call to a given number, a call server or similar device at the caller's site requests contact information relating to that number from the GAD, and then sets up a call based on the information received from the GAD.
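A minimal Python sketch of the lookup flow, assuming a hypothetical GAD interface; the record fields, the class names and the fallback to PSTN are illustrative assumptions, not a documented API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GadRecord:
    number: str              # the single identifier dialled by the caller
    sip_uri: Optional[str]   # IP contact information, if the callee is registered
    capabilities: tuple      # e.g. ("audio", "video")

class GlobalAddressDatabase:
    def __init__(self):
        self._records = {}

    def register(self, record: GadRecord):
        self._records[record.number] = record

    def lookup(self, number: str) -> Optional[GadRecord]:
        return self._records.get(number)

def place_call(gad: GlobalAddressDatabase, dialled_number: str) -> str:
    """The caller only knows the telephone number; the call server asks the
    GAD and sets up the call based on whatever it receives back."""
    record = gad.lookup(dialled_number)
    if record and record.sip_uri:
        return f"IP call to {record.sip_uri} with {', '.join(record.capabilities)}"
    return f"PSTN call to {dialled_number}"   # fallback when nothing is registered

gad = GlobalAddressDatabase()
gad.register(GadRecord("+4712345678", "sip:alice@corp.example", ("audio", "video")))
print(place_call(gad, "+4712345678"))   # IP call to sip:alice@corp.example with audio, video
print(place_call(gad, "+4787654321"))   # PSTN call to +4787654321
```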
Abstract:
A process for calculating run-and-level representations of quantized transform coefficients includes packing each quantized transform coefficient into a value interval (Max, Min) by setting all quantized transform coefficients greater than Max equal to Max, and all quantized transform coefficients less than Min equal to Min; reordering the quantized transform coefficients, resulting in an array C of reordered quantized transform coefficients; masking C by generating an array M containing ones in positions corresponding to positions of C having non-zero values, and zeros in positions corresponding to positions of C having zero values; and for each position containing a one in M, generating a run and a level representation by setting the level value equal to the value in the corresponding position of C, and setting the run value equal to the number of preceding positions relative to the current position in M since the previous occurrence of a one in M.
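A short Python sketch of the packing, reordering, masking and run/level steps; the scan order and the example values are illustrative.

```python
def run_level_encode(coeffs, max_val, min_val, scan_order):
    """Turn quantized transform coefficients into (run, level) pairs."""
    # Pack every coefficient into the value interval (Max, Min).
    packed = [max(min(c, max_val), min_val) for c in coeffs]

    # Reorder the packed coefficients (e.g. a zig-zag scan) into array C.
    C = [packed[i] for i in scan_order]

    # Mask: ones where C is non-zero, zeros elsewhere.
    M = [1 if c != 0 else 0 for c in C]

    # For each one in M, emit (run, level): the level is the value at that
    # position of C, the run counts the zeros since the previous one in M.
    pairs = []
    run = 0
    for m, c in zip(M, C):
        if m:
            pairs.append((run, c))
            run = 0
        else:
            run += 1
    return pairs

# Example with coefficients already in scan order (identity reordering).
print(run_level_encode([5, 0, 0, -3, 9, 0, 1], 7, -7, range(7)))
# -> [(0, 5), (2, -3), (0, 7), (1, 1)]   (9 is clipped to Max = 7)
```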
Abstract:
The present invention is related to an implementation of a filter process in compression/decompression of digital video systems on general-purpose processors. It provides a method that significantly reduces the number of memory loads/stores and address computations. To achieve known memory behavior within a loop, one filter is implemented for each resolution. Prior to filtering, it is determined which format the video content is adapted to, so the width and height of each frame are known before executing the loop, and consequently also the memory addresses of the pixel values to be filtered. For a given resolution, the exact memory location of each pixel in the frame is known at compile time and need not be calculated on the fly, i.e. in each iteration of the filter loop. Loading and storing thus become extremely efficient. This particularly benefits in-order execution engines, which cannot reorder the instructions in the executable on the fly, but it may also lead to a dramatic speedup on out-of-order execution processors, which may then find many more independent instructions that can be executed simultaneously.
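The specialisation idea can be sketched in Python, even though the patent targets compiled code on general-purpose processors: one filter routine is generated per supported resolution, so the stride and pixel offsets inside the loop are fixed constants rather than values recomputed per iteration. The 3-tap filter and the resolution list are placeholders.

```python
def make_horizontal_filter(width, height):
    """Return a filter specialised for one resolution; width and height are
    frozen here, so every pixel offset in the loop is a known constant."""
    def filter_frame(pixels):                 # pixels: flat row-major list
        out = list(pixels)
        for row in range(height):
            base = row * width                # fixed stride per resolution
            for col in range(1, width - 1):
                i = base + col
                # Simple 3-tap smoothing as a stand-in for the real
                # deblocking/interpolation filter.
                out[i] = (pixels[i - 1] + 2 * pixels[i] + pixels[i + 1]) // 4
        return out
    return filter_frame

# One specialised filter per resolution the codec supports.
FILTERS = {
    "qcif": make_horizontal_filter(176, 144),
    "cif":  make_horizontal_filter(352, 288),
}
```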
Abstract:
The present invention relates to video conferencing systems and telepresence. More specifically, the invention introduces a novel method of setting up communication sessions in a telepresence call comprising multiple point-to-point connections between at least two telepresence systems, wherein the information required for setting up the communication sessions is embedded in a control protocol message flow establishing a first communication session between the two telepresence systems.
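A speculative Python sketch of the embedding idea: the first session's setup message carries descriptors for the remaining point-to-point connections, so they can be opened without separate negotiation. The message fields are invented for illustration and do not correspond to any specific control protocol.

```python
first_session_setup = {
    "to": "telepresence-b.example.com",
    "session": {"media": "video", "port": 5000},
    # Information for the remaining connections embedded in the same message flow:
    "additional_sessions": [
        {"camera": "left",   "media": "video", "port": 5002},
        {"camera": "right",  "media": "video", "port": 5004},
        {"camera": "center", "media": "audio", "port": 5006},
    ],
}

def establish_telepresence_call(setup):
    """The receiving system reads the embedded list and opens the remaining
    point-to-point connections without further signalling round-trips."""
    connections = [setup["session"]] + setup["additional_sessions"]
    return [f"connect {setup['to']}:{s['port']} ({s['media']})" for s in connections]

print(establish_telepresence_call(first_session_setup))
```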
Abstract:
A method is disclosed for processing images during a conference between a plurality of video conferencing terminals. The method comprises providing a first image from a first camera at a first video conferencing terminal; providing a second image from a second camera at a second video conferencing terminal; and providing a third image from a third camera at a third video conferencing terminal. Further, the method comprises generating a first composite image, including inserting said third image at a first position into said second image; and generating a second composite image, including inserting said third image at a second position into said first image. The first and second positions are located in horizontally opposite portions of the first and second composite images, respectively. Further, the first and second composite images are supplied to be displayed on a display of the first and second video conferencing terminals, respectively. The method improves the telepresence experience when a regular video conferencing terminal is connected to the conference.
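A NumPy sketch of the compositing step, assuming all full frames share one resolution and the third image has already been scaled to the inset size; the inset dimensions and the exact positions are illustrative.

```python
import numpy as np

def insert(background, overlay, x, y):
    """Insert overlay into background with its top-left corner at (x, y)."""
    out = background.copy()
    h, w = overlay.shape[:2]
    out[y:y + h, x:x + w] = overlay
    return out

def make_composites(img1, img2, img3, inset_w=320, inset_h=180):
    """img1/img2: full frames from the first and second terminals (same size);
    img3: the third terminal's image, assumed already scaled to inset size."""
    H, W = img1.shape[:2]
    # First composite: third image inserted at the left edge of the second image.
    comp1 = insert(img2, img3, 0, H - inset_h)
    # Second composite: third image inserted at the horizontally opposite
    # (right) edge of the first image.
    comp2 = insert(img1, img3, W - inset_w, H - inset_h)
    # comp1 is shown at the first terminal, comp2 at the second terminal.
    return comp1, comp2
```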
Abstract:
The present invention provides a method and a device for lossless coding of event tables with dynamically matching VLC tables. The most probable event should be assigned the shortest code, and the respective events should have increasing code lengths as the associated probability of occurrence decreases. The present invention takes into account that the probability distribution of the event table may not be stable throughout the different parts of a video sequence. Each time an event has occurred, this event is moved one position up in the event table. The present invention results in more efficient coding of digital compressed video by dynamically reordering event tables to obtain a better match between event probabilities and VLC code words. This is particularly useful when coding video with light and color conditions temporarily or constantly differing from the expected conditions from which static VLCs are derived.
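A small Python sketch of the dynamic reordering, assuming a fixed VLC code word list ordered from shortest to longest; the event names and code words are illustrative.

```python
VLC_CODES = ["1", "01", "001", "0001", "00001"]          # shortest code first

class DynamicEventTable:
    def __init__(self, events):
        self.events = list(events)                        # index 0 gets the shortest code

    def encode(self, event):
        pos = self.events.index(event)
        code = VLC_CODES[pos]
        # Each time an event occurs, move it one position up in the table.
        if pos > 0:
            self.events[pos - 1], self.events[pos] = self.events[pos], self.events[pos - 1]
        return code

table = DynamicEventTable(["EOB", "run1", "run2", "run3", "run4"])
print([table.encode(e) for e in ["run3", "run3", "run3"]])   # ['0001', '001', '01']
```

A matching decoder would apply the same one-position swap after decoding each event, so encoder and decoder tables stay in step without any side information.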
Abstract:
The present invention is related to video and still-image compression systems. It addresses the problem of the occurrence of blurry edges in images exposed to conventional coding and decoding processes. The present invention provides a method allowing some 'leakage' of edges and high-frequency content from the full-resolution luma channel into the low-resolution chroma channels. It is adapted to operate on parts of an image to be decoded (blocks, lines, etc.), to investigate the conformity of the available decimated chroma information with the decimated luma information. If a good fit can be found, i.e. appropriate parameters can be determined that express the decimated chroma values in terms of the decimated luma values, those parameters are applied to the full-resolution luma values to obtain estimated full-resolution chroma values instead of interpolation. If a good fit cannot be found, the full-resolution chroma values gradually fall back to a standard interpolation method. This process is repeated for the entire picture and for each chroma channel.
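A NumPy sketch of the per-block decision, assuming a linear model chroma ≈ a·luma + b fitted on the decimated samples; the fit threshold, the hard switch (instead of the gradual fallback described above) and the nearest-neighbour fallback interpolation are simplifying assumptions.

```python
import numpy as np

def upsample_chroma_block(luma_full, luma_dec, chroma_dec, fit_threshold=1.0):
    """Estimate full-resolution chroma for one block from full-resolution luma,
    using a linear fit on the decimated luma/chroma samples."""
    # Least-squares fit of decimated chroma against decimated luma.
    A = np.stack([luma_dec.ravel(), np.ones(luma_dec.size)], axis=1)
    (a, b), res, _, _ = np.linalg.lstsq(A, chroma_dec.ravel(), rcond=None)
    fit_error = np.sqrt(res[0] / chroma_dec.size) if res.size else 0.0

    if fit_error < fit_threshold:
        # Good fit: let luma edges "leak" into the reconstructed chroma.
        return a * luma_full + b

    # Poor fit: fall back to plain interpolation of the decimated chroma
    # (nearest-neighbour upsampling as a simple stand-in).
    zoom = luma_full.shape[0] // chroma_dec.shape[0]
    return np.kron(chroma_dec, np.ones((zoom, zoom)))
```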