Abstract:
Various embodiments are generally directed to techniques for evaluating the resulting image quality of compression of motion videos as an input to controlling the degree of compression. A device to compress motion video includes a compressor to compress a first uncompressed frame of a motion video to generate a first compressed frame of the motion video for a viewing device having at least one viewing characteristic, and a mean opinion score (MOS) estimator to combine a structural metric of image quality of the first compressed frame and an opinion metric of image quality associated with the at least one viewing characteristic to determine whether to alter a quantization parameter (QP) of the compressor to compress a second uncompressed frame of the motion video. Other embodiments are described and claimed.
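A minimal Python sketch of the idea, not the patented method: an assumed structural score (e.g. SSIM) and a device-dependent opinion score are blended into an estimated MOS, which then nudges the quantization parameter for the next frame. All function names, weights, and thresholds below are illustrative assumptions.

# Minimal sketch (not the patented method): blend a structural metric with a
# device-dependent opinion metric into an estimated MOS, then nudge QP.
# All names, weights, and thresholds are illustrative assumptions.

def estimate_mos(structural_score: float, opinion_score: float,
                 opinion_weight: float = 0.4) -> float:
    """Combine a structural score (e.g. SSIM in [0, 1]) with an opinion score
    associated with the viewing device, both expressed on a 1-5 MOS scale."""
    structural_mos = 1.0 + 4.0 * structural_score          # map [0, 1] -> [1, 5]
    return (1.0 - opinion_weight) * structural_mos + opinion_weight * opinion_score

def next_qp(current_qp: int, estimated_mos: float,
            target_mos: float = 3.5, step: int = 1,
            qp_min: int = 0, qp_max: int = 51) -> int:
    """Raise QP (more compression) when quality exceeds the target,
    lower it when quality falls short; otherwise keep it unchanged."""
    if estimated_mos > target_mos + 0.25:
        current_qp += step
    elif estimated_mos < target_mos - 0.25:
        current_qp -= step
    return max(qp_min, min(qp_max, current_qp))

# Example: a frame scored SSIM 0.92, viewed on a device with opinion score 4.2.
qp = next_qp(current_qp=30, estimated_mos=estimate_mos(0.92, 4.2))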
Abstract:
A multi-layer or multi-view video (1) is encoded by encoding one of a picture (12) in a first layer or view (10) and a picture (22) in a second layer or view (20) coinciding at a switching point (2) defining a switch between the first layer or view (10) and the second layer or view (20). The other of the picture (12) in the first layer or view (10) and the picture (22) in the second layer or view (20) coinciding at the switching point is encoded as a skip picture. The embodiments thereby reduce the complexity of encoding and decoding a multi-layer or multi-view video (1) having a switching point (2) and reduce the number of bits required for representing encoded pictures coinciding at the switching point (2).
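The following is an illustrative Python sketch of the described behaviour under assumed data structures: at the switching point, the picture in the target layer or view is encoded normally while the coinciding picture in the other layer or view is emitted as a skip picture that merely references it. Class and field names are not from the source.

# Illustrative sketch only: at a switching point, encode one of the two
# coinciding pictures normally and the other as a "skip picture" that simply
# references it without residual data. Class/field names are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EncodedPicture:
    layer_or_view: int
    poc: int                               # picture order count
    is_skip: bool = False                  # skip picture: no residual, copies its reference
    ref_layer_or_view: Optional[int] = None

def encode_switching_point(poc: int, from_layer: int, to_layer: int) -> list:
    """Encode the target-layer picture fully; emit the other as a skip picture
    that merely points at it, saving bits and encoder/decoder work."""
    full = EncodedPicture(layer_or_view=to_layer, poc=poc)
    skip = EncodedPicture(layer_or_view=from_layer, poc=poc,
                          is_skip=True, ref_layer_or_view=to_layer)
    return [full, skip]

print(encode_switching_point(poc=48, from_layer=0, to_layer=1))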
Abstract:
Techniques described herein are related to harmonizing the signaling of coding modes and filtering in video coding. In one example, a method of decoding video data is provided that includes decoding a first syntax element to determine whether PCM coding mode is used for one or more video blocks, wherein the PCM coding mode refers to a mode that codes pixel values as PCM samples. The method further includes decoding a second syntax element to determine whether in-loop filtering is applied to the one or more video blocks. Responsive to the first syntax element indicating that the PCM coding mode is used, the method further includes applying in-loop filtering to the one or more video blocks based at least in part on the second syntax element and decoding the one or more video blocks based at least in part on the first and second syntax elements.
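As a hedged sketch of the decision the two syntax elements drive (not actual HEVC syntax parsing), the helper below returns whether a block is reconstructed from PCM samples and whether in-loop filtering is applied to it; the flag names and the non-PCM default are assumptions.

# Hedged sketch of the described signalling logic, not real bitstream parsing.
def decoding_decisions(pcm_flag: bool, loop_filter_flag: bool) -> dict:
    """Return how a block should be reconstructed and filtered, given the
    first syntax element (PCM mode) and the second (in-loop filtering)."""
    return {
        "reconstruct_as_pcm": pcm_flag,
        # In-loop filtering is applied to PCM blocks only when the second
        # syntax element says so; non-PCM blocks follow the normal filter path.
        "apply_in_loop_filter": loop_filter_flag if pcm_flag else True,
    }

print(decoding_decisions(pcm_flag=True, loop_filter_flag=False))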
Abstract:
A video processing system is provided to create quantization parameter data based on human eye attraction, to be supplied to an encoder so that the encoder can compress data taking the human perceptual guidance into account. The system includes a perceptual video processor (PVP) to generate a perceptual significance pixel map for data to be input to the encoder. Companding is applied to reduce the pixel values to values ranging from zero to one, and decimation is performed to match the pixel values to the spatial resolution of quantization parameter (QP) values in a look-up table (LUT). The LUT values then provide the metadata supplied to the encoder, enabling the encoder to compress the original picture so that bits are allocated to pixels in a macroblock according to the predictions of eye tracking.
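A minimal numpy sketch of the described pipeline, with assumed details: a perceptual-significance map is companded into the range zero to one, decimated to one value per macroblock, and mapped through a small look-up table to per-macroblock QP metadata. The companding function and the LUT contents are illustrative, not taken from the source.

# Minimal numpy sketch of the described pipeline under assumed details.
import numpy as np

def significance_to_qp(sig_map, mb_size: int = 16, lut=None) -> np.ndarray:
    if lut is None:
        # Hypothetical LUT: the most significant regions get the lowest QP (more bits).
        lut = np.array([40, 36, 32, 28, 24])

    # Companding: squash values into [0, 1] (a log compander, chosen for illustration).
    companded = np.log1p(sig_map) / np.log1p(sig_map.max() + 1e-9)

    # Decimation: average each macroblock down to one value.
    h, w = companded.shape
    blocks = companded[:h - h % mb_size, :w - w % mb_size]
    blocks = blocks.reshape(h // mb_size, mb_size, w // mb_size, mb_size).mean(axis=(1, 3))

    # LUT lookup: index the table by quantized significance.
    idx = np.clip((blocks * (len(lut) - 1)).round().astype(int), 0, len(lut) - 1)
    return lut[idx]                    # per-macroblock QP metadata for the encoder

qp_map = significance_to_qp(np.random.rand(64, 96).astype(np.float32))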
Abstract:
A method is described for generating a video stream starting from a plurality of sequences of 2D and/or 3D video frames, wherein a video stream generator composes video frames coming from N different sources (S1, S2, S3, …, SN) into a container video frame and generates a single output video stream of container video frames which is coded by an encoder, wherein said encoder inserts into said output video stream signalling adapted to indicate the structure of the container video frames. A corresponding method for regenerating said video stream is also described.
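Below is an illustrative Python sketch, under assumed tiling and metadata conventions, of composing frames from N sources into a single container frame together with the signalling a decoder would need to recover the original structure; the field names are hypothetical.

# Illustrative sketch (details assumed): tile frames from N sources into one
# container frame and record the layout signalling needed to split it back apart.
import numpy as np

def compose_container_frame(frames):
    """Stack equally sized source frames side by side into a container frame."""
    h, w = frames[0].shape[:2]
    container = np.concatenate(frames, axis=1)        # simple horizontal tiling
    signalling = {
        "num_sources": len(frames),
        "tile_size": (h, w),
        "layout": "horizontal",                       # tells the decoder how to split
        "regions": [(0, i * w, h, w) for i in range(len(frames))],  # (y, x, h, w)
    }
    return container, signalling

sources = [np.zeros((720, 640, 3), dtype=np.uint8) for _ in range(3)]
container, meta = compose_container_frame(sources)
print(container.shape, meta["regions"])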
Abstract:
Methods, techniques, and systems for user interface remoting using video streaming techniques are provided. Example embodiments provide a User Interface Remoting and Optimization System ("UIROS"), which enables efficient remoting of pixel-oriented user interfaces on behalf of their guests using generic video streaming techniques, such as H.264, to send compressed user interface image information in the form of video-frame-encoded bitstreams. In one embodiment, the UIROS comprises server-side support, including a UI remoting server, a video encoder, and rendering support, and client-side support, including a UI remoting client, a video decoder, and a display. These components cooperate to implement optimized UI remoting that is bandwidth-efficient, low-latency, and CPU-efficient.
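A structural Python sketch of the described server side, with a stand-in encoder rather than a real H.264 binding: the UI remoting server encodes each changed UI frame and pushes the resulting bitstream toward the client. Class and method names are assumptions for illustration.

# Structural sketch only; VideoEncoder is a stand-in, not a real H.264 binding.
class VideoEncoder:
    def encode(self, frame_pixels: bytes) -> bytes:
        return frame_pixels            # placeholder for an encoded frame bitstream

class UIRemotingServer:
    def __init__(self, encoder: VideoEncoder, send):
        self.encoder, self.send = encoder, send

    def on_ui_update(self, frame_pixels: bytes) -> None:
        # Only changed UI frames are encoded and pushed, keeping bandwidth low.
        self.send(self.encoder.encode(frame_pixels))

sent = []
server = UIRemotingServer(VideoEncoder(), send=sent.append)
server.on_ui_update(b"\x00" * 16)      # a (tiny) fake frame of UI pixels
print(len(sent), "encoded frame(s) queued for the UI remoting client")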
Abstract:
A system for digital image processing is presented that allows compression/decompression times to be dynamically adjusted to the needs of each application scenario or industry. The problem it solves is that of providing predictability in image compression/decompression times, making it possible to fix that processing time regardless of the characteristics of the image and of the operating environment. The system consists of a hardware card that integrates the functional modules needed to carry out the compression/decompression processing. The card receives as input the timing adjustment parameter and the sequence of data making up the original or compressed image. This card can be incorporated directly into computers, or the described system can be implemented with ASIC miniaturization criteria for incorporation into other acquisition devices such as digital still cameras or into display devices such as monitors or screens.
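As a conceptual, software-only stand-in for the described hardware card, the sketch below picks a compression effort level so that the predicted processing time fits a caller-supplied time budget, which is what makes the time predictable regardless of image content; the calibration table is invented for illustration.

# Conceptual stand-in for the described card: choose an effort level that fits
# the time budget. The timing table is a made-up calibration, not source data.
MS_PER_MEGAPIXEL = {1: 4.0, 2: 8.0, 3: 15.0, 4: 28.0, 5: 50.0}

def pick_effort(image_megapixels: float, time_budget_ms: float) -> int:
    """Highest effort level whose predicted time stays within the budget."""
    affordable = [lvl for lvl, ms in MS_PER_MEGAPIXEL.items()
                  if ms * image_megapixels <= time_budget_ms]
    return max(affordable) if affordable else min(MS_PER_MEGAPIXEL)

print(pick_effort(image_megapixels=12.0, time_budget_ms=200.0))   # -> 3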