Abstract:
An image decoder includes a processor and a memory. The memory includes instructions configured to cause the processor to perform operations. The operations include receiving an encoded image, performing a first decoding of the encoded image to generate a first decoded image, storing the first decoded image in the memory, processing the first decoded image for display, performing a second decoding of the first decoded image to generate a second decoded image, and processing the second decoded image for display.
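A minimal sketch of the two-stage decode flow described in this abstract; the helper functions (first_pass_decode, second_pass_decode, display) are hypothetical placeholders, not part of any named API.

```python
# Sketch, assuming hypothetical first_pass_decode / second_pass_decode /
# display callables supplied by the caller.
class TwoStageImageDecoder:
    def __init__(self):
        self.memory = {}

    def decode_and_display(self, encoded_image, first_pass_decode,
                           second_pass_decode, display):
        # First decoding pass: produce an initial decoded image.
        first_decoded = first_pass_decode(encoded_image)
        # Store the first decoded image in memory for later refinement.
        self.memory["first_pass"] = first_decoded
        # Process the first decoded image for display (fast preview).
        display(first_decoded)
        # Second decoding pass refines the stored first-pass result.
        second_decoded = second_pass_decode(self.memory["first_pass"])
        # Process the second decoded image for display (refined output).
        display(second_decoded)
        return second_decoded
```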
Abstract:
Techniques and systems are provided for performing predictive random access using a background picture. For example, a method of decoding video data includes obtaining an encoded video bitstream comprising a plurality of pictures. The plurality of pictures includes a plurality of predictive random access pictures. A predictive random access picture is at least partially encoded using inter-prediction based on at least one background picture. The method further includes determining, for a time instance of the video bitstream, a predictive random access picture of the plurality of predictive random access pictures with a time stamp closest in time to the time instance. The method further includes determining a background picture associated with the predictive random access picture, and decoding at least a portion of the predictive random access picture using inter-prediction based on the background picture.
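A hedged sketch of the picture-selection and decode steps described in this abstract; the picture record and the decode_with_inter_prediction helper are assumptions for illustration, not a specific codec interface.

```python
# Sketch, assuming pictures are available as simple records and that a
# hypothetical decode_with_inter_prediction callable performs the actual
# inter-predicted decode against a reference picture.
from dataclasses import dataclass

@dataclass
class PredictiveRandomAccessPicture:
    timestamp: float            # time stamp of the picture
    background_picture_id: int  # id of the associated background picture
    coded_data: bytes           # coded payload of the picture

def select_pra_picture(pra_pictures, seek_time):
    # Choose the predictive random access picture whose time stamp is
    # closest to the requested time instance.
    return min(pra_pictures, key=lambda p: abs(p.timestamp - seek_time))

def random_access_decode(pra_pictures, background_pictures, seek_time,
                         decode_with_inter_prediction):
    pra = select_pra_picture(pra_pictures, seek_time)
    # Determine the background picture associated with the selected picture.
    background = background_pictures[pra.background_picture_id]
    # Decode (at least a portion of) the picture using inter-prediction
    # with the background picture as reference.
    return decode_with_inter_prediction(pra.coded_data, reference=background)
```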
Abstract:
Innovations in the areas of generating, parsing, and using metadata that describes nominal lighting conditions of a reference viewing environment for video playback are presented herein. In various examples described herein, metadata includes parameters that describe the nominal lighting conditions (e.g., level of ambient light, color characteristics of ambient light) of a reference viewing environment. By conveying a representation of the nominal lighting conditions of the reference viewing environment (e.g., one assumed when mastering image content), a transmitter system can enable a receiver system to adapt its local display of the image content. Upon receiving image content and the metadata, the receiver system can identify characteristics of the actual viewing environment, use the metadata to determine whether the actual viewing environment matches the reference viewing environment, and, if not, adjust sample values of the image content, adjust a display device, or adjust lighting conditions of the actual viewing environment.
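An illustrative sketch of how a receiver might use such metadata; the metadata fields and the simple gain-based adjustment rule are assumptions made for this sketch, not the metadata format defined by the abstract.

```python
# Sketch, assuming illustrative metadata fields and a simple sample-value
# adjustment when the actual ambient light differs from the reference.
from dataclasses import dataclass

@dataclass
class ReferenceViewingEnvironment:
    ambient_lux: float          # nominal level of ambient light
    ambient_white_point: tuple  # nominal color of ambient light (x, y)

def adapt_to_viewing_environment(samples, metadata, actual_lux,
                                 tolerance=0.1):
    # If the actual ambient light level matches the reference within a
    # tolerance, display the image content unchanged.
    if abs(actual_lux - metadata.ambient_lux) <= tolerance * metadata.ambient_lux:
        return samples
    # Otherwise apply a simple gain so perceived brightness in the actual
    # environment approximates the reference environment (one of the three
    # adjustment options named in the abstract: adjusting sample values).
    gain = (actual_lux / metadata.ambient_lux) ** 0.5
    return [min(1.0, s * gain) for s in samples]
```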
Abstract:
A video decoder is adapted for decoding video based on decoder parameters selected from variable decoder parameters. The decoder comprises an estimator adapted to estimate user viewing experience based on sensor data, and a constraint analyzer adapted to analyze constraints when using the decoder parameters. The video decoder further comprises a selector, coupled to the estimator and the constraint analyzer, adapted to select said decoder parameters from the variable decoder parameters.
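A structural sketch of how the three components named in this abstract (estimator, constraint analyzer, selector) could be coupled; the scoring and constraint-check callables are illustrative assumptions.

```python
# Sketch, assuming caller-supplied estimate_experience (estimator) and
# satisfies_constraints (constraint analyzer) callables.
def select_decoder_parameters(candidate_parameters, sensor_data,
                              estimate_experience, satisfies_constraints):
    # Constraint analyzer: keep only parameter sets whose constraints hold.
    feasible = [p for p in candidate_parameters if satisfies_constraints(p)]
    # Selector, coupled to the estimator: pick the feasible parameter set
    # with the best estimated user viewing experience given the sensor data.
    return max(feasible, key=lambda p: estimate_experience(p, sensor_data))
```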
Abstract:
In some examples, a method for compressing a spectral reflectance dataset may be performed through compression circuitry. The method may include computing a principal component analysis basis for the spectral reflectance dataset; projecting the spectral reflectance dataset onto the principal component analysis basis to obtain a weight matrix; quantizing the weight matrix; performing a Huffman encoding process on the quantized weight matrix to generate a Huffman table and Huffman codes for the quantized weight matrix; and providing compressed spectral reflectance data as the principal component analysis basis, the Huffman table, and the Huffman codes.
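A minimal sketch of the pipeline described in this abstract, using NumPy for the principal component analysis step and a small hand-rolled Huffman coder; the number of retained components, the quantization step, and the mean-centering bookkeeping are illustrative choices of this sketch.

```python
# Sketch, assuming dataset is a (samples x wavelengths) NumPy array of
# spectral reflectance values.
import heapq
from collections import Counter
import numpy as np

def build_huffman_table(freqs):
    # Standard Huffman construction over (frequency, symbol) pairs.
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, [f1 + f2, counter, merged])
        counter += 1
    return heap[0][2]

def compress_reflectance(dataset, n_components=8, q_step=0.01):
    # Compute a principal component analysis basis for the dataset.
    mean = dataset.mean(axis=0)
    centered = dataset - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    # Project the dataset onto the basis to obtain a weight matrix.
    weights = centered @ basis.T
    # Quantize the weight matrix.
    quantized = np.round(weights / q_step).astype(int)
    # Huffman-encode the quantized weights: build a table and emit codes.
    symbols = quantized.ravel().tolist()
    table = build_huffman_table(Counter(symbols))
    codes = "".join(table[s] for s in symbols)
    # Compressed data per the abstract: basis, Huffman table, Huffman codes
    # (mean and q_step are extra bookkeeping of this sketch).
    return {"basis": basis, "huffman_table": table, "huffman_codes": codes,
            "mean": mean, "q_step": q_step}
```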
Abstract:
Examples of the present disclosure relate to performing subject oriented compression. A content file, such as a video file, may be received. One or more subjects of interest may be identified in the content file. The identified subjects of interest may be associated with a quantization value that is less than a quantization value associated with the rest of the content. When the content is compressed/encoded, the subjects of interest are compressed/encoded using their associated quantization value while the rest of the content is compressed/encoded using a larger quantization value.
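A rough sketch of the per-region quantization idea in this abstract; the block-level encode_block callable stands in for a real codec's quantization interface and is purely hypothetical.

```python
# Sketch, assuming a frame is split into blocks and a hypothetical
# encode_block(block, quantization=...) callable performs the encoding.
def encode_with_subject_oriented_quantization(frame_blocks, subject_blocks,
                                              encode_block,
                                              subject_q=10, background_q=30):
    # Subjects of interest get a smaller quantization value (finer detail);
    # the rest of the content gets a larger one (coarser, fewer bits).
    encoded = []
    for index, block in enumerate(frame_blocks):
        q = subject_q if index in subject_blocks else background_q
        encoded.append(encode_block(block, quantization=q))
    return encoded
```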
Abstract:
Techniques are provided herein for optimizing encoding and decoding operations for video data streams. An encoded video data stream is received, and select image segments of the encoded video data stream are identified. Each of the select image segments is an independently decodable portion of the encoded video data stream. Enhanced layer decoding operations are performed on each of the select image segments of the encoded video data stream to obtain an enhanced decoded output for the select image segments. Base layer decoding operations are performed on each of the select image segments of the encoded video data stream to obtain a base layer decoded output for the select image segments.
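An illustrative sketch of the layered decoding flow over independently decodable segments; the segment attributes and the per-layer decode helpers are assumptions of this sketch, not a specific codec API.

```python
# Sketch, assuming the stream exposes a list of segments and that
# hypothetical decode_enhanced / decode_base callables handle each layer.
def decode_select_segments(encoded_stream, is_selected,
                           decode_enhanced, decode_base):
    enhanced_outputs, base_outputs = {}, {}
    for segment in encoded_stream.segments:
        # Only the select image segments are processed; each one is an
        # independently decodable portion of the stream.
        if not is_selected(segment):
            continue
        # Enhanced layer decoding for an enhanced decoded output.
        enhanced_outputs[segment.id] = decode_enhanced(segment)
        # Base layer decoding for a base layer decoded output.
        base_outputs[segment.id] = decode_base(segment)
    return enhanced_outputs, base_outputs
```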
Abstract:
The invention relates to a method for encoding at least one current image (IC_j), characterized in that it implements, at an image-capture terminal, for at least one portion (PO_u) of the current image to be encoded, the steps of: determining information relating to the manipulation of the terminal by a user, in relation to said at least one captured current image; obtaining at least one item of data relating to the current image by transforming said determined information; and, from said obtained data, carrying out at least one of the following steps: predicting motion information associated with said at least one portion of the current image; determining at least one characteristic of a coding mode associated with said at least one portion of the current image.
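A speculative sketch of the idea in this abstract: information about how the user manipulates the capture terminal (here, gyroscope angular rates) is transformed into data used either to predict motion information or to choose a coding-mode characteristic. The sensor fields, the pixel-mapping formula, and the mode rule are illustrative assumptions.

```python
# Sketch, assuming terminal manipulation is reported as angular rates
# (rad/s) and that frames are captured at 30 fps.
def derive_coding_hints(terminal_motion, focal_length_px, fps=30.0):
    # Transform the manipulation information into an approximate global
    # motion vector in pixels per frame (small-angle approximation).
    dx = terminal_motion["yaw_rate"] * focal_length_px / fps
    dy = terminal_motion["pitch_rate"] * focal_length_px / fps
    predicted_motion = (dx, dy)
    # Use the same data to pick a coding-mode characteristic: large camera
    # motion suggests inter coding, negligible motion suggests skip/intra.
    mode = "inter" if abs(dx) + abs(dy) > 1.0 else "skip_or_intra"
    return predicted_motion, mode
```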