Abstract:
Techniques are disclosed for representing video and images with full-resolution color component information while remaining compatible with legacy processing systems that process images with reduced-resolution information. The new image representation may include a scalable format that consists of a base layer, coded to match the expectations of a legacy coder. The new image representation also may include additional enhancement layer(s) that enable upconversion of reduced-resolution color components to higher resolutions. The image representation not only provides power savings in decoding full-resolution representations but also provides other benefits, such as scalable and power-aware decoding.
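The base/enhancement split described above can be sketched as follows. This is a minimal illustration, not the disclosed coder: it assumes a 2x-subsampled base layer, nearest-neighbor upconversion, and a simple residual enhancement layer, and all function names are hypothetical.

```python
def downsample_2x(plane):
    """Average 2x2 blocks to produce the reduced-resolution base layer."""
    h, w = len(plane), len(plane[0])
    return [[(plane[y][x] + plane[y][x + 1] + plane[y + 1][x] + plane[y + 1][x + 1]) // 4
             for x in range(0, w, 2)] for y in range(0, h, 2)]

def upsample_2x(plane):
    """Nearest-neighbor upconversion of the base layer."""
    return [[plane[y // 2][x // 2] for x in range(2 * len(plane[0]))]
            for y in range(2 * len(plane))]

def encode_layers(full_res_chroma):
    """Split a full-resolution color plane into a legacy-compatible base
    layer plus a residual enhancement layer."""
    base = downsample_2x(full_res_chroma)
    predicted = upsample_2x(base)
    enhancement = [[f - p for f, p in zip(fr, pr)]
                   for fr, pr in zip(full_res_chroma, predicted)]
    return base, enhancement

def decode_full(base, enhancement):
    """Full-resolution reconstruction: upconvert the base, add the residual."""
    predicted = upsample_2x(base)
    return [[p + e for p, e in zip(pr, er)]
            for pr, er in zip(predicted, enhancement)]
```

A legacy decoder would consume only `base`; a full-resolution decoder adds the enhancement residual back onto the upconverted base.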
Abstract:
In communication applications, the aggregate source image data at a transmitter exceeds the data needed to display a rendering of a viewport at a receiver. Improved streaming techniques are disclosed that include estimating a location of a viewport at a future time. According to such techniques, the viewport may represent a portion of an image from a multi-directional video to be displayed at the future time, and tile(s) of the image may be identified in which the viewport is estimated to be located. In these techniques, the image data of tile(s) in which the viewport is estimated to be located may be requested at a first service tier, and the tile(s) in which the viewport is not estimated to be located may be requested at a second service tier, lower than the first service tier.
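As a rough sketch of the tiered-request idea, the following assumes a one-dimensional (yaw-only) viewport, linear extrapolation of head motion, and equal-width tiles; the function names and the two-tier scheme are illustrative, not the disclosed method.

```python
def estimate_viewport(yaw_now, yaw_prev, dt, lookahead):
    """Linearly extrapolate viewport yaw (degrees) to a future display time."""
    velocity = (yaw_now - yaw_prev) / dt  # degrees per second
    return (yaw_now + velocity * lookahead) % 360.0

def tier_requests(yaw_now, yaw_prev, dt, lookahead, num_tiles):
    """Request the tile containing the estimated future viewport at a
    higher service tier than all other tiles."""
    future_yaw = estimate_viewport(yaw_now, yaw_prev, dt, lookahead)
    tile_width = 360.0 / num_tiles
    hot = int(future_yaw / tile_width)
    return {t: ("high" if t == hot else "low") for t in range(num_tiles)}
```

For example, a viewport moving at 10 degrees per second from yaw 90 would be estimated at yaw 110 two seconds out, so the tile covering that yaw is fetched at the higher tier.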
Abstract:
Video coding schemes may include one or more filters to reduce coding artifacts and improve video quality. These filters may be applied to decoded video data in a predetermined sequence. The output from one or more of these filters may be selected for different images, blocks, or sets of video data and then copied and/or routed to a display or to a buffer storing reference data that is used to decode other video data in a data stream. Providing the ability to select which filter output is used for display and which is used as a reference may result in better video quality for multiple types of video data. The filters selected for display and for reference may differ and may vary across images, blocks, and data sets.
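One way to picture the selectable outputs is a fixed filter sequence whose intermediate tap points can be routed independently to the display and to the reference buffer. The stand-in filters below are placeholders for illustration, not real deblocking or SAO filters, and the tap names are assumptions.

```python
def deblock(samples):
    """Stand-in for a deblocking filter (not a real one)."""
    return [s + 1 for s in samples]

def sao(samples):
    """Stand-in for a sample-adaptive-offset filter (not a real one)."""
    return [s - 1 for s in samples]

def run_filter_chain(reconstructed, display_tap, reference_tap):
    """Run the filters in a predetermined sequence, then route independently
    selected tap points to the display and to the reference buffer."""
    outputs = {"none": reconstructed}
    outputs["deblock"] = deblock(outputs["none"])
    outputs["sao"] = sao(outputs["deblock"])
    return outputs[display_tap], outputs[reference_tap]
```

The two tap selections can differ, and an encoder could change them per image, block, or data set.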
Abstract:
Improved image coding techniques for high-bit-depth images include deriving two or more separate lower-bit-depth main and extension images from one high-bit-depth source image, and then encoding them separately as lower-bit-depth images. At a decoder, the separate main and extension images are decoded and then combined to create a single reconstructed source image. The improved techniques may use lossless or lossy codecs, and may provide some backward compatibility with legacy decoders.
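For example, a 10-bit source can be split into an 8-bit main image carrying the most significant bits and a 2-bit extension image carrying the rest; a legacy decoder can display the main image alone. The sketch below assumes a lossless bit-plane split, and the names are illustrative.

```python
MAIN_BITS = 8  # assumed bit depth of the legacy-viewable main image

def split_high_bit_depth(pixels, source_bits=10):
    """Derive a main image (MSBs) and an extension image (remaining LSBs)
    from high-bit-depth samples."""
    shift = source_bits - MAIN_BITS
    main = [p >> shift for p in pixels]
    ext = [p & ((1 << shift) - 1) for p in pixels]
    return main, ext

def combine(main, ext, source_bits=10):
    """Decoder side: recombine the decoded main and extension images."""
    shift = source_bits - MAIN_BITS
    return [(m << shift) | e for m, e in zip(main, ext)]
```

With a lossless codec for both layers, the recombination reconstructs the source exactly; with lossy codecs the split would need to tolerate layer distortion.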
Abstract:
Improved still image encoding techniques may include generating first items of a log and second items of coded tile-layer image data, where the log enumerates coding quality levels contained in the second items. The second items may include independent-type items, derived-type items of identity variant, and derived-type items with other variant(s). Such encoding techniques may provide for progressive decoding of images, and may provide for spatially variable encoding quality levels.
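A minimal sketch of the log/item relationship might look like the following, where the log enumerates the quality levels available per tile and a progressive decoder selects items up to a target level. The structure and names are assumptions for illustration, not the disclosed format.

```python
def build_log(tile_layers):
    """First items: a log enumerating the coding quality levels available
    per tile. tile_layers maps tile_id -> iterable of coded quality levels."""
    return [{"tile": t, "levels": sorted(q)} for t, q in sorted(tile_layers.items())]

def select_items(log, target_quality):
    """Progressive decoding: per tile, take every coded layer up to the
    target quality level. Tiles may expose different maximum levels,
    giving spatially variable quality."""
    return {item["tile"]: [q for q in item["levels"] if q <= target_quality]
            for item in log}
```

Because the log is read first, a decoder can stop fetching tile-layer items once its quality target is met.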
Abstract:
Techniques are disclosed for generating virtual reference frames that may be used for prediction of input video frames. The virtual reference frames may be derived from already-coded reference frames and thereby incur reduced signaling overhead. Moreover, signaling of virtual reference frames may be avoided until an encoder selects the virtual reference frame as a prediction reference for a current frame. In this manner, the techniques proposed herein contribute to improved coding efficiencies.
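As an illustration of the idea, a virtual reference might be derived by averaging two already-coded references, with its derivation signaled only when the encoder actually selects it. The averaging rule and SAD-based selection below are assumptions for the sketch, not the disclosed derivation.

```python
def synthesize_virtual_ref(ref_a, ref_b):
    """Derive a virtual reference from two already-coded references;
    no new pixel data has to be transmitted for it."""
    return [(a + b) // 2 for a, b in zip(ref_a, ref_b)]

def sad(a, b):
    """Sum of absolute differences, a common prediction-cost measure."""
    return sum(abs(x - y) for x, y in zip(a, b))

def choose_reference(current, refs):
    """Pick the cheapest prediction reference for the current frame and
    report whether the virtual frame (and hence its signaling) is needed."""
    candidates = dict(refs)
    candidates["virtual"] = synthesize_virtual_ref(refs["ref0"], refs["ref1"])
    best = min(candidates, key=lambda name: sad(current, candidates[name]))
    return best, best == "virtual"
```

Signaling the virtual frame only on selection is what avoids overhead when an ordinary reference predicts the current frame well enough.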
Abstract:
A filtering system for video coders and decoders is disclosed that includes: a feature detector having an input for samples reconstructed from coded video data representing a color component of source video, and an output for data identifying a feature recognized therefrom; an offset calculator having an input for the feature identification data from the feature detector, and an output for a filter offset; and a filter having inputs for the filter offset from the offset calculator and for the reconstructed samples, and an output for filtered samples. The filtering system is expected to improve the operation of video coder/decoder filtering by selecting filter offsets from analysis of recovered video data in the same color plane as the samples that will be filtered.
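The detector, offset calculator, and filter stages can be sketched as follows for a single row of reconstructed samples in one color plane. The three feature classes and the fixed offset table are illustrative stand-ins, not the disclosed design.

```python
def detect_feature(left, center, right):
    """Feature detector: classify a reconstructed sample against its
    horizontal neighbors in the same color plane."""
    if center < left and center < right:
        return "valley"
    if center > left and center > right:
        return "peak"
    return "flat"

# Offset calculator output, collapsed to a fixed per-feature table for the sketch.
OFFSETS = {"valley": +2, "peak": -2, "flat": 0}

def filter_row(samples):
    """Filter: add the per-feature offset to each interior sample."""
    out = list(samples)
    for i in range(1, len(samples) - 1):
        feature = detect_feature(samples[i - 1], samples[i], samples[i + 1])
        out[i] = samples[i] + OFFSETS[feature]
    return out
```

The offsets nudge isolated valleys up and peaks down, smoothing reconstruction noise while leaving flat regions untouched.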
Abstract:
A method and system are disclosed for adaptively mixing video components with graphics/UI components, where the video components and graphics/UI components may be of different types, e.g., different dynamic ranges (such as HDR and SDR) and/or color gamuts (such as WCG). The mixing may result in a frame optimized for a display device's color space, ambient conditions, viewing distance and angle, etc., while accounting for characteristics of the received data. The method includes receiving video and graphics/UI elements, converting the video to HDR and/or WCG, performing statistical analysis of the received data and any additional applicable rendering information, and assembling a video frame from the received components based on the statistical analysis. The assembled video frame may be matched to a color space and displayed. The video data and graphics/UI data may have, or be adjusted to have, the same white point and/or primaries.
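A much-simplified sketch of the mixing step follows, assuming linear-light values, a fixed SDR reference white of 203 nits, and per-pixel alpha; a real implementation would also handle transfer functions, gamut mapping, and the statistical analysis described above. All names and constants here are assumptions.

```python
SDR_WHITE_NITS = 203.0  # assumed SDR reference white level

def sdr_to_hdr_nits(sdr_linear):
    """Lift SDR linear-light values (0..1) into absolute HDR nits so the
    graphics/UI and HDR video share one working space."""
    return [v * SDR_WHITE_NITS for v in sdr_linear]

def mix(video_nits, graphics_nits, alpha):
    """Per-pixel alpha blend of graphics/UI over video, both in nits."""
    return [a * g + (1.0 - a) * v
            for v, g, a in zip(video_nits, graphics_nits, alpha)]
```

Converting both sources into one absolute-luminance space before blending is what lets SDR UI elements sit naturally over HDR video.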
Abstract:
Offset values, such as Sample Adaptive Offset (SAO) values in video coding standards such as the High Efficiency Video Coding (HEVC) standard, may be improved by performing calculations and operations that improve the precision of these values without materially increasing the signaling overhead needed to transmit them. Such calculations and operations may include applying a quantization factor to a video sample and at least some of its neighbors, comparing the quantized values, and classifying the video sample as a minimum, a maximum, or one of various types of edges based on the comparison. Other sample-range, offset-mode, and/or offset-precision parameters may be calculated and transmitted as metadata to improve the precision of the offset values.
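The quantize-then-compare classification can be sketched as follows; the right-shift quantization, the category names, and the edge rules are illustrative rather than the exact HEVC SAO edge-offset rule.

```python
def classify(sample, left, right, qshift=2):
    """Apply a quantization factor (here a right shift) to a sample and its
    neighbors, compare the quantized values, and classify the sample as a
    minimum, a maximum, or one of two edge types."""
    s, l, r = sample >> qshift, left >> qshift, right >> qshift
    if s < l and s < r:
        return "local_min"
    if s > l and s > r:
        return "local_max"
    if (s < l and s == r) or (s == l and s < r):
        return "concave_edge"
    if (s > l and s == r) or (s == l and s > r):
        return "convex_edge"
    return "none"
```

Quantizing before comparison makes the classification robust to small reconstruction noise, so offsets are applied only where a genuine extremum or edge exists.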