Abstract:
In a method to improve the coding efficiency of high-dynamic range (HDR) images, a decoder parses sequence parameter set (SPS) data from an input coded bitstream to detect that an HDR extension syntax structure is present in the parsed SPS data. It extracts from the HDR extension syntax structure post-processing information that includes one or more of a color space enabled flag, a color enhancement enabled flag, an adaptive reshaping enabled flag, a dynamic range conversion flag, a color correction enabled flag, or an SDR viewable flag. It decodes the input bitstream to generate a preliminary output decoded signal, and generates a second output signal based on the preliminary output signal and the post-processing information.
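A minimal sketch of how such an HDR-extension structure could be parsed, assuming (purely for illustration) that each listed flag is coded as a single bit in the order given; the bit-reader class and the syntax-element names are hypothetical, not the method's actual syntax.

# Hypothetical sketch of parsing HDR-extension flags from SPS payload bits.
# The flag order and one-bit coding are assumptions, not the actual syntax.

class BitReader:
    """Reads individual bits (MSB first) from a bytes object."""
    def __init__(self, data: bytes):
        self.data = data
        self.pos = 0  # bit position

    def read_flag(self) -> bool:
        byte_index, bit_index = divmod(self.pos, 8)
        self.pos += 1
        return bool((self.data[byte_index] >> (7 - bit_index)) & 1)

def parse_sps_hdr_extension(reader: BitReader) -> dict:
    # Each entry is a one-bit enabled/present flag (assumed coding).
    return {
        "color_space_enabled_flag": reader.read_flag(),
        "color_enhancement_enabled_flag": reader.read_flag(),
        "adaptive_reshaping_enabled_flag": reader.read_flag(),
        "dynamic_range_conversion_flag": reader.read_flag(),
        "color_correction_enabled_flag": reader.read_flag(),
        "sdr_viewable_flag": reader.read_flag(),
    }

# Example: the first six bits of 0b11010100 carry the six flags above.
flags = parse_sps_hdr_extension(BitReader(bytes([0b11010100])))
print(flags)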
Abstract:
Given existing color remapping information (CRI) messaging variables, methods are described to communicate color volume information for a targeted display to a downstream receiver. Bits 7:0 of the 32-bit colour_remap_id are used to extract a first value. If the first value is not a reserved value, then the first value is used as an index to a look-up table to generate a first luminance value for a targeted display, otherwise a second value is generated based on bits 31:9 in the colour_remap_id messaging variable and the first luminance value for a targeted display is generated based on the second value. The methods may be applied to communicate via CRI messaging a minimum luminance value, a maximum luminance value, and color primaries information of the targeted display.
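One way to read this bit layout is sketched below; the reserved-value set, the look-up table entries, and the mapping applied to the second value are illustrative assumptions, with only the bit-field positions taken from the description above.

# Illustrative sketch of deriving a targeted-display luminance from colour_remap_id.
# The reserved set, LUT entries, and the formula applied to the second value
# are placeholders; only the bit-field positions follow the description above.

RESERVED_FIRST_VALUES = {0xFF}                                   # assumed reserved code(s)
LUMINANCE_LUT_NITS = {0: 100.0, 1: 600.0, 2: 1000.0, 3: 4000.0}  # assumed table

def targeted_display_luminance(colour_remap_id: int) -> float:
    first_value = colour_remap_id & 0xFF                 # bits 7:0
    if first_value not in RESERVED_FIRST_VALUES:
        # First value indexes a predefined luminance look-up table.
        return LUMINANCE_LUT_NITS[first_value]
    # Otherwise derive a second value from bits 31:9 and map it to luminance
    # (here simply treated as luminance in units of 1 cd/m^2 -- an assumption).
    second_value = (colour_remap_id >> 9) & 0x7FFFFF
    return float(second_value)

print(targeted_display_luminance(0x00000002))            # LUT path -> 1000.0
print(targeted_display_luminance((4000 << 9) | 0xFF))    # explicit path -> 4000.0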
Abstract:
Methods, processes, and systems are presented for adaptive loop filtering in coding and decoding high dynamic range (HDR) video. Given an input image block, its luminance information may be used to adapt one or more parameters of adaptive loop filtering and compute gradient and directionality information, activity information, a classification index, and adaptive-loop-filtering coefficients.
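As a rough illustration of this kind of luminance-adaptive classification, the sketch below computes 1-D Laplacian-style gradients and block activity and scales the activity normalization by the block's mean luma; the scaling rule and constants are hypothetical, not the method's actual adaptation.

import numpy as np

# Sketch of luminance-adaptive block classification for adaptive loop filtering.
# Gradients/activity follow the familiar 1-D Laplacian style; the luminance-based
# adaptation (scaling the activity normalization) is a hypothetical example.

def classify_block(block: np.ndarray, peak: float = 1023.0) -> tuple[int, int]:
    """Return (directionality, quantized activity class) for a luma block."""
    v = np.abs(2 * block[1:-1, :] - block[:-2, :] - block[2:, :]).sum()   # vertical
    h = np.abs(2 * block[:, 1:-1] - block[:, :-2] - block[:, 2:]).sum()   # horizontal
    activity = v + h

    directionality = 0
    if max(v, h) > 2 * min(v, h):                  # strongly directional block
        directionality = 1 if v > h else 2

    # Hypothetical adaptation: brighter blocks get a larger normalization,
    # so the same activity maps to a lower class in bright regions.
    mean_luma = float(block.mean())
    scale = 1.0 + mean_luma / peak
    activity_class = min(int(activity / (scale * block.size)), 4)
    return directionality, activity_class

rng = np.random.default_rng(0)
print(classify_block(rng.integers(0, 1024, size=(8, 8)).astype(np.float64)))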
Abstract:
An intermediate bitstream generated by a first-stage transcoding system from an initial transmission package is received. The intermediate bitstream comprises base layer (BL) and enhancement layer (EL) signals. The combination of the BL and EL signals of the intermediate bitstream represents compressed wide dynamic range images. The BL signal of the intermediate bitstream alone represents compressed standard dynamic range images. A targeted transmission package is generated based on the intermediate bitstream. The targeted transmission package comprises BL and EL signals. The BL signal of the targeted transmission package may be directly transcoded from the BL signal of the intermediate bitstream alone.
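A structural sketch of the second transcoding stage under these constraints is shown below; the classes and the pass-through transcode are placeholders, the point being only that the targeted BL is derived from the intermediate BL alone, independently of the EL.

from dataclasses import dataclass

# Structural sketch of the second transcoding stage: the targeted package's BL is
# produced from the intermediate BL alone, independently of the EL. The classes
# and the pass-through "transcode" below are illustrative placeholders.

@dataclass
class Bitstream:
    base_layer: bytes         # SDR-representable on its own
    enhancement_layer: bytes  # combined with BL, represents the WDR images

def transcode_base_layer(bl: bytes, target_bitrate_kbps: int) -> bytes:
    # Placeholder for a real BL-only transcode (e.g., re-quantization).
    return bl  # pass-through for illustration

def build_targeted_package(intermediate: Bitstream,
                           target_bitrate_kbps: int) -> Bitstream:
    # BL of the targeted package depends only on the intermediate BL.
    new_bl = transcode_base_layer(intermediate.base_layer, target_bitrate_kbps)
    # EL may be carried through (or re-encoded) separately.
    return Bitstream(base_layer=new_bl,
                     enhancement_layer=intermediate.enhancement_layer)

pkg = build_targeted_package(Bitstream(b"BL-data", b"EL-data"), 6000)
print(len(pkg.base_layer), len(pkg.enhancement_layer))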
Abstract:
A high resolution 3D image may be encoded into a first multiplexed image frame and a second multiplexed image frame in a base layer (BL) video signal and an enhancement layer (EL) video signal. The first multiplexed image frame may comprise horizontal high resolution image data for both eyes, while the second multiplexed image frame may comprise vertical high resolution image data for both eyes. Encoded symmetric-resolution image data for the 3D image may be distributed to a wide variety of devices for 3D image processing and rendering. A recipient device may reconstruct a reduced resolution 3D image from either the first multiplexed image frame or the second multiplexed image frame. A recipient device may also reconstruct a high resolution 3D image by combining high resolution image data from both the first multiplexed image frame and the second multiplexed image frame.
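A sketch of the two reconstruction paths, assuming frame 1 carries full-horizontal/half-vertical resolution per eye and frame 2 the opposite, and that full resolution is recovered by averaging the two upsampled views; the packing and the averaging rule are assumptions, not the encoded format's actual combination step.

import numpy as np

# Sketch of reconstructing one eye's view from the two multiplexed frames.
# Assumptions: frame 1 holds full-horizontal / half-vertical data per eye,
# frame 2 holds half-horizontal / full-vertical data, and full resolution is
# recovered by averaging the two nearest-neighbour upsampled views.

def upsample_rows(img: np.ndarray) -> np.ndarray:
    return np.repeat(img, 2, axis=0)      # restore vertical resolution

def upsample_cols(img: np.ndarray) -> np.ndarray:
    return np.repeat(img, 2, axis=1)      # restore horizontal resolution

def reconstruct_reduced(eye_from_frame1: np.ndarray) -> np.ndarray:
    # Reduced-resolution path: only one multiplexed frame is decoded.
    return upsample_rows(eye_from_frame1)

def reconstruct_full(eye_from_frame1: np.ndarray,
                     eye_from_frame2: np.ndarray) -> np.ndarray:
    # Full-resolution path: combine complementary detail from both frames.
    return 0.5 * (upsample_rows(eye_from_frame1) + upsample_cols(eye_from_frame2))

h, w = 540, 1920                          # half-vertical-resolution left view
left_f1 = np.random.rand(h, w)            # from first multiplexed frame
left_f2 = np.random.rand(2 * h, w // 2)   # from second multiplexed frame
print(reconstruct_reduced(left_f1).shape)          # (1080, 1920)
print(reconstruct_full(left_f1, left_f2).shape)    # (1080, 1920)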
Abstract:
In high-dynamic range (HDR) coding, content mapping translates an HDR signal to a signal of lower dynamic range. Coding and prediction in layered coding of HDR signals are improved if content mapping utilizes signal color ranges beyond those defined by a traditional electro-optical transfer function (EOTF) or its inverse (IEOTF or OETF). Extended EOTF and IEOTF functions are derived based on their mirror points. Examples of extended EOTFs are given for ITU-R BT.1886 and SMPTE ST 2084.
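As a minimal illustration of such an extension (not the specific derivation described here), the sketch below takes a simplified BT.1886 EOTF (pure gamma 2.4 on [0, 1]) and extends it by point-mirroring about x = 0, together with the matching extended inverse.

# Sketch of extending a simplified BT.1886 EOTF (L = V**2.4 on [0, 1]) beyond its
# nominal domain by point-mirroring about x = 0, so content mapping and prediction
# can operate on out-of-range values. The mirror point (0 here) and the simplified
# EOTF are illustrative choices, not the specific derivation of the method.

GAMMA = 2.4

def bt1886_eotf(v: float) -> float:
    return v ** GAMMA                      # nominal domain: 0 <= v <= 1

def extended_eotf(v: float) -> float:
    # Point-symmetric extension about v = 0: f_ext(-v) = -f_ext(v).
    return bt1886_eotf(v) if v >= 0.0 else -bt1886_eotf(-v)

def extended_ieotf(l: float) -> float:
    # Matching extension of the inverse EOTF.
    return l ** (1.0 / GAMMA) if l >= 0.0 else -((-l) ** (1.0 / GAMMA))

for v in (-0.1, 0.0, 0.5, 1.1):
    l = extended_eotf(v)
    print(f"V={v:+.2f}  L={l:+.4f}  round-trip V={extended_ieotf(l):+.4f}")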
Abstract:
Video data with enhanced dynamic range (EDR) are color graded for a first and a second reference display with different dynamic range characteristics to generate a first color-graded output, a second color-graded output, and associated first and second sets of metadata. The first color-graded output and the two sets of metadata are transmitted from an encoder to a decoder to be displayed on a target display, which may be different than the second reference display. At the receiver, a decoder interpolates between the first and second sets of metadata to generate a third set of metadata, which drives the display management process for displaying the received video data on the target display. The second set of metadata may be represented as delta metadata values from the first set of metadata values.
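A sketch of the receiver-side interpolation, assuming each metadata field is interpolated linearly with a weight derived from the target display's peak luminance relative to the two reference displays, and that the second set arrives as deltas from the first; the weighting rule and field names are assumptions.

# Sketch of receiver-side metadata interpolation for display management.
# Assumptions: each metadata field is interpolated linearly, the interpolation
# weight is derived from peak luminance, and set 2 arrives as deltas from set 1.

def interpolate_metadata(set1: dict, delta2: dict,
                         peak1_nits: float, peak2_nits: float,
                         target_peak_nits: float) -> dict:
    # Reconstruct the second metadata set from the transmitted deltas.
    set2 = {k: set1[k] + delta2[k] for k in set1}
    # Weight based on where the target display sits between the two references.
    w = (target_peak_nits - peak1_nits) / (peak2_nits - peak1_nits)
    w = min(max(w, 0.0), 1.0)              # clamp to the reference range
    return {k: (1.0 - w) * set1[k] + w * set2[k] for k in set1}

set1 = {"min_luma": 0.005, "max_luma": 100.0, "mid_tone": 12.0}    # graded for 100 nits
delta2 = {"min_luma": 0.000, "max_luma": 3900.0, "mid_tone": 8.0}  # deltas to 4000-nit grade
print(interpolate_metadata(set1, delta2, 100.0, 4000.0, 1000.0))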
Abstract:
A sequence of enhanced dynamic range (EDR) images and a sequence of standard dynamic range (SDR) images are encoded using a backwards-compatible SDR high-definition (HD) base layer and one or more enhancement layers. The EDR and SDR video signals may be of the same resolution (e.g., HD) or of different resolutions (e.g., 4K and HD) and are encoded using a dual-view-dual-layer (DVDL) encoder to generate a coded base layer (BL) and a coded enhancement layer (EL). The DVDL encoder includes a reference processing unit (RPU) which is adapted to compute a reference stream based on the coded BL stream. The RPU operations include post-processing, normalization, inverse normalization, and image registration. Decoders for decoding the coded BL and EL streams to generate a backwards-compatible 2D SDR stream and additional 2D or 3D SDR or EDR streams are also described.
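For illustration only, the sketch below chains stand-ins for the listed RPU operations on a decoded BL picture; the individual operations (box filtering, min/max normalization, integer-shift registration) are simple placeholders, not the specific algorithms of the DVDL system.

import numpy as np

# Sketch of the kind of reference-processing chain an RPU might apply to a decoded
# BL picture before it is used to predict the EL. The operations below are
# illustrative stand-ins, not the patent's specific algorithms.

def post_process(bl: np.ndarray) -> np.ndarray:
    # Light smoothing as a stand-in post-process.
    padded = np.pad(bl, 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] + bl) / 5.0

def normalize(img, lo, hi):
    return (img - lo) / (hi - lo)           # map SDR code range to [0, 1]

def inverse_normalize(img, lo, hi):
    return img * (hi - lo) + lo             # map [0, 1] to the EDR target range

def register(img, dx, dy):
    return np.roll(img, shift=(dy, dx), axis=(0, 1))  # integer-shift registration

bl = np.random.randint(16, 236, size=(4, 6)).astype(float)
ref = inverse_normalize(normalize(post_process(register(bl, 1, 0)), 16, 235), 0.0, 4000.0)
print(ref.shape)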
Abstract:
Methods and systems for frame rate scalability are described. Support is provided for input and output video sequences with variable frame rate and variable shutter angle across scenes, or for input video sequences with fixed input frame rate and input shutter angle, while allowing a decoder to generate a video output at a different output frame rate and shutter angle than the corresponding input values. Techniques that allow a decoder to decode, with reduced computation, a specific backward-compatible target frame rate and shutter angle among those allowed are also presented.
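As a worked example of the frame-rate and shutter-angle relationship such schemes rely on (the normative signaling is not shown): combining k consecutive frames of an input at rate R_in with shutter angle a_in into one output frame at rate R_out rescales the effective shutter angle to a_out = a_in * k * R_out / R_in.

# Worked example of the frame-rate vs. shutter-angle relationship. Combining k
# consecutive input frames into one output frame rescales the effective shutter
# angle as: angle_out = angle_in * k * rate_out / rate_in.

def output_shutter_angle(angle_in_deg: float, rate_in_fps: float,
                         rate_out_fps: float, frames_combined: int) -> float:
    return angle_in_deg * frames_combined * rate_out_fps / rate_in_fps

# 120 fps capture with a 360-degree shutter, decoded/rendered at 24 fps:
for k in range(1, 6):
    print(f"average {k} frame(s): {output_shutter_angle(360, 120, 24, k):.0f} degrees")
# -> 72, 144, 216, 288, 360 degrees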