Abstract:
A processor for video coding receives a full-frame-rate (FFR) HDR video signal and a corresponding FFR SDR video signal. An encoder generates a scalable bitstream that allows decoders to generate half-frame-rate (HFR) SDR, FFR SDR, HFR HDR, or FFR HDR signals. Given odd and even frames of the input FFR SDR signal, the scalable bitstream combines a base layer of coded even SDR frames with an enhancement layer of coded packed frames, where each packed frame includes a downscaled odd SDR frame, a downscaled even HDR residual frame, and a downscaled odd HDR residual frame. In an alternative implementation, the scalable bitstream combines four signal layers: a base layer of even SDR frames, an enhancement layer of odd SDR frames, a base layer of even HDR residual frames, and an enhancement layer of odd HDR residual frames. Corresponding decoder architectures are also presented.
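As a concrete illustration of the enhancement-layer packing, the sketch below assembles one packed frame from the three downscaled components. The 2x downscaling factors and the top/bottom tile layout are assumptions for illustration, one possible arrangement rather than the claimed one.

```python
import numpy as np

def box_downscale(frame: np.ndarray, fy: int, fx: int) -> np.ndarray:
    """Box-filter downscale by integer factors (placeholder resampler)."""
    h, w = frame.shape
    return frame.reshape(h // fy, fy, w // fx, fx).mean(axis=(1, 3))

def pack_enhancement_frame(odd_sdr, even_hdr_res, odd_hdr_res):
    """Tile the three downscaled components into one H x W packed frame.

    Layout assumption: the odd SDR frame, downscaled vertically by 2, fills
    the top half; the two HDR residuals, each downscaled by 2 in both
    dimensions, share the bottom half side by side.
    """
    h, w = odd_sdr.shape
    packed = np.empty((h, w), dtype=np.float32)
    packed[: h // 2, :] = box_downscale(odd_sdr, 2, 1)
    packed[h // 2 :, : w // 2] = box_downscale(even_hdr_res, 2, 2)
    packed[h // 2 :, w // 2 :] = box_downscale(odd_hdr_res, 2, 2)
    return packed
```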
Abstract:
Methods are described to communicate source color volume information in a coded bitstream using SEI messaging. Such data include at least the minimum, maximum, and average luminance values in the source data, plus optional data that may include the x and y chromaticity coordinates of the input color primaries (e.g., red, green, and blue) of the source data, and the x and y chromaticity coordinates of the colors corresponding to the minimum, average, and maximum luminance values in the source data. Messaging data signaling an active region in each picture may also be included.
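For illustration, a minimal sketch of the kind of payload such an SEI message could carry is shown below; the field names and types are assumptions, not the normative bitstream syntax.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Chromaticity = Tuple[float, float]  # CIE 1931 (x, y)

@dataclass
class SourceColorVolumeSEI:
    # Required: luminance statistics of the source, in cd/m^2.
    min_luminance: float
    max_luminance: float
    avg_luminance: float
    # Optional: (x, y) chromaticities of the source primaries (R, G, B).
    primaries: Optional[Tuple[Chromaticity, Chromaticity, Chromaticity]] = None
    # Optional: (x, y) chromaticities of the colors at the min/avg/max
    # luminance points of the source.
    min_luma_chromaticity: Optional[Chromaticity] = None
    avg_luma_chromaticity: Optional[Chromaticity] = None
    max_luma_chromaticity: Optional[Chromaticity] = None
    # Optional: active region within each picture (left, top, right, bottom).
    active_region: Optional[Tuple[int, int, int, int]] = None
```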
Abstract:
Methods and apparatus are described for the transmission of volumetric images in the multiplane image (MPI) format. According to an example embodiment, texture and alpha layers of multiplane images are packed, as tiles, into a sequence of video frames. The sequence of video frames is then compressed to generate a video bitstream, which is transmitted together with a metadata bitstream specifying at least the parameters of the packing arrangement for the tiles in the sequence of video frames. Example packing arrangements include various selectable spatial and temporal arrangements for texture layers, alpha layers, and camera views. In some examples, the metadata bitstream is implemented using an SEI message and includes parameters selected from the group consisting of a size of the reference view, the number of layers in the multiplane image, the number of simultaneous views, one or more characteristics of the packing arrangement, layer merging information, dynamic range adjustment information, and reference view information.
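A hedged sketch of the listed metadata parameters follows; the field names, types, and defaults are illustrative assumptions, not the normative SEI syntax.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MPIPackingSEI:
    ref_view_size: Tuple[int, int]    # (width, height) of the reference view
    num_layers: int                   # MPI depth layers per view
    num_views: int                    # simultaneous camera views in the frame
    # Characteristics of the packing arrangement (illustrative encoding):
    texture_alpha_arrangement: str    # e.g., "side_by_side" or "interleaved"
    tiles_per_row: int                # spatial tiling of layer tiles
    # Layer merging information: pairs of layer indices merged into one tile.
    merged_layers: List[Tuple[int, int]] = field(default_factory=list)
    # Dynamic range adjustment used to fit texture/alpha into video range.
    dynamic_range_scale: float = 1.0
    dynamic_range_offset: float = 0.0
```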
Abstract:
Given a sequence of images in a first codeword representation, methods, processes, and systems are presented for integrating reshaping into a next-generation video codec for encoding and decoding the images, wherein reshaping allows part of the images to be coded in a second codeword representation which allows more efficient compression than the first codeword representation. A variety of architectures are discussed, including: an out-of-loop reshaping architecture, an in-loop reshaping architecture for intra pictures only, an in-loop architecture for prediction residuals, and a hybrid in-loop reshaping architecture. Syntax methods for signaling reshaping parameters and image-encoding methods optimized with respect to reshaping are also presented.
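To make the codeword-remapping idea concrete, the sketch below builds a piecewise-linear forward reshaping lookup table over 10-bit codewords (allocating more codewords to some ranges than others) and its approximate inverse. The construction and parameters are illustrative only; the actual reshaper design and signaling are more elaborate.

```python
import numpy as np

def build_forward_lut(pivots, slopes, bit_depth=10):
    """Piecewise-linear map from input to reshaped codewords.

    `pivots` are segment start codewords; `slopes` scale codewords within
    each segment (values < 1 compress sparsely used ranges).
    """
    n = 1 << bit_depth
    lut = np.empty(n, dtype=np.float64)
    out = 0.0
    for i, p in enumerate(pivots):
        end = pivots[i + 1] if i + 1 < len(pivots) else n
        for c in range(p, end):
            lut[c] = out
            out += slopes[i]
    return np.clip(np.round(lut), 0, n - 1).astype(np.int32)

def invert_lut(fwd):
    """Approximate inverse: for each reshaped codeword, the nearest input."""
    n = len(fwd)
    inv = np.searchsorted(fwd, np.arange(n))
    return np.clip(inv, 0, n - 1).astype(np.int32)

# Illustrative: compress dark and bright ranges, expand the mid-range.
fwd = build_forward_lut(pivots=[0, 256, 768], slopes=[0.5, 1.5, 0.5])
inv = invert_lut(fwd)
```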
Abstract:
Methods, systems, and bitstream syntax are described for metadata signaling and conversion for film-grain encoding and synthesis. Given a bitstream with MPEG film-grain SEI messaging, for each picture, a processor detects whether the film-grain model is suitable for film-grain synthesis using the AV1 autoregressive noise model with additive blending, and if so: transcodes the MPEG film-grain SEI parameters to corresponding AV1 film-grain parameters, synthesizes the film grain, and adds it to the decoded video pictures according to the AV1 specification. An example process for translating AV1 parameters to MPEG film-grain SEI messaging is also provided.
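A hedged sketch of the per-picture flow is shown below; the compatibility check, parameter translation, and grain synthesizer are placeholders with illustrative field names (the real mappings are defined by the MPEG and AV1 specifications).

```python
import numpy as np

def is_av1_compatible(sei) -> bool:
    """Placeholder check: accept only SEI whose grain model can be expressed
    as AV1's autoregressive model with additive blending (illustrative)."""
    return sei.get("model_id") == 0 and sei.get("blending_mode") == 0

def transcode_sei_to_av1(sei) -> dict:
    """Placeholder parameter translation; the real MPEG-to-AV1 mapping is
    spec-defined and considerably more involved."""
    return {"grain_scale": sei.get("intensity", 1.0)}

def synthesize_av1_grain(params, shape) -> np.ndarray:
    """Placeholder synthesizer: white noise stands in for AR-filtered grain."""
    rng = np.random.default_rng(0)
    return params["grain_scale"] * rng.standard_normal(shape)

def process_picture(picture: np.ndarray, sei) -> np.ndarray:
    """Check model compatibility, transcode, synthesize, and add the grain
    to the decoded picture (additive blending)."""
    if sei is None or not is_av1_compatible(sei):
        return picture
    params = transcode_sei_to_av1(sei)
    return picture + synthesize_av1_grain(params, picture.shape)
```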
Abstract:
Methods and systems for frame rate scalability are described. Support is provided for input and output video sequences with variable frame rate and variable shutter angle across scenes, or for input video sequences with fixed input frame rate and input shutter angle, while allowing a decoder to generate a video output at a different output frame rate and shutter angle than the corresponding input values. Techniques that allow a decoder to more efficiently decode a specific backward-compatible target frame rate and shutter angle among those allowed are also presented.
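As a worked illustration of the frame-rate/shutter-angle relationship implied above (an assumption consistent with the description, not text from the patent): shutter angle S relates exposure time e and frame rate R by S = 360·e·R, so a decoder that averages n consecutive frames of a 360-degree-shutter input emulates an output shutter angle of S_out = n·S_in·R_out/R_in.

```python
def output_shutter_angle(n_frames: int, s_in: float,
                         r_in: float, r_out: float) -> float:
    """Emulated output shutter angle when a decoder averages n consecutive
    frames of a 360-degree-shutter input (illustrative model)."""
    return n_frames * s_in * r_out / r_in

# 120 fps at 360 degrees in, 24 fps out: averaging 1..5 frames emulates
# shutter angles of 72, 144, 216, 288, and 360 degrees.
for n in range(1, 6):
    print(n, output_shutter_angle(n, s_in=360.0, r_in=120.0, r_out=24.0))
```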
Abstract:
In a method to improve backward compatibility when decoding high-dynamic-range images coded in a wide color gamut (WCG) space that may not be compatible with legacy color spaces, hue and/or saturation values of images in an image database are computed for both a legacy color space (say, YCbCr-gamma) and a preferred WCG color space (say, IPT-PQ). Based on a cost function, a reshaped color space is computed so that the distance between the hue values in the legacy color space and rotated hue values in the preferred color space is minimized. HDR images are coded in the reshaped color space. Legacy devices can still decode standard dynamic range images assuming they are coded in the legacy color space, while updated devices can use color reshaping information to decode HDR images in the preferred color space at full dynamic range.
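The sketch below illustrates one way such a hue-alignment cost could be minimized: a grid search over a single rotation of the hue angle in the preferred space. The single-angle rotation and the mean-distance cost are simplifying assumptions, not the patented cost function.

```python
import numpy as np

def angular_distance(a, b):
    """Smallest absolute difference between two hue angles, in radians."""
    d = np.mod(a - b + np.pi, 2 * np.pi) - np.pi
    return np.abs(d)

def best_hue_rotation(legacy_hues, wcg_hues, num_candidates=360):
    """Grid-search the rotation minimizing the mean hue distance between
    legacy-space hues and rotated preferred-space hues."""
    candidates = np.linspace(-np.pi, np.pi, num_candidates, endpoint=False)
    costs = [angular_distance(legacy_hues, wcg_hues + t).mean()
             for t in candidates]
    return candidates[int(np.argmin(costs))]

# Toy usage: a simulated systematic hue offset of 0.3 rad is recovered.
rng = np.random.default_rng(0)
wcg = rng.uniform(-np.pi, np.pi, 10000)
print(best_hue_rotation(wcg + 0.3, wcg))   # ~0.3
```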
Abstract:
Implementations are provided that relate, for example, to view tiling in video encoding and decoding. A particular method includes accessing a video picture that includes multiple pictures combined into a single picture (826), accessing information indicating how the multiple pictures in the accessed video picture are combined (806, 808, 822), decoding the video picture to provide a decoded representation of at least one of the multiple pictures (824, 826), and providing the accessed information and the decoded video picture as output (824, 826). Some other implementations format or process the information that indicates how multiple pictures included in a single video picture are combined into the single video picture, and format or process an encoded representation of the combined multiple pictures.
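For illustration, the sketch below shows one possible shape for the information indicating how pictures are combined, and how a decoder-side step might extract a constituent picture from the decoded frame; the field names and crop/flip semantics are assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class TileInfo:
    view_id: int
    rect: Tuple[int, int, int, int]   # (x, y, width, height) in the frame
    flipped: bool = False             # e.g., for mirrored stereo packing

def extract_view(decoded: np.ndarray, tiles: List[TileInfo],
                 view_id: int) -> np.ndarray:
    """Crop the requested constituent picture out of the decoded frame."""
    for t in tiles:
        if t.view_id == view_id:
            x, y, w, h = t.rect
            view = decoded[y : y + h, x : x + w]
            return view[:, ::-1] if t.flipped else view
    raise KeyError(f"no tile for view {view_id}")
```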
Abstract:
The precision of up-sampling operations in a layered coding system is preserved when operating on video data with high bit depth. In response to bit-depth requirements of the video coding or decoding system, scaling and rounding parameters are determined for a separable up-scaling filter. Input data are first filtered across a first spatial direction using a first rounding parameter to generate first up-sampled data. First intermediate data are generated by scaling the first up-sampled data using a first shift parameter. The first intermediate data are then filtered across a second spatial direction using a second rounding parameter to generate second up-sampled data. Second intermediate data are generated by scaling the second up-sampled data using a second shift parameter. Final up-sampled data may be generated by clipping the second intermediate data.
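This description maps directly onto integer arithmetic. Below is a minimal sketch of a 2x separable up-sampler with explicit rounding and shift parameters per direction; the taps, zero-insertion structure, and periodic boundary handling are illustrative, not taken from any particular codec. Note how choosing s1 = 0 keeps extra precision bits in the first intermediate data, with the second shift normalizing the overall gain at the end.

```python
import numpy as np

def upsample2x_axis(x, taps, rounding, shift, axis):
    """2x up-sample along one axis via zero-insertion + FIR filtering,
    keeping full precision until the rounding-add and right-shift."""
    up_shape = list(x.shape)
    up_shape[axis] *= 2
    up = np.zeros(up_shape, dtype=np.int64)
    sl = [slice(None)] * x.ndim
    sl[axis] = slice(0, None, 2)
    up[tuple(sl)] = x
    acc = np.zeros_like(up)
    half = len(taps) // 2
    for k, t in enumerate(taps):
        # Periodic boundary via np.roll keeps the sketch short.
        acc += t * np.roll(up, k - half, axis=axis)
    return (acc + rounding) >> shift

def upsample2x(x, taps, r1, s1, r2, s2, max_val):
    # First spatial direction: filter with the first rounding parameter,
    # then scale by the first shift to get the first intermediate data.
    inter = upsample2x_axis(x.astype(np.int64), taps, r1, s1, axis=0)
    # Second spatial direction: second rounding parameter and second shift.
    out = upsample2x_axis(inter, taps, r2, s2, axis=1)
    # Final up-sampled data: clip to the target bit depth.
    return np.clip(out, 0, max_val)

# Example: 10-bit input with linear-interpolation taps [1, 2, 1]; each 2x
# stage has gain 2. Setting s1 = 0 preserves precision through the
# intermediate data; s2 = 2 (with rounding 2) normalizes the total gain.
img = np.random.default_rng(0).integers(0, 1024, size=(8, 8))
up = upsample2x(img, taps=[1, 2, 1], r1=0, s1=0, r2=2, s2=2, max_val=1023)
```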
Abstract:
Methods, systems, and bitstream syntax are described for the fusion of latent features in multi-level, end-to-end neural networks used in image and video compression. The fusion architecture may be static, or dynamic based on image characteristics (e.g., natural images versus screen-content images) or other coding parameters, such as bitrate constraints or rate-distortion optimization. A variety of multi-level fusion architectures are discussed.
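A hedged sketch of one possible (static) fusion block follows: latents from two encoder levels are brought to a common resolution, concatenated, and mixed by a 1x1 convolution. The layer shapes and the fusion rule are assumptions for illustration; the described variants also include content-adaptive (dynamic) fusion.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentFusion(nn.Module):
    """Fuse a fine-level latent with an upsampled coarse-level latent."""
    def __init__(self, ch_fine: int, ch_coarse: int, ch_out: int):
        super().__init__()
        self.mix = nn.Conv2d(ch_fine + ch_coarse, ch_out, kernel_size=1)

    def forward(self, fine, coarse):
        # Upsample the coarser latent to the finer level's spatial size,
        # then concatenate along channels and mix with a 1x1 convolution.
        coarse_up = F.interpolate(coarse, size=fine.shape[-2:], mode="nearest")
        return self.mix(torch.cat([fine, coarse_up], dim=1))

fuse = LatentFusion(ch_fine=64, ch_coarse=128, ch_out=96)
y = fuse(torch.randn(1, 64, 32, 32), torch.randn(1, 128, 16, 16))
```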