Abstract:
Methods and systems for frame rate scalability are described. Support is provided for input and output video sequences with variable frame rate and variable shutter angle across scenes, or for input video sequences with fixed input frame rate and input shutter angle, but allowing a decoder to generate a video output at a different output frame rate and shutter angle than the corresponding input values. Techniques allowing a decoder to decode more computationally-efficiently a specific backward compatible target frame rate and shutter angle among those allowed are also presented.
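The frame-rate/shutter-angle conversion described above can be illustrated with a small sketch. Assuming the input was captured at a high frame rate with a 360-degree shutter, averaging the first N frames of every group of M input frames yields an output at fps_in/M with an effective shutter angle of roughly 360·N/M degrees. The function name and the linear averaging are illustrative assumptions, not the patented method itself:

```python
import numpy as np

def convert_frame_rate(frames, fps_in, fps_out, target_shutter_angle):
    """Emulate a lower output frame rate and shutter angle by averaging
    consecutive frames (assumes a 360-degree shutter at capture)."""
    decim = fps_in // fps_out                # e.g. 120 / 24 = 5
    # number of frames to average per output frame
    n_avg = max(1, round(decim * target_shutter_angle / 360.0))
    out = []
    for i in range(0, len(frames) - decim + 1, decim):
        group = frames[i:i + n_avg]          # first n_avg frames of each group
        out.append(np.mean(group, axis=0))
    return out

# 120 fps capture -> 24 fps output:
#   target 360-degree shutter -> average all 5 frames per group
#   target  72-degree shutter -> keep only 1 frame per group
```

Decoding only the first N frames of each group is also why a decoder can produce a backward-compatible target frame rate at reduced computational cost.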
Abstract:
Given a sequence of images in a first codeword representation, methods, processes, and systems are presented for image reshaping using rate distortion optimization, wherein reshaping allows the images to be coded in a second codeword representation which allows more efficient compression than using the first codeword representation. Syntax methods for signaling reshaping parameters are also presented.
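A reshaping function of the kind described above can be sketched as a lookup table that reallocates output codewords across input-codeword bins; bins deemed more important by a rate-distortion search receive more codewords. The per-bin allocation scheme and linear in-bin ramp here are hypothetical, for illustration only:

```python
import numpy as np

def build_forward_reshaping_lut(bin_codewords, in_bits=10):
    """Build a forward reshaping LUT from per-bin codeword allocations.

    bin_codewords[b] is the number of output codewords granted to input
    bin b (e.g. chosen by a rate-distortion search).  Busy bins get more
    codewords; flat bins get fewer."""
    n_bins = len(bin_codewords)
    n_in = 1 << in_bits
    bin_size = n_in // n_bins
    lut = np.zeros(n_in)
    offset = 0.0
    for b, cw in enumerate(bin_codewords):
        start = b * bin_size
        # linear ramp inside the bin: slope = codewords per input step
        lut[start:start + bin_size] = offset + np.arange(bin_size) * (cw / bin_size)
        offset += cw
    return np.round(lut).astype(int)
```

A uniform allocation reproduces a plain bit-depth remapping; skewing the allocation toward perceptually important luminance ranges is what makes the second representation compress more efficiently.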
Abstract:
In a method to improve backwards compatibility when decoding high-dynamic range images coded in a wide color gamut (WCG) space which may not be compatible with legacy color spaces, hue and/or saturation values of images in an image database are computed for both a legacy color space (say, YCbCr-gamma) and a preferred WCG color space (say, IPT-PQ). Based on a cost function, a reshaped color space is computed so that the distance between the hue values in the legacy color space and rotated hue values in the preferred color space is minimized. HDR images are then coded in the reshaped color space. Legacy devices can still decode standard dynamic range images assuming they are coded in the legacy color space, while updated devices can use color reshaping information to decode HDR images in the preferred color space at full dynamic range.
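The cost-function minimization above can be sketched as a grid search over chroma-plane rotations. This toy version minimizes the squared angular distance between legacy and rotated WCG hues; a real system would weight the cost by statistics gathered over the image database:

```python
import numpy as np

def best_hue_rotation(legacy_hues, wcg_hues, step=0.1):
    """Grid-search the rotation (in degrees) of the WCG chroma plane
    that minimizes squared angular distance to the legacy hues.
    Illustrative cost function, not the patented one."""
    candidates = np.arange(-180.0, 180.0, step)
    best, best_cost = 0.0, np.inf
    for rot in candidates:
        # wrap angular differences into [-180, 180)
        d = (legacy_hues - (wcg_hues + rot) + 180.0) % 360.0 - 180.0
        cost = np.sum(d ** 2)
        if cost < best_cost:
            best, best_cost = rot, cost
    return best
```

The chosen rotation (plus any saturation scaling) defines the reshaped color space signaled to updated decoders.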
Abstract:
Methods to improve the quality of coding high-dynamic range (HDR) signals in the ICtCp color space are presented. Techniques are described to a) generate optimal chroma offset and scaling parameters, b) compute chroma mode decisions by optimizing mode-selection distortion metrics based on chroma saturation and hue-angle values, and c) preserve iso-luminance by maintaining in-gamut values during chroma down-sampling and chroma up-sampling operations.
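The saturation and hue-angle values that drive the mode-decision distortion metrics can be derived per sample from the Ct and Cp chroma components. The hue-angle convention below (atan2 of Ct over Cp) is one common choice and an assumption here:

```python
import math

def chroma_saturation_hue(ct, cp):
    """Saturation (chroma magnitude) and hue angle in degrees for an
    ICtCp chroma sample; sketch inputs for a chroma-aware distortion
    metric during mode decision."""
    saturation = math.hypot(ct, cp)          # radial distance in the chroma plane
    hue = math.degrees(math.atan2(ct, cp))   # angular position, convention assumed
    return saturation, hue
```

A mode-decision metric can then, for example, penalize candidate modes more heavily when they shift the hue angle of highly saturated samples.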
Abstract:
A high dynamic range input video signal characterized by either a gamma-based or a perceptually-quantized (PQ) source electro-optical transfer function (EOTF) is to be compressed. Given a luminance range for an image region in the input, for a gamma-coded input signal, a rate-control adaptation method in the encoder adjusts a region-based quantization parameter (QP) so that it increases in highlight regions and decreases in dark regions; conversely, for a PQ-coded input, the region-based QP increases in dark areas and decreases in highlight areas.
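The sign-flipped behavior of the region-based QP offset can be captured in a few lines. The linear ramp and the `strength` parameter are illustrative assumptions; only the direction of the adjustment comes from the abstract:

```python
def region_qp_delta(mean_luma, eotf, max_luma=1023, strength=6):
    """Region-based QP offset whose sign depends on the source EOTF.

    gamma-coded input: raise QP in highlights, lower it in darks.
    PQ-coded input:    raise QP in darks, lower it in highlights."""
    # map mean region luma to [-1, 1]: -1 = darkest, +1 = brightest
    t = 2.0 * mean_luma / max_luma - 1.0
    delta = strength * t        # gamma: QP grows with brightness
    if eotf == "pq":
        delta = -delta          # pq: QP grows in the darks
    return round(delta)
```

The intuition is that each EOTF concentrates its codewords differently: gamma coding wastes precision in highlights, while PQ allocates generously to darks, so the encoder can afford coarser quantization exactly where the transfer function is already generous.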
Abstract:
A multi-layer video system has a first layer encoder that encodes a first layer of video information, at least one second layer encoder that encodes at least one second layer of video information, and an encoder side reference processing unit (RPU) that estimates one or more of an optimal filter or an optimal process that applies on a reference picture that is reconstructed from the first video information layer, and processes a current picture of the second video information layer, based on a correlation between the first layer reconstructed reference picture and the second layer current picture. The correlation relates to a complexity characteristic that scaleably corresponds to the first video information layer reconstructed reference picture and the second video information layer current picture. A scalable video bitstream is outputted, which may be decoded by a compatible decoder. A decoder side RPU and the encoder side RPU function as an RPU pair.
Abstract:
Video data are coded in a coding-standard layered bit stream. Given a base layer (BL) and one or more enhancement layer (EL) signals, the BL signal is coded into a coded BL stream using a BL encoder which is compliant to a first coding standard. In response to the BL signal and the EL signal, a reference processing unit (RPU) determines RPU processing parameters. In response to the RPU processing parameters and the BL signal, the RPU generates an inter-layer reference signal. Using an EL encoder which is compliant to a second coding standard, the EL signal is coded into a coded EL stream, where the encoding of the EL signal is based at least in part on the inter-layer reference signal. Receivers with an RPU and video decoders compliant to both the first and the second coding standards may decode both the BL and the EL coded streams.
Abstract:
Embodiments of the present disclosure relate to processing audio or video signals captured by multiple devices. An apparatus for processing video and audio signals includes an estimating unit and a processing unit. The estimating unit may estimate at least one aspect of an array based on at least one video or audio signal captured by at least one of the portable devices arranged in the array. The processing unit may apply the video-based aspect to a process of generating a surround sound signal via the array, or apply the audio-based aspect to a process of generating a combined video signal via the array. By cross-referencing visual or acoustic hints, the generation of the audio or video signal can be improved.