Abstract:
A high resolution 3D image may be encoded into a first multiplexed image frame and a second multiplexed image frame in a base layer (BL) video signal and an enhancement layer (EL) video signal. The first multiplexed image frame may comprise horizontal high resolution image data for both eyes, while the second multiplexed image frame may comprise vertical high resolution image data for both eyes. Encoded symmetric-resolution image data for the 3D image may be distributed to a wide variety of devices for 3D image processing and rendering. A recipient device may reconstruct a reduced resolution 3D image from either the first multiplexed image frame or the second multiplexed image frame. A recipient device may also reconstruct a high resolution 3D image by combining the high resolution image data from both the first and second multiplexed image frames.
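The packing described above can be illustrated with a minimal sketch. The function names and the choice of top-and-bottom packing for the horizontally-high-resolution frame (and side-by-side for the vertically-high-resolution frame) are assumptions for illustration, not the patent's exact arrangement; images are plain lists of pixel rows.

```python
def multiplex_frames(le, re):
    """Pack an LE/RE image pair (lists of rows) into two multiplexed frames.

    Frame 1 (top-and-bottom): full horizontal resolution, each eye
    vertically subsampled by 2.
    Frame 2 (side-by-side): full vertical resolution, each eye
    horizontally subsampled by 2.
    """
    frame1 = le[0::2] + re[0::2]                          # keep even rows
    frame2 = [l[0::2] + r[0::2] for l, r in zip(le, re)]  # keep even columns
    return frame1, frame2

def reduced_3d_from_frame1(frame1):
    """Recover a half-vertical-resolution LE/RE pair from frame 1 alone,
    as a recipient device limited to one layer might do."""
    n = len(frame1) // 2
    return frame1[:n], frame1[n:]
```

A full-resolution reconstruction would interleave the rows recovered from frame 1 with the columns recovered from frame 2 for each eye.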
Abstract:
Perceptually correct noise simulating a variety of noise patterns or textures may be applied to stereo image pairs, each of which comprises a left eye (LE) image and a right eye (RE) image that together represent a 3D image. The LE and RE images may or may not have had noise removed. Depth information for pixels in the LE and RE images may be computed from, or received with, the LE and RE images. Desired noise patterns are modulated onto the 3D image or scene so that the desired noise patterns are perceived to be part of 3D objects or image details, taking into account where the 3D objects or image details are on a z-axis perpendicular to the image rendering screen on which the LE and RE images are rendered.
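The key idea, placing noise at the depth of the underlying surface, can be sketched by giving the noise the same LE/RE horizontal disparity as the pixels it covers. This is a simplified one-dimensional sketch under assumed conventions (integer disparities, additive noise); real implementations work on 2D images with subpixel shifts.

```python
def modulate_noise_at_depth(noise_row, disparity_row):
    """Render a 1-D noise pattern into LE and RE views so each noise
    sample carries the disparity of the pixel beneath it, making the
    noise appear attached to the 3D surface rather than to the screen."""
    w = len(noise_row)
    le = [0.0] * w
    re = [0.0] * w
    for x in range(w):
        d = disparity_row[x]
        # shift half the disparity in opposite directions for each eye
        xl, xr = x + d // 2, x - d // 2
        if 0 <= xl < w:
            le[xl] += noise_row[x]
        if 0 <= xr < w:
            re[xr] += noise_row[x]
    return le, re
```

With zero disparity the noise lands identically in both views (screen depth); nonzero disparity pushes it in front of or behind the screen.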
Abstract:
Given a sequence of images in a first codeword representation, methods, processes, and systems are presented for image reshaping using rate distortion optimization, wherein reshaping maps the images into a second codeword representation that allows more efficient compression than the first codeword representation. Syntax methods for signaling reshaping parameters are also presented.
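A common way to realize such a reshaping is a forward lookup table built from a per-bin codeword allocation: bins of the input range that matter more perceptually receive more output codewords. The sketch below assumes 10-bit representations and a uniform bin partition; the bin counts, bit depths, and function name are illustrative, not the patent's syntax.

```python
def build_forward_reshaping_lut(bin_codewords, bits_in=10, bits_out=10):
    """Build a forward-reshaping LUT mapping input codewords to output
    codewords. bin_codewords[i] is the number of output codewords
    allocated to bin i of the input range; a rate-distortion optimizer
    would choose these allocations."""
    n_bins = len(bin_codewords)
    bin_size = (1 << bits_in) // n_bins
    lut = []
    acc = 0.0
    for i in range(n_bins):
        # output codewords granted per input codeword within this bin
        step = bin_codewords[i] / bin_size
        for _ in range(bin_size):
            lut.append(round(acc))
            acc += step
    max_out = (1 << bits_out) - 1
    return [min(v, max_out) for v in lut]
```

A uniform allocation yields the identity mapping; skewing codewords toward dark bins, for example, expands those bins in the second representation.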
Abstract:
In a method to improve the coding efficiency of high-dynamic range (HDR) images, a decoder parses sequence parameter set (SPS) data from an input coded bitstream to detect that an HDR extension syntax structure is present in the parsed SPS data. It extracts from the HDR extension syntax structure post-processing information that includes one or more of a color space enabled flag, a color enhancement enabled flag, an adaptive reshaping enabled flag, a dynamic range conversion flag, a color correction enabled flag, or an SDR viewable flag. It decodes the input bitstream to generate a preliminary output decoded signal, and generates a second output signal based on the preliminary output signal and the post-processing information.
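Extraction of the listed one-bit flags can be sketched as unpacking a byte of the extension structure. The bit ordering here is a hypothetical MSB-first layout chosen for illustration; the abstract does not specify the actual syntax-element order.

```python
def parse_hdr_extension_flags(byte):
    """Unpack the six one-bit flags named in the abstract from a single
    byte, assuming an illustrative MSB-first ordering."""
    names = [
        "color_space_enabled_flag",
        "color_enhancement_enabled_flag",
        "adaptive_reshaping_enabled_flag",
        "dynamic_range_conversion_flag",
        "color_correction_enabled_flag",
        "sdr_viewable_flag",
    ]
    return {name: (byte >> (7 - i)) & 1 for i, name in enumerate(names)}
```

The decoder would then route the preliminary decoded signal through only those post-processing stages whose flags are set.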
Abstract:
Methods and systems for frame rate scalability are described. Support is provided for input and output video sequences with variable frame rate and variable shutter angle across scenes, or for input video sequences with fixed input frame rate and input shutter angle, while allowing a decoder to generate a video output at a different output frame rate and shutter angle than the corresponding input values. Techniques that allow a decoder to decode, with reduced computation, a specific backward-compatible target frame rate and shutter angle among those allowed are also presented.
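The relation between frame rate, shutter angle, and exposure time underpins this kind of scalability and can be stated in a few lines. The second function sketches one simple case, a decoder that keeps every n-th frame unchanged so the per-frame exposure time is fixed; it is an illustration of the relation, not the patent's exact signaling.

```python
def exposure_time(frame_rate_hz, shutter_angle_deg):
    """Exposure duration implied by a frame rate and shutter angle:
    t = (angle / 360) / frame_rate."""
    return (shutter_angle_deg / 360.0) / frame_rate_hz

def output_shutter_angle(in_rate_hz, in_angle_deg, out_rate_hz):
    """Shutter angle seen at the output when a decoder drops frames to
    reach a lower rate: exposure per frame is unchanged, so the angle
    scales with the output frame rate."""
    return in_angle_deg * out_rate_hz / in_rate_hz
```

For example, a 120 fps sequence captured at 360 degrees, decoded as every fourth frame at 30 fps, presents a 90-degree shutter; combining (rather than dropping) frames would instead preserve a wide shutter angle.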
Abstract:
Methods and systems for canvas size scalability across the same or different bitstream layers of a video coded bitstream are described. Offset parameters for a conformance window, a reference region of interest (ROI) in a reference layer, and a current ROI in a current layer are received. The widths and heights of the current ROI and the reference ROI are computed from the offset parameters and used to generate width and height scaling factors, which a reference picture resampling unit uses to generate an output picture based on the current ROI and the reference ROI.
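The dimension and scaling-factor computation can be sketched directly. The inward-cropping offset convention follows the usual conformance-window style, and the floating-point ratio stands in for the fixed-point arithmetic a real resampling unit would use; both are assumptions for illustration.

```python
def roi_dims(pic_width, pic_height, off_left, off_right, off_top, off_bottom):
    """Width and height of an ROI given picture dimensions and window
    offsets that crop inward from each edge."""
    return (pic_width - off_left - off_right,
            pic_height - off_top - off_bottom)

def rpr_scaling_factors(cur_roi, ref_roi):
    """Horizontal and vertical scaling factors a reference picture
    resampling stage would use to map the reference ROI onto the
    current ROI (reference size over current size)."""
    (cur_w, cur_h), (ref_w, ref_h) = cur_roi, ref_roi
    return ref_w / cur_w, ref_h / cur_h
```

A reference ROI half the size of the current ROI in each dimension, for instance, yields scaling factors of 0.5, i.e. a 2x upsampling of the reference region.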
Abstract:
Sample data and metadata related to spatial regions in images may be received from a coded video signal. It is determined whether specific spatial regions in the images correspond to a specific region of luminance levels. In response to determining that the specific spatial regions correspond to the specific region of luminance levels, signal processing and video compression operations are performed on sets of samples in the specific spatial regions. The signal processing and video compression operations are at least partially dependent on the specific region of luminance levels.
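The gating decision, selecting which spatial regions fall into a target luminance band before applying luminance-dependent processing, can be sketched as follows. The average-luma thresholding rule is an assumption for illustration; the abstract does not specify how a region is mapped to a luminance level.

```python
def regions_in_luma_range(regions, lo, hi):
    """Select the spatial regions whose average luma sample value falls
    inside the target luminance band [lo, hi]; downstream processing
    would then be applied only to the selected regions.

    regions: mapping of region name -> list of luma sample values.
    """
    selected = []
    for name, samples in regions.items():
        avg = sum(samples) / len(samples)
        if lo <= avg <= hi:
            selected.append(name)
    return selected
```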