Abstract:
In a method to improve backwards compatibility when decoding high-dynamic range images coded in a wide color gamut (WCG) space which may not be compatible with legacy color spaces, hue and/or saturation values of images in an image database are computed for both a legacy color space (say, YCbCr-gamma) and a preferred WCG color space (say, IPT-PQ). Based on a cost function, a reshaped color space is computed so that the distance between the hue values in the legacy color space and rotated hue values in the preferred color space is minimized. HDR images are coded in the reshaped color space. Legacy devices can still decode standard dynamic range images assuming they are coded in the legacy color space, while updated devices can use color reshaping information to decode HDR images in the preferred color space at full dynamic range.
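At its core, the reshaping step described above is a one-parameter optimization: find the chroma rotation for which hues in the preferred WCG space best line up with hues in the legacy space. The Python sketch below illustrates only that idea; the squared angular-distance cost, the brute-force search, and all function names are assumptions for illustration, not the claimed method.

```python
import numpy as np

def hue(chroma_a, chroma_b):
    # Hue angle (radians) from a pair of chroma components,
    # e.g. (Cb, Cr) in YCbCr-gamma or (T, P) in IPT-PQ.
    return np.arctan2(chroma_b, chroma_a)

def rotation_cost(theta, hue_legacy, hue_wcg):
    # Sum of squared angular distances between legacy hues and WCG hues
    # rotated by theta, with each difference wrapped to [-pi, pi].
    diff = hue_legacy - (hue_wcg + theta)
    diff = np.angle(np.exp(1j * diff))
    return np.sum(diff ** 2)

def best_rotation(hue_legacy, hue_wcg, steps=3600):
    # Brute-force search over candidate rotation angles; a gradient-based
    # or closed-form solver could replace this in practice.
    angles = np.linspace(-np.pi, np.pi, steps, endpoint=False)
    costs = [rotation_cost(a, hue_legacy, hue_wcg) for a in angles]
    return angles[int(np.argmin(costs))]
```

The selected angle defines the rotated (reshaped) chroma axes in which the HDR images would then be coded.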
Abstract:
Input video signals characterized by a source electro-optical transfer function (EOTF) are to be blended and displayed on a target display with a target EOTF which is different from the source EOTF. Given an input set of blending parameters, an output set of blending parameters is generated as follows. The input blending parameters are scaled by video signal metrics computed in the target EOTF to generate scaled blending parameters. The scaled blending parameters are mapped back to the source EOTF space to generate mapped blending parameters. Finally, the mapped blending parameters are normalized to generate the output blending parameters. An output blended image is generated by blending the input video signals using the output blending parameters. Examples of generating the video signal metrics are also provided.
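The four steps above (scale by a target-EOTF metric, map back to the source EOTF, normalize, blend) can be summarized with a minimal Python sketch. The specific transfer functions (a 2.4 display gamma and a 2.2 source gamma) and the use of mean linear-light level as the video signal metric are placeholder assumptions, not the metrics defined in this disclosure.

```python
import numpy as np

# Placeholder transfer functions; the actual source and target EOTFs
# (e.g., BT.1886 gamma vs. PQ) are assumptions for this sketch.
def target_eotf(code):       return code ** 2.4        # code value -> linear light
def inverse_source_eotf(y):  return y ** (1.0 / 2.2)   # linear light -> code value

def output_blending_parameters(alphas, signals):
    # signals: arrays of code values in the source EOTF domain.
    # 1) Per-signal metric computed in the target EOTF domain
    #    (mean linear-light level is used here purely as an example).
    metrics = np.array([target_eotf(s).mean() for s in signals])
    # 2) Scale the input blending parameters by the metrics.
    scaled = np.asarray(alphas, dtype=float) * metrics
    # 3) Map the scaled parameters back to the source EOTF space.
    mapped = inverse_source_eotf(scaled)
    # 4) Normalize so the output parameters sum to one.
    return mapped / mapped.sum()

def blend(signals, alphas):
    weights = output_blending_parameters(alphas, signals)
    return sum(w * s for w, s in zip(weights, signals))
```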
Abstract:
Methods and systems for multi-step display mapping and metadata reconstruction for high-dynamic range (HDR) images are described. In an encoder, given an HDR input image with input HDR metadata in a first dynamic range, an intermediate, base layer image in a second dynamic range is constructed based on the input image. In a decoder, using base-layer metadata, the input HDR metadata, and dynamic range characteristics of a target display, a processor generates reconstructed metadata which, when used in combination with the base layer image, allow a display mapping process to map the base layer image to the target display as if it were mapping the HDR image directly to the target display.
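A minimal sketch of the decoder-side idea follows: given the input HDR metadata, the base-layer metadata, and the target display's luminance range, produce reconstructed metadata to drive display mapping of the base layer image. The three-field metadata record (min/mid/max luminance) and the simple range re-expression are illustrative assumptions only, not the reconstruction defined here.

```python
from dataclasses import dataclass

@dataclass
class Metadata:
    min_nits: float   # minimum luminance described by the metadata
    mid_nits: float   # mid (average) luminance
    max_nits: float   # maximum luminance

def reconstruct_metadata(hdr_md: Metadata, base_md: Metadata,
                         target_min: float, target_max: float) -> Metadata:
    # Illustrative reconstruction: express the metadata within the
    # intersection of the base layer's range and the target display's range,
    # preserving the relative position of the HDR mid point, so that display
    # mapping the base layer approximates mapping the HDR image directly.
    lo = max(base_md.min_nits, target_min)
    hi = min(base_md.max_nits, target_max)
    t = (hdr_md.mid_nits - hdr_md.min_nits) / (hdr_md.max_nits - hdr_md.min_nits)
    return Metadata(min_nits=lo, mid_nits=lo + t * (hi - lo), max_nits=hi)
```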
Abstract:
Methods and systems are described for processing an image captured with an image sensor, such as a camera. In one embodiment, an estimated ambient light level of the captured image is determined and used to compute an optical-optical transfer function (OOTF) that is used to correct the image to preserve an apparent contrast of the image under the estimated ambient light level in a viewing environment. The estimated ambient light level is determined by scaling pixel values from the image sensor using a function that includes exposure parameters and a camera-specific parameter derived from a camera calibration.
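As a worked illustration of the ambient-light estimate, the sketch below assumes the standard reflected-light exposure equation as the scaling function; the constant k_cal stands in for the camera-specific calibration parameter mentioned above, and its default value is only a placeholder.

```python
def estimate_ambient_nits(mean_pixel, f_number, exposure_time, iso, k_cal=12.5):
    # mean_pixel: mean normalized sensor value in [0, 1] for the captured image.
    # Luminance (cd/m^2) that would render an 18% grey card at mid level for
    # these exposure settings, per the reflected-light exposure equation
    # L = K * N^2 / (t * S), scaled by how bright the captured pixels are.
    # k_cal is a stand-in for the camera-specific calibration constant.
    mid_grey_luminance = k_cal * (f_number ** 2) / (exposure_time * iso)
    return (mean_pixel / 0.18) * mid_grey_luminance
```

For example, mean_pixel=0.18, f_number=2.0, exposure_time=1/60 s, and iso=100 yield an estimate of about 30 cd/m^2 under these placeholder assumptions.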