Abstract:
Methods and systems for generating an image quality metric are described. A reference image and a test image are first converted to the ITP color space. After calculating difference images ΔI, ΔT, and ΔP using the color channels of the two images, the difference images are convolved with low-pass filters, one for the intensity channel (I) and one for the chroma channels (T or P). The image quality metric is computed as a function of the sum of squares of the filtered ΔI, ΔT, and ΔP values. The chroma low-pass filter is designed to maximize the agreement between the image quality metric and subjective results.
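The metric described above can be sketched as follows, assuming the inputs are already in ITP. The separable box low-pass filters and their widths are illustrative placeholders, not the optimized chroma filter the abstract refers to:

```python
import numpy as np

def lowpass(x, k):
    """Separable box low-pass filter of odd width k ('same'-sized output)."""
    ker = np.ones(k) / k
    x = np.apply_along_axis(lambda r: np.convolve(r, ker, mode='same'), 0, x)
    x = np.apply_along_axis(lambda r: np.convolve(r, ker, mode='same'), 1, x)
    return x

def itp_metric(ref, test, k_i=3, k_tp=7):
    """Sum-of-squares metric over low-pass-filtered channel differences
    in ITP space; one filter width for I, another for T and P."""
    d_i = lowpass(ref[..., 0] - test[..., 0], k_i)   # intensity difference
    d_t = lowpass(ref[..., 1] - test[..., 1], k_tp)  # chroma (T) difference
    d_p = lowpass(ref[..., 2] - test[..., 2], k_tp)  # chroma (P) difference
    return float(np.sqrt(np.mean(d_i**2 + d_t**2 + d_p**2)))
```

For identical images the metric is zero, and it grows with the filtered channel differences.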
Abstract:
A handheld imaging device has a data receiver that is configured to receive reference encoded image data. The data includes reference code values, which are encoded by an external coding system. The reference code values represent reference gray levels, which are selected using a reference grayscale display function that is based on perceptual non-linearity of human vision adapted at different light levels to spatial frequencies. The imaging device also has a data converter that is configured to access a code mapping between the reference code values and device-specific code values of the imaging device. The device-specific code values are configured to produce gray levels that are specific to the imaging device. Based on the code mapping, the data converter is configured to transcode the reference encoded image data into device-specific image data, which is encoded with the device-specific code values.
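The transcoding step can be sketched as a lookup-table operation. The 10-bit-to-8-bit mapping and its power-law shape below are hypothetical stand-ins for an actual device-specific code mapping:

```python
import numpy as np

def transcode(ref_codes, code_mapping):
    """Map reference code values to device-specific code values via a
    1-D lookup table indexed by reference code (hypothetical layout)."""
    return code_mapping[ref_codes]

# Hypothetical mapping from 10-bit reference codes to 8-bit device codes,
# shaped by an illustrative power law rather than a real display function.
ref_range = np.arange(1024)
mapping = np.round(255 * (ref_range / 1023) ** 0.8).astype(np.uint8)

device_codes = transcode(np.array([0, 512, 1023]), mapping)
```

In practice the mapping would be derived from the reference grayscale display function and the device's own gray levels; the lookup itself stays this simple.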
Abstract:
Creative intent input describing emotion expectations and narrative information relating to media content is received. Expected physiologically observable states relating to the media content are generated based on the creative intent input. An audiovisual content signal with the media content and media metadata comprising the expected physiologically observable states is provided to a playback device. The audiovisual content signal causes the playback device to use physiological monitoring signals to determine, with respect to a viewer, assessed physiologically observable states relating to the media content and to generate, based on the expected and assessed physiologically observable states, modified media content to be rendered to the viewer.
Abstract:
Methods for encoding and decoding high-dynamic-range signals are presented. The signals are encoded at a high frame rate and are accompanied by frame-rate conversion metadata defining a preferred set of frame-rate down-conversion parameters, which are determined according to the maximum luminance of a target display, display playback priority modes, or judder control modes. A decoder uses the frame-rate conversion metadata to apply frame-rate down-conversion to the input high-frame-rate signal according to at least the maximum luminance of the target display and/or the characteristics of the signal itself. Frame-based and pixel-based frame-rate conversions, and judder models for judder control via metadata, are also discussed.
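A minimal sketch of how a decoder might select down-conversion parameters from such metadata, keyed on the target display's peak luminance. The metadata layout, field names, and values here are invented for illustration:

```python
# Hypothetical frame-rate conversion metadata: each entry maps a
# display peak-luminance ceiling (nits) to preferred down-conversion
# parameters. Values are illustrative, not from the actual method.
FRC_METADATA = [
    {"max_nits": 100,  "target_fps": 24,  "shutter_angle": 180},
    {"max_nits": 600,  "target_fps": 48,  "shutter_angle": 270},
    {"max_nits": 4000, "target_fps": 120, "shutter_angle": 360},
]

def select_frc_params(display_max_nits):
    """Pick the first metadata entry whose luminance ceiling covers the
    target display; fall back to the brightest entry otherwise."""
    for entry in FRC_METADATA:
        if display_max_nits <= entry["max_nits"]:
            return entry
    return FRC_METADATA[-1]
```

A 300-nit display would select the 48 fps entry, while a display brighter than every ceiling falls back to the highest-luminance parameters.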
Abstract:
Systems and methods are disclosed for dynamically adjusting the backlight of a display during video playback or for generating filtered video metadata. Given an input video stream and associated metadata with the minimum (min), average (mid), or maximum (max) luminance values of the video frames in the video stream, values of a function of the frame min, mid, or max luminance values are filtered using a temporal filter to generate a filtered output value for each frame. At least one filtering coefficient of the temporal filter is adapted based on a logistic function controlled by slope and sensitivity values. The instantaneous dynamic range of a target display is determined based on the filtered metadata values and the minimum and maximum brightness values of the display.
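The adaptive temporal filter can be sketched as a one-pole recursive filter whose coefficient comes from a logistic function of the frame-to-frame metadata change. The exact logistic parameterization and default slope/sensitivity values below are assumptions for illustration:

```python
import math

def logistic_alpha(delta, slope=4.0, sensitivity=0.1):
    """Filtering coefficient from a logistic function of the change in
    the metadata value; slope and sensitivity are illustrative controls."""
    return 1.0 / (1.0 + math.exp(-slope * (abs(delta) - sensitivity)))

def filter_metadata(values, slope=4.0, sensitivity=0.1):
    """One-pole temporal filter whose coefficient adapts per frame:
    large changes (scene cuts) pass through quickly, while small
    frame-to-frame flicker in the metadata is smoothed out."""
    out = [values[0]]
    for v in values[1:]:
        a = logistic_alpha(v - out[-1], slope, sensitivity)
        out.append(a * v + (1 - a) * out[-1])
    return out
```

With this shape, a steady metadata value is left unchanged, while a sudden jump is tracked almost immediately because the logistic pushes the coefficient toward one.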
Abstract:
Representation and coding of multi-view images are carried out through tapestry encoding. A tapestry comprises information on a tapestry image, a left-shift displacement map and a right-shift displacement map. Perspective images of a scene can be generated from the tapestry and the displacement maps. The tapestry image is generated from a leftmost view image, a rightmost view image, a disparity map and an occlusion map.
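The view synthesis step can be sketched as forward-warping tapestry pixels by a displacement interpolated between the two shift maps. The interpolation convention, the overwrite-on-collision policy, and the lack of hole filling are simplifications, not the actual method:

```python
import numpy as np

def render_view(tapestry, left_disp, right_disp, t):
    """Synthesize a perspective view at position t in [0, 1]
    (0 = leftmost view, 1 = rightmost view) by shifting tapestry
    pixels. Simple forward warp that overwrites on collisions; a real
    system would resolve occlusions and fill disocclusion holes."""
    h, w = tapestry.shape[:2]
    out = np.zeros_like(tapestry)
    # Displacement blended between the left- and right-shift maps
    # (sign convention assumed: left shifts are negative).
    disp = (1 - t) * (-left_disp) + t * right_disp
    cols = np.clip(np.arange(w)[None, :] + np.round(disp).astype(int), 0, w - 1)
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    out[rows, cols] = tapestry
    return out
```

With zero displacement maps every intermediate view reproduces the tapestry image itself, which is a useful sanity check for the warping indices.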
Abstract:
Representation and coding of multi-view images using tapestry encoding are described for compatibility with standard and enhanced dynamic ranges. A tapestry comprises information on a tapestry image, a left-shift displacement map and a right-shift displacement map. Perspective images of a scene can be generated from the tapestry and the displacement maps. Different methods for achieving compatibility are described.
Abstract:
Several embodiments of display systems that use narrowband emitters are disclosed herein. In one embodiment, a display system comprises, for at least one primary color, a plurality of narrowband emitters distributed around the primary color point. The plurality of narrowband emitters provides a more regular spectral power distribution in a desired band of frequencies.
Abstract:
An encoder receives a sequence of images in extended or visual dynamic range (VDR). For each image, a dynamic range compression function and associated parameters are selected to convert the input image into a second image with a lower dynamic range. Using the input image and the second image, a residual image is computed. The input VDR image sequence is coded using a layered codec, with the second image as a base layer and the residual image as one or more residual layers. Using the residual image, a false contour detection (FCD) method estimates the number of potential perceptually visible false contours in the decoded VDR image and iteratively adjusts the dynamic range compression parameters to prevent or reduce false contours. Examples that use a uniform dynamic range compression function are also described.
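The iterative adjustment loop above can be sketched as follows. All four callables are hypothetical stand-ins for the codec components the abstract describes, and the scalar "strength" parameterization of the compression function is an assumption:

```python
def encode_with_fcd(image, compress, decode, count_false_contours,
                    init_strength=10, step=1, max_contours=0, max_iter=10):
    """Iteratively relax the dynamic range compression until the
    estimated number of visible false contours in the decoded image
    falls to an acceptable level. `compress`, `decode`, and
    `count_false_contours` are placeholders for the actual codec
    components and FCD estimator."""
    strength = init_strength
    for _ in range(max_iter):
        base = compress(image, strength)            # candidate base layer
        contours = count_false_contours(decode(base), image)
        if contours <= max_contours:
            break                                    # compression is acceptable
        strength -= step  # less aggressive compression -> fewer contours
    return base, strength
```

The loop trades compression efficiency against banding: each pass backs off the compression parameters until the FCD estimate drops below the threshold or the iteration budget runs out.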