Abstract:
A system and method of editing video content includes receiving input video data; converting the input video data to a predetermined format; and generating a plurality of initial metadata values for a frame of the converted video data, the plurality of initial metadata values including a first metadata value corresponding to a first fixed value not calculated from the content that includes the frame, a second metadata value corresponding to an average luminance value of the frame, and a third metadata value corresponding to a second fixed value not calculated from the content, wherein the first, second, and third metadata values include information used by a decoder to render a decoded image on a display.
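The three-value scheme above can be sketched in a few lines. This is a minimal illustration, assuming luminance frames as NumPy arrays; the particular fixed constants below are hypothetical stand-ins for the abstract's "fixed values not calculated from the content":

```python
import numpy as np

# Hypothetical fixed values standing in for the abstract's "fixed values
# not calculated from the content" (e.g., format-defined luminance bounds).
FIXED_MIN = 0.0001   # assumed minimum, in nits
FIXED_MAX = 10000.0  # assumed maximum, in nits

def frame_metadata(frame_luminance):
    """Return (min, mid, max) metadata for one frame.

    Only the middle value is computed from content: it is the frame's
    average luminance. The first and third values are fixed constants.
    """
    mid = float(np.mean(frame_luminance))
    return (FIXED_MIN, mid, FIXED_MAX)

frame = np.full((4, 4), 100.0)   # toy 4x4 frame, constant 100 nits
print(frame_metadata(frame))     # -> (0.0001, 100.0, 10000.0)
```

A decoder receiving these three values can use them when rendering, without having to recompute statistics from the decoded pixels.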
Abstract:
A method for generating metadata for use by a video decoder for displaying video content encoded by a video encoder includes: (1) accessing a target tone curve; (2) accessing a decoder tone curve corresponding to a tone curve used by the video decoder for tone mapping the video content; (3) generating a plurality of parameters of a trim-pass function that the video decoder applies after applying the decoder tone curve to the video content, wherein the parameters of the trim-pass function are generated so that the combination of the trim-pass function and the decoder tone curve approximates the target tone curve; and (4) generating the metadata for use by the video decoder, including said plurality of parameters of the trim-pass function.
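The parameter-fitting step can be sketched as a search for trim parameters that make `trim(decoder_curve(x))` track the target curve. The slope/offset/power form and the brute-force grid search below are assumptions for illustration; the abstract only says the trim pass is parametric:

```python
import numpy as np

def fit_trim_pass(decoder_curve, target_curve, n=256):
    """Grid-search slope/offset/power trim parameters so that
    trim(decoder_curve(x)) approximates target_curve(x) on [0, 1].

    The slope-offset-power trim form is an assumed parameterization.
    """
    x = np.linspace(0.0, 1.0, n)
    base = decoder_curve(x)          # decoder's tone-mapped signal
    target = target_curve(x)
    best, best_err = None, np.inf
    for s in np.linspace(0.5, 2.0, 31):
        for o in np.linspace(-0.2, 0.2, 21):
            for p in np.linspace(0.5, 2.0, 31):
                y = np.clip(s * base + o, 0.0, 1.0) ** p
                err = float(np.mean((y - target) ** 2))
                if err < best_err:
                    best, best_err = (s, o, p), err
    return best, best_err

# Toy check: if the target is the decoder curve raised to a power,
# the search should recover roughly slope=1, offset=0, power=2.
params, err = fit_trim_pass(lambda x: x, lambda x: x ** 2)
```

The fitted `(slope, offset, power)` triple is then what gets carried in the metadata; the decoder never needs the target curve itself.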
Abstract:
In a method to improve backwards compatibility when decoding high-dynamic range images coded in a wide color gamut (WCG) space which may not be compatible with legacy color spaces, hue and/or saturation values of images in an image database are computed for both a legacy color space (say, YCbCr-gamma) and a preferred WCG color space (say, IPT-PQ). Based on a cost function, a reshaped color space is computed so that the distance between the hue values in the legacy color space and rotated hue values in the preferred color space is minimized. HDR images are coded in the reshaped color space. Legacy devices can still decode standard dynamic range images assuming they are coded in the legacy color space, while updated devices can use color reshaping information to decode HDR images in the preferred color space at full dynamic range.
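The cost-minimizing rotation can be illustrated with a tiny sketch. Under a circular least-squares cost, the single best rotation of the chroma plane is the circular mean of the per-image hue differences; this closed form is an assumption standing in for the abstract's unspecified cost function:

```python
import numpy as np

def best_hue_rotation(legacy_hue, wcg_hue):
    """Find one rotation angle (radians) for the WCG chroma plane that
    minimizes the circular distance to the legacy-color-space hues.

    Sketch only: the circular mean of the hue differences is the
    minimizer of a circular least-squares cost.
    """
    diff = np.asarray(legacy_hue) - np.asarray(wcg_hue)
    # atan2 of the averaged sin/cos handles wrap-around at +/- pi.
    return float(np.arctan2(np.mean(np.sin(diff)), np.mean(np.cos(diff))))

# Toy image database: WCG hues are the legacy hues shifted by 0.3 rad,
# so the recovered rotation should be ~0.3.
legacy = np.array([0.1, 0.5, 1.2, -0.8])
wcg = legacy - 0.3
theta = best_hue_rotation(legacy, wcg)
```

Coding HDR images in the rotated (reshaped) space then makes their hues land close to where a legacy YCbCr-gamma decoder expects them.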
Abstract:
A display management processor receives an input image with enhanced dynamic range to be displayed on a target display which has a different dynamic range than a reference display. The input image is first transformed into a perceptually-quantized (PQ) color space, preferably the IPT-PQ color space. A color volume mapping function, which includes an adaptive tone-mapping function and an adaptive gamut mapping function, generates a mapped image. A detail-preservation step is applied to the intensity component of the mapped image to generate a final mapped image with a filtered tone-mapped intensity image. The final mapped image is then translated back to the display's preferred color space. Examples of the adaptive tone mapping and gamut mapping functions are provided.
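The detail-preservation step on the intensity component can be sketched as tone mapping a low-pass base and adding back the high-frequency residual. The simple box blur and the toy `x/(1+x)` curve below are assumptions; the abstract's actual tone and gamut mapping functions are adaptive:

```python
import numpy as np

def tone_map(i):
    """Toy compressive tone curve on the intensity (I) channel;
    the abstract's real curve adapts to source/target dynamic range."""
    return i / (1.0 + i)

def box_blur(img, k=3):
    """Tiny k x k box blur with edge padding (assumed low-pass filter)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def detail_preserving_map(intensity):
    """Tone-map the low-pass base layer, then add back the detail
    layer so fine texture survives the compressive curve."""
    base = box_blur(intensity)
    detail = intensity - base
    return tone_map(base) + detail
```

On a constant image the detail layer is zero and the result is just the tone-mapped value; on textured regions the residual rides on top of the compressed base.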
Abstract:
Methods and systems for generating and applying scene-stable metadata for a video data stream are disclosed herein. A video data stream is divided or partitioned into scenes and a first set of metadata may be generated for a given scene of video data. The first set of metadata may be any known metadata as a desired function of video content (e.g., luminance). The first set of metadata may be generated on a frame-by-frame basis. In one example, scene-stable metadata may be generated that may be different from the first set of metadata for the scene. The scene-stable metadata may be generated by monitoring a desired feature within the scene and may be used to keep the desired feature within an acceptable range of values. This may help to avoid noticeable and possibly objectionable visual artifacts upon rendering the video data.
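The relationship between the per-frame and scene-stable metadata can be sketched as below. Using max luminance as the monitored feature and the scene maximum as the stabilized value is one plausible choice; the abstract leaves the feature open:

```python
import numpy as np

def per_frame_metadata(frames):
    """First set of metadata, frame by frame: here, each frame's
    maximum luminance (an assumed choice of feature)."""
    return [float(f.max()) for f in frames]

def scene_stable_metadata(frames):
    """One value held constant across the whole scene, so the rendered
    look cannot flicker from frame to frame within the scene."""
    return max(per_frame_metadata(frames))

# Toy scene of two frames: per-frame values differ (0.9 vs 0.7),
# but every frame in the scene shares the stable value 0.9.
scene = [np.array([[0.2, 0.9]]), np.array([[0.4, 0.7]])]
```

A downstream renderer driven by the scene-stable value applies the same mapping to every frame of the scene, which is what suppresses frame-to-frame artifacts.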
Abstract:
In one embodiment, dual modulator display systems and methods for rendering target image data upon the dual modulator display system are disclosed, where the display system receives target image data, possibly HDR image data, and first calculates display control signals and then calculates backlight control signals from the display control signals. Calculating the display signals first and then deriving the backlight control signals from them may tend to reduce clipping artifacts. In other embodiments, it is possible to split the input target HDR image data into a base layer and a detail layer, wherein the base layer is low spatial resolution image data that may be utilized as backlight illumination data. The detail layer is higher spatial resolution image data that may be utilized for display control data.
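The base/detail embodiment can be sketched as a two-layer split. Block averaging as the low-pass step and a multiplicative (ratio) detail layer are assumptions for illustration:

```python
import numpy as np

def split_layers(hdr, k=5):
    """Split an HDR image into a low-resolution base layer (backlight
    illumination data) and a full-resolution detail layer (display
    control data). Block averaging stands in for the low-pass step."""
    h, w = hdr.shape
    base_small = hdr.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    # Upsample the base back to full resolution (nearest neighbour),
    # mimicking the blurred light field the backlight actually produces.
    base_full = np.repeat(np.repeat(base_small, k, axis=0), k, axis=1)
    detail = hdr / np.maximum(base_full, 1e-6)  # multiplicative split
    return base_small, detail
```

Driving the backlight with `base_small` and the front modulator with `detail` reproduces the product, i.e. approximately the original HDR image.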
Abstract:
An illuminator for a reflective display incorporates a light guide having substantially transparent front and rear planar surfaces which overlap the display's viewing surface when the surfaces are substantially parallel and adjacent to the viewing surface. A light source emits light into the light guide. A plurality of light redirecting structures is distributed on the light guide's rear surface. The structures are shaped to redirect light rays which encounter them through the light guide toward the viewing surface. Most light rays emitted into the light guide by the light source which do not encounter any of the structures are confined within the light guide by total internal reflection. Most light rays emitted into the light guide by the light source which encounter any of the structures are redirected through the light guide toward the viewing surface, substantially uniformly illuminating the display in a low ambient light environment.
Abstract:
A method for representing a three-dimensional scene stored as a three-dimensional data set includes determining a set of P depth-plane depths along a viewing direction. The method includes generating, from a three-dimensional data set, a proxy three-dimensional data set including P proxy images by, for each depth-plane depth: generating a proxy image of the P proxy images from at least one cross-sectional image of a plurality of transverse cross-sectional images that (i) constitute the three-dimensional data set and (ii) each represent a respective transverse cross-section of the three-dimensional scene at a respective scene-depth.
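The reduction from the full cross-section stack to P proxy images can be sketched as follows. Assigning each cross-section to its nearest depth plane and averaging the contributions is an assumed reduction; the abstract only requires that each proxy come from at least one cross-section:

```python
import numpy as np

def make_proxies(cross_sections, depths, plane_depths):
    """Collapse transverse cross-sections into P proxy images, one per
    depth plane: each cross-section contributes to its nearest plane,
    and contributions to a plane are averaged (assumed reduction)."""
    plane_depths = np.asarray(plane_depths, dtype=float)
    sums = [np.zeros_like(cross_sections[0], dtype=float)
            for _ in plane_depths]
    counts = [0] * len(plane_depths)
    for img, z in zip(cross_sections, depths):
        p = int(np.argmin(np.abs(plane_depths - z)))
        sums[p] += img
        counts[p] += 1
    return [s / max(c, 1) for s, c in zip(sums, counts)]

# Toy stack: two cross-sections at scene-depths 0.0 and 1.0,
# reduced onto two depth planes at the same depths.
proxies = make_proxies(
    [np.ones((2, 2)), 3 * np.ones((2, 2))], [0.0, 1.0], [0.0, 1.0])
```

The P proxy images are much smaller than the original data set while still giving one representative image per depth plane along the viewing direction.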
Abstract:
A volumetric image of a scene can be created, in one embodiment, by recording, through a camera in a device, a series of images of the scene as the camera is moved along a path relative to the scene. During the recording, the device stores motion path metadata about the path; the series of images is associated with the motion path metadata, and a metadata label is associated with the series of images, the metadata label indicating that the recorded series of images represents a volumetric image of the scene. The series of images, the motion path metadata, and the metadata label can be assembled into a package for distribution to devices that can view the volumetric image, which may be referred to as a limited volumetric image. The devices that receive the volumetric image can display the individual images in the series of images or as a video.
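The assembly step can be sketched as bundling the three pieces the abstract names. The field names and label string below are illustrative, not from any published container format:

```python
def assemble_volumetric_package(images, motion_path,
                                label="limited-volumetric"):
    """Bundle the recorded frames, the motion-path metadata captured
    during recording, and the label marking the series as a (limited)
    volumetric image. All field names here are hypothetical."""
    return {
        "images": list(images),
        "motion_path": motion_path,  # e.g., per-frame pose samples
        "label": label,
    }

pkg = assemble_volumetric_package(
    images=["frame0.jpg", "frame1.jpg"],
    motion_path=[{"t": 0.0, "pose": (0.0, 0.0, 0.0)},
                 {"t": 0.1, "pose": (0.05, 0.0, 0.0)}],
)
```

A receiving device checks the label, and can then either step through `images` interactively using `motion_path` or simply play the series as a video.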
Abstract:
In one embodiment, methods, media, and systems process and display light field images using a view function that is based on pixel locations in the image and on the viewer's distance (observer's Z position) from the display. The view function can be an angular view function that specifies different angular views for different pixels in the light field image based on inputs that can include the x or y pixel location in the image, the viewer's distance from the display, and the viewer's angle relative to the display. In one embodiment, light field metadata, such as angular range metadata and/or angular offset metadata, can be used to process and display the image. In one embodiment, color volume mapping metadata can be used to adjust color volume mapping based on the determined angular views; and the color volume mapping metadata can also be adjusted based on angular offset metadata.
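An angular view function of this kind can be sketched as below. The linear pixel term and the way the metadata values enter the formula are assumptions; only the input names (pixel location, viewer distance and angle, angular range/offset metadata) come from the abstract:

```python
def angular_view(x_pixel, width, viewer_z, viewer_angle,
                 angular_range=0.2, angular_offset=0.0):
    """Toy angular view function: which angular view a pixel column
    should present, given the observer's Z distance and angle.
    The exact combination of terms is an assumed illustration."""
    # Pixel position normalized to [-0.5, 0.5] across the display.
    u = (x_pixel / (width - 1)) - 0.5
    # A nearer viewer subtends a larger angular spread across the panel.
    spread = angular_range / max(viewer_z, 1e-6)
    return viewer_angle + angular_offset + u * spread
```

The angular view computed per pixel is then what downstream color volume mapping can be conditioned on, per the abstract's last sentence.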