Abstract:
A display system comprises light sources configured to emit first light with a first spectral power distribution; light regeneration layers configured to be stimulated by the first light and to convert at least a portion of the first light and recycled light into second light, the second light comprising (a) primary spectral components that correspond to primary colors and (b) secondary spectral components that do not correspond to the primary colors; and notch filter layers configured to receive a portion of the second light and to filter out the secondary spectral components from the portion of the second light. The portion of the second light can be directed to a viewer of the display system and configured to render images viewable to the viewer.
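The notch-filtering step described above can be sketched as a simple spectral operation: the converted ("second") light contains primary peaks plus unwanted secondary components, and a notch filter zeroes transmission inside the secondary bands. All band edges and power values below are illustrative assumptions, not values from the described system.

```python
# Hypothetical notch-filter sketch; wavelengths in nm, band edges assumed.
NOTCH_BANDS = [(480, 510), (560, 600)]   # secondary components to reject

def notch_transmission(wavelength_nm):
    """Return filter transmission (0..1) at a given wavelength."""
    for lo, hi in NOTCH_BANDS:
        if lo <= wavelength_nm <= hi:
            return 0.0   # inside a notch: secondary component filtered out
    return 1.0           # outside the notches: light passes to the viewer

def filter_spectrum(spectrum):
    """Apply the notch filter to a {wavelength: power} spectral distribution."""
    return {wl: p * notch_transmission(wl) for wl, p in spectrum.items()}

# Second light: primary peaks at 450/535/635 nm, secondary leakage at 495/580 nm.
second_light = {450: 1.0, 495: 0.3, 535: 0.9, 580: 0.25, 635: 0.8}
viewed = filter_spectrum(second_light)
# secondary components at 495 nm and 580 nm are removed; primaries survive
```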
Abstract:
Techniques are provided to encode and decode image data comprising a tone mapped (TM) image with HDR reconstruction data in the form of luminance ratios and color residual values. In an example embodiment, luminance ratio values and residual values in color channels of a color space are generated on an individual pixel basis based on a high dynamic range (HDR) image and a derivative tone-mapped (TM) image that comprises one or more color alterations that would not be recoverable from the TM image with a luminance ratio image. The TM image with HDR reconstruction data derived from the luminance ratio values and the color-channel residual values may be outputted in an image file to a downstream device, for example, for decoding, rendering, and/or storing. The image file may be decoded to generate a restored HDR image free of the color alterations.
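A minimal per-pixel sketch of the ratio-plus-residual idea (not the patented method itself) can illustrate why residuals recover color alterations that a luminance ratio alone cannot. The Rec.709-style luma weights and the reconstruction formula below are assumptions for demonstration.

```python
# Sketch: encode a luminance ratio plus color-channel residuals per pixel,
# then restore the HDR pixel from the tone-mapped (TM) pixel and that data.

def luma(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b   # assumed Rec.709 weights

def encode_pixel(hdr_rgb, tm_rgb, eps=1e-6):
    """Return (luminance ratio, per-channel residuals) for one pixel."""
    ratio = luma(hdr_rgb) / max(luma(tm_rgb), eps)
    # residuals capture color alterations the ratio alone cannot undo
    residual = tuple(h - t * ratio for h, t in zip(hdr_rgb, tm_rgb))
    return ratio, residual

def decode_pixel(tm_rgb, ratio, residual):
    """Restore the HDR pixel from the TM pixel and reconstruction data."""
    return tuple(t * ratio + r for t, r in zip(tm_rgb, residual))

hdr = (4.0, 2.0, 1.0)
tm = (0.8, 0.5, 0.3)             # tone-mapped, possibly color-altered
ratio, res = encode_pixel(hdr, tm)
restored = decode_pixel(tm, ratio, res)
# restored matches hdr up to floating-point error
```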
Abstract:
Image encoding and decoding are described. An input HDR image that includes a base image and a ratio image may be stored using two or more color description profiles. One profile defines the encoding color space of the base image, and a second profile defines the encoding color space of the HDR metadata, which may differ from the color space of the base image.
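The described container can be sketched as a small data structure: a base image and a ratio image, each tagged with its own color description profile. The field and profile names below are illustrative, not the actual metadata format.

```python
# Hedged sketch of a layered HDR container with two color profiles.
from dataclasses import dataclass

@dataclass
class ColorProfile:
    color_space: str        # e.g. primaries / encoding space name
    transfer_function: str  # e.g. gamma or linear

@dataclass
class LayeredHDRImage:
    base_image: list                 # pixel data of the tone-mapped base
    ratio_image: list                # HDR metadata (ratio) layer
    base_profile: ColorProfile       # encoding color space of the base image
    metadata_profile: ColorProfile   # may differ from the base profile

img = LayeredHDRImage(
    base_image=[[0.2, 0.4]],
    ratio_image=[[1.5, 2.0]],
    base_profile=ColorProfile("Rec.709", "gamma 2.4"),
    metadata_profile=ColorProfile("linear RGB", "linear"),
)
```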
Abstract:
Techniques for operating a display system in a wide range of ambient light conditions are provided. An intensity of ambient light on a display panel may be detected. The display panel may be illuminated by light sources in addition to the ambient light. An individual light source may be individually settable to an individual light output level. If it is determined that the luminance level of the ambient light is above a minimum ambient luminance threshold, an ambient black level may be calculated using the intensity of ambient light. Light output levels of one or more of the light sources may be elevated to first light output levels. Here, the one or more light sources may be designated to illuminate one or more dark portions of an image. The first light output levels may create a new black level equaling the determined ambient black level.
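The ambient-adaptive black level step can be sketched as follows. The threshold, the Lambertian lux-to-luminance conversion, and the reflectance constant are assumptions for demonstration, not values from the described system.

```python
# Illustrative sketch: elevate light output in dark zones so the rendered
# black matches the black level implied by reflected ambient light.
MIN_AMBIENT_LUMINANCE = 5.0   # cd/m^2, assumed threshold
AMBIENT_REFLECTANCE = 0.02    # assumed fraction of ambient light reflected

def adjust_backlight(levels, dark_zones, ambient_lux):
    """Return new light output levels given the detected ambient intensity."""
    ambient_luminance = ambient_lux / 3.1416         # Lambertian approximation
    if ambient_luminance <= MIN_AMBIENT_LUMINANCE:
        return levels                                 # dark room: no change
    black = ambient_luminance * AMBIENT_REFLECTANCE   # ambient black level
    # raise only the sources designated to illuminate dark image portions
    return [max(lv, black) if i in dark_zones else lv
            for i, lv in enumerate(levels)]

levels = adjust_backlight([0.0, 0.0, 40.0, 60.0], dark_zones={0, 1},
                          ambient_lux=2000.0)
# the two dark-zone sources are elevated to the ambient black level
```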
Abstract:
Techniques for driving a dual modulation display include generating backlight drive signals to drive individually-controllable illumination sources. The illumination sources emit first light onto a light conversion layer. The light conversion layer converts the first light into second light. The light conversion layer can include quantum dots or phosphor materials. Modulation drive signals are generated to determine transmission of the second light through individual subpixels of the display. These modulation drive signals can be adjusted based on one or more light field simulations. The light field simulations can account for: (i) a color shift for a pixel based on a point spread function of the illumination sources; (ii) binning differences among individual illumination sources; (iii) temperature dependence of display component performance; or (iv) combinations thereof.
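A rough one-dimensional sketch of the dual-modulation pipeline: simulate the light field produced by the illumination sources (each spread by a point spread function), then set per-subpixel transmission so the field times the transmission approximates the target image. The Gaussian PSF and 1-D layout are assumptions; real systems operate on 2-D fields per color channel.

```python
# Sketch: backlight field simulation via an assumed Gaussian PSF, then
# per-pixel modulation signals derived from the simulated field.
import math

def simulate_light_field(drive_levels, positions, pixel_count, sigma=2.0):
    """Sum each source's Gaussian point spread function over the pixels."""
    field = [0.0] * pixel_count
    for level, pos in zip(drive_levels, positions):
        for x in range(pixel_count):
            field[x] += level * math.exp(-((x - pos) ** 2) / (2 * sigma ** 2))
    return field

def modulation_signals(target, field, eps=1e-6):
    """Per-pixel transmission (clipped to 0..1) so field * t ~= target."""
    return [min(1.0, t / max(f, eps)) for t, f in zip(target, field)]

target = [0.1, 0.5, 1.0, 0.5, 0.1, 0.0]
field = simulate_light_field([1.2, 0.8], positions=[2, 4], pixel_count=6)
trans = modulation_signals(target, field)
```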
Abstract:
At a first time point, a first light capturing device at a first spatial location in a three-dimensional (3D) space captures first light rays from light sources located at designated spatial locations on a viewer device in the 3D space. At the first time point, a second light capturing device at a second spatial location in the 3D space captures second light rays from the light sources located at the designated spatial locations on the viewer device in the 3D space. Based on the first light rays captured by the first light capturing device and the second light rays captured by the second light capturing device, at least one of a spatial position and a spatial direction, at the first time point, of the viewer device is determined.
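Determining a spatial position from rays captured at two known device locations reduces to triangulation. A common approach, sketched below under the assumption that the two capturing devices' poses are known, is to take the midpoint of the common perpendicular between the two rays; the specific geometry is illustrative, not the described system's method.

```python
# Midpoint triangulation of two 3-D rays p1 + t*d1 and p2 + s*d2.
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def triangulate(p1, d1, p2, d2):
    """Point closest to both rays (midpoint of the common perpendicular)."""
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b               # zero when the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t))          # closest point on ray 1
    q2 = add(p2, scale(d2, s))          # closest point on ray 2
    return scale(add(q1, q2), 0.5)

# Device at (1, 2, 3); capturing devices at (0,0,0) and (4,0,0), each
# observing a ray toward the device.
pos = triangulate((0, 0, 0), (1, 2, 3), (4, 0, 0), (-3, 2, 3))
```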
Abstract:
While a viewer is viewing a first stereoscopic image comprising a first left image and a first right image, a left vergence angle of a left eye of the viewer and a right vergence angle of a right eye of the viewer are determined. A virtual object depth is determined based at least in part on (i) the left vergence angle of the left eye of the viewer and (ii) the right vergence angle of the right eye of the viewer. A second stereoscopic image comprising a second left image and a second right image for the viewer is rendered on one or more image displays. The second stereoscopic image is subsequent to the first stereoscopic image. The second stereoscopic image is projected from the one or more image displays to a virtual object plane at the virtual object depth.
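The geometric relationship between vergence angles and depth can be sketched as follows: with the eyes on a horizontal axis separated by an interpupillary distance, the gaze rays cross at the depth where the viewer is fixating. The interpupillary distance and the inward-from-straight-ahead angle convention are assumptions for illustration.

```python
# Illustrative geometry only: depth at which the two gaze rays intersect,
# given each eye's inward vergence angle (radians) from straight ahead.
import math

IPD = 0.064  # assumed interpupillary distance in meters

def virtual_object_depth(left_vergence_rad, right_vergence_rad, ipd=IPD):
    """Depth (m) where the left- and right-eye gaze rays cross."""
    # Left eye at x=-ipd/2 tilts toward +x, right eye at +ipd/2 toward -x;
    # equating the two gaze lines gives z = ipd / (tan(aL) + tan(aR)).
    return ipd / (math.tan(left_vergence_rad) + math.tan(right_vergence_rad))

# Symmetric fixation on an object 1 m dead ahead: each eye rotates
# inward by atan((IPD/2) / 1.0).
angle = math.atan((IPD / 2) / 1.0)
depth = virtual_object_depth(angle, angle)   # ~1.0 m
```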
Abstract:
A target view to a 3D scene depicted by a multiview image is determined. The multiview image comprises multiple sampled views. Each sampled view comprises multiple texture images and multiple depth images in multiple image layers. The target view is used to select a subset of sampled views from the multiple sampled views of the multiview image. A texture image and a depth image for each sampled view in the selected sampled views are encoded into a multiview video signal to be transmitted to a downstream device.
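The view-selection step can be sketched as a nearest-neighbor choice: given a target view, pick the sampled views closest to it and encode only those. Reducing each view to a camera position and using Euclidean distance with a fixed count `k` are illustrative assumptions.

```python
# Sketch: select the k sampled views nearest to the target view position.
import math

def select_views(target_pos, sampled_views, k=2):
    """Return the k sampled views nearest to the target view position."""
    def dist(view):
        return math.dist(target_pos, view["position"])
    return sorted(sampled_views, key=dist)[:k]

views = [
    {"name": "v0", "position": (0.0, 0.0, 0.0)},
    {"name": "v1", "position": (1.0, 0.0, 0.0)},
    {"name": "v2", "position": (2.0, 0.0, 0.0)},
]
chosen = select_views((0.9, 0.0, 0.0), views)
# the texture and depth images of the chosen views would then be encoded
```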