Abstract:
Directional image sensor data may be acquired with one or more directional image sensors. A light source image and an illumination image may be generated based on the directional image sensor data. A number of operations may be caused to be performed for an image based at least in part on light source information in the light source image. The operations may include display management operations, device positional operations, augmented reality superimposition operations, ambient light control operations, etc.
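The abstract does not specify how light source information is extracted from the directional samples. A minimal sketch of one plausible approach follows: threshold per-pixel intensities to form a binary light source image, then compute an intensity-weighted dominant light direction that could drive a downstream operation such as display management or ambient light control. All array shapes, thresholds, and function names here are illustrative assumptions, not the patented method.

```python
# Sketch (assumed, not the patented method): derive a simple light source
# image from directional sensor samples and estimate a dominant light direction.
import numpy as np

def light_source_image(intensity, threshold=0.8):
    """Mark pixels whose normalized intensity suggests a direct light source."""
    norm = intensity / (intensity.max() + 1e-9)
    return norm > threshold                      # boolean mask of candidate light sources

def dominant_light_direction(directions, intensity, mask):
    """Intensity-weighted mean direction over the masked (light-source) pixels."""
    w = intensity[mask][:, None]
    d = directions[mask]
    v = (w * d).sum(axis=0)
    return v / (np.linalg.norm(v) + 1e-9)

# Example: a 4x4 sensor patch with per-pixel unit direction vectors.
rng = np.random.default_rng(0)
intensity = rng.random((4, 4))
directions = rng.normal(size=(4, 4, 3))
directions /= np.linalg.norm(directions, axis=-1, keepdims=True)

mask = light_source_image(intensity)
if mask.any():
    print("dominant light direction:", dominant_light_direction(directions, intensity, mask))
```

The resulting direction vector is one example of "light source information" that a display management or augmented reality superimposition step could consume.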
Abstract:
In some embodiments, a method upmixes input audio comprising N full range channels to generate 3D output audio comprising N+M full range channels, where the N+M full range channels are intended to be rendered by speakers including at least two speakers at different distances from the listener. The N channel input audio is a 2D audio program whose N full range channels are intended for rendering by N speakers nominally equidistant from the listener. The upmixing of the input audio to generate the 3D output audio is typically performed in an automated manner, in response to cues determined in automated fashion from stereoscopic 3D video corresponding to the input audio, or in response to cues determined in automated fashion from the input audio. Other aspects include a system configured to perform any embodiment of the inventive method, and a computer readable medium which stores code for implementing such an embodiment.
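The abstract leaves the upmix rule itself unspecified. The sketch below illustrates the general shape of such an operation for N = 2 and M = 1: a per-block depth cue in [0, 1] steers energy from the nominally equidistant pair toward an additional speaker at a different distance. The cue source (stereoscopic video disparity or audio analysis) and the square-root panning law are assumptions for illustration only, not the inventive upmixer.

```python
# Sketch (assumed): upmix 2 full range channels to 3 using a depth cue.
import numpy as np

def upmix_2_to_3(left, right, depth_cue):
    """Split the mid signal between the nominal-distance pair and a nearer speaker."""
    mid = 0.5 * (left + right)
    near_gain = np.sqrt(depth_cue)               # energy-preserving pan toward the near speaker
    far_gain = np.sqrt(1.0 - depth_cue)
    near = near_gain * mid                       # extra channel: speaker closer to the listener
    return far_gain * left, far_gain * right, near

# Example: a short 1 kHz tone upmixed with a cue indicating a fairly close source.
t = np.arange(0, 0.01, 1.0 / 48000)
tone = np.sin(2 * np.pi * 1000 * t)
out_l, out_r, out_near = upmix_2_to_3(tone, tone, depth_cue=0.7)
```

In an automated system the scalar `depth_cue` would be recomputed per time block from the video or audio analysis the abstract refers to.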
Abstract:
An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. The mapping can be a best-fit curve or a reasonable approximation. The mapping results in metadata values for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
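One way to realize the "best fit" mapping is a least-squares fit: sample the sigmoidal tone curve defined by the first model's metadata, then solve for the anchors of a Bernstein-basis (Bézier) polynomial used by the second model. The sigmoid parameterization and the Bézier order below are illustrative assumptions, not values taken from either color volume transformation model.

```python
# Sketch (assumed): fit Bezier (Bernstein-basis) anchors to a sampled sigmoid tone curve.
import numpy as np
from math import comb

def sigmoid_tone_curve(x, mid=0.5, slope=8.0):
    """Example sigmoidal tone map defined by source metadata (assumed form), mapping [0,1] -> [0,1]."""
    y = 1.0 / (1.0 + np.exp(-slope * (x - mid)))
    y0 = 1.0 / (1.0 + np.exp(slope * mid))
    y1 = 1.0 / (1.0 + np.exp(-slope * (1.0 - mid)))
    return (y - y0) / (y1 - y0)

def fit_bezier_anchors(x, y, order=3):
    """Least-squares fit of anchors P_i so that sum_i P_i * B_{i,order}(x) approximates y."""
    basis = np.stack([comb(order, i) * x**i * (1 - x)**(order - i)
                      for i in range(order + 1)], axis=1)
    anchors, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return anchors                               # candidate metadata values for the second model

x = np.linspace(0.0, 1.0, 256)
anchors = fit_bezier_anchors(x, sigmoid_tone_curve(x))
print("fitted Bezier anchors:", np.round(anchors, 3))
```

The fitted anchors play the role of the second model's metadata values; a receiving device configured for that model could then render the content using only the converted metadata.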