Abstract:
A two-dimensional augmentation image is rendered from a three-dimensional model from a first virtual perspective. A transformation is applied to the augmentation image to yield an updated two-dimensional augmentation image that approximates a second virtual perspective of the three-dimensional model without additional rendering from the three-dimensional model.
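The transformation described above amounts to warping the already-rendered 2D image rather than re-rendering the 3D model. As a minimal sketch (not the patented method itself), assuming the perspective change can be approximated by a hypothetical 3x3 homography `H`, an inverse-mapped warp with nearest-neighbor sampling might look like:

```python
import numpy as np

def warp_homography(image, H):
    """Warp a 2D image by the 3x3 homography H using inverse mapping
    and nearest-neighbor sampling; pixels mapping outside stay zero."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ coords
    src /= src[2]                       # homogeneous divide
    sx = np.round(src[0]).astype(int)
    sy = np.round(src[1]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[valid], xs.ravel()[valid]] = image[sy[valid], sx[valid]]
    return out
```

A homography is exact only for planar content; for general 3D scenes it is an approximation of the second virtual perspective, which is the trade-off the abstract describes.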
Abstract:
Embodiments provide a unified method for combining a plurality of images, such as high-dynamic-range exposure sets, flash/no-flash image pairs, and/or other images. A weight mask is defined for each of the plurality of images by calculating coefficients for each of the weight masks. Calculating the coefficients includes, at least, performing histogram alignment between a reference image and each of the other input images and applying a mismatch bias to the coefficients as a function of the histogram alignment. After the weight masks are applied to the corresponding images, the images are combined to produce a final image.
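The weighting scheme above can be sketched in Python. This is an illustrative interpretation, not the claimed implementation: the function name `combine_images`, the mean-matching stand-in for histogram alignment, and the exponential form of the mismatch bias are all assumptions chosen for brevity.

```python
import numpy as np

def combine_images(images, ref_index=0, bias_strength=1.0):
    """Blend images with per-pixel weight masks; weights are reduced
    where an image disagrees with the histogram-aligned reference."""
    ref = images[ref_index].astype(float)
    weights = []
    for img in images:
        img = img.astype(float)
        # crude stand-in for histogram alignment: match the reference mean
        gain = ref.mean() / max(img.mean(), 1e-6)
        aligned = img * gain
        # base weight favors well-exposed (mid-range) pixels
        base = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0
        # mismatch bias: down-weight pixels that disagree with the reference
        mismatch = np.abs(aligned - ref) / 255.0
        weights.append(base * np.exp(-bias_strength * mismatch))
    wsum = np.sum(weights, axis=0) + 1e-6
    return sum(w * img.astype(float) for w, img in zip(weights, images)) / wsum
```

The key point the abstract makes is that the mismatch bias is a function of the alignment, so moving objects or flash shadows that disagree with the reference contribute less to the blend.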
Abstract:
Methods for mapping color data having at least one color associated therewith to an output device based on an input-device profile and an output-device profile, each profile having a tone curve and a color matrix, are provided. In one embodiment, the method includes receiving color data from an input device and determining whether the color data is in a linear space. If it is determined that the color data is not in a linear space, the method further includes applying the tone curve of the input device profile to the color data to convert it into a linear space. The method further includes converting the color(s) associated with the color data from the input linear space to an output linear space by applying the color matrix of the input device profile and the inverse color matrix of the output device profile to create color-converted image data.
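The pipeline above (linearize with the input tone curve, then apply the input color matrix and the inverse of the output color matrix) can be sketched as follows. This is a simplified sketch under assumed conventions (row vectors, matrices that map device RGB to a shared working space), not the claimed method:

```python
import numpy as np

def map_colors(rgb, in_tone_curve, in_matrix, out_matrix, is_linear=False):
    """Map color data from an input-device profile to an output-device
    profile: linearize if needed, then apply the input color matrix and
    the inverse of the output color matrix."""
    rgb = np.asarray(rgb, dtype=float)
    if not is_linear:
        rgb = in_tone_curve(rgb)        # convert into the input linear space
    # input matrix -> working space; inverse output matrix -> output space
    return rgb @ in_matrix.T @ np.linalg.inv(out_matrix).T
```

With identity matrices the pipeline reduces to the tone curve alone, which is a convenient sanity check.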
Abstract:
Methods and a processing device are provided for restoring pixels damaged by artifacts caused by dust, or other particles, entering a digital image capturing device. A user interface may be provided for a user to indicate an approximate location of an artifact appearing in a digital image. Dust attenuation may be estimated, and an inverse transformation based on the estimated dust attenuation may be applied to damaged pixels in order to recover an estimate of the underlying digital image. One or more candidate source patches may be selected based on having the smallest pixel distances with respect to a target patch area. The damaged pixels included in the target patch area may be considered when calculating the pixel distance with respect to candidate source patches. RGB values of corresponding pixels of source patches may be used to restore the damaged pixels included in the target patch area.
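The patch-selection step can be sketched with a brute-force search. This is a hypothetical illustration (the function name, patch size, and the choice to exclude damaged pixels from the distance sum are assumptions), not the claimed algorithm:

```python
import numpy as np

def best_source_patch(image, mask, target_tl, patch=8):
    """Find the undamaged source patch with the smallest pixel distance
    to the target patch, skipping masked (damaged) pixels in the sum."""
    ty, tx = target_tl
    tgt = image[ty:ty + patch, tx:tx + patch].astype(float)
    tmask = mask[ty:ty + patch, tx:tx + patch]      # True where damaged
    valid = ~tmask
    best, best_d = None, np.inf
    h, w = image.shape[:2]
    for y in range(h - patch + 1):
        for x in range(w - patch + 1):
            if mask[y:y + patch, x:x + patch].any():
                continue                            # source must be clean
            src = image[y:y + patch, x:x + patch].astype(float)
            d = np.sum((src[valid] - tgt[valid]) ** 2)
            if d < best_d and (y, x) != (ty, tx):
                best, best_d = (y, x), d
    return best
```

Once a best source patch is found, its RGB values at the masked positions would be copied into the target patch, as the abstract describes.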
Abstract:
Various embodiments relating to using motion-based view matrix tuning to calibrate a head-mounted display device are disclosed. In one embodiment, holograms are rendered with different view matrices, each view matrix corresponding to a different inter-pupillary distance. Upon selection by the user of the most stable hologram, the head-mounted display device can be calibrated to the inter-pupillary distance corresponding to the selected hologram.
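The relationship between candidate inter-pupillary distances and per-eye view matrices can be sketched as below. This is a deliberately simplified, translation-only model for illustration (real stereo rendering also involves rotation and projection), and every name here is hypothetical:

```python
import numpy as np

def view_matrices_for_ipds(ipds_mm):
    """Left/right-eye view matrices (translation-only sketch) for each
    candidate inter-pupillary distance, given in millimeters."""
    out = []
    for ipd in ipds_mm:
        half = ipd / 1000.0 / 2.0       # half-IPD in meters
        left, right = np.eye(4), np.eye(4)
        left[0, 3] = +half              # shift the world opposite each eye
        right[0, 3] = -half
        out.append((left, right))
    return out

def calibrate(ipds_mm, selected_index):
    """Adopt the IPD whose hologram the user judged most stable."""
    return ipds_mm[selected_index]
```

The calibration itself is the selection step: each candidate view-matrix pair produces a hologram, and the user's "most stable" choice identifies the matching IPD.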
Abstract:
Disclosed herein are representative embodiments of tools and techniques for using storyboards to control a camera for capturing images, photographs, or video. According to one exemplary technique, at least two storyboards are stored, and at least one storyboard identifier is received from a camera application. Using the storyboard identifier, one of the stored storyboards is retrieved. The retrieved storyboard includes a sequence of control frames for controlling a camera. A sequence of image frames is then captured at least in part by controlling a camera using the retrieved storyboard.
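The store-retrieve-capture flow can be sketched with simple data structures. The class and field names (`ControlFrame`, `Storyboard`, `StoryboardStore`, `capture_fn`) are hypothetical, chosen only to mirror the steps in the abstract:

```python
from dataclasses import dataclass, field

@dataclass
class ControlFrame:
    """Per-frame camera settings, e.g. exposure, gain, flash."""
    exposure_ms: float
    gain: float
    flash: bool = False

@dataclass
class Storyboard:
    identifier: str
    frames: list = field(default_factory=list)

class StoryboardStore:
    """Stores storyboards and retrieves them by identifier, as a camera
    pipeline would when a camera application requests a sequence."""
    def __init__(self):
        self._boards = {}

    def add(self, board):
        self._boards[board.identifier] = board

    def retrieve(self, identifier):
        return self._boards[identifier]

def capture_sequence(store, identifier, capture_fn):
    """Capture one image per control frame of the retrieved storyboard."""
    board = store.retrieve(identifier)
    return [capture_fn(frame) for frame in board.frames]
```

For example, an "HDR" storyboard might hold several control frames with increasing exposure, while a "flash" storyboard holds a flash/no-flash pair; the camera application only supplies the identifier.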
Abstract:
Methods and a processing device are provided for reducing purple fringing artifacts appearing in a digital image. A linear filter may be applied to a digital image to identify purplish candidate regions of pixels. Pixels that are in any of the purplish candidate regions and are within a predefined distance of a high-gradient/high-contrast region may be identified as damaged pixels. A map of the damaged pixels may then be formed. In various embodiments, the damaged pixels may be reconstructed based on a Poisson blending approach, an approximated Poisson blending approach, or other approaches based on interpolation of values from undamaged pixels on a fringe boundary, with guidance from the green channel.
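The detection stage (purplish candidates near high-gradient regions) can be sketched as below; the reconstruction stage is omitted. The color thresholds, gradient threshold, and dilation scheme are illustrative assumptions, not the claimed filter:

```python
import numpy as np

def purple_fringe_mask(rgb, dist=2, grad_thresh=0.3):
    """Flag purplish pixels (blue and red high relative to green) lying
    within `dist` pixels of a high-gradient region of the green channel."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    purplish = (b > g + 0.1) & (r > g + 0.05)
    gy, gx = np.gradient(g)
    high_grad = np.hypot(gx, gy) > grad_thresh
    # dilate the high-gradient mask by `dist` with a box neighborhood
    near = high_grad.copy()
    for _ in range(dist):
        grown = near.copy()
        grown[1:, :] |= near[:-1, :]
        grown[:-1, :] |= near[1:, :]
        grown[:, 1:] |= near[:, :-1]
        grown[:, :-1] |= near[:, 1:]
        near = grown
    return purplish & near
```

Restricting the mask to the neighborhood of strong edges is what distinguishes fringing from legitimately purple image content; purplish pixels in flat regions are left untouched.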