Abstract:
To reduce a delay when panning a camera system, an estimate of a physical movement of the camera system is received. In response to the estimate, a determination is made of whether the camera system is being panned. In response to determining that the camera system is not being panned, most effects of the physical movement are counteracted in a video sequence from the camera system. In response to determining that the camera system is being panned, most effects of the panning are preserved in the video sequence, while concurrently the video sequence is shifted toward a position that balances flexibility in counteracting effects of a subsequent physical movement of the camera system.
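The control logic can be sketched as a per-frame update, reducing camera movement to a single horizontal shift in pixels; the function name, the pan threshold, and the 0.9/0.1 factors are illustrative assumptions, not values from the disclosure:

```python
def stabilize_step(measured_shift, correction, pan_threshold=5.0, recenter_rate=0.1):
    """One stabilization update for a single frame (hypothetical 1-D model).

    measured_shift: estimated physical camera movement this frame (pixels).
    correction: accumulated counter-shift currently applied to the video.
    Returns the new counter-shift to apply to the next frame.
    """
    if abs(measured_shift) < pan_threshold:
        # Small movement: treat it as unwanted shake and counteract most of it.
        return correction - 0.9 * measured_shift
    # Large movement: treat it as intentional panning. Preserve the motion and
    # drift the correction back toward zero (the neutral position), keeping
    # headroom to counteract subsequent movements in either direction.
    return correction * (1.0 - recenter_rate)
```

The key trade-off is visible in the second branch: instead of fighting the pan, the accumulated correction decays, so the stabilizer regains range for future shake removal.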
Abstract:
A method for brightness and contrast enhancement includes computing a luminance histogram of a digital image, computing first distances from the luminance histogram to a plurality of predetermined luminance histograms, estimating first control point values for a global tone mapping curve from predetermined control point values corresponding to a subset of the predetermined luminance histograms selected based on the computed first distances, and interpolating the estimated control point values to determine the global tone mapping curve. The method may also include dividing the digital image into a plurality of image blocks, and enhancing each pixel in the digital image by computing second distances from a pixel in an image block to the centers of neighboring image blocks, and computing an enhanced pixel value based on the computed second distances, predetermined control point values corresponding to the neighboring image blocks, and the global tone mapping curve.
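The global part of the method might be sketched as follows; the bin count, the number of selected neighbors `k`, the inverse-distance weighting, and the crude channel-mean luminance are illustrative assumptions rather than the patented choices:

```python
import numpy as np

def estimate_tone_curve(image, ref_hists, ref_ctrl_pts, k=2, bins=32):
    """Sketch: estimate a global tone mapping curve from reference histograms.

    image: H x W x 3 array with values in [0, 255].
    ref_hists: (M, bins) predetermined luminance histograms (density-normalized).
    ref_ctrl_pts: (M, P) predetermined control point values per reference.
    Returns a 256-entry lookup table for the global tone mapping curve.
    """
    luma = image.mean(axis=2)  # crude luminance approximation
    hist, _ = np.histogram(luma, bins=bins, range=(0, 256), density=True)
    dists = np.linalg.norm(ref_hists - hist, axis=1)   # "first distances"
    nearest = np.argsort(dists)[:k]                    # selected subset
    w = 1.0 / (dists[nearest] + 1e-9)                  # inverse-distance weights
    ctrl = (w[:, None] * ref_ctrl_pts[nearest]).sum(axis=0) / w.sum()
    xs = np.linspace(0, 255, ctrl.size)
    return np.interp(np.arange(256), xs, ctrl)         # interpolated tone curve
```

The local, block-wise enhancement described in the second sentence would repeat the same distance-weighted interpolation per pixel, using distances to neighboring block centers instead of histogram distances.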
Abstract:
A method of transforming an N-bit raw wide dynamic range (WDR) Bayer image to a K-bit raw red-green-blue (RGB) image wherein N&gt;K is provided that includes converting the N-bit raw WDR Bayer image to an N-bit raw RGB image, computing a luminance image from the N-bit raw RGB image, computing a pixel gain value for each luminance pixel in the luminance image to generate a gain map, applying a hierarchical noise filter to the gain map to generate a filtered gain map, applying the filtered gain map to the N-bit raw RGB image to generate a gain-mapped N-bit RGB image, and downshifting the gain-mapped N-bit RGB image by (N−K) to generate the K-bit RGB image.
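A heavily simplified numeric sketch of the gain-map-and-downshift pipeline; a square-root tone target stands in for the patented gain computation, a 3×3 box blur stands in for the hierarchical noise filter, and the Bayer-to-RGB conversion is assumed already done:

```python
import numpy as np

def box3(img):
    """3x3 box blur, a stand-in for the hierarchical noise filter."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def wdr_to_rgb(raw_rgb, n_bits=16, k_bits=8):
    """Sketch: compress an N-bit WDR RGB image to K bits via a luminance gain map."""
    max_val = float(2 ** n_bits - 1)
    luma = raw_rgb.mean(axis=2) / max_val                 # normalized luminance image
    tone = np.sqrt(luma)                                  # simple tone target
    gain = tone / np.maximum(luma, 1e-9)                  # per-pixel gain map
    gain = box3(gain)                                     # filtered gain map
    mapped = np.clip(raw_rgb * gain[..., None], 0, max_val)  # gain-mapped image
    # Downshift by (N - K) bits to reach the K-bit output range.
    return (mapped.astype(np.uint32) >> (n_bits - k_bits)).astype(np.uint8)
```

Filtering the gain map rather than the image itself is the notable design choice: noise in the per-pixel gains would otherwise be amplified in dark regions where the gain is largest.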
Abstract:
A method, an apparatus, and a system for multi-camera image processing are provided. The method includes performing geometric alignment to produce a geometric output, performing photometric alignment to produce a photometric output and a blending output, using data from the geometric alignment and the photometric alignment to perform a synthesis function for at least one of blending and stitching images from the multiple cameras, and displaying an image from the synthesis function.
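A toy version of the photometric alignment and blending steps for two horizontally adjacent, already geometrically aligned views; the single-gain brightness model and the linear cross-fade are illustrative simplifications:

```python
import numpy as np

def blend_pair(left, right, overlap):
    """Sketch: photometric alignment then linear seam blending of two views.

    left, right: H x W x 3 images whose last/first `overlap` columns coincide.
    """
    # Photometric alignment: a single gain matching mean brightness.
    gain = left.mean() / max(right.mean(), 1e-9)
    right = right * gain
    # Synthesis: linear cross-fade over the overlapping columns.
    w = np.linspace(1.0, 0.0, overlap)[None, :, None]
    seam = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.concatenate([left[:, :-overlap], seam, right[:, overlap:]], axis=1)
```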
Abstract:
A method of automatically focusing a projector in a projection system is provided that includes projecting, by the projector, a binary pattern on a projection surface, capturing an image of the projected binary pattern by a camera synchronized with the projector, computing a depth map from the captured image, and adjusting focus of the projector based on the computed depth map.
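Assuming a rectified projector-camera pair, the depth computation reduces to triangulation from pattern correspondences, and a single focus setting can be derived from the depth map; the baseline, focal length, and median heuristic below are assumptions, not values from the disclosure:

```python
import numpy as np

def focus_distance(proj_cols, cam_cols, baseline=0.10, focal_px=1000.0):
    """Sketch: structured-light depth by triangulation, focus from the median.

    proj_cols, cam_cols: matched column positions (pixels) of the same binary
    pattern elements in the projector and in the synchronized camera.
    """
    disparity = np.maximum(np.abs(np.asarray(proj_cols, float)
                                  - np.asarray(cam_cols, float)), 1e-9)
    depth_map = focal_px * baseline / disparity   # depth per matched element
    return float(np.median(depth_map))            # distance to focus the projector at
```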
Abstract:
A method of noise filtering of a digital video sequence is provided that includes computing a motion image for a frame, wherein the motion image includes a motion value for each pixel in the frame, and wherein the motion values are computed as differences between pixel values in a luminance component of the frame and corresponding pixel values in a luminance component of a reference frame, applying a first spatial noise filter to the motion image to obtain a final motion image, computing a blending factor image for the frame, wherein the blending factor image includes a blending factor for each pixel in the frame, and wherein the blending factors are computed based on corresponding motion values in the final motion image, generating a filtered frame, wherein the blending factors are applied to corresponding pixel values in the reference frame and the frame, and outputting the filtered frame.
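The per-pixel pipeline above can be sketched for single-channel luminance frames; the 3×3 box filter and the linear motion-to-blending-factor mapping are illustrative stand-ins for the filter and mapping actually claimed:

```python
import numpy as np

def box3(img):
    """3x3 box blur, a stand-in for the first spatial noise filter."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def temporal_filter(frame, ref, k=0.05):
    """Sketch: motion-adaptive blending of a frame with its reference frame."""
    motion = np.abs(frame - ref)                 # motion image (per-pixel differences)
    motion = box3(motion)                        # final motion image
    alpha = np.clip(1.0 - k * motion, 0.0, 1.0)  # blending factor image
    # Low motion -> lean on the reference (strong temporal filtering);
    # high motion -> keep the current frame (avoid ghosting).
    return alpha * ref + (1.0 - alpha) * frame
```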
Abstract:
A method of generating a high dynamic range (HDR) image is provided that includes capturing a long exposure image and a short exposure image of a scene, computing a merging weight for each pixel location of the long exposure image based on a pixel value of the pixel location and a saturation threshold, and computing a pixel value for each pixel location of the HDR image as a weighted sum of corresponding pixel values in the long exposure image and the short exposure image, wherein a weight applied to a pixel value of the pixel location of the short exposure image and a weight applied to a pixel value of the pixel location in the long exposure image are determined based on the merging weight computed for the pixel location and responsive to motion in a scene of the long exposure image and the short exposure image.
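A minimal numeric sketch of the merging rule; the exposure ratio, saturation threshold, and motion test are hypothetical, and pixel values are assumed normalized to [0, 1]:

```python
import numpy as np

def merge_hdr(long_img, short_img, exposure_ratio=8.0,
              sat_thresh=0.9, motion_thresh=0.05):
    """Sketch: merge long/short exposures into an HDR image via per-pixel weights."""
    short_lin = short_img * exposure_ratio   # bring short exposure to long's scale
    # Merging weight: favor the short exposure where the long one nears saturation.
    w_short = np.clip((long_img - sat_thresh) / (1.0 - sat_thresh), 0.0, 1.0)
    # Motion response: where the exposures disagree beyond a tolerance, a moving
    # object is assumed and the short exposure is used outright.
    moving = np.abs(short_lin - long_img) > motion_thresh
    w_short = np.where(moving, 1.0, w_short)
    return w_short * short_lin + (1.0 - w_short) * long_img
```

Falling back to the short exposure wherever the two images disagree is one common way to make the weighted sum "responsive to motion": it avoids ghosting at the cost of slightly more noise in moving regions.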
Abstract:
Disclosed examples include three-dimensional imaging systems and methods to reconstruct a three-dimensional scene from first and second image data sets obtained from a single camera at first and second times, including computing feature point correspondences between the image data sets, computing an essential matrix that characterizes relative positions of the camera at the first and second times, computing pairs of first and second projective transforms that individually correspond to regions of interest that exclude an epipole of the captured scene, as well as computing first and second rectified image data sets in which the feature point correspondences are aligned on a spatial axis by respectively applying the corresponding first and second projective transforms to corresponding portions of the first and second image data sets, and computing disparity values of a stereo disparity map according to the rectified image data sets to reconstruct the three-dimensional scene.
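Once the data are rectified, the final disparity computation can be illustrated with simple 1-D block matching along a scanline; the block size, search range, and SAD cost are generic stereo-matching choices, not the patented rectification method:

```python
import numpy as np

def disparity_row(left_row, right_row, block=3, max_d=8):
    """Sketch: SAD block matching for one pair of rectified scanlines."""
    left_row = np.asarray(left_row, float)
    right_row = np.asarray(right_row, float)
    n, half = left_row.size, block // 2
    disp = np.zeros(n, dtype=int)
    for x in range(half, n - half):
        patch = left_row[x - half:x + half + 1]
        best_d, best_cost = 0, np.inf
        for d in range(min(max_d, x - half) + 1):    # candidate disparities
            cand = right_row[x - d - half:x - d + half + 1]
            cost = np.abs(patch - cand).sum()        # sum of absolute differences
            if cost < best_cost:
                best_d, best_cost = d, cost
        disp[x] = best_d
    return disp
```

Rectification is what makes this 1-D search valid: after the projective transforms, corresponding feature points lie on the same spatial axis, so candidates need only be tested along the row.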
Abstract:
In some embodiments, a computer-readable medium stores executable code, which, when executed by a processor, causes the processor to: capture an image of a finder pattern using a camera; locate a predetermined point within the finder pattern; use the predetermined point to identify multiple boundary points on a perimeter associated with the finder pattern; identify fitted boundary lines based on the multiple boundary points; and locate feature points using the fitted boundary lines.
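The line-fitting and feature-point steps might look like the following; total least squares via SVD and corner-by-intersection are common choices assumed here, not taken from the disclosure:

```python
import numpy as np

def fit_boundary_line(points):
    """Fit a line a*x + b*y + c = 0 to boundary points (total least squares)."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]                                # dominant point direction
    a, b = -direction[1], direction[0]               # normal to that direction
    return a, b, -(a * centroid[0] + b * centroid[1])

def feature_point(line1, line2):
    """Feature point as the intersection of two fitted boundary lines."""
    a = np.array([line1[:2], line2[:2]])
    b = -np.array([line1[2], line2[2]])
    return np.linalg.solve(a, b)
```

Fitting lines to many boundary points and intersecting them locates corners with sub-pixel accuracy even when individual boundary pixels are noisy, which is the usual motivation for this pattern in fiducial detection.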