Abstract:
Techniques are provided for calculating temporally coherent disparity values for pixels in a sequence of image frames. An example method may include calculating initial spatial disparity costs between a pixel of a first image frame from a reference camera and pixels from an image frame from a secondary camera. The method may also include estimating a motion vector from the pixel of the first reference camera image frame to a corresponding pixel of a second reference camera image frame. The method may further include calculating a confidence value for the estimated motion vector based on a measure of similarity between the colors of the pixels of the first and second image frames from the reference camera. The method may further include calculating temporally coherent disparity costs based on the initial spatial disparity costs weighted by the confidence value, and selecting a disparity value based on those costs.
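The workflow described in this abstract can be illustrated with a small sketch. The Python/NumPy fragment below is a hypothetical illustration only: the function names (`spatial_cost_volume`, `motion_confidence`, `temporally_coherent_disparity`), the absolute-difference cost, and the exponential color-similarity weight are assumptions, not taken from the claims. It builds a per-pixel spatial cost volume, blends it with the motion-compensated costs of the previous frame using a confidence value derived from color similarity, and selects the minimum-cost disparity.

```python
import numpy as np

def spatial_cost_volume(ref, sec, max_disp):
    """Per-pixel absolute-difference cost for each candidate disparity
    between a reference image and a secondary image (horizontal baseline)."""
    h, w = ref.shape[:2]
    costs = np.full((h, w, max_disp + 1), 1e6)  # large finite cost for invalid shifts
    for d in range(max_disp + 1):
        # Shift the secondary image by d pixels and compare with the reference.
        diff = np.abs(ref[:, d:].astype(float) - sec[:, :w - d].astype(float))
        costs[:, d:, d] = diff.sum(axis=-1) if diff.ndim == 3 else diff
    return costs

def motion_confidence(cur_ref, prev_ref, flow):
    """Confidence in the estimated motion vectors, based on how similar the
    current pixel and its motion-compensated counterpart are in color."""
    h, w = cur_ref.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    py = np.clip(ys + flow[..., 1].round().astype(int), 0, h - 1)
    px = np.clip(xs + flow[..., 0].round().astype(int), 0, w - 1)
    color_diff = np.abs(cur_ref.astype(float) - prev_ref[py, px].astype(float)).sum(-1)
    # Exponential falloff: identical colors -> confidence near 1, large difference -> near 0.
    return np.exp(-color_diff / 30.0), py, px

def temporally_coherent_disparity(cur_ref, cur_sec, prev_ref, prev_costs, flow, max_disp):
    """Blend current spatial costs with motion-compensated previous costs,
    weighted by per-pixel motion confidence, then pick the cheapest disparity."""
    cur_costs = spatial_cost_volume(cur_ref, cur_sec, max_disp)
    conf, py, px = motion_confidence(cur_ref, prev_ref, flow)
    blended = (1.0 - conf[..., None]) * cur_costs + conf[..., None] * prev_costs[py, px]
    return np.argmin(blended, axis=-1), blended
```

In this sketch, a pixel whose color is stable across the two reference frames trusts the previous frame's costs more, which is one plausible reading of the confidence-weighted temporal cost described above.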
Abstract:
Techniques for improved image disparity estimation are described. In one embodiment, for example, an apparatus may comprise a processor circuit and an imaging management module, and the imaging management module may be operable by the processor circuit to determine a measured horizontal disparity factor and a measured vertical disparity factor for a rectified image array, determine a composite horizontal disparity factor for the rectified image array based on the measured horizontal disparity factor and an implied horizontal disparity factor, and determine a composite vertical disparity factor for the rectified image array based on the measured vertical disparity factor and an implied vertical disparity factor. Other embodiments are described and claimed.
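As a rough illustration of how measured and implied disparity factors might be combined, the sketch below is a hypothetical reading of this abstract: the function name, the baseline-ratio derivation of the implied factors, and the weighted-average combination rule are all assumptions, not taken from the claims.

```python
def composite_disparity_factors(measured_h, measured_v,
                                baseline_h, baseline_v,
                                weight_measured=0.5):
    """Combine measured and implied disparity factors for a rectified
    2D camera array (hypothetical combination rule: weighted average).

    measured_h / measured_v : disparity factors measured along the
        horizontal / vertical baselines of the array.
    baseline_h / baseline_v : baseline lengths; since disparity scales
        linearly with baseline, a factor measured along one axis implies
        a factor along the other axis via the baseline ratio.
    """
    implied_h = measured_v * (baseline_h / baseline_v)
    implied_v = measured_h * (baseline_v / baseline_h)
    composite_h = weight_measured * measured_h + (1.0 - weight_measured) * implied_h
    composite_v = weight_measured * measured_v + (1.0 - weight_measured) * implied_v
    return composite_h, composite_v

# Example: on an array with a 2:1 horizontal-to-vertical baseline ratio,
# each composite factor is pulled toward the value implied by the other axis.
print(composite_disparity_factors(12.0, 5.8, baseline_h=2.0, baseline_v=1.0))
```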
Abstract:
A range is determined for a disparity search for images from an image sensor array. In one example, a method includes receiving a reference image and a second image of a scene from multiple cameras of a camera array, detecting feature points of the reference image, matching points of the detected features to points of the second image, determining a maximum disparity between the reference image and the second image, and determining disparities between the reference image and the second image by comparing points of the reference image to points of the second image wherein the points of the second image are limited to points within the maximum disparity.
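A minimal sketch of this search-range limiting step might look like the following. The ORB detector, brute-force matcher, percentile-based cap, and block matcher are illustrative choices, not specified by the abstract: feature points are detected and matched between the reference and second images, the matched offsets define a maximum disparity, and the dense disparity search is then limited to that range.

```python
import cv2
import numpy as np

def estimate_max_disparity(ref_img, sec_img, percentile=99):
    """Detect and match feature points, then take a high percentile of the
    matched horizontal offsets as the maximum disparity for the dense search."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(ref_img, None)
    kp2, des2 = orb.detectAndCompute(sec_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    offsets = [abs(kp1[m.queryIdx].pt[0] - kp2[m.trainIdx].pt[0]) for m in matches]
    # A high percentile instead of the absolute maximum keeps the estimate
    # robust to an occasional bad match (an assumption, not part of the claims).
    return int(np.ceil(np.percentile(offsets, percentile)))

def dense_disparity(ref_img, sec_img, max_disp, block=9):
    """Standard block matcher whose search range is limited to max_disp."""
    # StereoBM requires numDisparities to be a positive multiple of 16.
    num_disp = max(16, int(np.ceil(max_disp / 16.0)) * 16)
    bm = cv2.StereoBM_create(numDisparities=num_disp, blockSize=block)
    return bm.compute(ref_img, sec_img)
```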
Abstract:
Global matching of pixel data across multiple images. Pixel values of an input image are modified to better match a reference image with a slope-thresholded histogram matching function. Visual artifacts are reduced by avoiding large pixel value modifications. For large intensity variations across the input and reference images, the slope of the mapping function is thresholded. Modification to the input image is therefore limited, and a corresponding modification to the reference image is made to improve the image matching. More than two images may be matched by iteratively modifying a cumulative mass function of a reference image to accommodate thresholded modification of multiple input images. A device may include logic to match pixel values across a plurality of image frames generated from a plurality of image sensors on the device. Once matched, image frames may be reliably processed further for pixel correspondence, or otherwise.
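A compact sketch of slope-thresholded histogram matching might look like the following. The slope limit, the lookup-table construction, and the monotonicity handling are assumptions made for illustration, and the iterative modification of the reference cumulative mass function for more than two images is not covered here: a standard histogram-matching lookup table is computed from the cumulative mass functions of the input and reference images, and its per-level slope is clipped so that no pixel value is modified by an excessive amount.

```python
import numpy as np

def matching_lut(input_img, ref_img, max_slope=3.0):
    """Build an 8-bit lookup table that maps input pixel values toward the
    reference distribution, with the slope of the mapping thresholded so that
    large intensity differences do not cause large pixel modifications."""
    # Cumulative mass functions of both images.
    cmf_in = np.cumsum(np.bincount(input_img.ravel(), minlength=256)) / input_img.size
    cmf_ref = np.cumsum(np.bincount(ref_img.ravel(), minlength=256)) / ref_img.size
    # Classic histogram matching: for each input level, find the reference
    # level with the closest cumulative mass.
    raw_map = np.searchsorted(cmf_ref, cmf_in).astype(float)
    # Threshold the slope: each step between consecutive levels is kept
    # non-negative (monotone mapping) and at most max_slope grey levels.
    lut = np.empty(256)
    lut[0] = raw_map[0]
    for v in range(1, 256):
        step = min(max(raw_map[v] - lut[v - 1], 0.0), max_slope)
        lut[v] = lut[v - 1] + step
    return np.clip(lut, 0, 255).astype(np.uint8)

def match_image(input_img, ref_img):
    """Apply the slope-thresholded mapping to the input image."""
    return matching_lut(input_img, ref_img)[input_img]
```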
Abstract:
Disparity is determined for images from an array of disparate image sensors. In one example, a method includes processing images from multiple cameras of a camera array to reduce variations caused by variations in the image sensors of the respective cameras, the multiple cameras including a reference camera and at least one secondary camera, and determining multiple baseline disparities from the image from the reference camera to the images from the secondary cameras.
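The two stages sketched below are hypothetical stand-ins for the processing this abstract describes; the intensity-standardization step, the absolute-difference matcher, and the assumption of horizontal baselines are illustrative choices, not taken from the claims. Each camera's image is first normalized to reduce per-sensor gain and offset differences, and a disparity map is then computed from the reference image to each secondary image along its own baseline.

```python
import numpy as np

def normalize_sensor(img):
    """Reduce per-sensor gain/offset variation by standardizing intensities
    (an illustrative stand-in for the per-camera processing in the abstract)."""
    img = img.astype(float)
    return (img - img.mean()) / (img.std() + 1e-6)

def disparity_to_secondary(ref, sec, max_disp):
    """Brute-force absolute-difference disparity from the reference image to
    one secondary image (horizontal baseline assumed for simplicity)."""
    h, w = ref.shape
    best = np.zeros((h, w), dtype=int)
    best_cost = np.full((h, w), np.inf)
    for d in range(max_disp + 1):
        cost = np.full((h, w), np.inf)
        cost[:, d:] = np.abs(ref[:, d:] - sec[:, :w - d])
        better = cost < best_cost
        best[better] = d
        best_cost[better] = cost[better]
    return best

def multi_baseline_disparities(ref_img, secondary_imgs, max_disp=64):
    """Normalize all images, then compute one disparity map per secondary camera."""
    ref = normalize_sensor(ref_img)
    return [disparity_to_secondary(ref, normalize_sensor(s), max_disp)
            for s in secondary_imgs]
```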