Abstract:
A system, method, and computer program product are provided for automatically and progressively determining focus depth estimates for an imaging device from defocused images. After a depth-from-defocus (DFD) system generates potentially noisy estimates of focus depth, optionally with a confidence level that each estimate is correct, embodiments of the present invention process a sequence of such input DFD measures to iteratively reduce the likelihood of focus depth ambiguity and to increase the overall confidence level of the focus depth estimate. Automatic focus systems for imaging devices may use these outputs to operate more quickly and accurately, either directly or in combination with other focus depth estimation methods, such as calculated sharpness measures. A depth map of a 3D scene may also be estimated, allowing a pair of images to be created from a single image.
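As an illustrative sketch only (not the claimed method; the function name, the Gaussian voting scheme, and the grid resolution are all assumptions), progressive fusion of noisy DFD measurements might look like this:

```python
# Illustrative only: fuse a stream of noisy (depth, confidence) DFD
# measurements over a discrete grid of candidate depths. Confidence-weighted
# votes accumulate, reducing ambiguity and raising overall confidence.
import numpy as np

def fuse_dfd_estimates(measurements, depth_grid, sigma=0.1):
    """measurements: iterable of (depth_estimate_m, confidence) pairs.
    Returns (most_supported_depth, crude_overall_confidence)."""
    scores = np.zeros_like(depth_grid, dtype=float)
    for depth, conf in measurements:
        # Gaussian vote centered on each noisy estimate, scaled by confidence.
        scores += conf * np.exp(-0.5 * ((depth_grid - depth) / sigma) ** 2)
    probs = scores / scores.sum()         # normalize into a distribution
    best = depth_grid[np.argmax(probs)]   # depth hypothesis with most support
    return best, float(probs.max())       # peakedness as a rough confidence

# Example: three consistent readings near 1.2 m plus one low-confidence outlier.
grid = np.linspace(0.5, 3.0, 251)
readings = [(1.18, 0.8), (1.25, 0.7), (2.00, 0.2), (1.22, 0.9)]
print(fuse_dfd_estimates(readings, grid))
```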
Abstract:
Methods for estimating illumination parameters under flickering lighting conditions are disclosed. Illumination parameters, such as phase and contrast, of an intensity-varying light source may be estimated by capturing a sequence of video images either before or after a desired still image to be processed. The relative average light intensities of the adjacently captured images are calculated and used to estimate the illumination parameters applicable to the desired still image. The estimated illumination parameters may be used to calculate the point spread function of a still image for image de-blurring processing. They may also be used to synchronize the exposure timing of a still image with the moment of peak illumination, as well as for motion estimation during view/video modes.
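One standard way such parameters could be estimated (an assumed approach shown for illustration, not necessarily the disclosed one) is a least-squares sinusoid fit to per-frame mean intensities at a known mains flicker frequency:

```python
# Assumed approach: fit I(t) = dc * (1 + m*sin(w*t + phi)) to the mean
# intensities of consecutive video frames, with the flicker frequency known
# (100 Hz or 120 Hz for 50/60 Hz mains lighting).
import numpy as np

def estimate_flicker(frame_means, frame_times, flicker_hz=100.0):
    """Returns (phase_radians, contrast) of the flickering illumination."""
    y = np.asarray(frame_means, dtype=float)
    t = np.asarray(frame_times, dtype=float)
    w = 2.0 * np.pi * flicker_hz
    # Linearized model: y = dc + a*sin(w*t) + b*cos(w*t)
    A = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
    dc, a, b = np.linalg.lstsq(A, y, rcond=None)[0]
    phase = np.arctan2(b, a)         # phi in sin(w*t + phi)
    contrast = np.hypot(a, b) / dc   # modulation depth m
    return phase, contrast

# Example: synthetic 30 fps frame means under 100 Hz flicker.
t = np.arange(12) / 30.0
means = 128 * (1 + 0.3 * np.sin(2 * np.pi * 100 * t + 0.7))
print(estimate_flicker(means, t))  # recovers roughly (0.7, 0.3)
```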
Abstract:
The likelihood of a particular type of object, such as a human face, being present within a digital image, and its location in that image, are determined by comparing the image data within defined windows across the image, in sequence, with two or more sets of data representing features of the particular type of object. The evaluation of each feature set after the first is preferably performed only on data of those windows that passed evaluation against the first feature set, thereby quickly narrowing the potential target windows that contain at least some portion of the object. Correlation scores are preferably calculated using non-linear interpolation techniques to obtain a more refined score. Evaluation of the individual windows also preferably includes maintaining separate feature set data for various positions of the object around one axis, and rotating the feature set data with respect to the image data of individual windows about another axis.
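A minimal sketch of the cascaded-window idea, with hypothetical stand-in feature scorers and arbitrary thresholds (not the claimed detector):

```python
# Illustrative two-stage cascade over sliding windows: the second, more
# expensive feature set is evaluated only for windows that passed stage 1,
# quickly discarding non-object regions. Scorers here are crude stand-ins.
import numpy as np

def sliding_windows(image, size=24, step=8):
    h, w = image.shape
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            yield (y, x), image[y:y + size, x:x + size]

def stage1_score(win):   # cheap proxy feature: local contrast
    return win.std()

def stage2_score(win):   # costlier proxy feature: gradient energy
    gy, gx = np.gradient(win.astype(float))
    return np.hypot(gx, gy).mean()

def detect(image, t1=20.0, t2=10.0):
    passed = [(pos, win) for pos, win in sliding_windows(image)
              if stage1_score(win) > t1]           # stage 1 gate
    return [pos for pos, win in passed
            if stage2_score(win) > t2]             # stage 2 confirmation

img = (np.random.rand(96, 96) * 255).astype(np.uint8)
print(detect(img))
```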
Abstract:
A device and methods for calculating a depth estimate for a digital imaging device are disclosed and claimed. In one embodiment, a method includes detecting a first image associated with a first focus parameter, detecting a second image associated with a second focus parameter, calculating a statistical representation of a region of interest in the first and second images, and determining a ratio for the region of interest based on the statistical representation. The method may further include determining one or more focus characteristics from a memory table based on the determined ratio for the region of interest, and calculating a focus depth for capture of image data based on the one or more focus characteristics determined from the memory table.
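A minimal sketch of the ratio-plus-memory-table structure described above; the table values, the choice of standard deviation as the statistic, and the linear interpolation are illustrative assumptions:

```python
# Illustrative only: a per-region statistic is computed for two images taken
# with different focus parameters; their ratio indexes a precomputed
# (calibrated) table mapping ratios to focus depths. Table values are made up.
import numpy as np

RATIO_TABLE = np.array([0.50, 0.75, 1.00, 1.33, 2.00])  # hypothetical
DEPTH_TABLE = np.array([0.40, 0.70, 1.20, 2.50, 5.00])  # depths in meters

def roi_stat(image, roi):
    y0, y1, x0, x1 = roi
    return image[y0:y1, x0:x1].astype(float).std()

def estimate_depth(img1, img2, roi):
    ratio = roi_stat(img1, roi) / (roi_stat(img2, roi) + 1e-9)
    # Linear interpolation into the calibrated memory table.
    return float(np.interp(ratio, RATIO_TABLE, DEPTH_TABLE))

a = np.random.rand(64, 64) * 255
b = np.random.rand(64, 64) * 255
print(estimate_depth(a, b, (16, 48, 16, 48)))
```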
Abstract:
A device and methods for producing a high dynamic range (HDR) image of a scene are disclosed and claimed. In one embodiment, a method includes setting an exposure period of an image sensor of the digital camera and capturing image data based on the exposure period. The method may further include checking the image data to determine whether the number of saturated pixels exceeds a saturation threshold, and checking the image data to determine whether the number of cutoff pixels exceeds a cutoff threshold. The method may further include generating a high dynamic range image based on image data captured by the digital camera, wherein the high dynamic range image is generated from the minimum number of images needed to capture the full dynamic range of the scene.
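A minimal sketch of such bracketing logic, assuming a hypothetical capture(exposure) camera call and made-up thresholds; it adds exposures only while the stack still clips at either end, keeping the image count minimal:

```python
# Illustrative only: grow the exposure stack only while the shortest frame
# still saturates or the longest frame still has cut-off (black) pixels.
import numpy as np

def capture(exposure_s):
    # Hypothetical stand-in for a real sensor readout at a given exposure.
    scene = np.random.rand(480, 640)
    return np.clip(scene * exposure_s * 4000, 0, 255).astype(np.uint8)

def capture_minimal_hdr_stack(base_exposure=0.01, sat_frac=0.01,
                              cut_frac=0.01, max_frames=5):
    exposures = [base_exposure]
    frames = [capture(base_exposure)]
    # Add shorter exposures while the shortest frame is still saturating.
    while (frames[0] == 255).mean() > sat_frac and len(frames) < max_frames:
        exposures.insert(0, exposures[0] / 4.0)
        frames.insert(0, capture(exposures[0]))
    # Add longer exposures while the longest frame still has cut-off pixels.
    while (frames[-1] == 0).mean() > cut_frac and len(frames) < max_frames:
        exposures.append(exposures[-1] * 4.0)
        frames.append(capture(exposures[-1]))
    return frames, exposures  # stack is ready for HDR merging

frames, exposures = capture_minimal_hdr_stack()
print(len(frames), exposures)
```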
Abstract:
Systems and methods are provided for reducing eye coloration artifacts in an image. An eye is detected in the image, and a pupil color for the eye and a skin color of skin associated with the eye are determined. At least one region of artifact coloration in the eye is then identified based on the pupil color and the skin color, and the coloration of that region is modified to compensate for the artifact coloration.
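A minimal sketch of one plausible implementation, with assumed color-distance thresholds and a simple brightness-preserving recoloring (illustrative only, not the claimed method):

```python
# Illustrative only: flag strongly red pixels in the eye region that match
# neither the pupil color nor the skin color, then recolor them toward the
# pupil color while preserving per-pixel brightness. Thresholds are made up.
import numpy as np

def correct_eye_artifact(eye_rgb, pupil_color, skin_color):
    img = eye_rgb.astype(float)
    pupil = np.asarray(pupil_color, dtype=float)
    skin = np.asarray(skin_color, dtype=float)
    # Artifact pixels: strongly red, yet unlike both reference colors.
    redness = img[..., 0] - img[..., 1:].max(axis=-1)
    d_pupil = np.linalg.norm(img - pupil, axis=-1)
    d_skin = np.linalg.norm(img - skin, axis=-1)
    mask = (redness > 50) & (d_pupil > 60) & (d_skin > 60)
    # Recolor toward the pupil color while keeping per-pixel brightness.
    luma = img.mean(axis=-1, keepdims=True)
    recolored = np.clip(luma / (pupil.mean() + 1e-9) * pupil, 0, 255)
    out = img.copy()
    out[mask] = recolored[mask]
    return out.astype(np.uint8)

eye = np.full((20, 20, 3), (200, 150, 130), np.uint8)  # skin-toned patch
eye[6:14, 6:14] = (220, 40, 40)                        # red artifact region
print(correct_eye_artifact(eye, (30, 30, 30), (200, 150, 130))[10, 10])
```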
Abstract:
A method of face recognition includes generating a recognition database for at least one identified face by obtaining multiple images of each identified face and selecting a subset of distinctive features for each identified face from a set of features, where each distinctive feature in the subset has at least one calculated value, representative of that feature of the identified face, that exceeds a threshold level of distinction from at least one corresponding calculated value for a reference set of faces; for each identified face, the selected subset of distinctive features is recorded in the recognition database. To recognize an image of an unidentified face, a comparison metric is calculated for at least one identified face in the recognition database, comparing at least a portion of the selected subset of distinctive features of that identified face with corresponding features of the unidentified face. The comparison metric for that identified face is used to determine whether there is a correlation between the unidentified face and the identified face.
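A minimal sketch of distinctive-feature enrollment and matching, using a z-score threshold against a reference population; the feature dimensionality, thresholds, and distance metric are assumptions, not the claimed method:

```python
# Illustrative only: for each enrolled face, keep just the features that
# deviate from a reference population by more than a z-score threshold;
# matching compares an unknown face on each enrolled face's own subset.
import numpy as np

def enroll(face_samples, ref_mean, ref_std, z_thresh=2.0):
    """face_samples: (n_images, n_features) values for one identified face.
    Returns (indices, values) of that face's distinctive features."""
    mean = face_samples.mean(axis=0)
    z = np.abs(mean - ref_mean) / (ref_std + 1e-9)
    idx = np.where(z > z_thresh)[0]
    return idx, mean[idx]

def match_score(unknown, idx, values, ref_std):
    # Smaller is better: normalized distance over the distinctive subset.
    return (np.abs(unknown[idx] - values) / (ref_std[idx] + 1e-9)).mean()

rng = np.random.default_rng(0)
ref_mean, ref_std = np.zeros(128), np.ones(128)
samples = rng.normal(0, 1, (5, 128)); samples[:, [3, 40]] += 4.0
idx, vals = enroll(samples, ref_mean, ref_std)
probe = rng.normal(0, 1, 128); probe[[3, 40]] += 4.0
print(idx, match_score(probe, idx, vals, ref_std) < 2.0)
```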
Abstract:
A method of processing an image includes determining at least one point spread function for the image. For each point spread function, the image, or at least a portion of it, is filtered using at least one filter based on that point spread function, generating a corresponding filtered image for each filter. If only a single point spread function is determined, then a plurality of different filters are used to individually filter the image, or at least a portion of it, generating a plurality of different filtered images. At least one quality metric is determined for each of the filtered images, and a final filtered image is selected from among them based on that quality metric.
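A minimal sketch of the filter-bank-and-metric selection, using Wiener deconvolution filters that differ only in regularization strength and a crude gradient-energy sharpness metric (both are assumptions, not the claimed method):

```python
# Illustrative only: deblur with several Wiener filters derived from one
# estimated PSF, then keep the candidate scoring best on a simple metric.
import numpy as np

def wiener_deblur(image, psf, k):
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G   # Wiener inverse filter
    return np.real(np.fft.ifft2(F))

def quality(img):
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy).mean()              # crude sharpness metric

def best_deblur(image, psf, ks=(1e-3, 1e-2, 1e-1)):
    candidates = [wiener_deblur(image, psf, k) for k in ks]
    scores = [quality(c) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Example: blur a test image with a 5x5 box PSF, then recover it.
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
psf = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
print(quality(best_deblur(blurred, psf)) > quality(blurred))  # True
```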