Abstract:
A method of processing an image includes obtaining the image; determining a point spread function for the image; applying a filter, based on the point spread function, to at least a portion of the image to form a filtered image; and generating a processed image by blending the filtered image with the image or another filtered image, wherein a first portion of the processed image is generated using a different amount of blending of the filtered image with the image or other filtered image than is used for a second portion of the processed image.
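The spatially varying blend described above can be sketched as a per-pixel weighted mix of the filtered and original images. This is an illustrative sketch, not the patent's implementation; the function name and weight map are hypothetical, and the "filtered" array merely stands in for a PSF-based filter result.

```python
import numpy as np

def blend_variable(original, filtered, weights):
    """Blend a filtered image with the original using a per-pixel
    weight map, so a first portion of the output uses a different
    amount of the filtered image than a second portion
    (weights: 0.0 = original, 1.0 = fully filtered)."""
    w = np.clip(weights, 0.0, 1.0)
    return w * filtered + (1.0 - w) * original

# Toy example: apply the filtered result only to the right half.
original = np.full((4, 4), 100.0)
filtered = np.full((4, 4), 140.0)   # stand-in for a PSF-based filter output
weights = np.zeros((4, 4))
weights[:, 2:] = 1.0                # full blending on the right half only
out = blend_variable(original, filtered, weights)
```

The left half of `out` keeps the original values while the right half takes the filtered values, mirroring the claim's two differently blended portions.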
Abstract:
In a digital camera or other image acquisition device, motion vectors between successive image frames of an object scene are calculated from normalized values of pixel luminance in order to reduce or eliminate any effects on the motion calculation that might occur when the object scene is illuminated from a time varying source such as a fluorescent lamp. Calculated motion vectors are checked for accuracy by a robustness matrix.
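One way the normalization idea could work is to zero-mean and unit-variance normalize each block's luminance before block matching, so a uniform brightness change between frames (such as fluorescent flicker) cancels out. The following is a hedged sketch with hypothetical names; the patent does not specify this exact search or cost.

```python
import numpy as np

def normalize(block):
    """Normalize pixel luminance so a global brightness change
    (e.g. fluorescent-lamp flicker) does not affect matching."""
    b = block.astype(float)
    return (b - b.mean()) / (b.std() + 1e-9)

def motion_vector(prev, curr, block=(0, 0), size=4, search=2):
    """Exhaustive block search on normalized luminance; returns the
    (dy, dx) displacement minimizing the sum of absolute differences."""
    y0, x0 = block
    ref = normalize(prev[y0:y0 + size, x0:x0 + size])
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue
            cand = normalize(curr[y:y + size, x:x + size])
            cost = np.abs(ref - cand).sum()
            if cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv

# Toy check: a pattern shifted by (1, 1) under a brightness change.
prev = np.zeros((8, 8))
prev[2:6, 2:6] = np.arange(16.0).reshape(4, 4)
curr = np.zeros((8, 8))
curr[3:7, 3:7] = prev[2:6, 2:6] * 1.5 + 10.0   # same pattern, brighter
mv = motion_vector(prev, curr, block=(2, 2))
```

Because normalization removes the affine brightness change, the shifted block still matches exactly and the motion vector (1, 1) is recovered.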
Abstract:
The likelihood of a particular type of object, such as a human face, being present within a digital image, and its location in that image, are determined by comparing the image data within defined windows across the image in sequence with two or more sets of data representing features of the particular type of object. The evaluation of each set of features after the first is preferably performed only on data of those windows that pass the evaluation with respect to the first set of features, thereby quickly narrowing potential target windows that contain at least some portion of the object. Correlation scores are preferably calculated by the use of non-linear interpolation techniques in order to obtain a more refined score. Evaluation of the individual windows also preferably includes maintaining separate feature set data for various positions of the object around one axis and rotating the feature set data with respect to the image data for the individual windows about another axis.
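The staged narrowing of candidate windows can be sketched as a cascade: each feature-set evaluation after the first runs only on windows that passed the previous one. This toy sketch uses hypothetical scorer functions in place of real feature-set correlations.

```python
def cascade_detect(windows, stages):
    """Evaluate windows against successive feature-set scorers; only
    windows that pass one stage are evaluated by the next stage,
    quickly narrowing the set of potential target windows."""
    candidates = list(windows)
    for score, threshold in stages:
        candidates = [w for w in candidates if score(w) >= threshold]
    return candidates

# Toy example: windows are stand-in scores, stages are (scorer, threshold).
windows = [1, 5, 9]
stages = [(lambda w: w, 3),        # first feature set: keeps 5 and 9
          (lambda w: 2 * w, 12)]   # second set runs only on survivors
faces = cascade_detect(windows, stages)
```

In a real detector each scorer would be a correlation against one set of object-feature data, but the pass-then-refine control flow is the point illustrated here.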
Abstract:
A system and method for capturing images is provided. In the system and method, preview images are acquired and global motion and local motion are estimated based on at least a portion of the preview images. If the local motion is less than or equal to the global motion, a final image is captured based at least on an exposure time based on the global motion. If the local motion is greater than the global motion, a first image is captured based on at least a first exposure time and at least a second image is captured based on at least one second exposure time less than the first exposure time. After capturing the first and second images, global motion regions are separated from local motion regions in the first and second images, and the final image is reconstructed at least based on the local motion regions.
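The capture decision above can be summarized as a branch on the two motion estimates. The numeric exposure model below is purely illustrative (the patent does not give one); only the branching structure follows the abstract.

```python
def plan_capture(global_motion, local_motion, base_exposure):
    """Decide the capture strategy from estimated motion:
    - local <= global: single capture, exposure set from global motion
    - local >  global: a first exposure plus a strictly shorter second one.
    The 1/(1+motion) scaling is an illustrative stand-in."""
    if local_motion <= global_motion:
        exposure = base_exposure / (1.0 + global_motion)
        return ("single", [exposure])
    first = base_exposure
    second = base_exposure / (1.0 + local_motion)   # less than the first
    return ("dual", [first, second])

mode_a, exp_a = plan_capture(global_motion=2.0, local_motion=1.0, base_exposure=9.0)
mode_b, exp_b = plan_capture(global_motion=1.0, local_motion=3.0, base_exposure=10.0)
```

The dual-capture branch always yields a second exposure shorter than the first, matching the abstract's requirement.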
Abstract:
Systems and methods are provided for reducing eye coloration artifacts in an image. In the system and method, an eye is detected in the image and a pupil color for the eye in the image and a skin color of skin in the image associated with the eye are determined. At least one region of artifact coloration in the eye in the image is then identified based on the pupil color and the skin color, and a coloration of the region is modified to compensate for the artifact coloration.
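A minimal sketch of using both the pupil color and the skin color to flag and repaint artifact pixels. The color-distance test here is an invented stand-in for the patent's identification step, and all names are hypothetical.

```python
import numpy as np

def correct_eye_artifacts(image, eye_mask, pupil_color, skin_color):
    """Flag pixels inside the eye mask whose color is closer to the
    measured skin color than to the expected pupil color (a simple
    distance heuristic, not the patented criterion), then modify
    their coloration toward the pupil color."""
    img = image.astype(float)
    d_pupil = np.linalg.norm(img - pupil_color, axis=-1)
    d_skin = np.linalg.norm(img - skin_color, axis=-1)
    artifact = eye_mask & (d_skin < d_pupil)
    out = img.copy()
    out[artifact] = pupil_color
    return out, artifact

# Toy example: one skin-colored pixel inside the eye, one dark pupil pixel.
pupil = np.array([20.0, 20.0, 20.0])
skin = np.array([200.0, 120.0, 100.0])
image = np.array([[[190.0, 110.0, 95.0], [25.0, 20.0, 20.0]]])
mask = np.array([[True, True]])
out, artifact = correct_eye_artifacts(image, mask, pupil, skin)
```

Only the skin-like pixel is flagged and recolored; the already-pupil-like pixel is left alone.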
Abstract:
A camera provides a panorama mode of operation that employs internal software and internal acceleration hardware to stitch together two or more captured images into a single panorama image with a wide format. Captured images are projected from rectilinear coordinates into cylindrical coordinates with the aid of image interpolation acceleration hardware. Matches are quickly determined between each pair of images with a block based search that employs motion estimation acceleration hardware. Transformations that align the captured images with each other are found using regression and robust statistics techniques, and are applied to the images using the interpolation acceleration hardware. A determination is made of an optimal seam for stitching images together in the overlap region by finding a path that cuts through relatively non-noticeable regions, so that the images can be stitched together into a single image with a wide panoramic effect.
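The rectilinear-to-cylindrical projection step uses the standard warp shown below. The function name and parameterization are illustrative; a camera would run this mapping per pixel on its interpolation hardware rather than in Python.

```python
import math

def rectilinear_to_cylindrical(x, y, f, cx, cy):
    """Map a rectilinear image coordinate (x, y) onto a cylinder of
    focal length f with principal point (cx, cy): theta is the angle
    around the cylinder axis, h the height on the cylinder surface."""
    theta = math.atan((x - cx) / f)
    h = (y - cy) / math.hypot(x - cx, f)
    return f * theta + cx, f * h + cy

# The principal point maps to itself; off-center points are pulled
# inward, which is what removes the stretching at wide panorama edges.
center = rectilinear_to_cylindrical(320.0, 240.0, 500.0, 320.0, 240.0)
```

In practice the inverse mapping is used with the interpolation hardware so every output pixel samples a source location.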
Abstract:
A system, method, and computer program product for capturing images for later refocusing. Embodiments estimate a distance map for a scene, determine a number of principal depths, capture a set of images, with each image focused at one of the principal depths, and process captured images to produce an output image. The scene is divided into regions, and the depth map represents region depths corresponding to a particular focus step. Entries having a specific focus step value are placed into a histogram, and depths having the most entries are selected as the principal depths. Embodiments may also identify scene areas having important objects and include different important object depths in the principal depths. Captured images may be selected according to user input, aligned, and then combined using blending functions that favor only scene regions that are focused in particular captured images.
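The principal-depth selection can be sketched as a histogram over the depth map's focus-step values, keeping the most frequent depths and forcing in any important-object depths. This is a minimal sketch with hypothetical names, not the claimed implementation.

```python
from collections import Counter

def principal_depths(depth_map, k, important=()):
    """Histogram the per-region focus-step values, select the k depths
    with the most entries as principal depths, then append any
    important-object depths not already included."""
    counts = Counter(depth_map)
    depths = [d for d, _ in counts.most_common(k)]
    for d in important:
        if d not in depths:
            depths.append(d)
    return depths

# Toy example: focus steps per scene region, two principal depths,
# plus one important-object depth that the histogram alone would miss.
steps = [3, 3, 3, 7, 7, 1]
chosen = principal_depths(steps, k=2, important=(5,))
```

The camera would then capture one image focused at each chosen depth and blend the in-focus regions into the output image.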
Abstract:
Methods for estimating the point spread function (PSF) of a motion-blurred image are disclosed and claimed. In certain embodiments, the estimated PSF may be used to compensate for the blur caused by hand-shake without the use of an accelerometer or gyro. Edge spread functions may be extracted along different directions from straight edges in a blurred image and combined to find the PSF that best matches. In other embodiments, the blur response to edges of other forms may similarly be extracted, such as corners or circles, and combined to find the best matching PSF. The PSF may then be represented in a parametric form, where the parameters used are related to low-order polynomial coefficients of the angular velocity vx(t) and vy(t) as a function of time.
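The edge-based extraction rests on a standard relation: differentiating an edge spread function (ESF) sampled across a straight edge yields the line spread function (LSF), a 1-D projection of the PSF along that edge's direction. A minimal sketch of that step, with an illustrative box-blur ESF:

```python
import numpy as np

def line_spread_from_edge(esf):
    """Differentiate an edge spread function to obtain the line
    spread function (a 1-D projection of the PSF along the edge
    direction), normalized to unit area."""
    lsf = np.diff(esf)
    s = lsf.sum()
    return lsf / s if s else lsf

# ESF of an ideal step edge blurred by a 3-sample box: the ramp's
# derivative recovers the flat 3-sample blur kernel.
esf = np.array([0.0, 0.0, 1/3, 2/3, 1.0, 1.0])
lsf = line_spread_from_edge(esf)
```

Combining such projections from edges at several orientations constrains the 2-D PSF, which the abstract then represents parametrically via low-order polynomial coefficients of vx(t) and vy(t).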