Abstract:
In an exemplary implementation of this invention, light from a scattering scene passes through a spatial light attenuation pattern and strikes a sensor plane of a camera. Based on said camera's measurements of the received light, a processing unit calculates angular samples of the received light. Light that strikes the sensor plane at certain angles comprises both scattered and directly transmitted components, whereas light that strikes at other angles comprises solely scattered light. A processing unit calculates a polynomial model for the intensity of the scattered-only light that falls at the latter angles and, using that model, estimates the direct-only component of the light that falls at the former angles. Further, a processing unit may use the estimated direct component to calculate a reconstructed 3D shape, such as a 3D shape of a finger vein pattern, using an algebraic reconstruction technique.
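The separation step can be illustrated with a short sketch. The snippet below is only an illustration, not the patented implementation: it fits a polynomial to the scatter-only angular samples and subtracts the predicted scatter from the mixed samples. The variable names, polynomial degree, and angle split are assumptions.

    import numpy as np

    def estimate_direct(angles, intensities, scatter_only_mask, degree=3):
        # angles, intensities: 1D arrays of angular samples of the received light.
        # scatter_only_mask: True where a sample contains scattered light only.
        # Fit a low-order polynomial to the scattered-only samples.
        coeffs = np.polyfit(angles[scatter_only_mask],
                            intensities[scatter_only_mask], degree)
        # Predict the scattered contribution at every angle.
        scatter_model = np.polyval(coeffs, angles)
        # Estimated direct-only component = measured minus modeled scatter,
        # meaningful at angles that receive direct plus scattered light.
        direct = np.clip(intensities - scatter_model, 0.0, None)
        direct[scatter_only_mask] = 0.0
        return direct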
Abstract:
Glare is reduced by acquiring an input image with a camera having a lens and a sensor, in which a pin-hole mask is placed in close proximity to the sensor. The mask localizes the glare at readily identifiable pixels, which can then be filtered to produce a glare-reduced output image.
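As a rough illustration of the filtering step only (the outlier rule and window size are assumptions, not the patent's method), glare pixels that stand out from their neighborhood can be detected against a local median and replaced:

    import numpy as np
    from scipy.ndimage import median_filter

    def remove_glare_outliers(image, window=5, k=3.0):
        # Glare confined by the pin-hole mask shows up as local outliers.
        med = median_filter(image, size=window)
        dev = np.abs(image - med)
        sigma = np.median(dev) + 1e-8      # robust estimate of typical deviation
        glare_mask = dev > k * sigma       # pixels dominated by glare
        out = image.copy()
        out[glare_mask] = med[glare_mask]  # replace glare pixels with the local median
        return out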
Abstract:
A method and system determines a 3D pose of an object in a scene. Depth edges are determined from a set of images acquired of a scene including multiple objects while varying illumination in the scene. The depth edges are linked to form contours. The images are segmented into regions according to the contours. An occlusion graph is constructed using the regions. The occlusion graph includes a source node representing an unoccluded region of an unoccluded object in the scene. The contour associated with the unoccluded region is compared with a set of silhouettes of the objects, in which each silhouette has a known pose. The known pose of the best matching silhouette is selected as the pose of the unoccluded object.
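The final matching step can be sketched as follows; chamfer matching is used here as one plausible contour-to-silhouette similarity, which the abstract does not prescribe, and the data layout is an assumption.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def chamfer_score(contour_mask, silhouette_mask):
        # Both arguments are boolean edge masks of the same shape.
        dt = distance_transform_edt(~silhouette_mask)  # distance to nearest silhouette pixel
        ys, xs = contour_mask.nonzero()
        return dt[ys, xs].mean() if len(ys) else np.inf

    def best_pose(contour_mask, silhouette_db):
        # silhouette_db: list of (known_pose, silhouette_edge_mask) pairs.
        return min(silhouette_db,
                   key=lambda entry: chamfer_score(contour_mask, entry[1]))[0]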
Abstract:
A method and system deblurs images acquired of a scene by a camera. A light field acquired of the scene is modulated temporally according to an on/off sequence. The modulated light field is integrated by a sensor of the camera during an exposure time to generate an encoded input image. The encoded input image is decoded according to a pseudo-inverse of a smearing matrix to produce a decoded output image having reduced blur.
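For a 1D blur along image rows, the decoding can be sketched as below; the smearing matrix built from the on/off code and the least-squares decode via its pseudo-inverse follow the abstract, while the specific convolution model and boundary handling are assumptions.

    import numpy as np

    def smearing_matrix(code, n):
        # code: binary on/off exposure sequence; n: length of the sharp signal.
        k = len(code)
        A = np.zeros((n + k - 1, n))
        for i in range(n):
            A[i:i + k, i] = code          # each column is the code shifted in time
        return A

    def decode(blurred_row, code):
        n = len(blurred_row) - len(code) + 1
        A = smearing_matrix(code, n)
        return np.linalg.pinv(A) @ blurred_row   # least-squares deblurred row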
Abstract:
A method detects silhouette edges in images. An ambient image is acquired of a scene with ambient light. A set of illuminated images is also acquired of the scene. Each illuminated image is acquired with a different light source illuminating the scene. The ambient image is combined with the set of illuminated images to detect cast shadows, and silhouette edge pixels are located from the cast shadows.
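One common way to realize the shadow detection, sketched here with assumed thresholds, is to subtract the ambient image, divide each illuminated image by the per-pixel maximum across the set, and mark low-ratio pixels as cast shadows; silhouette edge pixels can then be located at the lit-to-shadow transitions adjacent to each shadow.

    import numpy as np

    def cast_shadow_masks(ambient, flash_images, shadow_thresh=0.3):
        # Remove the ambient contribution from each illuminated image.
        flashes = [np.clip(f - ambient, 0.0, None) for f in flash_images]
        max_img = np.maximum.reduce(flashes) + 1e-8
        # Ratio near 1: lit by this light source; near 0: inside its cast shadow.
        ratios = [f / max_img for f in flashes]
        return [r < shadow_thresh for r in ratios]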
Abstract:
A method generates a high dynamic range image by first acquiring a set of images of a scene illuminated under different lighting conditions. The set of images is then combined to generate the high dynamic range image.
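A minimal sketch of one way to combine the differently lit images, assuming they are registered and scaled to [0, 1] and using a simple well-exposedness weight that the abstract does not specify:

    import numpy as np

    def combine_hdr(images):
        # images: list of registered float images scaled to [0, 1].
        stack = np.stack(images)
        weights = 1.0 - np.abs(2.0 * stack - 1.0)   # favor mid-range (well-exposed) pixels
        weights += 1e-8                             # avoid division by zero
        return (weights * stack).sum(axis=0) / weights.sum(axis=0)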
Abstract:
A method and system generate an enhanced output image. A first image is acquired of a scene illuminated by a first illumination condition. A second image is acquired of the scene illuminated by a second illumination condition. First and second gradient images are determined from the first and second images. Orientations of gradients in the first and second gradient images are compared to produce a combined gradient image, and an enhanced output image is constructed from the combined gradient image.
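The gradient-comparison step might look like the following sketch, which keeps the first image's gradient where the two orientations agree and falls back to the second image's gradient elsewhere; the agreement threshold and fallback rule are assumptions, and the reconstruction of the output image from the combined gradients is omitted.

    import numpy as np

    def combine_gradients(img1, img2, angle_thresh_deg=30.0):
        gy1, gx1 = np.gradient(img1)
        gy2, gx2 = np.gradient(img2)
        # Angle between the two gradient vectors at each pixel.
        dot = gx1 * gx2 + gy1 * gy2
        norm = np.hypot(gx1, gy1) * np.hypot(gx2, gy2) + 1e-8
        angle = np.degrees(np.arccos(np.clip(dot / norm, -1.0, 1.0)))
        agree = angle < angle_thresh_deg
        gx = np.where(agree, gx1, gx2)
        gy = np.where(agree, gy1, gy2)
        return gx, gy   # combined gradient field; integrate to construct the output image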
Abstract:
A method enhances images of a naturally illuminated scene. First, a set of images Ii(x, y) is acquired of a scene. Each image is acquired under a different uncontrolled illumination. For each image Ii(x, y), intensity gradients ∇Ii(x, y) are determined, and each image is weighted according to its intensity gradients. The weighted images are then combined to construct an enhanced image I′.
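A brief sketch of the weighting and combination, assuming gradient-magnitude weights smoothed with a Gaussian (the exact weighting function is not specified by the abstract):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def enhance(images, sigma=3.0):
        weights = []
        for I in images:
            gy, gx = np.gradient(I)
            # Weight each image by its (smoothed) local gradient magnitude.
            weights.append(gaussian_filter(np.hypot(gx, gy), sigma) + 1e-8)
        W = np.stack(weights)
        return (W * np.stack(images)).sum(axis=0) / W.sum(axis=0)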
Abstract:
A method generates an image with de-emphasized textures. Each pixel in the image is classified as either a silhouette edge pixel, a texture edge pixel, or a featureless pixel. A mask image M(x, y) is generated, wherein an intensity of a given pixel (x, y) in the mask image M(x, y) is zero if the pixel (x, y) is classified as the texture edge pixel, is d(x, y) if the pixel (x, y) is classified as the featureless pixel, and is one if the pixel (x, y) is classified as the silhouette edge pixel. An intensity gradient ∇I(x, y) is determined for the image, and the intensity gradients are masked according to G(x, y) = ∇I(x, y) · M(x, y). Then, an output image I′ is generated by minimizing |∇I′ − G|, and normalizing the intensities in the output image I′.
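The masking and reconstruction can be sketched as follows. The FFT-based Poisson solve (periodic boundaries, 5-point Laplacian) is one common way to minimize |∇I′ − G|; the abstract does not prescribe a particular solver, and the final min-max normalization is an assumption.

    import numpy as np

    def reconstruct_from_gradients(gx, gy):
        # Divergence of the masked gradient field G (central differences).
        div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
        h, w = div.shape
        fy = np.fft.fftfreq(h).reshape(-1, 1)
        fx = np.fft.fftfreq(w).reshape(1, -1)
        # Eigenvalues of the periodic 5-point Laplacian in the Fourier domain.
        denom = (2.0 * np.cos(2 * np.pi * fx) - 2.0) + (2.0 * np.cos(2 * np.pi * fy) - 2.0)
        denom[0, 0] = 1.0                     # the DC term is arbitrary
        I = np.real(np.fft.ifft2(np.fft.fft2(div) / denom))
        return (I - I.min()) / (I.max() - I.min() + 1e-8)   # normalize intensities

    def deemphasize_textures(image, mask):
        gy, gx = np.gradient(image)
        return reconstruct_from_gradients(gx * mask, gy * mask)   # G = ∇I · M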
Abstract:
A camera includes a set of sensing elements. Each sensing element is arranged at an image plane in an energy field to measure a magnitude and sign of a local gradient of the energy field. Each sensing element includes at least two energy intensity sensors. Each energy sensor measures a local intensity of the energy field. Logarithms of the intensities are subtracted from each other to obtain the magnitude and sign of the gradients. A Poisson equation is solved using the gradients to obtain an output image of the energy field.
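A toy sketch of the sensing model: the local gradient's magnitude and sign come from the difference of log intensities measured by the paired sensors, and the output image is then recovered with a Poisson solver (for example, an FFT-based solver like the one sketched for the previous abstract). The sensor layout simulated here is an assumption.

    import numpy as np

    def log_gradient_measurements(intensity):
        # Simulate what paired intensity sensors would report for an image.
        logI = np.log(np.clip(intensity, 1e-6, None))
        gx = np.diff(logI, axis=1)   # log(I at x+1) - log(I at x): sign and magnitude
        gy = np.diff(logI, axis=0)
        return gx, gy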