Abstract:
In exemplary implementations of this invention, light from a backlight is transmitted through two stacked LCDs and then through a diffuser. The front side of the diffuser displays a time-varying sequence of 2D images. Processors execute an optimization algorithm to compute optimal pixel states in the first and second LCDs, respectively, such that for each respective image in the sequence, the optimal pixel states minimize, subject to one or more constraints, a difference between a target image and the respective image. The processors output signals to control actual pixel states in the LCDs, based on the computed optimal pixel states. The 2D images displayed by the diffuser have a higher spatial resolution than the native spatial resolution of the LCDs. Alternatively, the diffuser may be switched off, and the device may either (a) display 2D images with a higher dynamic range than the LCDs, or (b) operate as an automultiscopic display.
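As an illustration of the optimization step, the sketch below uses one plausible formulation (not necessarily the patent's exact algorithm, and ignoring the diffuser and any sub-pixel offsets used for superresolution): the displayed image is modeled as the element-wise product of the rear and front LCD transmittances, and projected gradient descent minimizes the squared error against the target while keeping pixel states in the valid range.

```python
# Minimal sketch of a two-layer pixel-state optimization (assumed model):
# displayed image ~= rear * front (element-wise attenuation of the backlight).
import numpy as np

def factor_two_layers(target, iters=200, lr=0.5):
    """target: 2D array in [0, 1]. Returns rear and front layer patterns in [0, 1]."""
    rear = np.sqrt(np.clip(target, 0.0, 1.0))   # simple initialization
    front = np.sqrt(np.clip(target, 0.0, 1.0))
    for _ in range(iters):
        recon = rear * front                     # modeled displayed image
        err = recon - target
        # gradients of 0.5 * ||rear*front - target||^2
        g_rear, g_front = err * front, err * rear
        rear = np.clip(rear - lr * g_rear, 0.0, 1.0)    # enforce valid pixel states
        front = np.clip(front - lr * g_front, 0.0, 1.0)
    return rear, front
```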
Abstract:
In exemplary implementations, this invention comprises apparatus for retinal self-imaging. Visual stimuli help the user self-align his eye with a camera. Bi-ocular coupling induces the test eye to rotate into different positions. As the test eye rotates, a video is captured of different areas of the retina. Computational photography methods process this video into a mosaiced image of a large area of the retina. An LED is pressed against the skin near the eye to provide indirect, diffuse illumination of the retina. The camera has a wide field of view, and can image part of the retina even when the eye is off-axis (i.e., when the eye's pupillary axis and the camera's optical axis are not aligned). Alternatively, the retina is illuminated directly through the pupil, and different parts of a large lens are used to image different parts of the retina. Alternatively, a plenoptic camera is used for retinal imaging.
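For the mosaicing step, a common pipeline (assumed here for illustration, not taken from the patent text) registers each video frame to a reference frame with feature matching and a homography, then composites the warped frames into one large image. The function name and canvas size below are hypothetical.

```python
# Illustrative sketch of retinal video mosaicing (assumed pipeline).
import cv2
import numpy as np

def mosaic_frames(frames, canvas_shape=(2000, 2000)):
    """frames: list of grayscale retina images. Returns a crude max-composite mosaic."""
    orb = cv2.ORB_create(2000)
    ref_kp, ref_desc = orb.detectAndCompute(frames[0], None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    mosaic = np.zeros(canvas_shape, dtype=np.uint8)
    for frame in frames:
        kp, desc = orb.detectAndCompute(frame, None)
        matches = matcher.match(desc, ref_desc)
        src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([ref_kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        warped = cv2.warpPerspective(frame, H, canvas_shape[::-1])
        mosaic = np.maximum(mosaic, warped)      # simple blending, for illustration only
    return mosaic
```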
Abstract:
In exemplary implementations of this invention, a lens and sensor of a camera are intentionally destabilized (i.e., shifted relative to the scene being imaged) in order to create defocus effects. That is, actuators in the camera move the lens and the sensor, relative to the scene being imaged, while the camera takes a photograph. This motion simulates a larger aperture size (i.e., a shallower depth of field). Thus, by translating a lens and a sensor while taking a photo, a camera with a small aperture (such as a cell phone camera or a small point-and-shoot camera) may simulate the shallow depth of field (DOF) that can be achieved with a professional SLR camera. This invention may be implemented in such a way that programmable defocus effects may be achieved. Also, in some embodiments of this invention, an approximately depth-invariant defocus blur size may be achieved over a range of depths.
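The effect of integrating over a moving lens/sensor can be approximated computationally by shift-and-add synthetic aperture imaging. The sketch below is a simplified stand-in (assuming a stack of discrete images taken at small lateral offsets, rather than motion during a single exposure): views are shifted so the chosen focal plane aligns and then averaged, so objects off that plane blur as if seen through a larger aperture.

```python
# Hedged sketch: simulating a larger synthetic aperture by shift-and-add.
import numpy as np

def synthetic_shallow_dof(images, offsets, focus_disparity):
    """images: list of HxW arrays; offsets: list of (dx, dy) lens positions (in pixels of
    induced shift); focus_disparity: disparity per unit offset for the in-focus plane."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (dx, dy) in zip(images, offsets):
        # align the chosen focal plane across views; other depths blur out in the average
        shift_x = int(round(focus_disparity * dx))
        shift_y = int(round(focus_disparity * dy))
        acc += np.roll(np.roll(img, shift_y, axis=0), shift_x, axis=1)
    return acc / len(images)
```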
Abstract:
In exemplary implementations of this invention, a light source illuminates a scene and a light sensor captures data about light that scatters from the scene. The light source emits multiple modulation frequencies, either in a temporal sequence or as a superposition of modulation frequencies. Reference signals that differ in phase are applied to respective subregions of each respective pixel. The number of subregions per pixel, and the number of reference signals per pixel, are each preferably greater than four. One or more processors calculate a full cross-correlation function for each respective pixel by fitting to a curve the light intensity measurements taken, respectively, by the subregions of that pixel. The light sensor comprises M subregions. A lenslet is placed over each subregion, so that each subregion images the entire scene. At least one temporal sequence of frames is taken, one frame per subregion.
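One way to perform the curve fit (an assumed model, not necessarily the patent's exact procedure) treats each subregion's measurement as a sample of the correlation c(theta) = B + A*cos(phi - theta) at that subregion's known reference phase theta, and recovers amplitude, phase, and offset with a linear least-squares fit.

```python
# Minimal sketch of fitting a per-pixel cross-correlation curve from subregion samples.
import numpy as np

def fit_correlation(intensities, ref_phases):
    """intensities: measurements from the subregions of one pixel;
    ref_phases: reference-signal phases (radians), one per subregion."""
    th = np.asarray(ref_phases, dtype=float)
    y = np.asarray(intensities, dtype=float)
    # c(theta) = B + a*cos(theta) + b*sin(theta), with a = A*cos(phi), b = A*sin(phi)
    M = np.column_stack([np.cos(th), np.sin(th), np.ones_like(th)])
    a, b, B = np.linalg.lstsq(M, y, rcond=None)[0]
    amplitude = np.hypot(a, b)
    phase = np.arctan2(b, a)
    return amplitude, phase, B
```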
Abstract:
In exemplary implementations of this invention, a camera can capture multiple millions of frames per second, such that each frame is a 2D image rather than a streak. A light source in the camera emits ultrashort pulses of light to illuminate a scene. Scattered light from the scene returns to the camera. This incoming light strikes a photocathode, which emits electrons; the electrons are detected by a set of phosphor blocks, which emit light that is in turn detected by a light sensor. Voltage is applied to plates to create an electric field that deflects the electrons. The voltage varies in a temporal “stepladder” pattern, deflecting the electrons by different amounts, such that the electrons hit different phosphor blocks at different times during the sequence. Each phosphor block (together with the light sensor) captures a separate frame in the sequence. A mask may be used to increase resolution.
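The time-to-frame mapping can be illustrated with a toy model (the constants, step duration, and linear deflection law below are assumptions for illustration, not values from the patent): the stepladder voltage increases by one step per time slice, the deflection scales with the instantaneous voltage, and consecutive time slices therefore land on consecutive phosphor blocks.

```python
# Toy sketch (assumed geometry and constants) of stepladder deflection routing.
import numpy as np

def stepladder_voltage(t, step_duration, v_step):
    """Voltage increases by v_step after each step_duration (a 'stepladder' waveform)."""
    return v_step * np.floor(t / step_duration)

def phosphor_block(t, step_duration, v_step, deflection_per_volt, block_height):
    """Electrons arriving at time t are deflected in proportion to the instantaneous
    voltage, so each time slice maps to one phosphor block (one frame)."""
    deflection = deflection_per_volt * stepladder_voltage(t, step_duration, v_step)
    return int(deflection // block_height)
```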
Abstract:
In exemplary implementations of this invention, a multi-frequency ToF camera mitigates the effect of multi-path interference (MPI), and can calculate an accurate depth map despite MPI. A light source in the multi-frequency camera emits light in a temporal sequence of different frequencies. For example, the light source can emit a sequence of ten equidistant frequencies f = 10 MHz, 20 MHz, 30 MHz, ..., 100 MHz. At each frequency, a lock-in sensor within the ToF camera captures 4 frames. From these 4 frames, one or more processors compute, for each pixel in the sensor, a single complex number. The processors stack all of these complex quantities (one complex number per pixel per frequency) and solve for the depth and intensity using a spectral estimation technique.
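The per-frequency complex number can be formed with the standard four-bucket lock-in formulas; the sketch below then replaces the full multi-path spectral estimation with a much simpler single-return phase-slope fit, purely to show how depth follows from the stacked measurements (the invention's solver handles multiple returns, which this sketch does not).

```python
# Sketch of the per-pixel measurement model (assumed four-bucket convention).
import numpy as np
C_LIGHT = 3e8  # speed of light, m/s

def complex_measurement(frames):
    """frames: 4 intensity samples at reference phases 0, 90, 180, 270 degrees."""
    c0, c1, c2, c3 = frames
    return (c0 - c2) + 1j * (c3 - c1)   # encodes amplitude and phase for one frequency

def single_path_depth(z_per_freq, freqs_hz):
    """Least-squares fit of unwrapped phase vs. frequency; slope = 4*pi*d/c for a
    single return, so d = slope * c / (4*pi)."""
    phases = np.unwrap(np.angle(np.asarray(z_per_freq)))
    slope = np.polyfit(np.asarray(freqs_hz, dtype=float), phases, 1)[0]
    return slope * C_LIGHT / (4 * np.pi)
```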
Abstract:
A single camera acquires an input image of a scene as observed in an array of spheres, wherein the pixels in the input image corresponding to each sphere form a sphere image. A set of virtual cameras is defined for each sphere on a line joining the center of the sphere and the center of projection of the camera, wherein each virtual camera has a different virtual viewpoint and an associated cone of rays, appearing as a circle of pixels on its virtual image plane. A projective texture mapping of each sphere image is applied to all of the virtual cameras on the virtual image plane to produce virtual camera images, each comprising a circle of pixels. Each virtual camera image for each sphere is then projected to a refocusing geometry using a refocus viewpoint to produce wide-angle lightfield views, which are averaged to produce a refocused wide-angle image.
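The final averaging step is simple once the per-sphere views have been projected to the refocusing geometry (that projection is assumed to have been done elsewhere and is not shown here); the refocused wide-angle image is the mean of the aligned views.

```python
# Simplified sketch of the refocusing average over already-reprojected views.
import numpy as np

def refocus(reprojected_views):
    """reprojected_views: list of HxW(x3) images already warped to the refocus geometry."""
    stack = np.stack([v.astype(np.float64) for v in reprojected_views])
    return stack.mean(axis=0)   # averaging blurs out-of-focus content, keeps the focal surface sharp
```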
Abstract:
In an illustrative implementation of this invention, an optical pattern that encodes binary data is printed on a transparency. For example, the pattern may comprise data matrix codes. A lenslet is placed at a distance equal to its focal length from the optical pattern, and thus collimates light from the optical pattern. The collimated light travels to a conventional camera. For example, the camera may be meters away. The camera photographs the optical pattern while it is not focused on the scene it is imaging, but is instead focused at infinity. Because the light is collimated, however, a focused image of the pattern is formed at the camera's focal plane. The binary data in the pattern may include information regarding the object to which the optical pattern is affixed and information from which the camera's pose may be calculated.
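A worked example of the imaging geometry (standard thin-lens reasoning stated here as an illustration, not quoted from the patent): because the pattern sits at the lenslet's focal plane, each pattern point maps to a ray angle, and a camera focused at infinity maps that angle back to a sensor position, giving a magnification of approximately f_camera / f_lenslet that is independent of the distance between pattern and camera. The numbers below are hypothetical.

```python
# Small worked example of the collimated-pattern imaging geometry (assumed values).
def feature_size_on_sensor(feature_size_m, f_lenslet_m, f_camera_m):
    """Approximate size on the camera sensor of one feature of the printed pattern."""
    magnification = f_camera_m / f_lenslet_m
    return feature_size_m * magnification

# e.g., a 15-micron feature, 5 mm lenslet, 50 mm camera lens -> ~150 microns on the sensor
print(feature_size_on_sensor(15e-6, 5e-3, 50e-3))
```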
Abstract:
Embodiments of the invention provide a method for reducing blur in an image of a scene. First, we acquire a set of images of the scene, wherein each image in the set includes an object having a blur associated with a point spread function (PSF), the PSFs together forming a set of PSFs that is suitable for a null-filling operation. Next, we jointly invert the set of images and the set of PSFs to produce an output image having reduced blur.
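One standard joint inversion (offered as a hedged illustration, not the patent's exact solver) works in the Fourier domain: where one PSF has a spectral null, another PSF in the set "fills" it, so the combined denominator stays away from zero and the estimate remains stable.

```python
# Sketch of a regularized joint Fourier-domain inversion over multiple blurred images.
import numpy as np

def joint_deblur(images, psfs, eps=1e-3):
    """images: list of blurred 2D arrays; psfs: list of PSFs (same count), each
    padded to the image size with the kernel centered in the array."""
    num = np.zeros(images[0].shape, dtype=np.complex128)
    den = np.full(images[0].shape, eps, dtype=np.float64)
    for y, h in zip(images, psfs):
        H = np.fft.fft2(np.fft.ifftshift(h))     # move kernel center to the origin
        num += np.conj(H) * np.fft.fft2(y)
        den += np.abs(H) ** 2                    # nulls of one PSF filled by the others
    return np.real(np.fft.ifft2(num / den))
```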
Abstract:
In illustrative implementations of this invention, multi-path analysis of transient illumination is used to reconstruct scene geometry, even of objects that are occluded from the camera. An ultrafast camera system is used. It comprises a photo-sensor (e.g., accurate in the picosecond range), a pulsed illumination source (e.g., a femtosecond laser) and a processor. The camera emits a very brief light pulse that strikes a surface and bounces. Depending on the path taken, part of the light may return to the camera after one, two, three or more bounces. The photo-sensor captures the returning light bounces in a three-dimensional time image I(x,y,t) for each pixel. The camera takes different angular samples from the same viewpoint, recording a five-dimensional STIR (Space Time Impulse Response). A processor analyzes onset information in the STIR to estimate pairwise distances between patches in the scene, and then employs isometric embedding to estimate patch coordinates.
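The embedding step can be illustrated with classical multidimensional scaling, one common isometric-embedding technique (the patent's exact method may differ): given the estimated pairwise distance matrix between patches, it recovers 3D patch coordinates up to a rigid transform.

```python
# Sketch of isometric embedding via classical multidimensional scaling (MDS).
import numpy as np

def embed_patches(D, dim=3):
    """D: NxN matrix of estimated pairwise distances between scene patches."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]              # keep the largest eigenpairs
    coords = V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
    return coords                                # patch coordinates, up to rotation/translation
```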