Abstract:
In exemplary implementations of this invention, two LCD screens display a multi-view 3D image that has both horizontal and vertical parallax, and that does not require a viewer to wear any special glasses. Each pixel in the LCDs can take on any value: the pixel can be opaque, transparent, or any shade in between. For regions of the image that are adjacent to a step function (e.g., a depth discontinuity) and not adjacent to a sharp corner, the screens display local parallax barriers comprising many small slits. The barriers and the slits tend to be oriented perpendicular to the local angular gradient of the target light field. In some implementations, the display is optimized to minimize the Euclidean distance between the desired light field and the actual light field that is produced. Weighted non-negative matrix factorization (NMF) is used for this optimization.
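The weighted NMF step can be illustrated with a minimal sketch: a non-negative target light-field matrix is factorized into two non-negative layer patterns using standard multiplicative updates, with a weight matrix emphasizing important rays. The toy matrix sizes, weights, and iteration count below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def weighted_nmf(L, W, rank, iters=200, eps=1e-9):
    """Weighted NMF: find non-negative F (m x r) and G (r x n) that
    minimize || W * (L - F @ G) ||_F via multiplicative updates.
    Conceptually, F and G would map to pixel states of the two layers."""
    rng = np.random.default_rng(0)
    m, n = L.shape
    F = rng.random((m, rank)) + eps
    G = rng.random((rank, n)) + eps
    for _ in range(iters):
        F *= ((W * L) @ G.T) / (((W * (F @ G)) @ G.T) + eps)
        G *= (F.T @ (W * L)) / ((F.T @ (W * (F @ G))) + eps)
    return F, G

# toy non-negative, rank-1 target "light field" with uniform weights
L = np.outer([0.2, 0.8, 0.5], [0.9, 0.1, 0.6, 0.3])
W = np.ones_like(L)
F, G = weighted_nmf(L, W, rank=1)
err = np.linalg.norm(W * (L - F @ G))  # small: the factorization fits L
```

Because the toy target is exactly rank-1 and non-negative, the multiplicative updates drive the weighted residual close to zero.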
Abstract:
In exemplary implementations of this invention, a time of flight camera (ToF camera) can estimate the location, motion and size of a hidden moving object, even though (a) the hidden object cannot be seen directly (or through mirrors) from the vantage point of the ToF camera (including the camera's illumination source and sensor), and (b) the object is in a visually cluttered environment. The hidden object is a NLOS (non-line-of-sight) object. The time of flight camera comprises a streak camera and a laser. In these exemplary implementations, the motion and absolute locations of NLOS moving objects in cluttered environments can be estimated through tertiary reflections of pulsed illumination, using relative time differences of arrival at an array of receivers. Also, the size of NLOS moving objects can be estimated by backprojecting the extrema of their time responses.
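The idea of localizing from relative time differences of arrival can be sketched with a toy 2D backprojection: candidate hidden-object positions are scored by how well their predicted path lengths (source spot to object to each receiver spot) match the measured relative arrival times. The geometry, grid, and unit speed of light below are hypothetical choices for illustration.

```python
import numpy as np

C = 1.0  # speed of light in arbitrary units (illustrative)

def travel_time(src, pt, rcv):
    # pulse path: illuminated spot -> hidden point -> receiver spot
    return (np.linalg.norm(pt - src) + np.linalg.norm(pt - rcv)) / C

def locate(src, receivers, rel_times, grid):
    """Backprojection by grid search: choose the candidate point whose
    predicted relative time differences best match the measurements."""
    best, best_err = None, np.inf
    for pt in grid:
        t = np.array([travel_time(src, pt, r) for r in receivers])
        err = np.sum(((t - t[0]) - rel_times) ** 2)  # relative times only
        if err < best_err:
            best, best_err = pt, err
    return best

# hypothetical 2D setup: spots on a wall (y = 0), hidden point beyond it
src = np.array([0.0, 0.0])
receivers = [np.array([x, 0.0]) for x in (0.0, 0.5, 1.0, 1.5)]
hidden = np.array([0.7, 1.2])
times = np.array([travel_time(src, hidden, r) for r in receivers])
rel_times = times - times[0]
grid = [np.array([x, y]) for x in np.linspace(0, 1.5, 31)
                         for y in np.linspace(0.5, 1.5, 21)]
est = locate(src, receivers, rel_times, grid)  # recovers the hidden point
```

In practice the streak camera supplies picosecond-scale timing, and the search would be regularized rather than exhaustive; the sketch only shows why relative differences of arrival suffice to pin down a position.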
Abstract:
Provided are an apparatus and method for processing a light field image that is acquired and processed using a mask to spatially modulate a light field. The apparatus includes a lens, a mask to spatially modulate 4D light field data of a scene passing through the lens to include wideband information on the scene, a sensor to detect a 2D image corresponding to the spatially modulated 4D light field data, and a data processing unit to recover the 4D light field data from the 2D image to generate an all-in-focus image.
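Recovering 4D light field data from a single coded 2D image can be sketched as a linear inverse problem: if the mask's modulation is known and well-conditioned, the flattened light field can be recovered from the sensor measurements by solving a linear system. The tiny 1D example and random code matrix below are illustrative assumptions, not the patented mask design.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy "light field": 4 spatial positions x 4 directions, flattened to 16
lf = rng.random(16)

# known mask modulation: each sensor measurement is a coded linear
# combination of light-field samples (hypothetical, randomly chosen code)
M = rng.random((16, 16))

sensor = M @ lf                      # the detected, spatially modulated image
lf_rec = np.linalg.solve(M, sensor)  # recover the light field data
```

With the light field in hand, an all-in-focus image can then be synthesized by integrating the recovered rays; the sketch covers only the recovery step.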
Abstract:
In exemplary implementations of this invention, light from a backlight is transmitted through two stacked LCDs and then through a diffuser. The front side of the diffuser displays a time-varying sequence of 2D images. Processors execute an optimization algorithm to compute optimal pixel states in the two LCDs, such that for each respective image in the sequence, the optimal pixel states minimize, subject to one or more constraints, a difference between a target image and the respective image. The processors output signals to control actual pixel states in the LCDs, based on the computed optimal pixel states. The 2D images displayed by the diffuser have a higher spatial resolution than the native spatial resolution of the LCDs. Alternatively, the diffuser may be switched off, and the device may either (a) display 2D images with a higher dynamic range than the LCDs, or (b) function as an automultiscopic display.
Abstract:
In exemplary implementations of this invention, an automultiscopic display device includes (1) one or more spatially addressable, light attenuating layers, and (2) a controller which is configured to perform calculations to control the device. In these calculations, tensors provide sparse, memory-efficient representations of a light field. The calculations include using weighted nonnegative tensor factorization (NTF) to solve an optimization problem. The NTF calculations can be sufficiently efficient to achieve interactive refresh rates. Either a directional backlight or a uniform backlight may be used. For example, the device may have (1) a high resolution LCD in front, and (2) a low resolution directional backlight. Or, for example, the device may have a uniform backlight and three or more LCD panels. In these examples, all of the LCDs and the directional backlight (if applicable) may be time-multiplexed.
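The NTF step can be illustrated with a minimal sketch: a non-negative 3-way tensor is approximated by a sum of rank-1 terms using multiplicative CP updates, analogous to factoring a light-field tensor into layer/backlight patterns. The toy tensor, rank, and iteration count are illustrative assumptions; the patented method additionally uses weights and maps the factors to time-multiplexed pixel states.

```python
import numpy as np

def ntf(T, rank, iters=300, eps=1e-9):
    """Non-negative CP factorization T ~ sum_r a_r (x) b_r (x) c_r,
    via multiplicative updates on each factor matrix in turn."""
    rng = np.random.default_rng(0)
    I, J, K = T.shape
    A = rng.random((I, rank)) + eps
    B = rng.random((J, rank)) + eps
    C = rng.random((K, rank)) + eps
    for _ in range(iters):
        Tr = np.einsum('ir,jr,kr->ijk', A, B, C)
        A *= np.einsum('ijk,jr,kr->ir', T, B, C) / (
             np.einsum('ijk,jr,kr->ir', Tr, B, C) + eps)
        Tr = np.einsum('ir,jr,kr->ijk', A, B, C)
        B *= np.einsum('ijk,ir,kr->jr', T, A, C) / (
             np.einsum('ijk,ir,kr->jr', Tr, A, C) + eps)
        Tr = np.einsum('ir,jr,kr->ijk', A, B, C)
        C *= np.einsum('ijk,ir,jr->kr', T, A, B) / (
             np.einsum('ijk,ir,jr->kr', Tr, A, B) + eps)
    return A, B, C

# toy rank-1 non-negative light-field tensor
a = np.array([0.2, 0.9, 0.4])
b = np.array([0.7, 0.3])
c = np.array([0.5, 0.1, 0.8, 0.6])
T = np.einsum('i,j,k->ijk', a, b, c)
A, B, C = ntf(T, rank=1)
err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C))
```

The tensor form is what makes the representation sparse and memory-efficient: a rank-R factorization stores only the factor matrices rather than the full light field.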
Abstract:
In exemplary implementations, this invention is a tool for subjective assessment of the visual acuity of a human eye. A microlens or pinhole array is placed over a high-resolution display. The eye is brought very near to the device. Patterns are displayed on the screen under some of the lenslets or pinholes. Using interactive software, a user causes the patterns that the eye sees to appear to be aligned. The software allows the user to move the apparent position of the patterns. This apparent motion is achieved by pre-warping the position and angle of the ray-bundles exiting the lenslet display. As the user aligns the apparent position of the patterns, the amount of pre-warping varies. The amount of pre-warping required in order for the user to see what appears to be a single, aligned pattern indicates the lens aberration of the eye.
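The relation between the alignment shift and the eye's refractive error can be sketched under a simple two-lenslet geometric model (an assumption for illustration, not the patent's calibration): rays leaving adjacent lenslets of pitch a, with patterns displayed at gap f behind them and relatively shifted by delta_s, converge at a distance z with 1/z = delta_s / (a * f), which in meters gives the spherical error in diopters.

```python
def refractive_power_from_shift(delta_s_m, pitch_m, gap_m):
    """Spherical refractive error (diopters) implied by the pattern
    pre-warp shift needed for apparent alignment, under a simple
    two-lenslet pinhole model: 1/z = delta_s / (pitch * gap)."""
    return delta_s_m / (pitch_m * gap_m)

# hypothetical numbers: 1 mm lenslet pitch, 5 mm display gap,
# 10 um shift per lenslet needed for the patterns to appear aligned
P = refractive_power_from_shift(10e-6, 1e-3, 5e-3)  # 2.0 diopters
```

Repeating the measurement along several meridians would distinguish spherical from cylindrical (astigmatic) error, as in the full procedure.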
Abstract:
In exemplary implementations of this invention, a flat screen device displays a 3D scene. The 3D display may be viewed by a person who is not wearing any special glasses. The flat screen device displays dynamically changing 3D imagery, with a refresh rate so fast that the device may be used for 3D movies or for interactive, 3D display. The flat screen device comprises a stack of LCD layers with two crossed polarization filters, one filter at each end of the stack. One or more processors control the voltage at each pixel of each LCD layer, in order to control the polarization state rotation induced in light at that pixel. The processors employ an algorithm that models each LCD layer as a spatially-controllable polarization rotator, rather than a conventional spatial light modulator that directly attenuates light. Color display is achieved using field sequential color illumination with monochromatic LCDs.
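The polarization-rotator model can be sketched with an idealized assumption: each layer adds its rotation angle, and the crossed analyzer transmits the sin-squared of the total rotation, so intensity is controlled jointly by the stack rather than by per-layer attenuation. This is a simplified scalar model for illustration, not the patent's full algorithm.

```python
import math

def transmitted_intensity(rotations_rad):
    """Intensity through a stack of ideal polarization rotators placed
    between crossed polarizers: rotations add along the ray, and the
    analyzer passes sin^2 of the total rotation."""
    total = sum(rotations_rad)
    return math.sin(total) ** 2

# two layers each rotating 45 degrees -> 90 degrees total -> fully bright
bright = transmitted_intensity([math.pi / 4, math.pi / 4])
# equal and opposite rotations cancel -> the pixel stays dark
dark = transmitted_intensity([0.3, -0.3])
```

Note how the layers are non-separable: the same pixel can be bright or dark depending on the other layers' rotations, which is what the optimization algorithm exploits.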
Abstract:
In exemplary implementations, this invention comprises apparatus for retinal self-imaging. Visual stimuli help the user self-align his eye with a camera. Bi-ocular coupling induces the test eye to rotate into different positions. As the test eye rotates, a video is captured of different areas of the retina. Computational photography methods process this video into a mosaiced image of a large area of the retina. An LED is pressed against the skin near the eye, to provide indirect, diffuse illumination of the retina. The camera has a wide field of view, and can image part of the retina even when the eye is off-axis (when the eye's pupillary axis and camera's optical axis are not aligned). Alternatively, the retina is illuminated directly through the pupil, and different parts of a large lens are used to image different parts of the retina. Alternatively, a plenoptic camera is used for retinal imaging.
Abstract:
In an illustrative implementation of this invention, an optical pattern that encodes binary data is printed on a transparency. For example, the pattern may comprise data matrix codes. A lenslet is placed at a distance equal to its focal length from the optical pattern, and thus collimates light from the optical pattern. The collimated light travels to a conventional camera. For example, the camera may be meters distant. The camera takes a photograph of the optical pattern at a time when the camera is not focused on the scene that it is imaging, but is instead focused at infinity. Because the light is collimated, however, a focused image is captured at the camera's focal plane. The binary data in the pattern may include information regarding the object to which the optical pattern is affixed and information from which the camera's pose may be calculated.
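Why the image stays sharp and usable at any distance can be sketched with a thin-lens, small-angle model (an illustrative simplification): the lenslet converts pattern extent into angular extent, and a camera focused at infinity maps that angle back to sensor extent, so the image size depends only on the two focal lengths, not on the distance to the camera.

```python
def imaged_size(pattern_size_m, lenslet_f_m, camera_f_m):
    """Size of the collimated pattern's image on the sensor when the
    camera is focused at infinity: the lenslet maps pattern extent to
    angle (pattern / f_lenslet), and the camera maps angle back to
    sensor extent (f_camera * angle), independent of distance."""
    angle = pattern_size_m / lenslet_f_m   # small-angle approximation
    return camera_f_m * angle

# hypothetical numbers: 2 mm pattern, 10 mm lenslet, 50 mm camera lens
s = imaged_size(2e-3, 10e-3, 50e-3)  # 10 mm image, at any camera distance
```

The same geometry explains why only rays entering near the camera's line of sight to the lenslet are captured, which is what makes the tag's data directional.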
Abstract:
In exemplary implementations of this invention, a lens and sensor of a camera are intentionally destabilized (i.e., shifted relative to the scene being imaged) in order to create defocus effects. That is, actuators in a camera move a lens and a sensor, relative to the scene being imaged, while the camera takes a photograph. This motion simulates a larger aperture size (shallower depth of field). Thus, by translating a lens and a sensor while taking a photo, a camera with a small aperture (such as a cell phone or small point-and-shoot camera) may simulate the shallow DOF that can be achieved with a professional SLR camera. This invention may be implemented in such a way that programmable defocus effects may be achieved. Also, in some embodiments, an approximately depth-invariant defocus blur size may be achieved over a range of depths.
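The aperture-simulation effect can be sketched with a thin-lens model (an illustrative simplification, not the patent's exact optics): translating the lens and sensor by t during the exposure, while keeping the focus plane registered, smears a point at depth d by roughly t * f * |1/d - 1/d_focus|, the same depth dependence as a physical aperture of diameter t.

```python
def blur_diameter(translation_m, focal_m, depth_m, focus_depth_m):
    """Geometric blur diameter from translating lens+sensor by
    `translation` during exposure while tracking the focus plane:
    a point at `depth` smears by t * f * |1/depth - 1/focus_depth|."""
    return translation_m * focal_m * abs(1.0 / depth_m - 1.0 / focus_depth_m)

# hypothetical numbers: 20 mm sweep, 50 mm lens, focus plane at 2 m
in_focus = blur_diameter(0.02, 0.05, 2.0, 2.0)  # focus plane stays sharp: 0
behind = blur_diameter(0.02, 0.05, 4.0, 2.0)    # background blurs
```

Shaping the translation path over time is what makes the defocus programmable: different trajectories yield different synthetic aperture shapes and blur profiles.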