Abstract:
An apparatus for projecting an output image on a display surface includes a processing unit. The processing unit includes a microprocessor, a memory, and an I/O interface connected by buses. A projector sub-system coupled to the processing unit is for displaying output images on the display surface. A camera sub-system coupled to the processing unit is for acquiring input images reflecting a geometry of the display surface. The camera sub-system is in a fixed physical relationship to the projector sub-system. Internal sensors coupled to the processing unit are for determining an orientation of the projector sub-system and the camera sub-system with respect to the display surface.
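A minimal structural sketch of how such an apparatus might be modeled in software, assuming hypothetical names (the abstract names hardware, not classes or fields); the key point is that the camera-to-projector transform is fixed by construction, so a single sensor reading orients both sub-systems:

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical names; the abstract defines hardware, not software classes.
@dataclass
class ProjectorCameraApparatus:
    """Projector and camera sub-systems held in a fixed physical relationship."""
    camera_to_projector: np.ndarray  # fixed 4x4 rigid transform, calibrated once

    def projector_pose(self, sensor_pose: np.ndarray) -> np.ndarray:
        # The internal sensors report the pose of the rigidly coupled pair
        # relative to the display surface; because the camera-to-projector
        # transform never changes, one reading orients both sub-systems.
        return sensor_pose @ self.camera_to_projector
```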
Abstract:
A method determines a largest rectangle on a display surface. A polygon L is drawn on a first depth plane having a depth z=1 in a depth buffer. A rectangle R is drawn with a predetermined aspect ratio on a second depth plane having a depth z=0. A center of projection is determined with a minimum depth z in the range [0, 1] that maps the rectangle R into a largest rectangle S in the first depth plane so that the rectangle S remains completely inside the polygon L.
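Because the magnification of R grows monotonically as the center of projection approaches z=0, the minimum feasible depth can be found by binary search. A sketch in Python/NumPy, assuming a convex polygon L with counter-clockwise vertices and a center of projection whose xy position lies inside L (the method also determines that xy position, which a grid search over candidate centers could supply):

```python
import numpy as np

def inside_convex(poly, pts):
    """True if every point in pts lies inside convex polygon poly (CCW vertices)."""
    for a, b in zip(poly, np.roll(poly, -1, axis=0)):
        edge, rel = b - a, pts - a
        if np.any(edge[0] * rel[:, 1] - edge[1] * rel[:, 0] < 0):
            return False
    return True

def project_corners(corners, cop_xy, z):
    """Map rectangle corners on the z=0 plane through a center of
    projection at (cop_xy, z) onto the z=1 plane."""
    t = (z - 1.0) / z          # line parameter at which the ray reaches z=1
    return cop_xy + t * (corners - cop_xy)

def largest_rect(poly, cop_xy, aspect, iters=50):
    """Binary-search the minimum COP depth z in (0,1) whose projected
    rectangle S still fits inside poly; smaller z gives a larger S."""
    w = 1.0                    # unit half-width; the projection does the scaling
    corners = cop_xy + np.array([[-w, -w / aspect], [w, -w / aspect],
                                 [w,  w / aspect], [-w,  w / aspect]])
    lo, hi = 1e-6, 1.0 - 1e-6  # z -> 0 blows S up, z -> 1 shrinks it to a point
    for _ in range(iters):
        z = 0.5 * (lo + hi)
        if inside_convex(poly, project_corners(corners, cop_xy, z)):
            hi = z             # S fits: try a smaller depth (larger S)
        else:
            lo = z
    return hi, project_corners(corners, cop_xy, hi)
```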
Abstract:
A method corrects keystoning in a projector arbitrarily oriented with respect to a display surface. An elevation angle, a roll angle, and an azimuth angle of an optical axis of the projector are measured with respect to the display surface. A planar projective transformation matrix is determined from the elevation, roll, and azimuth angles. A source image to be projected by the projector is warped according to the planar projective transformation, and then projected onto the display surface.
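One standard construction of such a matrix, assuming a pure-rotation pinhole model with intrinsic matrix K (the abstract does not fix a parameterization, and the function names here are illustrative): the measured angles define a rotation R, the induced image-plane distortion is H = K R K⁻¹, and its inverse is the corrective pre-warp.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about the x, y, or z axis (angle in radians)."""
    c, s = np.cos(angle), np.sin(angle)
    return {
        'x': np.array([[1, 0, 0], [0, c, -s], [0, s, c]]),
        'y': np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]),
        'z': np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]]),
    }[axis]

def keystone_homography(elevation, roll, azimuth, focal, cx, cy):
    """Planar projective transform that pre-warps the source image so it
    lands undistorted on the display surface (pure-rotation model)."""
    K = np.array([[focal, 0, cx], [0, focal, cy], [0, 0, 1.0]])
    # Elevation tilts about the horizontal axis, azimuth pans about the
    # vertical axis, roll twists about the optical axis.
    R = rot('z', roll) @ rot('y', azimuth) @ rot('x', elevation)
    H = K @ R @ np.linalg.inv(K)
    return np.linalg.inv(H)    # invert: cancel, rather than apply, the tilt

# The warped image can then be produced with, e.g.,
# cv2.warpPerspective(source, H_corrective, (width, height)).
```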
Abstract:
In illustrative implementations of this invention, light sources illuminate a surface with multi-spectral, multi-directional illumination that varies in direction, wavelength, coherence and collimation. One or more cameras capture images of the surface while the surface is illuminated under different lighting conditions. One or more computers take, as input, data indicative of or derived from the images, and determine a classification of the surface. Based on the computed classification, the computers output signals to control an I/O device, such that content displayed by the I/O device depends, at least in part, on the computed classification. In illustrative implementations, this invention accurately classifies a wide range of surfaces, including transparent surfaces, specular surfaces, and surfaces with few features.
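The abstract does not name a feature representation or classifier; a minimal nearest-neighbor sketch, with a hypothetical per-lighting-condition feature extraction, illustrates the pipeline from image stack to classification to display control:

```python
import numpy as np

def features(images):
    """Collapse the stack of images (one per lighting condition) into a
    feature vector; mean and variance per condition is a hypothetical
    choice -- the abstract only says data 'indicative of or derived
    from' the images."""
    return np.array([[img.mean(), img.var()] for img in images]).ravel()

def classify(images, labeled_examples):
    """Nearest-neighbor classification against labeled reference surfaces."""
    f = features(images)
    label, _ = min(((lbl, np.linalg.norm(f - ref)) for lbl, ref in labeled_examples),
                   key=lambda pair: pair[1])
    return label  # e.g. used to select the content the I/O device displays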
Abstract:
In exemplary implementations of this invention, a camera can capture multiple millions of frames per second, such that each frame is a 2D image, rather than a streak. A light source in the camera emits ultrashort pulses of light to illuminate a scene. Scattered light from the scene returns to the camera. This incoming light strikes a photocathode, which emits electrons, which are detected by a set of phosphor blocks, which emit light, which is detected by a light sensor. Voltage is applied to plates to create an electric field that deflects the electrons. The voltage varies in a temporal “stepladder” pattern, deflecting the electrons by different amounts, such that the electrons hit different phosphor blocks at different times during the sequence. Each phosphor block (together with the light sensor) captures a separate frame in the sequence. A mask may be used to increase resolution.
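A toy sketch of the time-to-block mapping, with hypothetical parameter names: because the deflection voltage is held constant for one step and then jumps, arrival time alone determines which phosphor block an electron reaches.

```python
def block_for_arrival(t, step_duration, n_blocks):
    """Stepladder deflection: the plate voltage is held for one
    step_duration, then jumps, so electrons arriving during step k are
    all steered to phosphor block k. Each block therefore accumulates
    one 2D frame of the sequence."""
    k = int(t // step_duration)
    return min(k, n_blocks - 1)  # late arrivals pile into the last block
```

With, say, a 100 ns step, successive blocks record frames at ten million frames per second.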
Abstract:
In exemplary implementations of this invention, a 3D range camera “looks around a corner” to image a hidden object, using light that has bounced (reflected) off of a diffuse reflector. The camera can recover the 3D structure of the hidden object.
Abstract:
In exemplary implementations of this invention, two LCD screens display a multi-view 3D image that has both horizontal and vertical parallax, and that does not require a viewer to wear any special glasses. Each pixel in the LCDs can take on any value: the pixel can be opaque, transparent, or any shade in between. For regions of the image that are adjacent to a step function (e.g., a depth discontinuity) and not adjacent to a sharp corner, the screens display local parallax barriers comprising many small slits. The barriers and the slits tend to be oriented perpendicular to the local angular gradient of the target light field. In some implementations, the display is optimized to minimize the Euclidean distance between the desired light field and the actual light field that is produced. Weighted, non-negative matrix factorization (NMF) is used for this optimization.
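A sketch of the weighted NMF step in Python/NumPy, using the standard multiplicative update rules for a weighted Euclidean objective (the abstract does not spell out the update rule, and the rank and matrix layout here are illustrative):

```python
import numpy as np

def weighted_nmf(T, W, rank, iters=100, eps=1e-9):
    """Factor the target light field T (angles x pixels) into two
    non-negative layer patterns, reducing the weighted Euclidean error
    ||W * (T - A @ B)||. Multiplicative updates keep A, B >= 0, matching
    physical LCD transmittances (clipped to [0, 1] at the end)."""
    m, n = T.shape
    A = np.random.rand(m, rank)
    B = np.random.rand(rank, n)
    for _ in range(iters):
        A *= ((W * T) @ B.T) / ((W * (A @ B)) @ B.T + eps)
        B *= (A.T @ (W * T)) / (A.T @ (W * (A @ B)) + eps)
    return np.clip(A, 0, 1), np.clip(B, 0, 1)
```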
Abstract:
In exemplary implementations of this invention, a light source illuminates a scene and a light sensor captures data about light that scatters from the scene. The light source emits multiple modulation frequencies, either in a temporal sequence or as a superposition of modulation frequencies. Reference signals that differ in phase are applied to respective subregions of each respective pixel. The number of subregions per pixel, and the number of reference signals per pixel, are each preferably greater than four. One or more processors calculate a full cross-correlation function for each respective pixel by fitting light intensity measurements to a curve, the light intensity measurements being taken, respectively, by respective subregions of the respective pixel. The light sensor comprises M subregions. A lenslet is placed over each subregion, so that each subregion images the entire scene. At least one temporal sequence of frames is taken, one frame per subregion.
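The per-pixel curve fit can be posed as linear least squares when the cross-correlation at one modulation frequency is modeled as a sinusoid a + b·cos(φ + θ), the usual assumption for sinusoidal modulation; repeating the fit per emitted frequency builds up the full cross-correlation function. A sketch:

```python
import numpy as np

def fit_correlation(phases, samples):
    """Recover the cross-correlation curve a + b*cos(phi + theta) from
    K >= 4 phase-stepped intensity samples (one per pixel subregion),
    via linear least squares in (a, b*cos(theta), -b*sin(theta))."""
    M = np.column_stack([np.ones_like(phases), np.cos(phases), np.sin(phases)])
    a, p, q = np.linalg.lstsq(M, samples, rcond=None)[0]
    b = np.hypot(p, q)
    theta = np.arctan2(-q, p)  # phase encodes time of flight at this frequency
    return a, b, theta
```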
Abstract:
In exemplary implementations of this invention, a set of two scanning mirrors scans the one-dimensional field of view of a streak camera across a scene. The mirrors move continuously while the camera takes streak images. Alternatively, the mirrors may move only between image captures. An illumination source or other captured event is synchronized with the camera so that for every streak image the scene looks different. The scanning ensures that different parts of the scene are captured.
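Reconstruction is then a matter of stacking slices: each streak image is a one-dimensional spatial line resolved in time, and the mirror position supplies the missing spatial axis. A minimal sketch, assuming the captured event is repeatable so that slice k always records the scene at mirror position k:

```python
import numpy as np

def assemble_cube(streak_images):
    """Each streak image is a (space_x, time) slice of the scene at one
    mirror position; stacking the slices in scan order yields the full
    (y, x, t) datacube."""
    return np.stack(streak_images, axis=0)
```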
Abstract:
In illustrative implementations of this invention, multi-path analysis of transient illumination is used to reconstruct scene geometry, even of objects that are occluded from the camera. An ultrafast camera system is used. It comprises a photo-sensor (e.g., accurate in the picosecond range), a pulsed illumination source (e.g., a femtosecond laser) and a processor. The camera emits a very brief light pulse that strikes a surface and bounces. Depending on the path taken, part of the light may return to the camera after one, two, three or more bounces. The photo-sensor captures the returning bounces as a three-dimensional time image I(x, y, t), one time profile per pixel. The camera takes different angular samples from the same viewpoint, recording a five-dimensional STIR (Space Time Impulse Response). A processor analyzes onset information in the STIR to estimate pairwise distances between patches in the scene, and then employs isometric embedding to estimate patch coordinates.
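The abstract does not name the embedding algorithm; classical multidimensional scaling is one standard way to realize isometric embedding from a pairwise distance matrix. A sketch:

```python
import numpy as np

def isometric_embedding(D, dim=3):
    """Classical multidimensional scaling: turn the pairwise patch
    distances estimated from STIR onsets into 3D patch coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # Gram matrix of the coordinates
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]    # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
```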