Abstract:
Methods and apparatus for reduced-bandwidth pulse width modulation are disclosed. A system includes a digital controller circuit coupled to a data interface, the digital controller circuit configured to receive image data for display and further configured to encode line data for transmission to a spatial light modulator using a data compression scheme; and the spatial light modulator coupled to the data interface and configured to receive the encoded data and to decode it to produce unencoded data corresponding to pixel data for display on an array of pixel elements in the spatial light modulator. The encoded data transmitted from the digital controller circuit to the spatial light modulator is formed from bit planes using the data compression scheme to produce partial lines of data. Additional methods and apparatus are disclosed.
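The abstract does not spell out the particular compression scheme. The following is a minimal Python sketch, assuming 8-bit pixel data split into binary bit planes and a simple run-length encoding so that lines dominated by constant runs reduce to short partial-line records; all names and the encoding itself are illustrative assumptions, not the disclosed scheme.

```python
import numpy as np

def to_bit_planes(frame):
    """Split an 8-bit frame (H x W) into 8 binary bit planes (8 x H x W)."""
    return np.stack([(frame >> b) & 1 for b in range(8)], axis=0)

def encode_line(bits):
    """Run-length encode one line of a bit plane as (value, run_length) pairs.

    Lines dominated by long constant runs collapse into a few short records,
    i.e. only partial lines of data need to be transmitted.
    """
    runs, start = [], 0
    for i in range(1, len(bits) + 1):
        if i == len(bits) or bits[i] != bits[start]:
            runs.append((int(bits[start]), i - start))
            start = i
    return runs

frame = np.random.randint(0, 256, (4, 16), dtype=np.uint8)  # stand-in line data
encoded = [[encode_line(line) for line in plane] for plane in to_bit_planes(frame)]
```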
Abstract:
A controller includes a frame memory configured to store an image frame and a frame memory controller coupled to the frame memory and configured to obtain image data from the image frame. The image data is associated with a color component of the image frame. The controller also includes a dither noise mask generator configured to provide dither noise masks according to dither noise levels for the image data, and a bit plane generator coupled to the frame memory controller and the dither noise mask generator and configured to generate bit planes based on the dither noise masks for the image data.
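As a rough illustration only (the abstract does not define the dither masks or how the noise levels are chosen), the sketch below assumes a zero-mean uniform noise mask scaled by a dither noise level, added to one color channel before quantization and bit-plane extraction. The function names are hypothetical.

```python
import numpy as np

def dither_noise_mask(shape, noise_level, rng):
    """Hypothetical dither mask: zero-mean uniform noise scaled by the noise level."""
    return rng.uniform(-0.5, 0.5, size=shape) * noise_level

def generate_bit_planes(channel, noise_level, rng, n_bits=8):
    """Add a dither noise mask to one color channel, quantize, and split the
    result into binary bit planes (least significant plane first)."""
    noisy = channel.astype(np.float64) + dither_noise_mask(channel.shape, noise_level, rng)
    quantized = np.clip(noisy, 0, 2 ** n_bits - 1).astype(np.uint8)
    return [(quantized >> b) & 1 for b in range(n_bits)]

rng = np.random.default_rng(0)
red_channel = rng.integers(0, 256, (480, 640), dtype=np.uint8)  # stand-in color component
planes = generate_bit_planes(red_channel, noise_level=1.0, rng=rng)
```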
Abstract:
A system includes a color filter having first and second segments. The first and second segments allow respective first and second wavelengths to pass through to a spatial light modulator. The first and second segments also reflect the second and first wavelengths, respectively. The reflected first and second wavelengths are recycled and directed toward the color filter.
Abstract:
In described examples, a system (e.g., a security system or a vehicle operator assistance system) is configured to configure a phased spatial light modulator (SLM) to generate a diffraction pattern. A coherent light source is optically coupled to direct coherent light upon the SLM. The SLM is configured to project diffracted coherent light toward at least one region of interest. An optical element is configured to focus the diffracted coherent light toward the at least one region of interest.
Abstract:
A method to compress an image includes assigning each pixel of the image to a cluster based on a red-green-blue (RGB) location of the pixel. The method also includes updating a centroid of the cluster after each pixel is assigned, based at least in part on the RGB location of the pixel, where the centroid is an RGB location. The method includes replacing each pixel in the image with an RGB value of the centroid of the cluster to which the pixel is assigned. The method also includes instructing a display to display a compressed image where, in the compressed image, each pixel in the image is replaced with the RGB value of the centroid of the cluster to which the pixel is assigned.
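This describes an online, k-means-style color quantization. Below is a minimal Python sketch, assuming a fixed cluster count, random initialization, and a running-mean centroid update after each pixel assignment; the cluster count, initialization, and function name are illustrative assumptions.

```python
import numpy as np

def compress_image(pixels, k=8, seed=0):
    """Assign each RGB pixel to its nearest centroid, update that centroid as a
    running mean, then replace every pixel with its centroid's RGB value."""
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), k, replace=False)].astype(np.float64)
    counts = np.ones(k)
    labels = np.empty(len(pixels), dtype=np.int64)
    for i, p in enumerate(pixels.astype(np.float64)):
        c = np.argmin(np.sum((centroids - p) ** 2, axis=1))  # nearest RGB centroid
        counts[c] += 1
        centroids[c] += (p - centroids[c]) / counts[c]        # running-mean update
        labels[i] = c
    return centroids[labels].astype(np.uint8)

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in image
compressed = compress_image(image.reshape(-1, 3)).reshape(image.shape)
```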
Abstract:
A method includes projecting an image onto a projection surface through a projection lens of a projector, where the image comprises a fiducial marker. The method also includes capturing a point cloud of the fiducial marker with a camera, and generating a distortion map of projection lens distortion based at least in part on the point cloud. The method also includes generating a correction map for the projection lens, and applying the correction map to a video signal input to the projector.
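A simplified sketch of the distortion-map and correction-map idea follows. It assumes the fiducial correspondences (ideal versus observed marker positions) have already been extracted from the camera's point cloud, and uses SciPy interpolation and OpenCV remapping as stand-ins; the function names and the interpolation choice are hypothetical, not the disclosed method.

```python
import numpy as np
import cv2
from scipy.interpolate import griddata

def build_correction_map(ideal_pts, observed_pts, shape):
    """Interpolate sparse fiducial displacements into dense per-pixel maps that
    sample the input frame at the observed (distorted) locations."""
    h, w = shape
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    map_x = griddata(ideal_pts, observed_pts[:, 0], (grid_x, grid_y),
                     method='linear', fill_value=0).astype(np.float32)
    map_y = griddata(ideal_pts, observed_pts[:, 1], (grid_x, grid_y),
                     method='linear', fill_value=0).astype(np.float32)
    return map_x, map_y

def apply_correction(frame, map_x, map_y):
    """Pre-warp a video frame so the projection lens distortion cancels on screen."""
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```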
Abstract:
Described examples include an optical apparatus having a first lens, a first optical element having a first aperture, a second lens, and a second optical element having a second aperture. The optical apparatus includes a third lens having a first portion to receive projected light from the first lens through the first aperture and to project the projected light onto a target. Also, the third lens has a second portion to receive reflected light reflected from the target and to provide the reflected light to the second lens through the second aperture.
Abstract:
Described examples include an imager having a light source; a spatial light modulator to receive light from the light source and to provide patterned light to illuminate an object; a sensor to receive a first reflected light and an offset reflected light from reflection of the patterned light off the object; and a processor to receive sensed images of the first reflected light and the offset reflected light, to combine those sensed images into a combined image, and to apply a deconvolution to the combined image so that the combined image has a pixel density greater than that of the sensed images of the first reflected light and the offset reflected light. The processor is configured to determine a position of at least one point on the object by triangulation between the spatial light modulator and the sensor using the patterned light and the combined image.
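The abstract leaves the combining and deconvolution details open. The sketch below assumes two captures offset by half a pixel along one axis, interleaved to double the sample density, followed by a generic regularized inverse-filter deconvolution; the interleaving scheme, the point spread function, and the filter are illustrative assumptions only.

```python
import numpy as np

def combine_offset_images(img_a, img_b_offset):
    """Interleave two half-pixel-offset captures along one axis to double the
    sample density (a simplified stand-in for the combining step)."""
    h, w = img_a.shape
    combined = np.empty((h, 2 * w), dtype=np.float64)
    combined[:, 0::2] = img_a
    combined[:, 1::2] = img_b_offset
    return combined

def deconvolve(image, psf, eps=1e-2):
    """Regularized inverse-filter deconvolution in the frequency domain."""
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))
```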
Abstract:
A video display system is configured to receive a sequence of image frames. Each frame is divided into a set of blocks. A center of mass is calculated for each block in a first frame and is saved for all blocks in the first frame. A center of mass is calculated for each block in a second frame. Motion between the first frame and the second frame is detected by comparing the center of mass of each block in the second frame to the center of mass of the corresponding block in the first frame, in which a still block is detected when corresponding blocks in the first frame and the second frame have the same center of mass, and in which motion in a block is detected when corresponding blocks in the first frame and the second frame have different centers of mass.
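A minimal Python sketch of the block-wise center-of-mass comparison follows, assuming grayscale frames, square blocks, and an intensity-weighted centroid; the block size and equality tolerance are illustrative assumptions.

```python
import numpy as np

def block_center_of_mass(block):
    """Intensity-weighted centroid (center of mass) of one block."""
    ys, xs = np.indices(block.shape)
    total = block.sum()
    if total == 0:
        return (0.0, 0.0)
    return (float((ys * block).sum() / total), float((xs * block).sum() / total))

def detect_motion(frame_a, frame_b, block=16, tol=0.0):
    """Return a boolean map: True where a block's center of mass changed
    between the first frame and the second frame."""
    h, w = frame_a.shape
    moved = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            sl = (slice(by * block, (by + 1) * block),
                  slice(bx * block, (bx + 1) * block))
            ca = block_center_of_mass(frame_a[sl].astype(np.float64))
            cb = block_center_of_mass(frame_b[sl].astype(np.float64))
            moved[by, bx] = max(abs(ca[0] - cb[0]), abs(ca[1] - cb[1])) > tol
    return moved
```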
Abstract:
An apparatus includes a first camera configured to capture a first image being displayed, a second camera configured to capture a second image being displayed, and a processor configured to generate a pair-wise homography transform for the first camera and the second camera, and map, based on the pair-wise homography transform, the second image from a second frame of reference of the second camera to a first frame of reference of the first camera. The processor is further configured to determine a first corrected quadrilateral for the first image and a second corrected quadrilateral for the second image in the first frame of reference, and project, based on the pair-wise homography transform, the second corrected quadrilateral from the first frame of reference to the second frame of reference. The quadrilaterals are then used to warp the respective images for geometric correction before the images are projected by respective projectors.
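The sketch below illustrates the pair-wise homography step with OpenCV, assuming matched point correspondences between the two cameras' views are already available; the point sets and the quadrilateral are placeholders, and the inverse-homography projection back to the second frame of reference is one possible realization of the step described.

```python
import numpy as np
import cv2

# Hypothetical matched feature points seen by both cameras (e.g., from a
# displayed calibration pattern); OpenCV expects shape (N, 1, 2) float32.
pts_cam2 = np.random.rand(20, 1, 2).astype(np.float32) * 100
pts_cam1 = pts_cam2 + 5.0  # stand-in correspondences

# Pair-wise homography: second camera's frame of reference -> first camera's.
H, _ = cv2.findHomography(pts_cam2, pts_cam1, cv2.RANSAC)

# Map the second image's corner quadrilateral into the first frame of reference,
# then project a corrected quadrilateral back with the inverse transform.
quad_cam2 = np.array([[[0, 0]], [[100, 0]], [[100, 100]], [[0, 100]]], np.float32)
quad_in_cam1 = cv2.perspectiveTransform(quad_cam2, H)
quad_back_in_cam2 = cv2.perspectiveTransform(quad_in_cam1, np.linalg.inv(H))
```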