Abstract:
A computer implemented method simulates motion of a static 3D physical object in a static scene by first acquiring a 3D graphics model of the 3D physical object and the scene. A projector is registered with the 3D physical object, the scene, and the 3D model. The model is then segmented into a plurality of parts, and each part is edited with graphics authoring tools to reflect a desired appearance and virtual motion of the part. The edited parts are rendered and projected, in real time, as a video onto the 3D physical object and scene to give the 3D physical object and the scene the desired appearance and virtual motion.
Abstract:
A system determines correspondence between locations on a display surface and pixels in an output image of a projector. The display surface can have an arbitrary shape and pose. Locations of known coordinates are identified on the display surface. Each location is optically coupled to a photo sensor by an optical fiber installed in a throughhole in the surface. Known calibration patterns are projected, while sensing directly an intensity of light at each location for each calibration pattern. The intensities are used to determine correspondences between the locations and pixels in an output image of the projector so that projected images can be warped to conform to the display surface.
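The correspondence recovery can be sketched with binary-coded calibration patterns (an assumption for illustration; the abstract does not name the coding scheme): pattern k lights exactly the projector columns whose k-th bit is 1, so the on/off sequence sensed at one fiber-coupled photo sensor spells out that location's projector column.

```python
# Hypothetical sketch: decode which projector column illuminates a sensed
# location from its responses to log2(width) binary-coded patterns.

def encode_patterns(width):
    """For each bit-plane pattern, return the set of projector columns it lights."""
    nbits = (width - 1).bit_length()
    return [{c for c in range(width) if (c >> k) & 1} for k in range(nbits)]

def sense(patterns, column):
    """Simulate a photo sensor at `column`: 1 if lit by the pattern, else 0."""
    return [1 if column in lit else 0 for lit in patterns]

def decode(bits):
    """Recover the column index from the sensed bit sequence."""
    return sum(b << k for k, b in enumerate(bits))
```

A second pass with row-coded patterns recovers the pixel's row the same way, giving the full location-to-pixel correspondence used for warping.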
Abstract:
A method enhances an output image of a 3D object. A set of input images is acquired of the 3D object, each acquired while the object is illuminated by a different one of a set of lights placed at different positions with respect to the 3D object. Boundaries of shadows are detected by comparing the set of input images with each other. The boundaries of shadows that are closer to the direction of the lights are marked as depth edge pixels.
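A minimal 1-D sketch of the comparison step (illustrative only; the function names and threshold are assumptions): dividing a flash image by the max-composite of all flash images yields a ratio that drops toward zero inside that flash's cast shadow, and the falling edge of the ratio, lying on the side nearer that light, is marked as a depth edge pixel.

```python
# Toy 1-D version of shadow-boundary detection across two flash images.

def depth_edges_1d(flash_left, flash_right, thresh=0.5):
    """Mark boundaries where the left-flash image falls into shadow."""
    maxim = [max(a, b) for a, b in zip(flash_left, flash_right)]  # max-composite
    edges = []
    for i in range(1, len(maxim)):
        # ratio image: ~1 where the left flash lights the pixel, ~0 in its shadow
        r_prev = flash_left[i - 1] / maxim[i - 1] if maxim[i - 1] else 1.0
        r_curr = flash_left[i] / maxim[i] if maxim[i] else 1.0
        if r_prev > thresh >= r_curr:   # falling edge = cast-shadow boundary
            edges.append(i)
    return edges
```

Repeating the test per light direction and keeping only edges adjacent to each light's shadow yields the depth edge map.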
Abstract:
A computer implemented method registers an image with a 3D physical object by first acquiring a 3D graphics model of the object. Multiple 3D calibration points on a surface of the object and corresponding 3D model calibration points in the 3D graphics model are identified. The object is illuminated with a calibration image using a projector at a fixed location. The calibration image is aligned with each of the 3D calibration points on the surface of the 3D physical object to identify corresponding 2D calibration pixels in the calibration image, and then a transformation between the 2D calibration pixels and the corresponding 3D model calibration points is determined to register the projector with the 3D physical object.
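As an illustrative sketch of the final step (simplified to an affine rather than full projective transformation; all names are assumptions), the 2D calibration pixels and 3D model calibration points can be related by solving a small linear system:

```python
# Hypothetical sketch: fit u = a.(X,Y,Z,1), v = b.(X,Y,Z,1) from 2D-3D pairs.

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a square system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]                      # pivot for stability
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_affine_camera(points3d, pixels):
    """Eight affine parameters from four non-coplanar point correspondences."""
    A, rhs = [], []
    for (X, Y, Z), (u, v) in zip(points3d, pixels):
        A.append([X, Y, Z, 1, 0, 0, 0, 0]); rhs.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1]); rhs.append(v)
    p = solve(A, rhs)
    return p[:4], p[4:]
```

Four non-coplanar correspondences determine the eight affine parameters exactly; the full projective case adds perspective division and is usually solved as a direct linear transform over six or more points.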
Abstract:
A method projects one or more images onto a curved display surface. First, a predetermined structured light pattern is projected onto the display surface. A stereo pair of images is acquired of the projected pattern on the display surface. Then, a quadric transfer function between the predetermined pattern and the stereo pair of images, via the display surface, is determined. Thus, an arbitrary output image can be warped according to the quadric transfer function so that it appears correct when projected onto the display surface.
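For intuition, consider the degenerate planar case (an illustrative simplification; names are assumptions): the transfer collapses to a 3x3 homography, and warping an output pixel is a matrix multiply in homogeneous coordinates followed by perspective division. The quadric transfer in the abstract extends this with an extra square-root term that accounts for the surface's curvature.

```python
# Hypothetical sketch: warp a pixel (x, y) by a 3x3 homography H.

def warp(H, x, y):
    """Apply H in homogeneous coordinates, then divide by the w component."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w
```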
Abstract:
An interactive display system includes a sensor for sensing a relationship between a mobile coordinate frame fixed to a moving projector and a stable coordinate frame fixed to a stationary display surface in the real world. An output image to be projected on the display surface is partitioned into a black image portion having a fixed relationship to the mobile coordinate frame, a stable image portion within the black image portion having a fixed relationship to the stable coordinate frame, and a mobile image portion within the stable image portion having a fixed relationship to the mobile coordinate frame. The mobile portion can be used as a pointer within the stable portion in a mouse-cursor-like manner.
Abstract:
A method forms a cluster from a set of projectors. Each projector in the set includes a projector sub-system in a fixed physical relationship to a camera sub-system, and a communication sub-system for sending and receiving messages. A calibrate message is received in the projectors via the communications sub-system. A ready message is broadcast by the projectors using the communications sub-system. A structured pattern is projected sequentially by each of the projectors on a display surface using the projector sub-system. An input image of the structured pattern is acquired sequentially by each of the projectors using the camera sub-system. The projectors are globally aligned with each other and the display surface according to the input images.
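The message exchange can be mocked up as follows (a toy in-process simulation; class and message names are assumptions, and real projectors would communicate over a network):

```python
class Projector:
    """Toy stand-in for one projector with camera and communication sub-systems."""
    def __init__(self, pid, bus):
        self.pid, self.bus, self.ready, self.captured = pid, bus, False, []

    def broadcast(self, msg):                 # communication sub-system
        for p in self.bus:
            p.receive(msg)

    def receive(self, msg):
        if msg[0] == "calibrate":             # answer calibrate with ready
            self.ready = True
            self.broadcast(("ready", self.pid))
        elif msg[0] == "pattern" and msg[1] != self.pid:
            self.captured.append(msg[1])      # camera sub-system acquires image

def run_calibration(bus):
    """Calibrate phase, then one structured pattern per projector in turn."""
    for p in bus:
        p.receive(("calibrate",))
    for p in bus:                             # sequential, one projector at a time
        p.broadcast(("pattern", p.pid))
```

After the run, every projector holds an "image" of every other projector's pattern; the abstract's global alignment step would then be computed from those captures.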
Abstract:
A time-of-flight camera images an object around a corner or through a diffuser. In the case of imaging around a corner, light from a hidden target object reflects off a diffuse surface and travels to the camera. Points on the diffuse surface function as virtual sensors. In the case of imaging through a diffuser, light from the target object is transmitted through a diffusive medium and travels to the camera. Points on a surface of the diffusive medium that is visible to the camera function as virtual sensors. In both cases, a computer represents the phase and intensity measurements taken by the camera as a system of linear equations and solves a linear inverse problem to (i) recover an image of the target object, or (ii) compute a 3D position for each point in a set of points on an exterior surface of the target object.
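A toy sketch of the reconstruction step (names and sizes are assumptions): stack the phase and intensity measurements as y = A x, with A derived from the known geometry of the virtual sensors, and solve the linear inverse problem by least squares. The real systems are far larger and typically regularized.

```python
# Hypothetical sketch: least-squares recovery via the normal equations.

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a square system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def recover(A, y):
    """Least-squares solution of y = A x (normal equations A^T A x = A^T y)."""
    At = list(zip(*A))
    AtA = [[sum(u * v for u, v in zip(r, c)) for c in At] for r in At]
    Aty = [sum(u * m for u, m in zip(r, y)) for r in At]
    return solve(AtA, Aty)
```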
Abstract:
In exemplary implementations of this invention, light from a light field projector is transmitted through an angle-expanding screen to create a glasses-free 3D display. The display can be horizontal-only parallax or full parallax. In the former case, a vertical diffuser may be positioned in the optical stack. The angle-expanding screen may comprise two planar arrays of optical elements (e.g., lenslets or lenticules) separated from each other by the sum of their focal distances. Alternatively, a light field projector may project light rays through a focusing lens onto a diffuse, transmissive screen. In this alternative approach, the light field projector may comprise two spatial light modulators (SLMs). A focused image of the first SLM and a slightly blurred image of the second SLM are optically combined on the diffuser, creating a combined image that has a higher spatial resolution and a higher dynamic range than either of the two SLMs.
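A small numeric sketch of the two-array screen (this uses the standard afocal-relay approximation, not a formula taken from the abstract): two lenslet arrays separated by the sum of their focal lengths f1 + f2 relay a ray while magnifying its angle by roughly f1/f2.

```python
import math

# Hypothetical sketch: exit angle of a ray through an f1/f2 afocal lenslet relay.
def expanded_angle(theta_in, f1, f2):
    """Entry angle theta_in (radians) is expanded by roughly f1/f2."""
    return math.atan(math.tan(theta_in) * f1 / f2)
```

With f1 = f2 the screen passes angles unchanged; choosing f1 > f2 widens the projector's narrow ray fan into the larger field of view the 3D display needs.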