Abstract:
A LIDAR system includes a scanner; a receiver; and one or more processor devices to perform actions, including: scanning a continuous light beam over a field of view in a first scan pass; detecting photons of the continuous light beam that are reflected from one or more objects; determining a coarse range to the one or more objects based on times of departure of the photons of the continuous light beam and times of arrival of the photons at the receiver; scanning light pulses over the field of view in a second scan pass; detecting photons from the light pulses that are reflected from the one or more objects; and determining a refined range to the one or more objects based on times of departure of the photons of the light pulses and times of arrival of the photons at the receiver.
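The ranging step in this abstract reduces to round-trip time of flight: range = c · (t_arrival − t_departure) / 2, with the coarse continuous-beam estimate replaced by a sharper pulse-based estimate in the second pass. A minimal sketch of that arithmetic (function names are illustrative, not from the patent):

```python
C = 299_792_458.0  # speed of light, m/s

def time_of_flight_range(t_departure: float, t_arrival: float) -> float:
    """Round-trip time-of-flight range to a reflecting object, in meters."""
    return C * (t_arrival - t_departure) / 2.0

def refined_range(coarse_estimate: float,
                  pulse_departure: float, pulse_arrival: float) -> float:
    """Second-pass refinement: a short pulse yields a sharper arrival time,
    so the pulse-based range supersedes the coarse continuous-beam estimate."""
    return time_of_flight_range(pulse_departure, pulse_arrival)
```

For example, a photon returning 667 ns after departure corresponds to a range of roughly 100 m.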
Abstract:
Systems and methods for machine vision are presented. Such machine vision includes ego-motion determination, as well as segmentation and/or classification of image data of one or more targets of interest. The projection and detection of scanning light beams that generate a pattern are employed. Real-time, continuous, and accurate spatial-temporal 3D sensing is achieved. The relative motion between an observer and a projection surface is determined. A combination of visible and non-visible patterns, as well as a combination of visible and non-visible sensor arrays, is employed to sense 3D coordinates of target features and to acquire color image data to generate 3D color images of targets. Stereoscopic pairs of cameras are employed to generate 3D image data. Such cameras are dynamically aligned and calibrated. Information may be encoded in the transmitted patterns. The information is decoded upon detection of the pattern and employed to determine features of the reflecting surface.
Abstract:
Embodiments are directed toward measuring a three-dimensional range to a target. A transmitter emits light toward the target. An aperture may receive light reflections from the target and direct them toward a sensor comprising an array of pixels arranged in rows and columns. The sensor is offset a predetermined distance from the transmitter. Anticipated arrival times of the reflections at the sensor are based on the departure times and the predetermined offset distance. A portion of the pixels is sequentially activated based on the anticipated arrival times. The target's three-dimensional range measurement is based on the reflections detected by that portion of the pixels.
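The offset between transmitter and sensor makes this a triangulation geometry: with baseline b and lens focal length f (in pixels), a reflection from depth z lands near pixel column f·b/z, so only a narrow band of columns needs to be active at any moment. A hedged sketch of that relation, under an assumed pinhole model (names and geometry are illustrative, not taken from the patent):

```python
def expected_column(baseline_m: float, focal_px: float, depth_m: float) -> float:
    """Pixel column where a reflection from depth_m is expected to arrive,
    by classic triangulation: disparity = focal_px * baseline_m / depth_m."""
    return focal_px * baseline_m / depth_m

def columns_to_activate(baseline_m: float, focal_px: float,
                        z_min: float, z_max: float):
    """Columns worth activating for targets between z_min and z_max,
    ordered from far (small disparity) to near (large disparity)."""
    col_far = expected_column(baseline_m, focal_px, z_max)
    col_near = expected_column(baseline_m, focal_px, z_min)
    return range(int(col_far), int(col_near) + 1)
```

Sequencing the activation through this band, synchronized with the beam's departure times, is what lets a small subset of pixels capture the reflections.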
Abstract:
A system projects a user-viewable, computer-generated or computer-fed image, wherein a head-mounted projector projects an image onto a retro-reflective surface so that only the viewer can see the image. The projector is connected to a computer that contains software to create virtual 2-D and/or 3-D images for viewing by the user. Further, one projector is mounted on either side of the user's head, and, by choosing for example a retro angle of less than about 10 degrees, each eye can see only the image of one of the projectors at a given distance of up to 3 meters, in this example, from the retro-reflective screen. The retro angle may be reduced as the desired viewing distance increases. These projectors use lasers to avoid the need for focusing; in some cases, the projectors instead use highly collimated LED light sources for the same purpose.
Abstract:
Embodiments are directed towards a system for enabling a user to view an image on a surface. The system may include one or more projectors, a sensor, a projection surface or screen, and a processor. The projectors may project light for an image onto the surface. The sensor may detect light reflected off the surface. The surface may include multiple types of surface elements, such as multiple first elements positioned as a border of a display area on the surface to provide feedback regarding the surface, and multiple second elements positioned within the border of the display area to reflect the image to the user. The processor may determine characteristics of the border of the display area based on light reflected to the sensor from the first elements, and it may modify parameters of the image based on those characteristics.
Abstract:
An image projection device for displaying an image onto a remote surface. The image projection device employs a scanner to project image beams of visible light and tracer beams of light onto a remote surface to form a display of the image. The device also employs a light detector to sense at least the reflections of light from the tracer beam pulses incident on the remote surface. The device employs the sensed tracer beam light pulses to predict the trajectory of subsequent image beam light pulses and tracer beam light pulses that form a display of the image on the remote surface in a pseudo-random pattern. The trajectory of the projected image beam light pulses can be predicted so that the image is displayed from a point of view that can be selected by, or automatically adjusted for, a viewer of the displayed image.
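The core feedback idea, using sensed tracer pulses to predict where the next pulses will land, can be illustrated with the simplest possible predictor: linear extrapolation from the last two sensed positions. This is a hedged stand-in for the device's actual trajectory prediction, which the abstract does not specify:

```python
def predict_next(sensed_positions):
    """Predict the next beam position on the surface by linearly
    extrapolating the last two sensed tracer-pulse positions.
    sensed_positions is a list of (x, y) tuples in surface coordinates."""
    (x0, y0), (x1, y1) = sensed_positions[-2], sensed_positions[-1]
    return (2 * x1 - x0, 2 * y1 - y0)
```

In the device described, such predictions would let the image pulses be timed and aimed so the displayed image appears correct from the viewer's chosen point of view.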
Abstract:
Embodiments are directed towards detecting the three-dimensional position of a position sensing device (PSD) utilizing a spot scanned across a remote surface. A trajectory map may be determined for a projection system. The trajectory map may identify a location of the spot at various times during the scan. A PSD may be arranged with a clear view of the remote surface. The PSD may observe at least three spots projected onto the remote surface utilizing three lines of sight that enable moment-in-time linear alignment between the spot and a sensor. Observation angles between each of the lines of sight may be determined. For each observed spot, a transition time may be determined, and a location of the observed spot may be determined from the trajectory map. A position of the PSD may be determined based on the determined spot locations and the observation angles.
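Recovering a position from known spot locations and the angles between lines of sight is a classic resection problem. A deliberately simple 2-D sketch, assuming the spot locations and measured inter-sight angles are given, solves it by scoring candidate positions rather than analytically (all function names and the brute-force approach are illustrative, not from the patent):

```python
import math

def observation_angle(p, a, b):
    """Angle at observer position p subtended by spots a and b (2-D)."""
    va = (a[0] - p[0], a[1] - p[1])
    vb = (b[0] - p[0], b[1] - p[1])
    dot = va[0] * vb[0] + va[1] * vb[1]
    na, nb = math.hypot(*va), math.hypot(*vb)
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def locate_psd(spots, measured_angles, candidates):
    """Brute-force resection: return the candidate position whose predicted
    angles between spot pairs best match the measured observation angles.
    spots: three (x, y) spot locations from the trajectory map.
    measured_angles: angles for spot pairs (0,1), (1,2), (0,2)."""
    pairs = [(0, 1), (1, 2), (0, 2)]
    def err(p):
        return sum((observation_angle(p, spots[i], spots[j]) - m) ** 2
                   for (i, j), m in zip(pairs, measured_angles))
    return min(candidates, key=err)
```

A production system would use a closed-form three-point resection or least-squares solver, but the scoring function is the same: a position is consistent when its predicted observation angles match the measured ones.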
Abstract:
An optical switch is implemented with one or more cantilevered optical channels, which are formed in a flexible waveguide structure, and an actuator connected to the cantilevered optical channels to position them so as to direct an optical signal along one of a number of optical pathways.