Abstract:
The invention is directed to recording, transmitting, and displaying a three-dimensional image of a face of a user in a video stream. Reflected light from a curved or geometrically shaped screen is employed to provide multiple perspective views of the user's face that are transformed into the image, which is communicated to remotely located other users. A head mounted projection display system is employed to capture the reflected light. The system includes a frame that, when worn by a user, wraps around and grips the user's head. Also, at least two separate image capture modules are included on the frame and generally positioned adjacent to the left and right eyes of the user when the system is worn. Each module includes one or more sensor components, such as cameras, that are arranged to detect at least reflected non-visible light from a screen positioned in front of the user.
Abstract:
Embodiments are directed toward measuring a three-dimensional range to a target. A transmitter emits light toward the target. An aperture may receive light reflections from the target. The aperture may direct the reflections toward a sensor comprising an array of pixels arranged in rows and columns. The sensor is offset a predetermined distance from the transmitter. Anticipated arrival times of the reflections on the sensor are based on the departure times and the predetermined offset distance. A portion of the pixels is sequentially activated based on the anticipated arrival times. The target's three-dimensional range measurement is based on the reflections detected by the portion of the pixels.
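As an illustrative sketch only, not language from the abstract, the following Python snippet models the triangulation geometry behind this arrangement: the predetermined transmitter-to-sensor offset and the beam's departure angle predict which sensor column a reflection should arrive on (which is what would drive the sequential activation of pixels), and the column where the reflection is actually detected yields the range. The baseline, focal length, column values, and function names are all assumptions made for illustration.

```python
import math

BASELINE_M = 0.10    # assumed transmitter-to-sensor offset
FOCAL_PX = 1400.0    # assumed sensor focal length, in pixels
CENTER_COL = 512.0   # assumed optical-center column of the sensor

def expected_column(beam_angle_rad: float, assumed_depth_m: float) -> float:
    """Predict the sensor column a reflection should arrive on, given the
    beam's departure angle and a candidate target depth (pinhole model)."""
    tan_view = math.tan(beam_angle_rad) - BASELINE_M / assumed_depth_m
    return CENTER_COL + FOCAL_PX * tan_view

def depth_from_detection(beam_angle_rad: float, detected_col: float) -> float:
    """Triangulate target depth from the column where the reflection was
    actually detected (inverse of the model above)."""
    tan_view = (detected_col - CENTER_COL) / FOCAL_PX
    return BASELINE_M / (math.tan(beam_angle_rad) - tan_view)

# A beam emitted at 5 degrees whose reflection lands near column 564
# corresponds to a target roughly 2 m away.
print(expected_column(math.radians(5.0), 2.0))        # ~564.5
print(depth_from_detection(math.radians(5.0), 564.5))  # ~2.0
```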
Abstract:
Embodiments are directed towards a system for enabling a user to view an image on a surface. The system may include one or more projectors, a sensor, a projection surface or screen, and a processor. The projectors may project light for an image onto the surface. The sensor may detect light reflected off the surface. The surface may include multiple types of surface elements, such as multiple first elements positioned as a border of a display area on the surface to provide feedback regarding the surface, and multiple second elements positioned within the border of the display area to reflect the image to the user. The processor may determine characteristics of the border of the display area based on light reflected to the sensor from the first elements, and it may modify parameters of the image based on those characteristics.
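As a simplified, hypothetical illustration of the final step, the sketch below fits the projected image into a display area whose border corners were detected from the first elements. It reduces the "characteristics of the border" to an axis-aligned bounding box and adjusts only the image's offset and scale; the names, types, and values are assumptions for illustration, not the claimed method.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class ImageParams:
    offset_x: float   # where to place the image, in projector coordinates
    offset_y: float
    scale_x: float    # how much to shrink or stretch the image
    scale_y: float

def fit_image_to_border(border_corners: List[Point],
                        image_w: float, image_h: float) -> ImageParams:
    """Adjust image placement so it fills the display area whose border
    was detected from the first (border) surface elements."""
    xs = [p[0] for p in border_corners]
    ys = [p[1] for p in border_corners]
    area_w = max(xs) - min(xs)
    area_h = max(ys) - min(ys)
    return ImageParams(offset_x=min(xs), offset_y=min(ys),
                       scale_x=area_w / image_w, scale_y=area_h / image_h)

# Example: border detected as a 1.2 x 0.9 region starting at (0.3, 0.2).
params = fit_image_to_border([(0.3, 0.2), (1.5, 0.2), (1.5, 1.1), (0.3, 1.1)],
                             image_w=1920, image_h=1080)
print(params)
```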
Abstract:
Systems and methods for machine vision are presented. Such machine vision includes ego-motion estimation, as well as the segmentation and/or classification of image data of one or more targets of interest. Scanning light beams that generate a pattern are projected and detected, providing real-time, continuous, and accurate spatial-temporal 3D sensing. The relative motion between an observer and a projection surface is determined. A combination of visible and non-visible patterns, as well as a combination of visible and non-visible sensor arrays, is employed to sense 3D coordinates of target features and to acquire color image data used to generate 3D color images of targets. Stereoscopic pairs of cameras are employed to generate 3D image data; such cameras are dynamically aligned and calibrated. Information may be encoded in the transmitted patterns. The information is decoded upon detection of the pattern and employed to determine features of the reflecting surface.
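For the stereoscopic portion only, the following minimal sketch shows the standard disparity-to-depth triangulation that a calibrated, rectified camera pair relies on; the focal length and baseline values are illustrative assumptions, not figures from the abstract.

```python
FOCAL_PX = 1200.0    # assumed focal length of the rectified pair, in pixels
BASELINE_M = 0.065   # assumed distance between the two cameras

def depth_from_disparity(disparity_px: float) -> float:
    """Standard stereo triangulation: depth is inversely proportional to
    the horizontal disparity between matched features."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return FOCAL_PX * BASELINE_M / disparity_px

# A feature matched 26 pixels apart between the left and right images
# lies about 3 metres from the camera pair.
print(depth_from_disparity(26.0))
```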
Abstract:
Embodiments are directed towards detecting the three-dimensional position of a position sensing device (PSD) utilizing a spot scanned across a remote surface. A trajectory map may be determined for a projection system. The trajectory map may identify the location of the spot at various times during the scan. A PSD may be arranged with a clear view of the remote surface. The PSD may observe at least three spots projected onto the remote surface utilizing three lines of sight that enable moment-in-time linear alignment between the spot and a sensor. Observation angles between each of the lines of sight may be determined. For each observed spot, a transition time may be determined, and the location of the observed spot may be determined from the trajectory map based on that transition time. The position of the PSD may be determined based on the determined spot locations and the observation angles.
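As a rough, hypothetical sketch of the final step, the code below recovers a PSD position by resection: it searches for the point from which the three spot locations (as read from the trajectory map) subtend the measured pairwise observation angles. A coarse grid search stands in for whatever solver an actual implementation would use, and all coordinates, names, and values are illustrative assumptions.

```python
import itertools
import math

import numpy as np

def pairwise_angles(pos, spots):
    """Angles between the lines of sight from `pos` to each pair of spots."""
    rays = [np.asarray(s, float) - np.asarray(pos, float) for s in spots]
    rays = [r / np.linalg.norm(r) for r in rays]
    return [math.acos(np.clip(np.dot(a, b), -1.0, 1.0))
            for a, b in itertools.combinations(rays, 2)]

def locate_psd(spots, measured_angles, search_box, step=0.1):
    """Coarse grid search for the PSD position whose line-of-sight angles
    best match the measured observation angles."""
    (x0, x1), (y0, y1), (z0, z1) = search_box
    best, best_err = None, float("inf")
    for x in np.arange(x0, x1, step):
        for y in np.arange(y0, y1, step):
            for z in np.arange(z0, z1, step):
                err = sum((a - b) ** 2 for a, b in
                          zip(pairwise_angles((x, y, z), spots), measured_angles))
                if err < best_err:
                    best, best_err = (x, y, z), err
    return best

# Example: three spots on a wall (z = 0) observed from roughly (1, 1, 2).
spots = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 1.5, 0.0)]
angles = pairwise_angles((1.0, 1.0, 2.0), spots)
print(locate_psd(spots, angles, search_box=((0, 2), (0, 2), (1, 3))))
```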
Abstract:
A scanned projector and illumination system includes an energy-emitting source that is disposed in a projector and that emits energy beams out of the projector through an aperture. A scanning mirror is disposed in the projector and redirects energy beams therein. The scanning mirror moves such that the redirected energy beams form a scanning pattern with a scanning rate. A safety feature is disposed in the projector. The safety feature includes a fuse material. The energy beams move along the fuse material at the scanning rate. The safety feature modulates emission of the energy beams out of the projector through the aperture such that the energy beams are only emitted out of the projector through the aperture when the scanning rate of the energy beams is high enough to prevent the fuse material from reaching a threshold energy level at any location along the fuse material.
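As a hypothetical numerical illustration of the gating condition, the sketch below estimates the energy deposited at a single spot-sized location on the fuse material from the beam power and dwell time, and permits emission only while that energy stays below the threshold; every parameter value is an assumption made for illustration.

```python
BEAM_POWER_W = 0.5          # assumed optical power of the energy beam
SPOT_DIAMETER_M = 1e-4      # assumed beam spot size on the fuse material
FUSE_THRESHOLD_J = 2e-6     # assumed energy at which the fuse would trip

def energy_per_location(scan_speed_m_s: float) -> float:
    """Energy deposited at one spot-sized location during a single pass:
    beam power multiplied by the dwell time of the moving beam."""
    dwell_time_s = SPOT_DIAMETER_M / scan_speed_m_s
    return BEAM_POWER_W * dwell_time_s

def emission_allowed(scan_speed_m_s: float) -> bool:
    """Gate the aperture: emit only while the scanning rate keeps every
    location on the fuse material below the threshold energy level."""
    return energy_per_location(scan_speed_m_s) < FUSE_THRESHOLD_J

print(emission_allowed(50.0))   # fast scan -> True, beam may exit
print(emission_allowed(0.5))    # slow or stalled scan -> False, blocked
```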
Abstract:
A folded optical element waveguide that allows a minimum-width bezel to be used around the perimeter of a light-based touch screen display. The apparatus and method include a touch screen and a waveguide substrate provided adjacent the touch screen. The waveguide substrate includes a plurality of waveguides and a plurality of optical elements provided adjacent the touch screen. The waveguides include an internally reflective surface to reflect light perpendicular to the surface of the touch screen. The emitting and detecting waveguides are thus folded and provided around the side edges of the display. As a result, the width of the bezel around the display can be minimized.
Abstract:
A device can convert electrical signals into modulated light signals and conduct those modulated light signals between components within the device, or between the device and another device, through at least a portion of the housing of the device that is transparent to the wavelength of the modulated light signals.