Abstract:
Pre-image-acquisition information is obtained by a digital camera and transmitted to a system external to the digital camera. The system is configured to provide image-acquisition settings to the digital camera. In this regard, the digital camera receives the image-acquisition settings from the external system and performs an image-acquisition sequence based at least upon the received image-acquisition settings. Accordingly, the determination of image-acquisition settings can be performed remotely from the digital camera, where data-processing resources can greatly exceed those within the digital camera.
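A minimal sketch of the exchange this abstract describes, assuming an illustrative split between the camera and the external system; the field names (mean_luminance, exposure_time_s, iso) and the threshold logic are hypothetical, and the network transport between the two sides is elided:

```python
# Hypothetical sketch of the remote settings exchange described above.
# Field names, thresholds, and the camera/external-system split are
# illustrative assumptions, not taken from the abstract.
from dataclasses import dataclass


@dataclass
class PreAcquisitionInfo:
    mean_luminance: float   # 0.0 (dark) .. 1.0 (bright), from a metering pass
    focal_length_mm: float


@dataclass
class AcquisitionSettings:
    exposure_time_s: float
    iso: int


def external_system_compute_settings(info: PreAcquisitionInfo) -> AcquisitionSettings:
    """Stand-in for the remote system with greater data-processing resources."""
    if info.mean_luminance < 0.2:          # dim scene: longer exposure, higher gain
        return AcquisitionSettings(exposure_time_s=1 / 30, iso=1600)
    return AcquisitionSettings(exposure_time_s=1 / 250, iso=100)


def camera_capture_sequence(info: PreAcquisitionInfo) -> AcquisitionSettings:
    """Camera side: transmit pre-acquisition info, receive settings, then capture."""
    settings = external_system_compute_settings(info)   # network exchange elided
    # ...trigger the image-acquisition sequence with `settings` here...
    return settings


print(camera_capture_sequence(PreAcquisitionInfo(mean_luminance=0.1, focal_length_mm=35.0)))
```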
Abstract:
Multiple images are captured where the exposure times for some of the images overlap and the images are spatially overlapped. Charge packets are transferred from one or more portions of pixels after particular integration periods, thereby enabling the portion or portions of pixels to begin another integration period while one or more other portions of pixels continue to integrate charge. Charge packets may be binned during readout of the images from the image sensor. Comparison of two or more images having different lengths of overlapping or non-overlapping exposure periods provides motion information. The multiple images can then be aligned to compensate for motion between the images and assembled into a combined image with an improved signal-to-noise ratio and reduced motion blur.
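A toy sketch of the align-and-combine step, assuming only a global translational shift between two equally exposed frames and a simple averaging combine; the on-sensor charge-transfer and binning described in the abstract are not modeled:

```python
# Toy sketch: estimate frame-to-frame motion, align, and average to raise
# signal-to-noise. Global translation only; sensor details are not modeled.
import numpy as np


def estimate_shift(ref: np.ndarray, img: np.ndarray) -> tuple[int, int]:
    """Estimate integer (dy, dx) motion between frames via phase correlation."""
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)


def align_and_combine(frames: list[np.ndarray]) -> np.ndarray:
    """Shift each frame onto the first and average the aligned stack."""
    ref = frames[0].astype(float)
    stack = [ref]
    for img in frames[1:]:
        dy, dx = estimate_shift(ref, img.astype(float))
        stack.append(np.roll(img.astype(float), shift=(dy, dx), axis=(0, 1)))
    return np.mean(stack, axis=0)


# Usage: two noisy frames of the same scene, one shifted by a few pixels.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
f1 = scene + rng.normal(0, 0.1, scene.shape)
f2 = np.roll(scene, (3, -2), axis=(0, 1)) + rng.normal(0, 0.1, scene.shape)
print(align_and_combine([f1, f2]).shape)
```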
Abstract:
This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content, wherein the optical assembly comprises a photochromic layer and a heater layer disposed on a see-through lens of the optical assembly, wherein the photochromic layer is heated by the heater layer to accelerate its transition from dark to clear.
Abstract:
This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content, wherein the optical assembly comprises a light-transmissive illumination system and an LED lighting system coupled to the illumination system. A grating of the illumination system directs light from the LED lighting system to uniformly irradiate a reflective image display to produce an image that is reflected through the illumination system to provide the displayed content to the user.
Abstract:
This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content, wherein the optical assembly comprises an optically flat film, disposed at an angle in front of a user's eye, that reflects and transmits a portion of image light and transmits scene light from a see-through view of the surrounding environment, so that a combined image comprising portions of the image light and the transmitted scene light is provided to the user's eye.
Abstract:
This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content, wherein the optical assembly comprises a partially reflective, partially transmitting optical element that reflects a portion of image light from the image source and transmits scene light from a see-through view of the surrounding environment, so that a combined image comprising portions of the reflected image light and the transmitted scene light is provided to the user's eye.
Abstract:
A see-through head-mounted display apparatus with reduced eyeglow is disclosed. Two images of a scene are combined and presented to a user, the combined image including portions of reflected image light and light from a see-through view of an external environment. The images are produced using a partially reflecting mirror and a light control element: a portion of scene light is transmitted through the partially reflecting mirror and is combined with a portion of image light reflected from the partially reflecting mirror, while the light control element blocks escaping portions of the image light and reflected portions of the scene light, allowing incoming scene light to be transmitted from the external environment and reducing eyeglow.
Abstract:
A method of capturing a video of a scene depending on the speed of motion in the scene includes: capturing a video of the scene; determining the relative speed of motion within a first region of the video with respect to the speed of motion within a second region of the video; and causing a capture rate of the first region to be greater than a capture rate of the second region, or causing an exposure time of the first region to be less than an exposure time of the second region.
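An illustrative sketch of the region-dependent parameter choice, assuming mean absolute frame difference as the motion measure; the measure, the rate multiplier, and the exposure values are assumptions, not taken from the abstract:

```python
# Illustrative sketch: score motion per region with frame differencing, then
# give the faster region a higher capture rate and a shorter exposure time.
import numpy as np


def region_motion(prev: np.ndarray, curr: np.ndarray) -> float:
    """Mean absolute frame difference as a crude per-region motion score."""
    return float(np.mean(np.abs(curr.astype(float) - prev.astype(float))))


def choose_capture_params(motion_a: float, motion_b: float):
    """Assign capture rate (Hz) and exposure time (s) per region."""
    base_rate_hz, base_exposure_s = 30.0, 1 / 60
    if motion_a > motion_b:
        return {"region_a": (base_rate_hz * 4, base_exposure_s / 4),
                "region_b": (base_rate_hz, base_exposure_s)}
    return {"region_a": (base_rate_hz, base_exposure_s),
            "region_b": (base_rate_hz * 4, base_exposure_s / 4)}


# Usage: region 0 changes a lot between frames (fast motion), region 1 is static.
rng = np.random.default_rng(1)
prev = rng.random((2, 32, 32))
curr = prev.copy()
curr[0] += rng.normal(0, 0.3, (32, 32))
print(choose_capture_params(region_motion(prev[0], curr[0]),
                            region_motion(prev[1], curr[1])))
```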
Abstract:
Video communication devices and methods are provided. The device has an image display device; a first image capture device that acquires video images depicting a wide field of view within the local environment; a second image capture device that acquires video images depicting a narrow field of view within the wide field of view; a communication system adapted to use a communication network to transmit outgoing video images; and a computer that causes the capture of video images during a communication event and is further adapted to identify a video context for the communication event. The computer uses a scene analysis algorithm to examine concurrent video images from the image capture devices, based upon the identified video context, to determine the extent to which video images from the first image capture device and the second image capture device are to be incorporated into the outgoing video images.
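A hedged sketch of the selection step, assuming hypothetical context labels ("group", "individual") and a crude activity measure over the wide-view frame; the abstract does not specify how the scene analysis or the weighting between the two streams is performed:

```python
# Hedged sketch: decide how the wide and narrow streams contribute to the
# outgoing video from an identified context plus a simple scene-analysis score.
# The context labels, activity measure, and weights are assumptions.
import numpy as np


def activity_outside_narrow_view(wide: np.ndarray, narrow_box: tuple) -> float:
    """Fraction of wide-view frame energy lying outside the narrow view's box."""
    y0, y1, x0, x1 = narrow_box
    total = float(np.sum(wide))
    inside = float(np.sum(wide[y0:y1, x0:x1]))
    return 0.0 if total == 0 else (total - inside) / total


def select_outgoing_mix(context: str, outside_activity: float) -> dict:
    """Return blend weights for the wide and narrow streams."""
    if context == "group" or outside_activity > 0.5:
        return {"wide": 0.8, "narrow": 0.2}
    return {"wide": 0.1, "narrow": 0.9}


# Usage: one wide-view frame, with the narrow view covering a central box.
wide_frame = np.random.default_rng(2).random((48, 64))
print(select_outgoing_mix("individual",
                          activity_outside_narrow_view(wide_frame, (12, 36, 16, 48))))
```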