Abstract:
A computer peripheral that operates either as a computer mouse or as a scanner. The peripheral includes navigation sensors that generate information on motion of the device and an image array that captures successive image frames of an object being scanned. In a mouse mode, the peripheral periodically transfers readings from the navigation sensors to a computing device so that the computing device can track a position of the device. In a scanner mode, in addition to obtaining navigation information from the navigation sensors, the peripheral also captures image frames as it is moved across the object. Operation of the navigation sensors and image array may be synchronized such that an association between the image data and the navigation information may be generated and maintained as image frames are transferred to the computing device, even if some of the frames are dropped in transmission between the scanner-mouse and the computing device.
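As an illustration, a minimal Python sketch of how image data and navigation information might stay associated across dropped frames, assuming both streams are tagged with a shared capture counter (the structures and field names below are hypothetical, not the patent's actual data format):

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class NavReading:
    seq: int          # shared capture counter at the time of the reading
    dx: float         # relative motion reported by the navigation sensor
    dy: float

@dataclass
class ImageFrame:
    seq: int          # same counter, latched when the frame was exposed
    pixels: bytes     # raw frame payload

class FrameNavAssociator:
    """Re-associates frames with navigation data on the host side.

    Because both streams carry the same sequence counter, a frame that
    arrives late (or is dropped entirely) does not shift the pairing of
    the remaining frames and readings.
    """

    def __init__(self) -> None:
        self._nav: Dict[int, NavReading] = {}

    def add_nav(self, reading: NavReading) -> None:
        self._nav[reading.seq] = reading

    def match(self, frame: ImageFrame) -> Optional[NavReading]:
        # Returns the navigation reading captured with this frame,
        # or None if that reading has not arrived (yet).
        return self._nav.get(frame.seq)
```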
Abstract:
Systems, apparatuses, and methods for a handheld image translation device are described herein. The handheld image translation device may include an image capture module to capture surface images of a medium and a positioning module to determine positioning information based at least in part on navigational measurements and/or the captured surface images. A print module of the handheld image translation device may cause print forming substances to be deposited based at least in part on the positioning information. Other embodiments may be described and claimed.
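A minimal sketch of how positioning information might drive deposition; the raster grid, head geometry, and function name are illustrative assumptions, not the device's actual interface:

```python
import numpy as np

def nozzles_to_fire(target: np.ndarray, x: int, y: int, head_width: int) -> np.ndarray:
    """Boolean mask of nozzles to fire at head position (x, y).

    `target` is the desired image rasterized onto the medium's pixel grid
    (1 = deposit, 0 = leave blank); the head is assumed to cover
    `head_width` columns of row y starting at column x, with (x, y)
    supplied by the positioning module.
    """
    row = np.zeros(head_width, dtype=float)
    cols = np.arange(x, x + head_width)
    valid = (cols >= 0) & (cols < target.shape[1])
    if 0 <= y < target.shape[0]:
        row[valid] = target[y, cols[valid]]
    # Fire a nozzle wherever the target image is marked under that nozzle.
    return row > 0.5
```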
Abstract:
A computer peripheral that may operate as a scanner. The scanner captures image frames as it is moved across an object. The image frames are formed into a composite image based on computations in two processes. In a first process, fast track processing determines a coarse position of each of the image frames based on a relative position between each successive image frame and a respective preceding image frame, determined by matching overlapping portions of the image frames. In a second process, fine position adjustments are computed to reduce inconsistencies from determining positions of image frames based on relative positions to multiple prior image frames. The peripheral may also act as a mouse and may be configured with one or more navigation sensors that can be used to reduce the processing time required to match a successive image frame to a preceding image frame.
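A rough sketch of the two-pass idea, assuming translation-only offsets; the matching of overlapping portions that produces the pairwise offsets is not shown, and the least-squares fine pass is one plausible way to reconcile inconsistent relative positions, not necessarily the patented method:

```python
import numpy as np

def coarse_positions(pairwise_offsets):
    """Coarse pass: chain each frame's offset to its predecessor.

    `pairwise_offsets[i]` is the (dx, dy) of frame i+1 relative to frame i,
    e.g. obtained by matching the overlapping portions of the two frames.
    """
    return np.vstack([[0.0, 0.0], np.cumsum(pairwise_offsets, axis=0)])

def fine_adjust(n_frames, constraints):
    """Fine pass: least-squares positions consistent with *all* measured
    relative offsets, not just consecutive ones.

    `constraints` is a list of (i, j, dx, dy): frame j was measured to sit
    at (dx, dy) relative to frame i.  Frame 0 is pinned at the origin.
    """
    rows, rhs = [], []
    for i, j, dx, dy in constraints:
        row = np.zeros(n_frames)
        row[j], row[i] = 1.0, -1.0
        rows.append(row)
        rhs.append((dx, dy))
    # Pin frame 0 at the origin so the system has a unique solution.
    anchor = np.zeros(n_frames)
    anchor[0] = 1.0
    rows.append(anchor)
    rhs.append((0.0, 0.0))
    positions, *_ = np.linalg.lstsq(np.vstack(rows), np.asarray(rhs), rcond=None)
    return positions
```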
Abstract:
The present invention relates to an apparatus for creating a pattern on a workpiece sensitive to radiation, such as a photomask, a display panel, or a micro-optical device. The apparatus may include a source for emitting light flashes, a spatial modulator having modulating elements (pixels) adapted to be illuminated by the radiation, and a projection system creating an image of the modulator on the workpiece. It may further include an electronic data processing and delivery system receiving a digital description of the pattern to be written, converting the pattern to modulator signals, and feeding the signals to the modulator. An electronic control system may be provided to control a trigger signal to compensate for flash-to-flash time jitter in the light source.
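A toy sketch of the trigger-compensation idea; the proportional correction, gain, and interface below are assumptions, since the abstract only states that the trigger signal is controlled to compensate for flash-to-flash time jitter:

```python
class TriggerCompensator:
    """Toy model of trigger-time compensation for flash-to-flash jitter."""

    def __init__(self, nominal_lead: float, gain: float = 0.5) -> None:
        self.lead = nominal_lead   # how early the trigger is issued (s)
        self.gain = gain

    def next_trigger_time(self, intended_flash_time: float) -> float:
        # Issue the trigger ahead of the instant the flash should occur.
        return intended_flash_time - self.lead

    def observe(self, intended_flash_time: float, actual_flash_time: float) -> None:
        # If the flash came late, issue the next trigger a bit earlier,
        # and vice versa (a simple proportional correction).
        error = actual_flash_time - intended_flash_time
        self.lead += self.gain * error
```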
Abstract:
An image processing system and method are disclosed. The image processing system can be configured for use with a mouse scanner system operable to scan a document. The mouse scanner system includes a scanner built into a computer mouse, and the image processing system includes a scanner software application operating on a computer. The scanner includes a positioning system operable to output position-indicating data and an imaging system operable to output captured image data. The data are sent to the scanner software application, where a feedback image is constructed and displayed on a display in real, or near-real, time to allow the user to view which areas have been scanned. The scanner software application also constructs an output image that can be printed, saved, or communicated.
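A minimal sketch of the feedback-image construction, assuming axis-aligned placement of each captured patch at its reported position (the function name and canvas layout are hypothetical):

```python
import numpy as np

def update_feedback_image(canvas: np.ndarray, patch: np.ndarray, x: int, y: int) -> None:
    """Paste one captured patch into the feedback image at the reported
    position (x, y), clipping at the canvas edges.

    Called once per (position, image) pair streamed from the mouse
    scanner; the growing canvas is what the user sees on screen.
    """
    h, w = patch.shape[:2]
    y0, x0 = max(y, 0), max(x, 0)
    y1, x1 = min(y + h, canvas.shape[0]), min(x + w, canvas.shape[1])
    if y1 <= y0 or x1 <= x0:
        return  # patch falls entirely outside the canvas
    canvas[y0:y1, x0:x1] = patch[y0 - y:y1 - y, x0 - x:x1 - x]
```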
Abstract:
A camera can be used to record multiple low-resolution images of an object by shifting a camera lens relative to an image sensor of the camera. Each camera image recorded represents a portion of the object. A composite high-resolution image of the object can be obtained by patching together the camera images using well-known mosaicing, tiling, and/or stitching algorithms.
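A simple sketch of the tiling step, assuming the placement of each camera image in the composite is known from the lens shift; real mosaicing or stitching would also blend the seams:

```python
import numpy as np

def composite_from_tiles(tiles, offsets, out_shape):
    """Assemble a composite image from camera images of object portions.

    `tiles[i]` is a low-resolution grayscale image of one portion of the
    object and `offsets[i]` is its (row, col) placement in the composite,
    assumed known from the lens shift.  Each tile is simply placed; no
    seam blending is performed in this sketch.
    """
    composite = np.zeros(out_shape, dtype=float)
    for tile, (r, c) in zip(tiles, offsets):
        h, w = tile.shape[:2]
        composite[r:r + h, c:c + w] = tile
    return composite
```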
Abstract:
A camera-based document scanning system produces electronic versions of documents, based on a plurality of images of discrete portions of the documents. The system compares each pair of consecutive images and derives motion parameters that indicate the relative motion between each pair of consecutive images. The system utilizes the derived motion parameters to align and merge each image with respect to the previous images, thereby building a single, mosaic image of the document. In the illustrative embodiment, the motion parameters are derived by minimizing a sum-of-squared-differences measure computed on a pixel-by-pixel basis.
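A minimal sketch of the motion-parameter estimation for the translation-only case; the exhaustive integer search below is a simplification of whatever minimization the system actually performs:

```python
import numpy as np

def estimate_translation(prev: np.ndarray, curr: np.ndarray, max_shift: int = 8):
    """Brute-force translation estimate between two consecutive images.

    `prev` and `curr` are grayscale images of the same shape.  Every
    integer shift within +/- max_shift is tried, and the shift with the
    smallest mean squared pixel difference over the overlapping region
    (a normalized form of the SSD, so overlaps of different sizes are
    comparable) is returned as (dy, dx).
    """
    best, best_err = (0, 0), np.inf
    h, w = prev.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            a = prev[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            b = curr[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            err = np.mean((a.astype(float) - b.astype(float)) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```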
Abstract:
An image capture apparatus comprises a computer device, for example a personal computer, and a scanning device, for example a flat-bed scanning device, configured to operate a two-step image capture process followed by data processing for combining first and second images obtained in said two-step image capture process, and a three-step image capture process followed by data processing for combining first, second, and third images obtained in said three-step image capture process, for obtaining a combined full image of a document. Algorithms are disclosed for stitching the first and second images together to obtain combined image data. In both the two-step and three-step image capture operations and the subsequent data processing, images of a document having a size larger than an image capture area of said image capture device can be combined to produce combined image data representing the whole of the document in a fully automated manner, without the need for user intervention in matching images.
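A simplified sketch of stitching two partial scans that share a vertical strip of overlap; the brute-force overlap search stands in for the disclosed algorithms, which the abstract does not detail:

```python
import numpy as np

def stitch_horizontal(left: np.ndarray, right: np.ndarray, max_overlap: int = 200):
    """Stitch two partial scans of a document side by side.

    The overlap width is found by comparing the right edge of `left`
    against the left edge of `right` and picking the width with the
    smallest mean squared difference; the non-overlapping part of
    `right` is then appended to `left`.
    """
    h = min(left.shape[0], right.shape[0])
    best_w, best_err = 1, np.inf
    for w in range(1, min(max_overlap, left.shape[1], right.shape[1]) + 1):
        a = left[:h, -w:].astype(float)
        b = right[:h, :w].astype(float)
        err = np.mean((a - b) ** 2)
        if err < best_err:
            best_w, best_err = w, err
    # Keep the full left image and append the non-overlapping part of the right one.
    return np.hstack([left[:h], right[:h, best_w:]])
```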
Abstract:
In a first stage, a merging position relationship between two images (a rotation angle and/or the presence or absence of mirror-image flipping) is identified by using reduced images, and a rough overlapping region is detected. In a second stage, an exact overlapping position, an inclination, and the like are detected. In a third stage, the two images are merged by using the results of the first and second stages.
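A compact sketch of the staged approach, assuming same-sized grayscale images and a translation-only relationship; the rotation and mirror-flip checks of the first stage, and the inclination estimate of the second, are omitted:

```python
import numpy as np

def _best_shift(a, b, shifts):
    """Shift of b relative to a with the smallest mean squared difference."""
    best, best_err = (0, 0), np.inf
    h, w = a.shape
    for dy, dx in shifts:
        aa = a[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
        bb = b[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
        if aa.size == 0:
            continue
        err = np.mean((aa.astype(float) - bb.astype(float)) ** 2)
        if err < best_err:
            best, best_err = (dy, dx), err
    return best

def merge_staged(img_a, img_b, factor=4, coarse_range=16, fine_range=2):
    # Stage 1: rough offset on reduced (subsampled) images.
    small_a, small_b = img_a[::factor, ::factor], img_b[::factor, ::factor]
    coarse = _best_shift(small_a, small_b,
                         [(dy, dx) for dy in range(-coarse_range, coarse_range + 1)
                                   for dx in range(-coarse_range, coarse_range + 1)])
    # Stage 2: refine at full resolution in a small window around the scaled estimate.
    cy, cx = coarse[0] * factor, coarse[1] * factor
    dy, dx = _best_shift(img_a, img_b,
                         [(cy + ey, cx + ex) for ey in range(-fine_range, fine_range + 1)
                                             for ex in range(-fine_range, fine_range + 1)])
    # Stage 3: merge both images onto a canvas large enough to hold them.
    H = max(img_a.shape[0], dy + img_b.shape[0]) - min(0, dy)
    W = max(img_a.shape[1], dx + img_b.shape[1]) - min(0, dx)
    canvas = np.zeros((H, W), dtype=img_a.dtype)
    oy, ox = -min(0, dy), -min(0, dx)
    canvas[oy:oy + img_a.shape[0], ox:ox + img_a.shape[1]] = img_a
    canvas[oy + dy:oy + dy + img_b.shape[0], ox + dx:ox + dx + img_b.shape[1]] = img_b
    return canvas
```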
Abstract:
A pixel position specifying method in an image forming device measures positions of exposure beams of respective exposure heads and specifies pixel positions at junctures of the exposure heads. The method includes: using a beam position detecting mechanism to measure the positions of the exposure beams; detecting a first exposure beam position on an image receiving surface provided at the beam position detecting mechanism by turning on a first connecting pixel near a juncture of a first exposure head; detecting a second exposure beam position on the image receiving surface by turning on a second connecting pixel near a juncture of a second exposure head and moving the beam position detecting mechanism in the Y axis direction; and specifying the connecting pixel positions from the amount of movement of the beam position detecting mechanism in the Y axis direction and the positions of the exposure beams of the exposure heads.
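A heavily simplified arithmetic sketch of the final step, assuming a single-axis geometry and a sign convention that the abstract does not specify:

```python
def juncture_offset(first_beam_pos: float,
                    second_beam_pos: float,
                    stage_move_y: float) -> float:
    """Offset, along Y, between the connecting pixels of two exposure heads.

    The detecting mechanism reports beam positions in its own (moving)
    frame, so the second reading is mapped back into the frame of the
    first reading by adding the amount the mechanism was moved in Y.
    """
    return (second_beam_pos + stage_move_y) - first_beam_pos
```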