Abstract:
A raw image consists of a plurality of pixels in a plurality of color planes giving a color image. The raw image is split into at least a low frequency image and a high frequency image. At least the low frequency image is color corrected to produce a color corrected low frequency image. The color corrected low frequency image is combined with at least the high frequency image to give a final image which is of comparable resolution to the raw image but is color corrected.
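A minimal sketch of this decomposition, assuming a Gaussian blur as the low-pass filter and a 3x3 matrix as the color correction (both are illustrative choices, not taken from the abstract):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def color_correct_low_frequency(raw, correction_matrix, sigma=5.0):
    # Split each color plane into a low-frequency (blurred) image and a
    # high-frequency residual.
    low = np.stack([gaussian_filter(raw[..., c], sigma)
                    for c in range(raw.shape[-1])], axis=-1)
    high = raw - low

    # Color correct only the low-frequency image (a 3x3 matrix is assumed
    # here purely for illustration).
    corrected_low = low @ correction_matrix.T

    # Recombine: fine detail comes from the high-frequency image, corrected
    # color from the low-frequency image, giving comparable resolution.
    return np.clip(corrected_low + high, 0.0, 1.0)
```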
Abstract:
A method of reconstructing an image captured as a stream of image data, for example as input received from a linear sensor in unconstrained scanning, comprises reconstructing the image in the form of a plurality of tiles. Each tile comprises a pixel grid of predetermined dimension representing a specific spatial region of the image. The tiles tessellate a rectilinear image space. Tiles can be created when required and compressed when no longer active, thus minimizing memory requirements. Devices utilizing this method are provided. The method is especially appropriate for use in an unconstrained hand scanner, but can also be applied to panoramic capture with a digital camera.
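One way to picture the tile management described here, as a minimal sketch; the tile size, the zlib compression, and the class names are illustrative assumptions rather than details from the abstract:

```python
import zlib
import numpy as np

TILE = 64  # illustrative tile dimension in pixels

class TileStore:
    """Tiles tessellate a rectilinear image space; each tile is created on
    demand and compressed once it is no longer active."""
    def __init__(self):
        self.active = {}     # (tx, ty) -> ndarray
        self.archived = {}   # (tx, ty) -> compressed bytes

    def tile_for(self, x, y):
        key = (x // TILE, y // TILE)
        if key in self.archived:              # reactivate a compressed tile
            buf = zlib.decompress(self.archived.pop(key))
            self.active[key] = np.frombuffer(buf, np.uint8).reshape(TILE, TILE).copy()
        elif key not in self.active:          # create a tile when required
            self.active[key] = np.zeros((TILE, TILE), np.uint8)
        return self.active[key]

    def write(self, x, y, value):
        self.tile_for(x, y)[y % TILE, x % TILE] = value

    def archive(self, key):
        # Compress a tile that is no longer active to minimize memory use.
        self.archived[key] = zlib.compress(self.active.pop(key).tobytes())
```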
Abstract:
The user assistance system provides a system and method for capturing images of a document and the interaction of a primary user with the document in an interaction session. Briefly described, one embodiment comprises an image capture means adapted to capture an initial image of the document and at least one subsequent, additional image of the document during an interaction session, the initial image being mapped to a known co-ordinate system; an interaction capture means for capturing the interaction of a user with the document, to determine at least one co-ordinate of a pointer used for the interaction relative to the same co-ordinate system defined for the initial captured image; and a processing means for determining an appropriate transform that maps the additional image onto the initial image.
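As a sketch of the transform-estimation step, features can be matched between the initial and additional images and a homography fitted to them; the use of ORB features and OpenCV here is an assumption for illustration only:

```python
import cv2
import numpy as np

def map_additional_to_initial(initial_gray, additional_gray):
    # Detect and match features between the two captures.
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(initial_gray, None)
    k2, d2 = orb.detectAndCompute(additional_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Homography mapping additional-image co-ordinates onto the initial
    # image's co-ordinate system; pointer co-ordinates can be mapped the
    # same way.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
    return H
```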
Abstract:
The color of a color image having at least two sets of image pixels with color and luminance values is corrected by generating, for each color value, a low spatial frequency monochrome image including a set of smoothed image pixels. A color correction function is applied to selected sub-sets of the image pixels to generate corresponding sub-sets of corrected, smoothed image pixels. The correction includes a contribution from smoothed image pixels in one or more of the other sets of smoothed image pixels. The sub-sets of corrected smoothed image pixels are completed by interpolation or extrapolation. Each set of corrected smoothed image pixels is used to generate a corrected color image by generating one or more high spatial frequency luminance images that are combined with each of the corrected low spatial frequency monochrome images to form a full color image.
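A rough sketch of the "correct a sub-set, then interpolate" idea, assuming the correction is applied on a coarse grid of smoothed pixels and bilinear interpolation fills in the rest; the grid spacing, blur width, and function names are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def corrected_smooth_plane(plane, other_planes, correct_fn, step=8, sigma=4.0):
    # Low spatial frequency (smoothed) version of this color plane and of
    # the other planes that contribute to the correction.
    smooth = gaussian_filter(plane, sigma)
    smooth_others = [gaussian_filter(p, sigma) for p in other_planes]

    # Apply the color correction function only on a coarse sub-set of
    # smoothed pixels; correct_fn is an assumed callable taking this plane
    # plus the corresponding samples from the other planes.
    coarse = correct_fn(smooth[::step, ::step],
                        [s[::step, ::step] for s in smooth_others])

    # Complete the corrected smoothed image by interpolation.
    scale = (smooth.shape[0] / coarse.shape[0], smooth.shape[1] / coarse.shape[1])
    return zoom(coarse, scale, order=1)[:smooth.shape[0], :smooth.shape[1]]
```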
Abstract:
The present invention relates to a method of reconstructing an image from data captured by a sensor, and is particularly applicable to sequential capture during relative movement between a scanning device and the original image. The scanning device comprises navigation means for detecting the position of the sensor relative to the original image. A pixel grid for the reconstructed image is determined, and correspondence between the pixel grid and the sensor data is identified using the sensor position detection data. The intensity of each pixel is determined from the sensor data selected as relevant to the pixel under consideration.
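A condensed sketch of this accumulation, assuming each sensor reading arrives with a navigation-derived (x, y) position and that a pixel's intensity is the simple average of the readings falling on it (the rounding and averaging scheme is an assumption):

```python
import numpy as np

def reconstruct(readings, width, height):
    """readings: iterable of (x, y, intensity), with x and y already expressed
    in the reconstructed pixel grid's co-ordinates (from the navigation data)."""
    acc = np.zeros((height, width), np.float64)
    cnt = np.zeros((height, width), np.int64)
    for x, y, value in readings:
        px, py = int(round(x)), int(round(y))   # correspondence with the pixel grid
        if 0 <= px < width and 0 <= py < height:
            acc[py, px] += value                # accumulate data relevant to this pixel
            cnt[py, px] += 1
    return acc / np.maximum(cnt, 1)             # pixel intensity = mean of its readings
```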
Abstract:
Digital and optical zoom are combined over a number of discrete digital zoom levels. Digital interpolation is provided during transition periods between the discrete digital zoom levels such that the total apparent zoom level appears continuous and uninterrupted.
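A minimal sketch of how the total zoom factor could be split so that it appears continuous, assuming an illustrative optical range and set of discrete digital levels; the timing of the transition periods is omitted and only the static factor split is shown:

```python
DIGITAL_LEVELS = (4.0, 2.0, 1.0)      # illustrative discrete digital zoom levels
OPTICAL_MIN, OPTICAL_MAX = 1.0, 3.0   # illustrative optical zoom range

def split_zoom(total):
    """Split a requested total zoom into (optical, digital, interpolation)
    factors whose product equals `total`; the fractional interpolation factor
    is what keeps the apparent zoom continuous between discrete levels."""
    for digital in DIGITAL_LEVELS:                   # try the highest level first
        if total / digital >= OPTICAL_MIN:
            optical = min(total / digital, OPTICAL_MAX)
            return optical, digital, total / (optical * digital)
    # Requested zoom below the optical minimum: cover the remainder digitally.
    return OPTICAL_MIN, DIGITAL_LEVELS[-1], total / OPTICAL_MIN
```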
Abstract:
The present invention provides an apparatus and method which facilitates the use of a printed or scribed document as an interface with a computer. The apparatus in one aspect comprises: a printed or scribed document bearing specialized calibration marks, the document being positioned on a work surface; a camera focused on the document for generating video signals representing the document in electronic form; and a processor linked to the camera for processing an image captured by the camera and configured to identify the calibration marks of the document in the captured image and then determine, from the location of the calibration marks in the image, a transformation between co-ordinates of features in the image and corresponding co-ordinates of features in the document that compensates for the freely variable positioning of the document on the work surface.
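As an illustration of that final step, a perspective transform can be fitted once the calibration marks have been located; the use of four marks and cv2.getPerspectiveTransform is an assumption made for this sketch:

```python
import cv2
import numpy as np

def image_to_document_transform(marks_in_image, marks_in_document):
    """Each argument is four (x, y) points: the calibration marks as found in
    the captured image and their known positions on the document."""
    src = np.float32(marks_in_image)
    dst = np.float32(marks_in_document)
    return cv2.getPerspectiveTransform(src, dst)   # 3x3 homography

def to_document_coords(H, x, y):
    # Map an image-space feature into document space, compensating for how
    # the document happens to lie on the work surface.
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```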
Abstract:
A framing aid for a handheld document capture device such as a digital camera, comprising two pattern generators (10, 20) generating convergent patterns (14, 24) that are in register on a target object plane. Triangulation between the two pattern generators (10, 20), using superimposed or complementary patterns (14, 24), ensures that the handheld device (1) is correctly arranged at a predetermined range and orientation above a document to be captured, such that the document is accurately framed within the field of view of the capture device (1).
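A small sketch of the underlying triangulation, assuming a symmetric geometry in which the two generators sit a baseline apart and each beam is tilted inward by the same convergence angle (both parameters are illustrative):

```python
import math

def registration_range(baseline_mm, convergence_deg):
    """Range at which two inward-tilted patterns come into register.
    Each generator is tilted toward the centre line by the convergence angle,
    so the patterns only coincide at this distance; seeing them in register
    confirms the predetermined range and hence the framing."""
    return (baseline_mm / 2.0) / math.tan(math.radians(convergence_deg))
```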
Abstract:
An image capture apparatus comprises a computer device, for example a personal computer, and a scanning device, for example a flat bed scanning device, configured to operate a two-step image capture process followed by data processing for combining the first and second images obtained in said two-step image capture process, and a three-step image capture process followed by data processing for combining the first, second and third images obtained in said three-step image capture process, for obtaining a combined full image of a document. Algorithms are disclosed for stitching the first and second images together to obtain combined image data. In both the two-step and three-step image capture operations and the subsequent data processing, images of a document having a size larger than the image capture area of said image capture device can be combined to produce combined image data representing the whole of the document in a fully automated manner, without the need for user intervention in matching the images.
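A bare-bones sketch of stitching two overlapping scans, assuming grayscale input, pure translation between the captures, and at least a small overlap strip; the template-matching approach shown here is an illustrative stand-in, not the disclosed algorithm:

```python
import cv2
import numpy as np

def stitch_two_scans(first, second, strip=100):
    """Stitch two grayscale scans of the same document, assuming the second
    scan overlaps the lower part of the first by at least `strip` rows and is
    only translated relative to it (rotation handling omitted)."""
    # Use the top strip of the second scan as a template and locate it within
    # the first scan to recover the overlap offset.
    template = second[:strip, :]
    result = cv2.matchTemplate(first, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (ox, oy) = cv2.minMaxLoc(result)

    # Paste the second scan onto a canvas at the recovered offset.
    h = max(first.shape[0], oy + second.shape[0])
    w = max(first.shape[1], ox + second.shape[1])
    canvas = np.zeros((h, w), first.dtype)
    canvas[:first.shape[0], :first.shape[1]] = first
    canvas[oy:oy + second.shape[0], ox:ox + second.shape[1]] = second
    return canvas
```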
Abstract:
An image processor receives color signals representing a color or black-and-white image, typically containing text and non-text areas. The image is processed in a sliding window, or swath, which moves progressively over the virtual image. A spatial filter is applied to sharpen the image, which is then classified into text and non-text regions. The data from the text regions is subjected to a black text enhancement process in which the color signal from a single channel (here the green channel) is thresholded against two thresholds, T1 and T2. The lower (darker) threshold T2 identifies pixels to be set to black, whereas the threshold T1 identifies “support” pixels used in connected-component analysis. Having defined a connected component using both T1 and T2 pixels, the color statistics of the pixels making up the component are analyzed to determine whether the component should be rendered black. If so, the image data is enhanced by snapping the T2 pixels to black, and snapping a halo of pixels around the black text component to white.
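A simplified sketch of the two-threshold step, assuming 8-bit RGB input and scipy's connected-component labelling; the threshold values, the halo width, and the reduced colour-statistics test are illustrative placeholders:

```python
import numpy as np
from scipy import ndimage

def enhance_black_text(rgb, t1=120, t2=60, halo=1):
    green = rgb[..., 1]
    seed = green < t2       # dark T2 pixels: candidates to snap to black
    support = green < t1    # lighter T1 "support" pixels for connectivity

    out = rgb.copy()
    labels, n = ndimage.label(support)           # components from T1 + T2 pixels
    for i in range(1, n + 1):
        comp = labels == i
        if not (comp & seed).any():
            continue                              # no truly dark pixels: skip
        # Placeholder for the colour-statistics test deciding whether the
        # component should be rendered black.
        if rgb[comp].std() > 60:
            continue
        out[comp & seed] = 0                      # snap T2 pixels to black
        ring = ndimage.binary_dilation(comp, iterations=halo) & ~comp
        out[ring] = 255                           # snap the surrounding halo to white
    return out
```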