Abstract:
An image capture user interface (1732) receives an image of an area of a user interface selected by a user and translates the image into operations performable by a computer. The user interface (1732) comprises graphic entities and embedded code. The user places an image capture device (1710), such as a camera pen, on or near a graphic entity of the user interface (1732) and presses a button (1714) on the image capture device (1710) to indicate selection of the graphic entity. In response to the button press, an image is captured that corresponds to the graphic entity selected by the user. The image includes embedded code, which is analyzed to develop an image capture code corresponding to the captured image area. The image capture code is then mapped to a selection code corresponding to the graphic entity selected by the user. The user may then make other selections. The selection codes are processed according to a particular syntax, and a computer operation is performed when a selection code, or combination of selection codes, indicating that an operation is to be performed is received. In other embodiments, the mapping of image capture codes to selection codes and the syntax processing may be performed in accordance with a particular context.
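As an illustration of the capture-to-operation pipeline described above, the following Python sketch maps image capture codes to selection codes and applies a simple two-selection syntax. All names, codes, and the decoder stub are hypothetical assumptions for illustration, not the patented implementation.

    # Hypothetical sketch: image capture code -> selection code -> operation.
    SELECTION_MAP = {          # image capture codes mapped to selection codes
        0x1A2B: "PRINT",       # e.g., a printer icon on the user interface
        0x3C4D: "DOC_42",      # e.g., a document thumbnail
    }
    OPERATIONS = {             # assumed syntax: (verb, object) pairs
        ("PRINT", "DOC_42"): lambda: print("printing document 42"),
    }

    def decode_embedded_code(image):
        # Stand-in decoder: a real system analyzes the embedded code in the
        # captured image area to develop the image capture code.
        return image["code"]

    def on_button_press(image, pending):
        """Handle one camera-pen button press."""
        selection = SELECTION_MAP[decode_embedded_code(image)]
        pending.append(selection)
        operation = OPERATIONS.get(tuple(pending))  # syntax processing
        if operation:                               # complete command received
            operation()
            pending.clear()

    # The user selects the printer icon, then the document thumbnail.
    pending = []
    on_button_press({"code": 0x1A2B}, pending)
    on_button_press({"code": 0x3C4D}, pending)  # -> "printing document 42"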
Abstract:
A mouse (1752) incorporating a camera captures an image of embedded data from a substrate (1732) under the mouse. The embedded data in the image is decoded to determine address or location information coded in the embedded data. Based on the decoded information and other user input signals, such as mouse button selection signals, the computer executes operations. The mouse (1752) also has a display (1730) controlled by the computer for providing visual feedback to a user. The display may generate an image of the substrate area under the mouse, making it appear to the user as though they are looking through the mouse directly onto the substrate. The display may also provide visual feedback about operations occurring in the computer, such as selections or other computer operations.
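A minimal Python sketch of the camera-mouse loop described above, assuming hypothetical names throughout (MouseFrame, decode_location, and the callbacks are illustrative, not the patent's interfaces):

    from dataclasses import dataclass

    @dataclass
    class MouseFrame:
        image: dict        # camera image of the substrate area under the mouse
        left_button: bool  # mouse button selection signal

    def decode_location(image):
        # Stand-in decoder: a real system reads the address or location
        # information coded in the embedded data captured by the camera.
        return image.get("x", 0), image.get("y", 0)

    def process(frame, display, on_select):
        x, y = decode_location(frame.image)
        if frame.left_button:
            on_select(x, y)                  # computer executes an operation
        display(f"substrate at ({x}, {y})")  # 'see-through' visual feedback

    # Trivial stand-ins for the display and the selection handler.
    process(MouseFrame({"x": 3, "y": 7}, True), print,
            lambda x, y: print(f"selected at ({x}, {y})"))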
Abstract:
Described herein is a process for facilitating the recovery of data from an embedded data pattern (53) on a recording medium (22) through the use of an appropriately sized capture window (56) that is randomly positioned within the data pattern (53). The embedded data pattern (53) is composed of a plurality of identical, one-dimensionally or two-dimensionally regularly tiled embedded data blocks (51), which contain sufficient spatial addressing information to permit the logical reconstruction of a complete data block (51) from any set of fragments that collectively provide a full cover for the surface area of any one tile. The data pattern (53) is formed by repeating the data blocks (51) along tiling vectors (Tx, Ty). To this end, the capture window (56) is sized to include a shape that is completely registered with the data pattern (53) and that is capable of tiling the recording medium (22) in accordance with the tiling vectors (Tx, Ty).
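The reconstruction step lends itself to a short Python sketch: because every captured cell carries its spatial address, fragments can be folded onto a single tile modulo the tiling vectors. The block dimensions and the fragment format below are assumptions for illustration, not the patented scheme.

    TX, TY = 16, 16  # assumed data block dimensions along Tx, Ty

    def reconstruct_block(fragments):
        """fragments: iterable of (x, y, value) cells read through a capture
        window placed anywhere over the tiled data pattern."""
        block = {}
        for x, y, value in fragments:
            block[(x % TX, y % TY)] = value  # fold each address onto one tile
        # A full cover of one tile yields a complete data block.
        return block if len(block) == TX * TY else None

    # Two 16x8 fragments from different tiles jointly cover one full tile.
    frags = ([(x, y, 0) for x in range(16) for y in range(8)]
             + [(x, y + 24, 0) for x in range(16) for y in range(8)])
    assert reconstruct_block(frags) is not None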
Abstract:
Microaddressable printers and other types of display systems are provided for rendering two-dimensional images on high-gamma, photosensitive recording media. These systems are microaddressable because they are operated in an overscanned mode: they render images by scanning one or more intensity-modulated scan spots over a high-gamma, photosensitive recording medium in accordance with a scan pattern that causes the spot or spots to superimpose multiple discrete exposures on the recording medium on centers that are separated by a pitch distance significantly less than the effective spatial diameter of the scan spot or spots (e.g., the full-width/half-maximum diameter of a Gaussian scan spot). Overscanned systems have substantially linear addressability responses, so these systems use boundary scans that are intensity modulated in accordance with preselected offset values to position, to sub-pitch precision, the transitions contained in the images they render.
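The overscanning principle can be illustrated numerically. In the Python sketch below, a Gaussian scan spot with an assumed FWHM of 4 units is stepped at a pitch of 1 unit, and reducing the intensity of the final boundary scan shifts the point where the superimposed exposure falls through the medium's threshold by a fraction of the pitch. All numbers are illustrative assumptions, not figures from the abstract.

    import math

    FWHM = 4.0                                     # assumed spot diameter
    SIGMA = FWHM / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    PITCH = 1.0                                    # pitch << spot diameter

    def exposure(x, spots):
        """Total exposure at x from superimposed Gaussian exposures;
        spots is a list of (center, intensity) pairs."""
        return sum(i * math.exp(-(x - c) ** 2 / (2.0 * SIGMA ** 2))
                   for c, i in spots)

    def transition(spots, threshold, lo=0.0, hi=20.0, steps=4000):
        """Find where the exposure first falls below the high-gamma
        medium's threshold, i.e., the rendered transition position."""
        for k in range(steps):
            x = lo + k * (hi - lo) / steps
            if exposure(x, spots) < threshold:
                return x
        return hi

    full = [(k * PITCH, 1.0) for k in range(9)]
    dimmed = full[:-1] + [(8.0 * PITCH, 0.4)]      # modulated boundary scan
    # The dimmed boundary scan moves the transition by a sub-pitch amount.
    print(transition(full, 2.0), transition(dimmed, 2.0))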
Abstract:
This invention provides self-clocking glyph shape codes for encoding digital data (35) in the shapes of glyphs (36) that are suitable for printing on hardcopy recording media. Advantageously, the glyphs (36) are selected so that they tend not to degrade into each other when they are degraded and/or distorted, for example, by being photocopied, transmitted via facsimile, and/or scanned into an electronic document processing system. Moreover, for at least some applications, the glyphs (36) desirably are composed of printed pixel patterns containing nearly the same number of ON pixels and nearly the same number of OFF pixels, such that the code rendered by printing such glyphs (36) on substantially uniformly spaced centers appears to have a generally uniform texture. In the case of codes printed at higher spatial densities, this texture is likely to be perceived as a generally uniform gray tone. Binary image processing and convolution filtering techniques for decoding such codes are also disclosed, but this application focuses on the codes.
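As a toy illustration of such a code, the Python sketch below draws each bit as a diagonal stroke in a fixed-size cell; every glyph then contains the same number of ON pixels and the same number of OFF pixels as every other glyph, so a row of glyphs shows a uniform texture. The cell size and the slash/backslash patterns are assumptions, not the patented glyph shapes.

    CELL = 5  # assumed glyph cell size in pixels

    def glyph(bit):
        """Return a CELL x CELL pixel pattern: '\\' for 0, '/' for 1.
        Every glyph has exactly CELL ON pixels, so all glyphs match."""
        return [[1 if (c == r if bit == 0 else c == CELL - 1 - r) else 0
                 for c in range(CELL)]
                for r in range(CELL)]

    def render(bits):
        """Lay glyphs on uniformly spaced centers along one row."""
        rows = ["" for _ in range(CELL)]
        for b in bits:
            g = glyph(b)
            for r in range(CELL):
                rows[r] += "".join("#" if p else "." for p in g[r]) + " "
        return "\n".join(rows)

    print(render([1, 0, 1, 1, 0]))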
Abstract:
The spatial addressing capacity of a discrete optical image bar (41) is increased by providing means (51) for translating the position of its optical footprint laterally relative to its output image plane as a function of time, thereby enabling the image bar to incoherently superimpose on the image plane (13) a plurality of independent pixel patterns that are laterally offset from one another by a distance less than the centre-to-centre spacing of the pixels of any one of those patterns. In line printers and the like, where a recording medium (13) is exposed to successive pixel patterns as it advances in a cross-line direction with respect to a linear image bar, provision may be made for partially or completely compensating for such cross-line motion. This cross-line compensation (ΔX) may be used independently to cause the image bar to overwrite successive pixel patterns, or it may be combined with the lateral interlacing of the pixel patterns to increase the spatial addressing capacity of the image bar.
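A short Python sketch of the interlacing arithmetic, under assumed values for the pixel pitch, interlace factor, and medium advance per exposure (none of these figures come from the abstract):

    P = 1.0   # assumed centre-to-centre pixel spacing of the image bar
    N = 4     # assumed interlace factor -> N-fold addressing capacity
    V = 0.25  # assumed cross-line advance of the medium per exposure

    for k in range(N):
        lateral = k * P / N  # lateral footprint offset, < pixel spacing
        delta_x = -k * V     # cross-line compensation (ΔX) for medium motion
        print(f"exposure {k}: lateral offset {lateral:.2f}, "
              f"compensation {delta_x:.2f}")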