Abstract:
Virtual controllers for visual displays are described. In one implementation, a camera captures an image of hands against a background. The image is segmented into hand areas and background areas. Various hand and finger gestures isolate parts of the background into independent areas, which are then assigned control parameters for manipulating the visual display. Multiple control parameters can be associated with attributes of multiple independent areas formed by two hands, for advanced control including simultaneous functions of clicking, selecting, executing, horizontal movement, vertical movement, scrolling, dragging, rotational movement, zooming, maximizing, minimizing, executing file functions, and executing menu choices.
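A minimal sketch of the segmentation idea only, not the claimed implementation: assuming a grayscale frame in which hand pixels are brighter than the background and that SciPy is available, the code below thresholds the frame into hand and background areas, keeps only background regions fully enclosed by the hand (the independent areas), and exposes their attributes (centroid, size) so hypothetical control parameters could be attached to them.

import numpy as np
from scipy import ndimage

def independent_areas(frame, hand_threshold=0.5):
    """Return attributes of background regions enclosed by hand areas."""
    hand_mask = frame > hand_threshold                 # assumption: bright pixels are hand
    labels, n = ndimage.label(~hand_mask)              # connected background regions
    edge = np.concatenate([labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]])
    open_background = set(np.unique(edge))             # regions touching the image border
    areas = []
    for region in range(1, n + 1):
        if region in open_background:
            continue                                   # not enclosed by a hand
        ys, xs = np.nonzero(labels == region)
        areas.append({"centroid": (xs.mean(), ys.mean()), "size": int(xs.size)})
    return areas

# Illustrative mapping only: centroid -> cursor position, size change -> zoom,
# appearance/disappearance of an enclosed area -> click.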
Abstract:
The interactive and shared surface technique described herein employs hardware that can project on any surface, capture color video of that surface, and obtain depth information on and above the surface while preventing visual feedback (also known as video feedback, video echo, or visual echo). The technique provides N-way sharing of a surface using video compositing. It also provides for automatic calibration of hardware components, including calibration of any projector, RGB camera, depth camera, and any microphones employed by the technique. The technique provides object manipulation with physical, visual, audio, and hover gestures, and interaction between digital objects displayed on the surface and physical objects placed on or above the surface. It can capture and scan the surface in a manner that reflects exactly what the user sees, including both local and remote objects, drawings, annotations, hands, and so forth.
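One piece of this, the N-way sharing by video compositing, can be illustrated with a minimal sketch. Assumptions: each site contributes an HxWx3 frame of its surface with values in [0, 1], and the frame a site is currently projecting is known so it can be subtracted to suppress visual echo; this is an illustration, not the described hardware pipeline.

import numpy as np

def composite_surfaces(frames, local_projection):
    """frames: list of HxWx3 arrays from all sites; local_projection: what this
    site is already projecting (removed so it is not re-projected as echo)."""
    combined = np.maximum.reduce(frames)               # union of everyone's marks
    return np.clip(combined - local_projection, 0.0, 1.0)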
Abstract:
Effects of undesired infrared light are reduced in an imaging system using an infrared light source. The desired infrared light source is activated and a first set of image data is captured during a first image capture interval. The desired infrared light source is then deactivated, and a second set of image data is captured during a second image capture interval. A composite set of image data is then generated by subtracting the corresponding second values in the second set of image data from the first values in the first set of image data. The composite set of image data thus reflects imaging in which all infrared signals are collected, including both signals resulting from the IR source and other IR signals, minus imaging in which no signals result from the IR source, leaving image data that includes signals resulting only from the IR source.
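The differencing step lends itself to a short sketch. Assuming two back-to-back frames from the same sensor as 8-bit arrays, one captured with the controlled IR source on and one with it off, the subtraction below cancels ambient IR and keeps only light contributed by the source; the function name and capture details are illustrative.

import numpy as np

def ir_difference(frame_ir_on, frame_ir_off):
    on = frame_ir_on.astype(np.int32)                  # widen to avoid unsigned underflow
    off = frame_ir_off.astype(np.int32)
    return np.clip(on - off, 0, 255).astype(np.uint8)  # signal from the IR source only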
Abstract:
A unique system and method is provided that facilitates pixel-accurate targeting with respect to multi-touch sensitive displays when selecting or viewing content with a cursor. In particular, the system and method can track dual inputs from a primary finger and a secondary finger, for example. The primary finger can control movement of the cursor while the secondary finger can adjust the control-display ratio of the screen. As a result, cursor steering and selection of an assistance mode can be performed essentially concurrently. In addition, the system and method can stabilize the cursor position at the top middle point of a user's finger in order to mitigate clicking errors when making a selection.
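A minimal sketch of the dual-finger idea under assumed parameter names: the primary finger's motion moves the cursor, and a value derived from the secondary finger raises the control-display (CD) ratio so the same finger motion yields finer cursor motion for pixel-accurate targeting.

def update_cursor(cursor, primary_delta, secondary_spread):
    """cursor, primary_delta: (x, y) tuples; secondary_spread in [0, 1],
    e.g. normalized distance between the two fingers (an assumption)."""
    cd_ratio = 1.0 + 9.0 * secondary_spread            # larger ratio -> slower, finer cursor
    dx, dy = primary_delta
    return (cursor[0] + dx / cd_ratio, cursor[1] + dy / cd_ratio)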
Abstract:
A unique system and method is provided that facilitates extending input/output capabilities for resource-deficient mobile devices and interactions between multiple heterogeneous devices. The system and method involve an interactive surface to which the desired mobile devices can be connected. The interactive surface can provide an enhanced display space and customization controls for mobile devices that lack adequate displays and input capabilities. In addition, the interactive surface can be employed to permit communication and interaction between multiple mobile devices that otherwise are unable to interact with each other. When connected to the interactive surface, the mobile devices can share information, view information from their respective devices, and store information to the interactive surface. Furthermore, upon re-connection to the surface, the interactive surface can resume the activity states of mobile devices that were previously communicating.
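The resumption behavior can be sketched with invented names: the surface keeps the last activity state for each pair of devices and hands it back when that pair reconnects, rather than starting the interaction over.

class InteractiveSurface:
    def __init__(self):
        self._sessions = {}                            # frozenset of device ids -> saved state

    def connect(self, device_a, device_b):
        return self._sessions.get(frozenset((device_a, device_b)), {"status": "new"})

    def disconnect(self, device_a, device_b, state):
        self._sessions[frozenset((device_a, device_b))] = state

surface = InteractiveSurface()
surface.disconnect("phoneA", "phoneB", {"status": "file transfer", "progress": 0.4})
print(surface.connect("phoneA", "phoneB"))             # resumes the saved state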
Abstract:
A system described herein includes an acquirer component that acquires an electronic document comprising text in a first language, wherein the acquirer component acquires the electronic document based at least in part upon a physical object comprising the text contacting or becoming proximate to an interactive display of a surface computing device. The system also includes a language selector component that receives an indication of a second language from a user of the surface computing device and selects the second language. A translator component translates the text in the electronic document from the first language to the second language, and a formatter component formats the electronic document for display to the user on the interactive display of the surface computing device, wherein the electronic document comprises the text in the second language.
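The component pipeline can be sketched with stand-ins: a real system would image the physical document, run OCR, and call a translation service, whereas here the acquirer is handed text directly and the translator is a toy lookup table; all names are illustrative.

TOY_DICTIONARY = {("hola", "en"): "hello", ("mundo", "en"): "world"}

def acquire_document(scanned_text):
    """Acquirer component stand-in: assume the surface camera already yielded text."""
    return scanned_text.split()

def translate(tokens, target_language):
    """Translator component stand-in."""
    return [TOY_DICTIONARY.get((t.lower(), target_language), t) for t in tokens]

def format_for_display(tokens):
    """Formatter component stand-in: lay the translated text out for the display."""
    return " ".join(tokens)

print(format_for_display(translate(acquire_document("Hola mundo"), "en")))  # hello world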
Abstract:
The claimed subject matter relates to architectures that can provide rich features associated with information-based collaborative searches by leveraging a multi-touch surface computing-based display. In particular, a first architecture can include a multi-touch surface configured to support interactivity with multiple collocated users simultaneously. Based upon such interaction, the first architecture can transmit to a search engine a multiuser surface identifier and a set of search terms input by collocated users who share a collaborative task. In response, the first architecture can receive a set of search results from a second architecture and present those results on the multi-touch surface in a variety of ways. The second architecture can relate to a search engine that can process the search terms to generate corresponding search results and also process information associated with the multiuser surface identifier.
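A minimal sketch of what the first architecture might transmit, with an assumed payload shape rather than the claimed protocol: the surface identifier plus the union of the collocated users' terms.

import json

def build_search_request(surface_id, terms_by_user):
    merged_terms = sorted({t for terms in terms_by_user.values() for t in terms})
    return json.dumps({"surface_id": surface_id, "terms": merged_terms})

print(build_search_request("surface-42",
                           {"alice": ["hotels", "paris"], "bob": ["paris", "museums"]}))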
Abstract:
Architecture for implementing a perceptual user interface. The architecture comprises alternative modalities for controlling computer application programs and manipulating on-screen objects through hand gestures or a combination of hand gestures and verbal commands. The perceptual user interface system includes a tracking component that detects object characteristics of at least one of a plurality of objects within a scene and tracks the respective object. Detection of object characteristics is based at least in part upon image comparison of a plurality of images relative to a coarse mapping of the images. A seeding component iteratively seeds the tracking component with object hypotheses based upon the presence of the object characteristics and the image comparison. A filtering component selectively removes the tracked object from the object hypotheses and/or at least one object hypothesis from the set of object hypotheses based upon predetermined removal criteria.
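A minimal sketch of the seed-and-filter loop under stated assumptions: coarse frame differencing proposes object hypotheses, each hypothesis carries simple bookkeeping, and hypotheses that stand still too long are pruned as one possible removal criterion; the data structures and thresholds are invented for illustration.

import numpy as np

def seed_hypotheses(prev_frame, frame, thresh=30, max_seeds=10):
    """Propose (y, x) hypotheses where the coarse inter-frame difference is large."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    ys, xs = np.nonzero(diff > thresh)
    return [{"pos": (int(y), int(x)), "still_frames": 0}
            for y, x in list(zip(ys, xs))[:max_seeds]]

def filter_hypotheses(hypotheses, max_still_frames=15):
    """Remove hypotheses that have not moved for too many frames."""
    return [h for h in hypotheses if h["still_frames"] < max_still_frames]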