Abstract:
Disclosed is a unique system and method that facilitates gesture-based interaction with a user interface. The system involves an object sensing configuration that includes a sensing plane, oriented vertically or horizontally, located between at least two imaging components on one side and a user on the other. The imaging components acquire input images of a view of, and through, the sensing plane. The images can include objects that are on the sensing plane and/or in the background scene, as well as the user interacting with the sensing plane. By processing the input images, a single output image can be returned that shows the user only the objects in contact with the plane. Thus, objects located at a particular depth can be readily determined; any objects located beyond the plane can be “removed” so that they do not appear in the output image.
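One plausible way to realize this "show only what touches the plane" step is to warp both camera views into the sensing plane's coordinate frame and keep only the pixels where the two warped views agree, since content on the plane aligns under the plane-induced homography while content beyond it exhibits parallax. The sketch below assumes precomputed homographies (H_left, H_right) and an illustrative difference threshold; these names are not taken from the abstract.

```python
# Minimal sketch, assuming two camera views and plane-to-image homographies.
import cv2
import numpy as np

def touch_image(img_left, img_right, H_left, H_right, size, diff_thresh=12):
    """Return a binary image of content that lies on the sensing plane.

    Pixels of objects in contact with the plane project to the same plane
    coordinates in both views, so the warped images agree; objects beyond
    the plane show parallax and are suppressed.
    """
    warp_l = cv2.warpPerspective(img_left, H_left, size)
    warp_r = cv2.warpPerspective(img_right, H_right, size)
    gray_l = cv2.cvtColor(warp_l, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(warp_r, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_l, gray_r)
    # Low difference => consistent across both views => on (or very near) the plane.
    _, on_plane = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY_INV)
    return on_plane
```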
Abstract:
A new mobile electronic device, referred to as a soap, may be used to control electronic devices, external or internal to the soap, in an intuitive, convenient, and comfortable manner. For example, a soap may serve as an alternative to input devices such as a mouse. A soap device may include a core and a hull that at least partially encloses the core. The core includes a tracking component capable of tracking movement relative to the hull. The soap input device also includes a transmission component configured to transmit a signal from the tracking component to a computing device, where it may control the position of a pointer and the use of a selector on a monitor. The hull may be soft and flexible, and the core may be freely rotatable about at least one axis. The core has a shape such that tangentially applied pressure rotates the core relative to the hull. A user may therefore control an electronic device simply by rolling and manipulating the soap.
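On the host side, the transmitted tracking signal could be consumed much like relative mouse motion. The sketch below is illustrative only: the packet contents (dx, dy, button state) and the SoapPointer class are assumptions, since the abstract does not specify a wire protocol.

```python
# Minimal sketch of a host interpreting packets from a soap-like device.
class SoapPointer:
    def __init__(self, screen_w, screen_h):
        self.x, self.y = screen_w // 2, screen_h // 2
        self.screen_w, self.screen_h = screen_w, screen_h

    def apply(self, dx, dy, button_pressed):
        # Relative core-versus-hull motion moves the pointer, as with a mouse.
        self.x = min(max(self.x + dx, 0), self.screen_w - 1)
        self.y = min(max(self.y + dy, 0), self.screen_h - 1)
        if button_pressed:
            self.select(self.x, self.y)

    def select(self, x, y):
        # Placeholder for the selector action (e.g., a click) on the monitor.
        print(f"select at ({x}, {y})")
```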
Abstract:
Virtual controllers for visual displays are described. In one implementation, a camera captures an image of hands against a background. The image is segmented into hand areas and background areas. Various hand and finger gestures isolate parts of the background into independent areas, which are then assigned control parameters for manipulating the visual display. Multiple control parameters can be associated with attributes of multiple independent areas formed by two hands, for advanced control including simultaneous functions of clicking, selecting, executing, horizontal movement, vertical movement, scrolling, dragging, rotational movement, zooming, maximizing, minimizing, executing file functions, and executing menu choices.
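The "independent areas" can be thought of as background regions that become fully enclosed once the hands close around them (for example, a thumb-and-forefinger ring). A minimal sketch, assuming a binary hand mask is already available, is to run connected-component analysis on the non-hand pixels and keep the regions that do not touch the image border; the border heuristic and attribute names are illustrative assumptions.

```python
# Minimal sketch: find background areas isolated by hand/finger gestures.
import cv2
import numpy as np

def independent_areas(hand_mask):
    """Return centroids and sizes of background regions fully enclosed by hands."""
    background = cv2.bitwise_not(hand_mask)      # hand pixels assumed to be 255
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(background, 8)
    h, w = hand_mask.shape
    areas = []
    for label in range(1, num):                  # label 0 corresponds to the hand pixels
        x, y, bw, bh, size = stats[label]
        touches_border = x == 0 or y == 0 or x + bw == w or y + bh == h
        if not touches_border:                   # enclosed by the hand => independent area
            areas.append({"centroid": tuple(centroids[label]), "size": int(size)})
    return areas
```

Attributes such as the centroid and size of each detected area could then be mapped to the control parameters named in the abstract (movement, scrolling, zooming, and so on).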
Abstract:
In an interactive display system, a projected image on a display surface and a vision system used to detect objects touching the display surface are aligned, and optical distortion of the vision system is compensated. Also, calibration procedures correct for non-uniformity of infrared (IR) illumination of the display surface by IR light sources and establish a touch threshold for one or more users so that the interactive display system correctly responds to each user touching the display surface. A movable IR camera filter enables automation of the alignment of the projected image and the image of the display surface and helps in detecting problems in either the projector or the vision system.
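Correcting for non-uniform IR illumination and applying a per-user touch threshold could, for instance, take the form of a flat-field correction followed by a simple comparison. The reference frames, epsilon, and threshold values in the sketch below are illustrative assumptions rather than details from the abstract.

```python
# Minimal sketch: flat-field IR correction plus a per-user touch threshold.
import numpy as np

def normalize_ir(frame, dark_ref, bright_ref, eps=1.0):
    """Scale each pixel so the IR response is uniform across the display surface."""
    return (frame.astype(np.float32) - dark_ref) / (bright_ref - dark_ref + eps)

def touch_mask(normalized_frame, user_threshold=0.6):
    """Report a touch wherever corrected reflectance exceeds the user's threshold."""
    return normalized_frame > user_threshold
```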
Abstract:
A mobile device connection system is provided. The system includes an input medium to detect a device position or location. An analysis component determines a device type and establishes a connection with the device. The input medium can include vision systems to detect device presence and location where connections are established via wireless technologies.
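The described flow could be sketched as follows, where the vision-based input medium reports a detected device's location, an analysis component infers its type, and a wireless connection is established. All class, field, and method names here are illustrative assumptions.

```python
# Minimal sketch of the detect -> analyze -> connect flow.
from dataclasses import dataclass

@dataclass
class DetectedDevice:
    x: float
    y: float
    device_type: str  # e.g., "phone" or "tablet" (illustrative)

class ConnectionManager:
    def on_device_detected(self, device: DetectedDevice) -> None:
        transport = self.analyze(device)
        self.connect(device, transport)

    def analyze(self, device: DetectedDevice) -> str:
        # Placeholder for type-specific capability lookup.
        return {"phone": "bluetooth", "tablet": "wifi-direct"}.get(device.device_type, "bluetooth")

    def connect(self, device: DetectedDevice, transport: str) -> None:
        print(f"Connecting to {device.device_type} at ({device.x}, {device.y}) via {transport}")
```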
Abstract:
An interactive table has a display surface on which a physical object is disposed. A camera within the interactive table responds to infrared (IR) light reflected from the physical object, enabling a location of the physical object on the display surface to be determined, so that the physical object appears to be part of a virtual environment displayed thereon. The physical object can be passive or active. An active object performs an active function, e.g., it can be self-propelled to move about on the display surface, or emit light or sound, or vibrate. The active object can be controlled by a user or the processor. The interactive table can project an image through a physical object on the display surface so the image appears to be part of the object. A virtual entity is preferably displayed at a position (and with a size) that avoids visual interference with any physical object on the display surface.
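Locating objects from reflected IR and then choosing a non-interfering position for a virtual entity could be approached as below; the threshold, minimum blob size, and rectangle-overlap placement test are illustrative assumptions.

```python
# Minimal sketch: locate physical objects in an IR frame, then test whether a
# proposed virtual-entity box avoids them.
import cv2

def object_boxes(ir_frame, thresh=60, min_area=200):
    """Bounding boxes of physical objects seen as bright IR reflections."""
    _, binary = cv2.threshold(ir_frame, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

def placement_ok(entity_box, obj_boxes):
    """True if the proposed virtual-entity box overlaps no physical object."""
    ex, ey, ew, eh = entity_box
    for ox, oy, ow, oh in obj_boxes:
        if ex < ox + ow and ox < ex + ew and ey < oy + oh and oy < ey + eh:
            return False
    return True
```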
Abstract:
A coded pattern applied to an object is identified when the object is placed on a display surface of an interactive display. The coded pattern is detected in an image of the display surface produced in response to reflected infrared (IR) light received from the coded pattern by an IR video camera disposed on the opposite side of the display surface from the object. The coded pattern can be a circular, linear, matrix, variable-bit-length matrix, multi-level matrix, black/white (binary), or gray-scale pattern. The coded pattern serves as an identifier of the object and includes a cue component and a code portion disposed in a predefined location relative to the cue component. A border region encompasses the cue component and the code portion and masks undesired noise that might interfere with decoding the code portion.
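For the binary-matrix case, decoding might amount to sampling the IR image at predefined offsets from the detected cue component and packing the resulting bits into an identifier. The offsets, threshold, and function names below are assumptions made for illustration.

```python
# Minimal sketch: read a binary code portion at fixed offsets from the cue.
# Offsets are (dx, dy) in pixels from the cue center, after orientation
# has been normalized (an assumption of this sketch).
CODE_OFFSETS = [(-8, -8), (0, -8), (8, -8), (-8, 0), (8, 0), (-8, 8), (0, 8), (8, 8)]

def read_code(ir_image, cue_center, bit_threshold=128):
    """Sample IR intensities around the cue and pack them into an integer ID."""
    cx, cy = cue_center                      # (x, y) of the cue component
    code = 0
    for i, (dx, dy) in enumerate(CODE_OFFSETS):
        bit = 1 if ir_image[cy + dy, cx + dx] > bit_threshold else 0
        code |= bit << i
    return code
```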
Abstract:
A system is described herein which receives internally assessed (IA) movement information from a mobile device. The system also receives externally assessed (EA) movement information from at least one monitoring system which captures a scene containing the mobile device. The system then compares the IA movement information with the EA movement information with respect to each candidate object in the scene. If the IA movement information matches the EA movement information for a particular candidate object, the system concludes that the candidate object is associated with the mobile device. For example, the object may correspond to a hand that holds the mobile device. The system can use the correlation results produced in the above-indicated manner to perform various environment-specific actions.
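One plausible form of the comparison is a normalized correlation between the device's IA motion trace and the EA trace of each candidate object, selecting the candidate whose score exceeds a threshold. The sketch below assumes time-aligned traces of equal length; the scoring method and threshold are illustrative, not necessarily the ones the patent uses.

```python
# Minimal sketch: match IA movement against each candidate's EA movement.
import numpy as np

def best_match(ia_trace, ea_traces, min_score=0.8):
    """Return the candidate whose EA movement best matches the IA movement, if any.

    ia_trace: 1-D array of motion magnitudes from the mobile device's sensors.
    ea_traces: dict mapping candidate id -> 1-D array from the monitoring system.
    Traces are assumed to be time-aligned and of equal length.
    """
    best_id, best_score = None, min_score
    for candidate, ea in ea_traces.items():
        a = (ia_trace - ia_trace.mean()) / (ia_trace.std() + 1e-9)
        b = (ea - ea.mean()) / (ea.std() + 1e-9)
        score = float(np.dot(a, b) / len(a))     # normalized cross-correlation
        if score > best_score:
            best_id, best_score = candidate, score
    return best_id
```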
Abstract:
Different techniques for processing user interactions with a computing system are described. In one implementation, an interactive display is configured to depict a graphical user interface which includes a plurality of different types of user interface elements (e.g., button-type element, scroll bar-type element). A user may use one or more user input objects (e.g., finger, hand, stylus) to simultaneously interact with the interactive display. A plurality of different user input processing methods are used to process user inputs received by the graphical user interface differently and in accordance with the types of the user interface elements that are displayed. The processing of the user inputs is implemented to determine whether the user inputs control the respective user interface elements. In one example, the processing may determine whether the user inputs activate and/or manipulate the displayed user interface elements.
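Selecting a different processing method per element type can be pictured as a simple dispatch table keyed by element type. The element types, contact fields, and handlers below are illustrative assumptions.

```python
# Minimal sketch: dispatch user input to a type-specific processing method.
def process_button(contact):
    # A contact inside a button-type element is treated as an activation.
    return {"action": "activate"}

def process_scrollbar(contact):
    # A contact on a scroll-bar-type element scrolls using vertical motion.
    return {"action": "scroll", "delta": contact.get("dy", 0)}

PROCESSORS = {"button": process_button, "scrollbar": process_scrollbar}

def handle_input(element_type, contact):
    processor = PROCESSORS.get(element_type)
    return processor(contact) if processor else None
```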