Abstract:
A 3-D imaging system for recognizing and interpreting gestures used to control a computer. The system performs gesture recognition and interpretation based on a previously established mapping of a plurality of hand poses and orientations to user commands for a given user. Once the user is identified to the system, the imaging system images gestures presented by the user, looks up the user command associated with the captured image(s), and executes the user command(s) to effect control of the computer, programs, and connected devices.
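As a rough illustration of the per-user lookup step described above, the sketch below maps recognized (pose, orientation) pairs to executable commands. All names here (GestureTable, the pose labels, the example command) are hypothetical, not taken from the patent.

```python
# Hypothetical sketch of the per-user gesture-to-command lookup.
from typing import Callable, Dict, Tuple

GestureKey = Tuple[str, str]  # (hand pose, orientation)

class GestureTable:
    """Maps each user's trained gestures to executable commands."""

    def __init__(self) -> None:
        self._tables: Dict[str, Dict[GestureKey, Callable[[], None]]] = {}

    def register(self, user: str, pose: str, orientation: str,
                 command: Callable[[], None]) -> None:
        self._tables.setdefault(user, {})[(pose, orientation)] = command

    def execute(self, user: str, pose: str, orientation: str) -> bool:
        """Look up and run the command bound to this gesture, if any."""
        command = self._tables.get(user, {}).get((pose, orientation))
        if command is None:
            return False  # unrecognized gesture for this user
        command()
        return True

# Example: once the user is identified, imaged gestures drive commands.
table = GestureTable()
table.register("alice", "open_palm", "facing_camera",
               lambda: print("launch media player"))
table.execute("alice", "open_palm", "facing_camera")
```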
Abstract:
An interaction management module (IMM) is described for allowing users to engage an interactive surface in a collaborative environment using various input devices, such as keyboard-type devices and mouse-type devices. The IMM displays digital objects on the interactive surface that are associated with the devices in various ways. The digital objects can include input display interfaces, cursors, soft-key input mechanisms, and so on. Further, the IMM provides a mechanism for establishing a frame of reference that governs the placement of each cursor on the interactive surface, and a mechanism for allowing users to make a digital copy of a physical article placed on the interactive surface. The IMM also provides a mechanism that duplicates actions taken on the digital copy with respect to the physical article, and vice versa.
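One way to picture the frame-of-reference mechanism is as a per-device coordinate transform: each mouse-type device gets an origin and orientation on the surface, and its relative motion is rotated into global surface coordinates. The sketch below is a minimal illustration under that assumption; FrameOfReference and its fields are hypothetical names.

```python
# Hypothetical sketch of a per-device frame of reference for cursor
# placement on the interactive surface.
import math
from dataclasses import dataclass

@dataclass
class FrameOfReference:
    origin_x: float   # where the device sits on the surface
    origin_y: float
    angle: float      # device orientation in radians

    def to_surface(self, dx: float, dy: float) -> tuple:
        """Rotate a device-relative mouse delta into surface coordinates,
        so 'up' on the mouse means 'away from this user' on the table."""
        sx = self.origin_x + dx * math.cos(self.angle) - dy * math.sin(self.angle)
        sy = self.origin_y + dx * math.sin(self.angle) + dy * math.cos(self.angle)
        return sx, sy

# A mouse placed at (100, 200) and rotated 90 degrees: moving it
# "right" moves its cursor "up" in global surface coordinates.
frame = FrameOfReference(100.0, 200.0, math.pi / 2)
print(frame.to_surface(10.0, 0.0))  # -> (100.0, 210.0)
```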
Abstract:
The claimed subject matter provides a system and/or a method that facilitates detecting and identifying objects within surface computing. An interface component can receive at least one surface input, where the surface input relates to at least one of an object, a gesture, or a user. A surface detection component can detect a location of the surface input utilizing a computer vision-based sensing technique. A Radio Frequency Identification (RFID) tag associated with the surface input can transmit a portion of RFID data. An RFID fusion component can utilize the portion of RFID data to identify at least one of a source of the surface input or a portion of data to associate with the surface input.
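A minimal sketch of one plausible fusion strategy, assuming the reader supplies a coarse position estimate per tag: associate the vision-detected input location with the nearest RFID reading within some radius. The names (RfidReading, fuse, max_dist) are illustrative, not the patent's.

```python
# Hypothetical sketch of fusing a vision-detected surface input with
# RFID data to identify its source.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RfidReading:
    tag_id: str
    payload: dict        # data carried by the tag (e.g., owner, object type)
    x: float             # coarse position estimate from the reader
    y: float

def fuse(touch_x: float, touch_y: float,
         readings: List[RfidReading],
         max_dist: float = 50.0) -> Optional[RfidReading]:
    """Associate the surface input at (touch_x, touch_y) with the
    closest RFID reading, if one lies within max_dist."""
    best = None
    best_d2 = max_dist ** 2
    for r in readings:
        d2 = (r.x - touch_x) ** 2 + (r.y - touch_y) ** 2
        if d2 <= best_d2:
            best, best_d2 = r, d2
    return best  # None means the input could not be identified

reading = fuse(120.0, 80.0,
               [RfidReading("tag-7", {"user": "bob"}, 118.0, 83.0)])
print(reading.payload if reading else "unidentified input")
```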
Abstract:
In an interactive display system, a projected image on a display surface and a vision system used to detect objects touching the display surface are aligned, and optical distortion of the vision system is compensated. Calibration procedures also correct for non-uniformity of the infrared (IR) illumination of the display surface by the IR light sources and establish a touch threshold for one or more users, so that the interactive display system correctly responds to each user touching the display surface. A movable IR camera filter enables automation of the alignment of the projected image and the image of the display surface and helps in detecting problems in either the projector or the vision system.
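The illumination correction described above can be pictured as a flat-field step: capture the empty surface to build an illumination map, divide each frame by that map, and then apply a single touch threshold uniformly. The sketch below illustrates that idea; the function names and the specific threshold are assumptions, not the patent's calibration procedure.

```python
# Hypothetical flat-field-style correction for uneven IR illumination.
import numpy as np

def build_illumination_map(blank_frames: np.ndarray) -> np.ndarray:
    """Average frames of the empty display surface to estimate how
    unevenly the IR sources light it."""
    mean = blank_frames.mean(axis=0)
    return np.clip(mean / mean.max(), 1e-3, None)  # avoid divide-by-zero

def touch_mask(frame: np.ndarray, illum: np.ndarray,
               threshold: float) -> np.ndarray:
    """Normalize a frame by the illumination map, then apply the
    per-user touch threshold established during calibration."""
    corrected = frame.astype(np.float64) / illum
    return corrected > threshold

rng = np.random.default_rng(0)
blanks = rng.uniform(40, 60, size=(10, 480, 640))
illum = build_illumination_map(blanks)
mask = touch_mask(rng.uniform(40, 60, size=(480, 640)), illum, threshold=80.0)
print(mask.sum(), "pixels above the touch threshold")
```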
Abstract:
An interactive table has a display surface on which a physical object is disposed. A camera within the interactive table responds to infrared (IR) light reflected from the physical object, enabling the location of the physical object on the display surface to be determined so that the physical object appears to be part of a virtual environment displayed thereon. The physical object can be passive or active. An active object performs an active function; e.g., it can be self-propelled to move about on the display surface, emit light or sound, or vibrate. The active object can be controlled by a user or by the processor. The interactive table can project an image through a physical object on the display surface so that the image appears to be part of the object. A virtual entity is preferably displayed at a position (and a size) that avoids visual interference with any physical object on the display surface.
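The final sentence suggests a placement search: find a position and size for the virtual entity that does not overlap any detected physical object. A minimal sketch of one such search follows; the scan-and-shrink strategy, Rect type, and step size are illustrative assumptions.

```python
# Hypothetical sketch of placing a virtual entity so it avoids
# visual interference with physical objects on the surface.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def overlaps(self, other: "Rect") -> bool:
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x or
                    self.y + self.h <= other.y or other.y + other.h <= self.y)

def place_entity(size: float, objects: List[Rect],
                 surface: Rect, step: float = 20.0) -> Optional[Rect]:
    """Scan candidate positions (shrinking the entity if needed) until
    one avoids every detected physical object."""
    while size >= step:
        y = surface.y
        while y + size <= surface.y + surface.h:
            x = surface.x
            while x + size <= surface.x + surface.w:
                candidate = Rect(x, y, size, size)
                if not any(candidate.overlaps(o) for o in objects):
                    return candidate
                x += step
            y += step
        size /= 2  # no free spot at this size; try a smaller entity
    return None

spot = place_entity(100.0, [Rect(0, 0, 300, 300)], Rect(0, 0, 640, 480))
print(spot)  # first free 100x100 region, here Rect(x=300.0, y=0.0, ...)
```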
Abstract:
An application state of a computer program is stored and associated with a physical object and can be subsequently retrieved when the physical object is detected adjacent to an interactive display surface. An identifying characteristic presented by the physical object, such as a reflective pattern applied to the object, is detected when the physical object is positioned on the interactive display surface. The user or the system can initiate a save of the application state. For example, the state of an electronic game using the interactive display surface can be saved. Attributes representative of the state are stored and associated with the identifying characteristic of the physical object. When the physical object is again placed on the interactive display surface, the physical object is detected based on its identifying characteristic, and the attributes representative of the state can be selectively retrieved and used to recreate the state of the application.
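The save-and-restore cycle described above amounts to persisting state attributes keyed by the object's identifying characteristic. The sketch below illustrates that keyed store, assuming the reflective pattern decodes to a string ID; the file name and function names are hypothetical.

```python
# Hypothetical sketch of saving and restoring application state keyed
# by a physical object's identifying characteristic.
import json
from pathlib import Path
from typing import Optional

STORE = Path("saved_states.json")

def save_state(pattern_id: str, attributes: dict) -> None:
    """Persist the attributes representing the current application
    state, associated with the object's identifying pattern."""
    states = json.loads(STORE.read_text()) if STORE.exists() else {}
    states[pattern_id] = attributes
    STORE.write_text(json.dumps(states))

def restore_state(pattern_id: str) -> Optional[dict]:
    """When the object is detected on the surface again, retrieve the
    attributes needed to recreate the saved state."""
    if not STORE.exists():
        return None
    return json.loads(STORE.read_text()).get(pattern_id)

# Example: save a game in progress, then recreate it later.
save_state("pattern-42", {"game": "checkers", "turn": "red",
                          "board": ["r", "", "b"]})
print(restore_state("pattern-42"))
```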
Abstract:
Virtual controllers for visual displays are described. In one implementation, a camera captures an image of hands against a background. The image is segmented into hand areas and background areas. Various hand and finger gestures isolate parts of the background into independent areas, which are then assigned control parameters for manipulating the visual display. Multiple control parameters can be associated with attributes of multiple independent areas formed by two hands, for advanced control including simultaneous functions of clicking, selecting, executing, horizontal movement, vertical movement, scrolling, dragging, rotational movement, zooming, maximizing, minimizing, executing file functions, and executing menu choices.
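One concrete reading of the "independent areas" idea: after segmenting the image into hand and background pixels, label the connected background regions and keep those that do not touch the image border, i.e., the holes a gesture encloses (such as a thumb-and-forefinger loop). The sketch below illustrates this with connected-component labeling; it is an assumed decomposition, not the patent's exact pipeline.

```python
# Hypothetical sketch: background regions fully enclosed by hand
# pixels become independent areas whose attributes (e.g., centroid)
# can drive control parameters.
import numpy as np
from scipy import ndimage

def independent_areas(hand_mask: np.ndarray):
    """Return (labels, ids) for background regions that do not touch
    the image border, i.e., areas isolated by the hands."""
    labels, n = ndimage.label(~hand_mask)
    border = np.unique(np.concatenate([labels[0, :], labels[-1, :],
                                       labels[:, 0], labels[:, -1]]))
    ids = [i for i in range(1, n + 1) if i not in border]
    return labels, ids

# A toy mask: a ring of "hand" pixels enclosing a background hole.
mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True
mask[2:5, 2:5] = False
labels, ids = independent_areas(mask)
print(ids)                                       # one enclosed area
print(ndimage.center_of_mass(labels == ids[0]))  # its centroid
```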
Abstract:
A computer-implemented method for utilizing a camera device to track an object is presented. As part of the method, a region of interest is determined within an overall image sensing area. A point light source is then tracked within the region of interest. In a particular arrangement, the camera device incorporates CMOS image sensor technology and the point light source is an IR LED. Other embodiments pertain to manipulations of the region of interest to accommodate changes to the status of the point light source.
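A minimal sketch of the region-of-interest idea: search only a small window for the point light source, re-center the window on each detection, and report a loss (e.g., the LED switching off) so the caller can widen the region. The function name, window size, and intensity threshold are illustrative assumptions.

```python
# Hypothetical sketch of region-of-interest tracking of a point
# light source, so the full sensor never has to be read out.
import numpy as np

def track_in_roi(frame: np.ndarray, cx: int, cy: int,
                 half: int = 16, min_intensity: int = 200):
    """Find the brightest pixel inside the ROI centered at (cx, cy).
    Returns the new center, or None if the source was lost, which
    would trigger a manipulation (widening) of the ROI."""
    h, w = frame.shape
    x0, x1 = max(cx - half, 0), min(cx + half, w)
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    roi = frame[y0:y1, x0:x1]
    iy, ix = np.unravel_index(np.argmax(roi), roi.shape)
    if roi[iy, ix] < min_intensity:
        return None  # source not visible in this region of interest
    return x0 + ix, y0 + iy

frame = np.zeros((480, 640), dtype=np.uint8)
frame[243, 322] = 255                 # the IR LED spot
print(track_in_roi(frame, 320, 240))  # -> (322, 243)
```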