Abstract:
Disclosed is a unique system and method that facilitates gesture-based interaction with a user interface. The system involves an object sensing arrangement configured to include a sensing plane located, vertically or horizontally, between at least two imaging components on one side and a user on the other. The imaging components can acquire input images taken of a view of and through the sensing plane. The images can include objects which are on the sensing plane and/or in the background scene, as well as the user as he interacts with the sensing plane. By processing the input images, one output image can be returned which shows the user the objects that are in contact with the plane. Thus, objects located at a particular depth can be readily determined. Any other objects located beyond the plane can be “removed” and not seen in the output image.
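The plane-contact filtering described above can be sketched as a simple per-pixel image comparison. This is an illustrative assumption about how the two views might be combined, not the disclosed implementation: if both imaging components are rectified so that points on the sensing plane project to the same pixel in each view (zero disparity), then pixels that agree across the two views belong to objects touching the plane, while disagreeing pixels belong to objects beyond it and are suppressed. The function name and threshold are hypothetical.

```python
import numpy as np

def plane_contact_image(left, right, threshold=20):
    # Assumes both grayscale images are rectified so that points ON the
    # sensing plane project to the same pixel in both views. Points off
    # the plane land at different pixels, so their per-pixel difference
    # is large and they are removed from the output.
    left = left.astype(np.int16)
    right = right.astype(np.int16)
    diff = np.abs(left - right)
    mask = diff < threshold  # pixels consistent across both views
    return np.where(mask, ((left + right) // 2), 0).astype(np.uint8)
```

On-plane objects survive into the single output image; background objects and the user's body beyond the plane difference away to black.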
Abstract:
An interaction management module (IMM) is described for allowing users to engage an interactive surface in a collaborative environment using various input devices, such as keyboard-type devices and mouse-type devices. The IMM displays digital objects on the interactive surface that are associated with the devices in various ways. The digital objects can include input display interfaces, cursors, soft-key input mechanisms, and so on. Further, the IMM provides a mechanism for establishing a frame of reference for governing the placement of each cursor on the interactive surface. Further, the IMM provides a mechanism for allowing users to make a digital copy of a physical article placed on the interactive surface. The IMM also provides a mechanism which duplicates actions taken on the digital copy with respect to the physical article, and vice versa.
Abstract:
An interaction system is described which uses a depth camera to capture a depth image of a physical object placed on, or in the vicinity of, an interactive surface. The interaction system also uses a video camera to capture a video image of the physical object. The interaction system can then generate a 3D virtual object based on the depth image and video image. The interaction system then uses a 3D projector to project the 3D virtual object back onto the interactive surface, e.g., in a mirrored relationship to the physical object. A user may then capture and manipulate the 3D virtual object in any manner. Further, the user may construct a composite model based on smaller component 3D virtual objects. The interaction system uses a projective texturing technique to present a realistic-looking 3D virtual object on a surface having any geometry.
Abstract:
Virtual controllers for visual displays are described. In one implementation, a camera captures an image of hands against a background. The image is segmented into hand areas and background areas. Various hand and finger gestures isolate parts of the background into independent areas, which are then assigned control parameters for manipulating the visual display. Multiple control parameters can be associated with attributes of multiple independent areas formed by two hands, for advanced control including simultaneous functions of clicking, selecting, executing, horizontal movement, vertical movement, scrolling, dragging, rotational movement, zooming, maximizing, minimizing, executing file functions, and executing menu choices.
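One way the isolation of background into independent areas might work, sketched under simplifying assumptions: after segmenting the image into hand and background pixels, any background region fully enclosed by the hand (i.e., not connected to the image border) is an "independent area" whose attributes, such as its centroid, can be bound to control parameters. The flood-fill labeling below is an illustrative stand-in for whatever segmentation and region analysis the described system actually uses.

```python
import numpy as np
from collections import deque

def independent_areas(hand_mask):
    # hand_mask: boolean array, True where hand pixels were segmented.
    # Background regions NOT connected to the image border are the
    # independent areas isolated by a gesture (e.g., a thumb-and-
    # forefinger circle). Returns one (row, col) centroid per region.
    h, w = hand_mask.shape
    label = np.zeros((h, w), dtype=np.int32)  # 0 = unvisited background
    label[hand_mask] = -1                     # hand pixels are excluded
    next_id, centroids = 1, []
    for sy in range(h):
        for sx in range(w):
            if label[sy, sx] != 0:
                continue
            # Flood-fill one connected background region.
            queue, pixels, touches_border = deque([(sy, sx)]), [], False
            label[sy, sx] = next_id
            while queue:
                y, x = queue.popleft()
                pixels.append((y, x))
                if y in (0, h - 1) or x in (0, w - 1):
                    touches_border = True
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < h and 0 <= nx < w and label[ny, nx] == 0:
                        label[ny, nx] = next_id
                        queue.append((ny, nx))
            if not touches_border:
                ys, xs = zip(*pixels)
                centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
            next_id += 1
    return centroids
```

Each centroid (plus the area's size, shape, or motion) could then be mapped onto a control parameter such as cursor position or scroll amount.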
Abstract:
Different techniques for processing user interactions with a computing system are described. In one implementation, an interactive display is configured to depict a graphical user interface which includes a plurality of different types of user interface elements (e.g., a button-type element, a scroll bar-type element). A user may use one or more user input objects (e.g., a finger, hand, or stylus) to simultaneously interact with the interactive display. A plurality of different user input processing methods are used to process user inputs received by the graphical user interface differently and in accordance with the types of the user interface elements which are displayed. The processing of the user inputs is implemented to determine whether the user inputs control the respective user interface elements. In one example, the processing determines whether the user inputs activate and/or manipulate the displayed user interface elements.
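The idea of selecting a processing method according to element type can be sketched as a dispatch table. The element and contact records, field names, and handler behaviors below are hypothetical stand-ins chosen for illustration, not the described implementation: a button handler tests whether a contact falls inside the element's bounds, while a scroll-bar handler maps the contact's vertical position to a value.

```python
def process_button(element, contact):
    # A button-type element activates only when the contact point
    # falls inside the element's bounding box.
    x0, y0, x1, y1 = element["bounds"]
    hit = x0 <= contact["x"] <= x1 and y0 <= contact["y"] <= y1
    return {"activated": hit}

def process_scrollbar(element, contact):
    # A scroll bar-type element maps the contact's vertical position
    # to a fractional value, tolerating horizontal drift.
    y0, y1 = element["bounds"][1], element["bounds"][3]
    frac = min(max((contact["y"] - y0) / (y1 - y0), 0.0), 1.0)
    return {"value": frac}

HANDLERS = {"button": process_button, "scrollbar": process_scrollbar}

def route_input(element, contact):
    # Different processing methods per element type: look the handler
    # up by the element's declared type and apply it to the contact.
    return HANDLERS[element["type"]](element, contact)
```

The same contact thus yields different outcomes depending on which type of element received it, which is the essence of type-specific input processing.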
Abstract:
A dynamic projected user interface device is disclosed that includes a projector, a projection controller, and an imaging sensor. The projection controller is configured to receive instructions from a computing device, and to provide display images via the projector onto one or more display surfaces. The display images are indicative of a first set of input controls when the computing device is in a first operating context, and a second set of input controls when the computing device is in a second operating context. The imaging sensor is configured to optically detect physical contact with the one or more display surfaces.
Abstract:
A unique system and method that facilitates extending input/output capabilities for resource-deficient mobile devices and interactions between multiple heterogeneous devices is provided. The system and method involve an interactive surface to which the desired mobile devices can be connected. The interactive surface can provide an enhanced display space and customization controls for mobile devices that lack adequate displays and input capabilities. In addition, the interactive surface can be employed to permit communication and interaction between multiple mobile devices that otherwise are unable to interact with each other. When connected to the interactive surface, the mobile devices can share information, view information from their respective devices, and store information to the interactive surface. Furthermore, upon re-connection to the surface, the interactive surface can resume the activity states of mobile devices that were previously communicating.
Abstract:
An interactive table has a display surface on which a physical object is disposed. A camera within the interactive table responds to infrared (IR) light reflected from the physical object, enabling a location of the physical object on the display surface to be determined, so that the physical object appears to be part of a virtual environment displayed thereon. The physical object can be passive or active. An active object performs an active function, e.g., it can be self-propelled to move about on the display surface, or emit light or sound, or vibrate. The active object can be controlled by a user or the processor. The interactive table can project an image through a physical object on the display surface so the image appears to be part of the object. A virtual entity is preferably displayed at a position (and a size) chosen to avoid visual interference with any physical object on the display surface.
Abstract:
A mobile device is provided that includes a position determination mechanism and a data store of locations, including a position for each location. The mobile device is configured to determine its own position and, based on that position, to determine which location is preferred. Upon that determination, the mobile device is configured to orient a pointer in the direction of the preferred location, such that a user can move in the direction of the pointer and ultimately arrive at the preferred location.
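The pointer-orientation step can be sketched as picking a preferred location from the data store and computing a compass bearing from the device's position to it. Nearest-distance preference and flat 2D coordinates are simplifying assumptions for illustration; the abstract leaves the preference rule and coordinate system unspecified, and the function name is hypothetical.

```python
import math

def pointer_bearing(device_pos, locations):
    # device_pos: (x, y) position of the mobile device.
    # locations: mapping of location name -> (x, y) stored position.
    # Picks the nearest location as the "preferred" one (an assumed
    # preference rule) and returns its name plus the heading, in
    # degrees clockwise from north (+y), to orient the pointer.
    dx0, dy0 = device_pos
    name, (lx, ly) = min(
        locations.items(),
        key=lambda kv: math.hypot(kv[1][0] - dx0, kv[1][1] - dy0))
    bearing = math.degrees(math.atan2(lx - dx0, ly - dy0)) % 360
    return name, bearing
```

A device at the origin with a store of two locations would point its on-screen arrow at whichever is closer, updating the bearing as the user moves.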
Abstract:
Architecture for implementing a perceptual user interface. The architecture comprises alternative modalities for controlling computer application programs and manipulating on-screen objects through hand gestures or a combination of hand gestures and verbal commands. The perceptual user interface system includes a tracking component that detects object characteristics of at least one of a plurality of objects within a scene, and tracks the respective object. Detection of object characteristics is based at least in part upon image comparison of a plurality of images relative to a coarse mapping of the images. A seeding component iteratively seeds the tracking component with object hypotheses based upon the presence of the object characteristics and the image comparison. A filtering component selectively removes the tracked object from the object hypotheses and/or at least one object hypothesis from the set of object hypotheses based upon predetermined removal criteria.
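The seeding and filtering of object hypotheses from a coarse image comparison might be sketched as block-wise frame differencing: the frames are compared over a coarse grid of blocks, each block with enough motion energy seeds one hypothesis, and a filtering pass removes hypotheses failing a predetermined score criterion. Block size, thresholds, and record fields below are illustrative assumptions, not the described architecture.

```python
import numpy as np

def seed_hypotheses(prev, curr, block=8, motion_thresh=25):
    # Coarse image comparison: difference two grayscale frames over a
    # grid of blocks and seed one object hypothesis per block whose
    # mean motion energy exceeds the threshold.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    h, w = diff.shape
    hyps = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            score = float(diff[y:y+block, x:x+block].mean())
            if score > motion_thresh:
                hyps.append({"pos": (y + block // 2, x + block // 2),
                             "score": score})
    return hyps

def filter_hypotheses(hyps, min_score=30):
    # Selectively remove hypotheses that fail a predetermined
    # removal criterion (here, a minimum motion score).
    return [h for h in hyps if h["score"] >= min_score]
```

Run over successive frames, the surviving hypotheses would feed the tracking component, which refines or discards them as the object moves.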