Abstract:
Disclosed is a unique system and method that facilitates cursor control based in part on computer vision activated by a capacitive touch sensor. When the monitoring component is turned on, the user's hand gestures or movements can be tracked by that component and converted in real time to control or drive cursor movement and/or position on a user interface. The system comprises a monitoring component or camera that can be activated by touch or pressure applied to a capacitive touch sensor. A circuit within the sensor determines when the user is touching a button (e.g., on a keyboard or mouse) that activates the monitoring component and cursor control mechanism. Thus, intentional hand movements by the user can readily be determined.
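As an illustrative sketch only (the abstract does not specify an implementation), the gating logic might resemble the following, where sensor_is_touched, capture_frame, locate_hand, and move_cursor are hypothetical stand-ins for the capacitive sensor readout, the camera, the hand tracker, and the cursor API:

```python
# Illustrative sketch: gate vision-based cursor control on a capacitive touch sensor.
# All callables passed in are hypothetical stand-ins, not components named in the abstract.

def run_cursor_control(sensor_is_touched, capture_frame, locate_hand, move_cursor, gain=2.0):
    """Drive the cursor from hand motion, but only while the capacitive sensor is touched."""
    last = None
    while True:
        frame = capture_frame()
        if frame is None:                     # camera stream ended
            break
        if not sensor_is_touched():           # no intentional touch: ignore hand motion
            last = None
            continue
        hand = locate_hand(frame)             # (x, y) hand position in the camera image
        if hand is None:
            continue
        if last is not None:
            dx, dy = hand[0] - last[0], hand[1] - last[1]
            move_cursor(gain * dx, gain * dy) # relative cursor movement on the interface
        last = hand
```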
Abstract:
A system to position an element on a visual display is provided. The disclosed system comprises a touch detection module that detects a touch upon a touch-sensitive surface of a visual display. Also included is a position module that receives input from the touch detection module to derive a position of a touch. Further, an offset module derives an offset for an element of a user interface. Methods of using this system are also provided.
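A minimal sketch of how an offset module might derive an element position from a detected touch is shown below; the fixed upward offset and the clamping rule are assumptions made for illustration, not details taken from the abstract:

```python
# Illustrative sketch: place a UI element at an offset from the detected touch position,
# clamped to the visible display area. Offset values and clamping are assumptions.

def offset_element_position(touch_x, touch_y, display_w, display_h, dx=0, dy=-40):
    """Return the element position derived from the touch position plus an offset."""
    x = min(max(touch_x + dx, 0), display_w - 1)
    y = min(max(touch_y + dy, 0), display_h - 1)
    return x, y

# Example: a touch near the top edge still keeps the element on screen.
print(offset_element_position(500, 10, 1920, 1080))   # -> (500, 0)
```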
Abstract:
The subject application relates to a system(s) and/or methodology that facilitate vision-based projection of any image (still or moving) onto any surface. In particular, a front-projected computer vision-based interactive surface system is provided which uses a new commercially available projection technology to obtain a compact, self-contained form factor. The subject configuration addresses installation, calibration, and portability issues that are primary concerns in most vision-based table systems.
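The abstract names calibration as one of the concerns such front-projected systems must address. As a generic illustration only, and not the calibration method claimed here, one common approach is to fit a planar homography between projector and camera coordinates from observed reference points:

```python
# Illustrative sketch: fit a planar homography H so that camera_pt ~ H @ projector_pt,
# from at least four projected reference points observed by the camera (generic DLT fit).
import numpy as np

def fit_homography(projector_pts, camera_pts):
    """projector_pts, camera_pts: matching lists of (x, y) points, four or more pairs."""
    A = []
    for (x, y), (u, v) in zip(projector_pts, camera_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(A))
    return vt[-1].reshape(3, 3)               # smallest singular vector, reshaped to 3x3

def apply_homography(H, x, y):
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```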
Abstract:
A system is described herein which receives internally-assessed (IA) movement information from a mobile device. The system also receives externally-assessed (EA) movement information from at least one monitoring system which captures a scene containing the mobile device. The system then compares the IA movement information with the EA movement information with respect to each candidate object in the scene. If the IA movement information matches the EA movement information for a particular candidate object, the system concludes that the candidate object is associated with the mobile device. For example, the object may correspond to a hand that holds the mobile device. The system can use the correlation results produced in this manner to perform various environment-specific actions.
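A minimal sketch of this matching step follows; the use of normalized cross-correlation and the threshold value are assumptions chosen for illustration, not the patent's specified comparison:

```python
# Illustrative sketch: correlate the device's internally-assessed (IA) motion trace with the
# externally-assessed (EA) trace of each candidate object and return the best match.
import numpy as np

def best_matching_object(ia_trace, ea_traces, threshold=0.8):
    """ia_trace: 1-D array of device motion samples.
    ea_traces: dict of candidate object id -> 1-D array of observed motion samples,
    assumed time-aligned and of the same length as ia_trace."""
    best_id, best_score = None, threshold
    ia = (ia_trace - ia_trace.mean()) / (ia_trace.std() + 1e-9)
    for obj_id, trace in ea_traces.items():
        ea = (trace - trace.mean()) / (trace.std() + 1e-9)
        score = float(np.dot(ia, ea)) / len(ia)      # normalized cross-correlation in [-1, 1]
        if score > best_score:
            best_id, best_score = obj_id, score
    return best_id                                   # None if no candidate exceeds the threshold
```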
Abstract:
Different techniques for processing user interactions with a computing system are described. In one implementation, an interactive display is configured to depict a graphical user interface which includes a plurality of different types of user interface elements (e.g., a button-type element, a scroll bar-type element). A user may use one or more user input objects (e.g., a finger, hand, or stylus) to simultaneously interact with the interactive display. A plurality of different user input processing methods are used to process user inputs received by the graphical user interface differently and in accordance with the types of the user interface elements which are displayed. The processing of the user inputs is implemented to determine whether the user inputs control the respective user interface elements. In but one example, the processing may determine whether the user inputs activate and/or manipulate the displayed user interface elements.
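A small sketch of this per-element-type dispatch is given below; the element types, their rules, and the data layout are assumptions used only to show the pattern of applying a different processing method per element type:

```python
# Illustrative sketch: dispatch each user input to a processing method chosen by the type of
# user interface element it targets. Types, rules, and data layout are assumptions.

def hit(bounds, touch):
    """bounds = (x, y, w, h); touch = (x, y)."""
    x, y, w, h = bounds
    return x <= touch[0] < x + w and y <= touch[1] < y + h

def process_button(element, touch):
    # A button-type element is activated only by a direct hit.
    return {"action": "activate", "id": element["id"]} if hit(element["bounds"], touch) else None

def process_scroll_bar(element, touch):
    # A scroll bar-type element is manipulated by tracking the touch along its axis.
    return {"action": "scroll", "id": element["id"], "to": touch[1]} if hit(element["bounds"], touch) else None

PROCESSORS = {"button": process_button, "scroll_bar": process_scroll_bar}

def process_inputs(elements, touches):
    """Apply each element's type-specific processing method to every simultaneous input."""
    results = []
    for element in elements:
        processor = PROCESSORS.get(element["type"])
        if processor is None:
            continue
        for touch in touches:
            outcome = processor(element, touch)
            if outcome is not None:
                results.append(outcome)
    return results
```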
Abstract:
An interaction system is described which uses a depth camera to capture a depth image of a physical object placed on, or in the vicinity of, an interactive surface. The interaction system also uses a video camera to capture a video image of the physical object. The interaction system can then generate a 3D virtual object based on the depth image and video image. The interaction system then uses a 3D projector to project the 3D virtual object back onto the interactive surface, e.g., in a mirrored relationship to the physical object. A user may then capture and manipulate the 3D virtual object in any manner. Further, the user may construct a composite model based on smaller component 3D virtual objects. The interaction system uses a projective texturing technique to present a realistic-looking 3D virtual object on a surface having any geometry.
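As a rough sketch of the capture step only, a colored point cloud can be formed from registered depth and video images and mirrored about the surface plane before re-projection; the camera intrinsics, the mirroring convention, and the array layouts below are assumptions, not the patented method:

```python
# Illustrative sketch: build a colored point cloud ("3D virtual object") from depth + video
# images and mirror it across the surface plane z = 0. All conventions are assumptions.
import numpy as np

def depth_to_mirrored_points(depth, color, fx, fy, cx, cy):
    """depth: (H, W) array in metres; color: (H, W, 3) array registered to the depth image.
    Returns an (N, 6) array of XYZRGB points mirrored about the surface plane."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    valid = z > 0                                   # discard pixels with no depth reading
    x = (us.reshape(-1) - cx) * z / fx
    y = (vs.reshape(-1) - cy) * z / fy
    pts = np.stack([x, y, -z], axis=1)[valid]       # negate z to mirror across the surface
    rgb = color.reshape(-1, 3)[valid]
    return np.hstack([pts, rgb])
```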
Abstract:
The subject disclosure is directed towards detecting symbolic activity within a given environment using a context-dependent grammar. In response to receiving sets of input data corresponding to one or more input modalities, a context-aware interactive system processes a model associated with interpreting the symbolic activity using context data for the given environment. Based on the model, related sets of input data are determined. The context-aware interactive system uses the input data to interpret user intent with respect to the input and thereby identify one or more commands for a target output mechanism.
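A toy sketch of combining related multimodal inputs with context data to identify a command is shown below; the rule format, the context labels, and the example rules are all assumptions made for illustration:

```python
# Illustrative sketch: a toy context-dependent rule set that maps related inputs from two
# modalities (speech + gesture) to a command for a target output mechanism.

RULES = [
    # required context, speech token, gesture token -> command
    {"context": "living_room", "speech": "turn on", "gesture": "point_lamp", "command": "lamp_on"},
    {"context": "kitchen",     "speech": "turn on", "gesture": "point_oven", "command": "oven_on"},
]

def interpret(context, speech, gesture):
    """Return the command whose rule matches the current context and the related inputs."""
    for rule in RULES:
        if rule["context"] == context and rule["speech"] == speech and rule["gesture"] == gesture:
            return rule["command"]
    return None

print(interpret("kitchen", "turn on", "point_oven"))   # -> "oven_on"
```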
Abstract:
A system and method are disclosed for providing a touch interface for electronic devices. The touch interface can be any surface. As one example, a table top can be used as a touch-sensitive interface. In one embodiment, the system determines a touch region of the surface and correlates that touch region to a display of an electronic device for which input is provided. The system may have a 3D camera that identifies the position of the user's hands relative to the touch region to allow for user input. Note that the user's hands do not occlude the display. The system may render a representation of the user's hand on the display in order for the user to interact with elements on the display screen.
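One way such a correlation might look, sketched only for illustration, maps a hand position observed by the 3D camera over the touch region into display coordinates and treats a small height above the surface as a touch; the region bounds, the height threshold, and the coordinate conventions are assumptions:

```python
# Illustrative sketch: map a hand position over an arbitrary touch region (e.g., a table top)
# to display coordinates, treating proximity to the surface as a touch. Values are assumptions.

def to_display(hand_xyz, region, display_w, display_h, touch_height=0.01):
    """hand_xyz: (x, y, z) with z = height above the surface in metres.
    region: (x0, y0, x1, y1) bounds of the touch region on the surface."""
    x0, y0, x1, y1 = region
    u = (hand_xyz[0] - x0) / (x1 - x0) * display_w    # normalize into display coordinates
    v = (hand_xyz[1] - y0) / (y1 - y0) * display_h
    touching = hand_xyz[2] <= touch_height            # close enough to the surface counts as a touch
    return u, v, touching

print(to_display((0.25, 0.10, 0.005), (0.0, 0.0, 0.5, 0.3), 1920, 1080))
```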
Abstract:
A 3-D imaging system for recognition and interpretation of gestures to control a computer is disclosed. The system performs gesture recognition and interpretation based on a previous mapping of a plurality of hand poses and orientations to user commands for a given user. When the user is identified to the system, the imaging system images gestures presented by the user, performs a lookup for the user command associated with the captured image(s), and executes the user command(s) to effect control of the computer, programs, and connected devices.
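The lookup step might be sketched as below; the pose labels, orientations, and commands are hypothetical, since the abstract does not enumerate the mapping:

```python
# Illustrative sketch: a per-user mapping from recognized hand pose and orientation to a
# command, consulted once a gesture has been recognized. Entries are hypothetical.

USER_GESTURE_MAP = {
    ("alice", "open_palm", "up"): "play",
    ("alice", "fist",      "up"): "pause",
    ("bob",   "open_palm", "up"): "next_slide",
}

def execute_for_user(user_id, pose, orientation, execute):
    """Look up the command mapped to this user's recognized pose and run it."""
    command = USER_GESTURE_MAP.get((user_id, pose, orientation))
    if command is not None:
        execute(command)
    return command

print(execute_for_user("alice", "fist", "up", execute=lambda cmd: None))  # -> "pause"
```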
Abstract:
Effects of undesired infrared light are reduced in an imaging system using an infrared light source. The desired infrared light source is activated and a first set of image data is captured during a first image capture interval. The desired infrared light source is then deactivated, and a second set of image data is captured during a second image capture interval. A composite set of image data is then generated by subtracting from first values in the first set of image data the corresponding second values in the second set of image data. The composite set of image data thus represents imaging in which all infrared signals are collected, including both signals resulting from the IR source and other IR signals, minus imaging in which no signals result from the IR source, leaving image data that includes signals resulting only from the IR source.
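A minimal sketch of the subtraction described above follows; the clipping to the valid pixel range is an implementation assumption:

```python
# Illustrative sketch: subtract the IR-source-off frame from the IR-source-on frame so that
# ambient/undesired IR cancels out, leaving only signals contributed by the IR source.
import numpy as np

def ir_composite(frame_ir_on, frame_ir_off):
    """Both frames are uint8 arrays of equal shape captured in consecutive intervals."""
    on = frame_ir_on.astype(np.int32)
    off = frame_ir_off.astype(np.int32)
    return np.clip(on - off, 0, 255).astype(np.uint8)   # clamp negatives introduced by noise
```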