Abstract:
The subject application relates to a system and/or methodology that facilitates vision-based projection of any image (still or moving) onto any surface. In particular, a front-projected computer vision-based interactive surface system is provided which uses a new commercially available projection technology to obtain a compact, self-contained form factor. The subject configuration addresses installation, calibration, and portability issues that are primary concerns in most vision-based table systems. The subject application also relates to determining whether an object is touching or hovering over an interactive surface based on an analysis of a shadow image.
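A minimal sketch of the shadow-image touch test, in Python with OpenCV, assuming an infrared camera view in which a fingertip touching the surface meets its own shadow while a hovering fingertip leaves a visible gap; the threshold values and the is_touching helper are illustrative assumptions, not the claimed method.

    import numpy as np
    import cv2

    SHADOW_THRESH = 60   # assumed gray level below which pixels count as shadow
    TOUCH_MAX_GAP = 3    # assumed max tip-to-shadow gap (pixels) for a touch

    def is_touching(ir_frame, fingertip_xy):
        # Binarize the dark cast shadow in the single-channel IR frame.
        _, shadow = cv2.threshold(ir_frame, SHADOW_THRESH, 255,
                                  cv2.THRESH_BINARY_INV)
        ys, xs = np.nonzero(shadow)
        if len(xs) == 0:
            return False  # no shadow visible: treat as hover
        # The distance from the fingertip to the nearest shadow pixel
        # shrinks toward zero as the finger contacts the surface.
        gaps = np.hypot(xs - fingertip_xy[0], ys - fingertip_xy[1])
        return float(gaps.min()) <= TOUCH_MAX_GAP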
Abstract:
A system and method are disclosed for providing a touch interface for electronic devices. The touch interface can be any surface; as one example, a table top can be used as a touch-sensitive interface. In one embodiment, the system determines a touch region of the surface and correlates that touch region to the display of an electronic device for which input is provided. The system may have a 3D camera that identifies the position of a user's hands relative to the touch region to allow for user input. Because input is made on the surface rather than on the display, the user's hands do not occlude the display. The system may render a representation of the user's hand on the display so that the user can interact with elements on the display screen.
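A minimal sketch, assuming the touch region is correlated to the display by a planar homography and that a depth threshold separates touch from hover; the corner coordinates, threshold, and function names are placeholders rather than the disclosed implementation.

    import numpy as np
    import cv2

    # Four corners of the touch region in camera pixels, and the display
    # corners they map to (placeholder values).
    region_px  = np.float32([[100, 80], [540, 90], [530, 400], [110, 390]])
    display_px = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
    H = cv2.getPerspectiveTransform(region_px, display_px)

    TOUCH_DEPTH_MM = 10  # assumed: a hand within 10 mm of the surface is a touch

    def map_hand(hand_xy, hand_height_mm):
        # Warp the tracked hand position into display coordinates so a
        # representation of the hand can be rendered on the screen.
        p = cv2.perspectiveTransform(np.float32([[hand_xy]]), H)[0][0]
        return (float(p[0]), float(p[1])), hand_height_mm <= TOUCH_DEPTH_MM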
Abstract:
A dynamic projected user interface device is disclosed that includes a projector, a projection controller, and an imaging sensor. The projection controller is configured to receive instructions from a computing device and to provide display images via the projector onto one or more display surfaces. The display images are indicative of a first set of input controls when the computing device is in a first operating context, and of a second set of input controls when the computing device is in a second operating context. The imaging sensor is configured to optically detect physical contacts with the one or more display surfaces.
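A minimal sketch of the context switch described above: the projection controller selects one set of input controls per operating context, and contacts reported by the imaging sensor are resolved against that layout. The layouts and names are invented for illustration.

    # Hypothetical control layouts keyed by operating context; each control
    # is a name plus the (x, y, w, h) rectangle it occupies on the surface.
    LAYOUTS = {
        "media_player": [("play", (10, 10, 80, 40)), ("stop", (100, 10, 80, 40))],
        "text_entry":   [("key_" + c, (10 + 30 * i, 10, 25, 25))
                         for i, c in enumerate("abcdef")],
    }

    def controls_for(context):
        # The projector renders these controls for the current context.
        return LAYOUTS.get(context, [])

    def hit_test(context, touch_xy):
        # Resolve an optically detected contact to the control it landed on.
        tx, ty = touch_xy
        for name, (x, y, w, h) in controls_for(context):
            if x <= tx < x + w and y <= ty < y + h:
                return name
        return None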
Abstract:
The subject disclosure is directed towards detecting symbolic activity within a given environment using a context-dependent grammar. In response to receiving sets of input data corresponding to one or more input modalities, a context-aware interactive system processes a model for interpreting the symbolic activity using context data for the given environment. Based on the model, related sets of input data are determined. The context-aware interactive system uses the input data to interpret user intent with respect to the input and thereby identify one or more commands for a target output mechanism.
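A minimal sketch of a context-dependent grammar, with toy rules: each rule relates concurrent inputs from different modalities to a command for a target output mechanism, and which rules apply depends on the environment's context data. All rules and identifiers here are invented for illustration.

    # (context, required inputs, resulting command) - invented examples.
    RULES = [
        ("living_room", ("speech:turn on", "point:lamp"),  "lamp.power_on"),
        ("living_room", ("speech:turn on", "point:tv"),    "tv.power_on"),
        ("kitchen",     ("speech:start",   "gesture:tap"), "timer.start"),
    ]

    def interpret(context, inputs):
        # Match the related sets of multimodal input against the grammar
        # for the current context to recover the user's intent.
        for ctx, pattern, command in RULES:
            if ctx == context and all(p in inputs for p in pattern):
                return command
        return None

    # interpret("living_room", {"speech:turn on", "point:lamp"})
    # returns "lamp.power_on"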
Abstract:
A system is described herein which receives internally-assessed (IA) movement information from a mobile device. The system also receives externally-assessed (EA) movement information from at least one monitoring system which captures a scene containing the mobile device. The system then compares the IA movement information with the EA movement information with respect to each candidate object in the scene. If the IA movement information matches the EA movement information for a particular candidate object, the system concludes that the candidate object is associated with the mobile device. For example, the object may correspond to a hand that holds the mobile device. The system can use the correlation results produced in the above-indicated manner to perform various environment-specific actions.
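A minimal sketch of the IA/EA comparison, assuming both sources are reduced to per-frame speed traces of equal length and matched by Pearson correlation; the threshold and trace format are assumptions, not the described system's specifics.

    import numpy as np

    MATCH_THRESH = 0.8  # assumed minimum correlation to declare a match

    def best_candidate(ia_trace, ea_traces):
        # ia_trace: speeds reported by the device itself (e.g., from its IMU).
        # ea_traces: {object_id: speeds} measured by the external monitoring
        # system for each candidate object in the scene.
        def z(x):
            x = np.asarray(x, dtype=float)
            return (x - x.mean()) / (x.std() + 1e-9)

        ia = z(ia_trace)
        scores = {oid: float(np.mean(z(t) * ia))
                  for oid, t in ea_traces.items()}
        oid, score = max(scores.items(), key=lambda kv: kv[1])
        return oid if score >= MATCH_THRESH else None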
Abstract:
A mobile device connection system is provided. The system includes an input medium to detect a device's position or location. An analysis component determines the device type and establishes a connection with the device. The input medium can include vision systems to detect device presence and location, with connections established via wireless technologies.
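A minimal sketch of the detect-then-connect flow, assuming the vision system has already classified the device; the handlers merely stand in for whatever wireless pairing the system performs and are entirely hypothetical.

    # Hypothetical per-type connection handlers.
    CONNECTORS = {
        "phone":  lambda loc: f"bluetooth pairing started near {loc}",
        "tablet": lambda loc: f"wifi-direct session started near {loc}",
    }

    def on_device_detected(device_type, location):
        # The analysis component picks a connection strategy by device type.
        connect = CONNECTORS.get(device_type)
        return connect(location) if connect else None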
Abstract:
A “Concurrent Projector-Camera” uses an image projection device in combination with one or more cameras to enable various techniques that provide visually flicker-free projection of images or video while real-time image or video capture is occurring in that same space. The Concurrent Projector-Camera provides this projection in a manner that eliminates video feedback into the real-time image or video capture. More specifically, the Concurrent Projector-Camera dynamically combines on-state temporal compression of the projector lighting (or of individual light-control points) with on-state temporal shifting during each image frame projection to open a “capture time slot” for image capture during which no image is being projected. This capture time slot represents a tradeoff between image capture time and decreased brightness of the projected image. Examples of image projection devices include LED-LCD based projection devices, DLP-based projection devices using LED or laser illumination in combination with micromirror arrays, etc.
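A minimal sketch of the timing arithmetic behind the capture time slot, assuming a fixed frame rate: compressing the on-state into part of the frame and shifting it leaves the rest of the frame dark for capture, at a proportional cost in brightness. The 60 Hz rate is an assumed example, not a figure from the disclosure.

    FRAME_HZ = 60.0  # assumed projector frame rate

    def frame_timing(duty_cycle):
        # duty_cycle: fraction of each frame the projector emits light (0..1].
        period_ms = 1000.0 / FRAME_HZ
        on_ms = duty_cycle * period_ms
        return {
            "on_ms": on_ms,                        # compressed on-state
            "capture_slot_ms": period_ms - on_ms,  # dark slot for the camera
            "relative_brightness": duty_cycle,     # the brightness tradeoff
        }

    # frame_timing(0.75) -> ~12.5 ms lit and a ~4.2 ms capture slot per
    # frame, at 75% of full brightness.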
Abstract:
Virtual controllers for visual displays are described. In one implementation, a camera captures an image of hands against a background. The image is segmented into hand areas and background areas. Various hand and finger gestures isolate parts of the background into independent areas, which are then assigned control parameters for manipulating the visual display. Multiple control parameters can be associated with attributes of multiple independent areas formed by two hands, for advanced control including simultaneous functions of clicking, selecting, executing, horizontal movement, vertical movement, scrolling, dragging, rotational movement, zooming, maximizing, minimizing, executing file functions, and executing menu choices.
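A minimal sketch of finding the independent background areas, assuming hands can be separated from the background by a simple intensity threshold; a real system would use a stronger segmentation, and the threshold here is an arbitrary placeholder.

    import numpy as np
    import cv2

    def independent_areas(frame_gray, hand_thresh=120):
        # Assumed segmentation: pixels darker than hand_thresh are hand.
        hand = (frame_gray < hand_thresh).astype(np.uint8)
        background = 1 - hand
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(background)
        h, w = frame_gray.shape
        areas = []
        for i in range(1, n):
            x, y, bw, bh, area = stats[i]
            # A background region not touching the image border is fully
            # enclosed by the hands - an "independent area" whose centroid
            # and size can be bound to control parameters.
            if not (x == 0 or y == 0 or x + bw == w or y + bh == h):
                areas.append({"centroid": tuple(centroids[i]),
                              "area": int(area)})
        return areas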
Abstract:
A unique system and method are provided that facilitate pixel-accurate targeting with respect to multi-touch sensitive displays when selecting or viewing content with a cursor. In particular, the system and method can track dual inputs from a primary finger and a secondary finger, for example. The primary finger can control movement of the cursor while the secondary finger can adjust the control-display ratio of the screen. As a result, cursor steering and selection of an assistance mode can be performed concurrently. In addition, the system and method can stabilize the cursor position at the top middle point of the user's finger in order to mitigate clicking errors when making a selection.
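A minimal sketch of the dual-input loop: the primary finger's displacement drives the cursor through a control-display (CD) ratio that the secondary finger adjusts, and the reported cursor anchor is the top middle point of the primary finger's contact area. Gains and limits are assumed values.

    def update_cursor(cursor, primary_delta, cd_ratio):
        # Scale the primary finger's motion by the CD ratio; a small ratio
        # gives the fine, pixel-accurate steering described above.
        dx, dy = primary_delta
        return (cursor[0] + dx * cd_ratio, cursor[1] + dy * cd_ratio)

    def adjust_cd_ratio(cd_ratio, secondary_delta_y, gain=0.01, lo=0.1, hi=1.0):
        # The secondary finger slides the CD ratio between fine and normal
        # control, so steering and mode selection can happen concurrently.
        return min(hi, max(lo, cd_ratio + secondary_delta_y * gain))

    def stabilized_point(contact_bbox):
        # Anchor the cursor at the top middle of the finger's contact
        # rectangle (x, y, w, h) to mitigate clicking errors.
        x, y, w, h = contact_bbox
        return (x + w / 2.0, y)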