Abstract:
The detection of touch on an optical touch-sensitive device is disclosed. For example, one disclosed embodiment comprises a touch-sensitive device including a display screen, a laser, and a scanning mirror configured to scan light from the laser across the screen. The touch-sensitive device also includes a position-sensitive device and optics configured to form an image of at least a portion of the screen on the position-sensitive device. A location of an object relative to the screen may be determined by detecting a location on the position-sensitive device of laser light reflected by the object.
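The abstract does not spell out the geometry, but one common way to recover an object's location in a setup like this is to triangulate from the known scan angle of the mirror and the angle implied by where the reflected spot lands on the position-sensitive device. The sketch below illustrates that idea; the planar layout, the baseline distance, and the function name are illustrative assumptions, not details from the disclosure.

```python
import math

def locate_object(scan_angle_deg, psd_angle_deg, baseline_m):
    """Triangulate an object's position on the screen plane.

    Assumptions (not from the abstract): the laser scanner sits at the
    origin, the PSD and its optics sit at (baseline_m, 0), both angles are
    measured from the baseline, and the screen is treated as a 2D plane.
    """
    a = math.radians(scan_angle_deg)   # ray angle from the scanning mirror
    b = math.radians(psd_angle_deg)    # ray angle inferred from the PSD spot
    # Intersect the two rays: y = x * tan(a) and y = (baseline - x) * tan(b).
    x = baseline_m * math.tan(b) / (math.tan(a) + math.tan(b))
    y = x * math.tan(a)
    return x, y

print(locate_object(60.0, 50.0, 0.30))
```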
Abstract:
Output of a computer system is manipulated using a physical object disposed adjacent to an interactive display surface. A painting application produces an image in response to an object disposed adjacent to the interactive display surface. During each of a plurality of capture intervals, a set of points corresponding to the object is detected when the object is disposed adjacent to the interactive display surface. An image representing the set of points, filled with a color or pattern, is projected onto the interactive display surface. As successive sets of points are accumulated over the capture intervals, a composite image is displayed. An object can thus be used, for example, to “draw,” “paint,” or “stamp” images on the display surface. These images manifest characteristics of the object and its interaction and movement relative to the interactive display surface in a realistic manner.
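As a rough illustration of how successive point sets might accumulate into a composite image, consider the minimal sketch below. It assumes the sensed points arrive as pixel coordinates per capture interval and simply fills them with a solid color on a NumPy canvas; the array shape, fill color, and the stamp function are illustrative, not part of the disclosed application.

```python
import numpy as np

# Illustrative canvas and point format; none of these names or values come
# from the disclosure itself.
CANVAS_SHAPE = (480, 640, 3)                 # height, width, RGB
canvas = np.zeros(CANVAS_SHAPE, dtype=np.uint8)

def stamp(points, fill_color=(0, 128, 255)):
    """Add one capture interval's set of points to the composite image."""
    for x, y in points:
        if 0 <= x < CANVAS_SHAPE[1] and 0 <= y < CANVAS_SHAPE[0]:
            canvas[y, x] = fill_color        # fill with a color (or sample a pattern)

# Successive capture intervals accumulate, so a moving object "paints" a trail.
stamp([(100, 100), (101, 100), (102, 101)])
stamp([(103, 102), (104, 103)])
```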
Abstract:
A position of a three-dimensional (3D) object relative to a display surface of an interactive display system is detected based upon the intensity of infrared (IR) light reflected from the object and received by an IR video camera disposed under the display surface. As the object approaches the display surface, a “hover” connected component is defined by pixels in the image produced by the IR video camera that have an intensity greater than a predefined hover threshold and are immediately adjacent to another pixel also having an intensity greater than the hover threshold. When the object contacts the display surface, a “touch” connected component is defined by pixels in the image having an intensity greater than a touch threshold, which is greater than the hover threshold. Connected components determined for an object at different heights above the surface are associated with a common label if their bounding areas overlap.
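A minimal sketch of the thresholding and association steps is shown below, using SciPy's connected-component labeling as a stand-in for the patented processing. The threshold values, the size filter, the bounding-box slices, and the association rule are assumptions made for illustration.

```python
import numpy as np
from scipy import ndimage

HOVER_THRESHOLD = 60     # illustrative 8-bit intensities, not values from the patent
TOUCH_THRESHOLD = 120    # the touch threshold exceeds the hover threshold

def connected_components(ir_frame, threshold):
    """Label above-threshold pixels; drop isolated pixels with no bright neighbor."""
    labels, count = ndimage.label(ir_frame > threshold)
    sizes = np.bincount(labels.ravel())
    keep = [i for i in range(1, count + 1) if sizes[i] >= 2]
    boxes = ndimage.find_objects(labels)
    return [boxes[i - 1] for i in keep]

def boxes_overlap(a, b):
    """True when two bounding boxes (tuples of slices) overlap."""
    return all(s1.start < s2.stop and s2.start < s1.stop for s1, s2 in zip(a, b))

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)   # stand-in IR image
hover_boxes = connected_components(frame, HOVER_THRESHOLD)
touch_boxes = connected_components(frame, TOUCH_THRESHOLD)

# A touch component gets the same label as any hover component whose bounding
# area it overlaps, tying the two observations of the object together.
associations = [(t, h) for t, tb in enumerate(touch_boxes)
                       for h, hb in enumerate(hover_boxes) if boxes_overlap(tb, hb)]
print(len(associations))
```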
Abstract:
A new mobile electronic device, referred to as a soap, may be used to control electronic devices, external or internal to the soap, in an intuitive, convenient, and comfortable manner. For example, a soap may serve as an alternative to input devices such as a mouse. A soap device may include a core and a hull that at least partially encloses the core. The core includes a tracking component capable of tracking movement relative to the hull. The soap input device also includes a transmission component configured to transmit a signal from the tracking component to a computing device, where it may control the position of a pointer and the use of a selector on a monitor. The hull may be soft and flexible, and the core may be freely rotatable about at least one axis. The core has a shape such that tangentially applied pressure rotates the core relative to the hull. A user may therefore control an electronic device simply by rolling and manipulating the soap.
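On the computing-device side, the transmitted tracking signal could plausibly be handled like any relative pointing input. The sketch below shows one way such deltas might drive a pointer and a selector state; the packet fields, the gain, and the screen size are illustrative assumptions rather than details from the disclosure.

```python
# Illustrative handling of relative-motion packets from a "soap"-like device.
SCREEN_W, SCREEN_H = 1920, 1080
GAIN = 2.0                      # assumed pointer gain, not from the disclosure

pointer = {"x": SCREEN_W // 2, "y": SCREEN_H // 2, "selected": False}

def on_packet(dx, dy, select_pressed):
    """Translate core-relative-to-hull motion into pointer movement."""
    pointer["x"] = min(max(pointer["x"] + int(GAIN * dx), 0), SCREEN_W - 1)
    pointer["y"] = min(max(pointer["y"] + int(GAIN * dy), 0), SCREEN_H - 1)
    pointer["selected"] = select_pressed

on_packet(dx=12, dy=-3, select_pressed=False)   # core rolled under the hull
print(pointer)
```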
Abstract:
A mobile device is provided that includes a position determination mechanism and a data store of locations including positions for each location. The mobile device is configured to determine its own position and, based on that position, which location is preferred. Upon that determination, the mobile device is configured to orient a pointer in the direction of the preferred location such that a user can move in the direction of the pointer and ultimately arrive at the preferred location.
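A minimal sketch of orienting such a pointer is shown below, assuming the stored locations are latitude/longitude pairs and that "preferred" simply means nearest; the preference logic, location list, and function names are illustrative assumptions.

```python
import math

# Hypothetical data store of locations as (name, latitude, longitude).
LOCATIONS = [("cafe", 47.6205, -122.3493), ("library", 47.6097, -122.3331)]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360

def pointer_heading(lat, lon):
    """Pick the nearest stored location and return the heading to point toward it."""
    name, plat, plon = min(LOCATIONS, key=lambda l: haversine_km(lat, lon, l[1], l[2]))
    return name, bearing_deg(lat, lon, plat, plon)

print(pointer_heading(47.6150, -122.3400))
```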
Abstract:
The subject application relates to a system(s) and/or methodology that facilitate vision-based projection of any image (still or moving) onto any surface. In particular, a front-projected computer vision-based interactive surface system is provided which uses a new commercially available projection technology to obtain a compact, self-contained form factor. The subject configuration addresses installation, calibration, and portability issues that are primary concerns in most vision-based table systems.
Abstract:
Described is a multi-view display provided by combining spatial multiplexing (e.g., using a parallax barrier or lenslet) and temporal multiplexing (e.g., using a directed backlight). A scheduling algorithm generates different views by determining which light sources are illuminated at a particular time. Via the temporal multiplexing, different views may be in the same spatial viewing angle (spatial zone). Two of the views may correspond to two eyes of a person, with different video data sent to each eye to provide an autostereoscopic display for that person. Eye (head) tracking may be used to move the view or views with a person as that person moves.
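The scheduling idea can be sketched as assigning views to time slots so that views sharing a spatial zone are separated in time. The code below is a minimal illustration under assumed data structures; the view records, light-source indices, and round-robin policy are stand-ins, not the patented algorithm.

```python
# Each view is a (viewer, eye) pair assigned to a spatial zone and a directed
# backlight source; all of these values are illustrative assumptions.
views = [
    {"viewer": "A", "eye": "left",  "zone": 2, "light_source": 5},
    {"viewer": "A", "eye": "right", "zone": 2, "light_source": 6},
    {"viewer": "B", "eye": "left",  "zone": 2, "light_source": 7},  # same zone as A
    {"viewer": "B", "eye": "right", "zone": 2, "light_source": 8},
]

def schedule(views, num_slots):
    """Round-robin the views over time slots; views that share a spatial zone
    end up in different slots, so temporal multiplexing keeps them distinct."""
    slots = [[] for _ in range(num_slots)]
    for i, view in enumerate(views):
        slots[i % num_slots].append(view["light_source"])
    return slots

for t, sources in enumerate(schedule(views, num_slots=4)):
    print(f"slot {t}: illuminate light sources {sources}")
```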
Abstract:
Touch interaction with a curved display (e.g., a sphere, a hemisphere, a cylinder, etc.) is facilitated by preserving a predetermined orientation for objects. In an example embodiment, a curved display is monitored to detect a touch input on an object. If a touch input on an object is detected based on the monitoring, then one or more locations of the touch input are determined. The object may be manipulated responsive to the determined one or more locations of the touch input. While manipulation of the object is permitted, a predetermined orientation is preserved.
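One way to picture "manipulation while preserving a predetermined orientation" is to let touch input move an object across the curved surface while its rotation state is left untouched. The sketch below assumes an object tracked as an azimuth/elevation position plus a rotation angle; the class and field names are illustrative only.

```python
# Minimal sketch of orientation-preserving manipulation on a curved display.
class CurvedDisplayObject:
    def __init__(self, azimuth, elevation, rotation=0.0):
        self.azimuth = azimuth
        self.elevation = elevation
        self.rotation = rotation      # the predetermined orientation to preserve

    def drag(self, d_azimuth, d_elevation):
        """Move the object with the touch location(s), clamping at the poles."""
        self.azimuth = (self.azimuth + d_azimuth) % 360.0
        self.elevation = max(-90.0, min(90.0, self.elevation + d_elevation))
        # Manipulation is permitted, but the predetermined orientation is
        # preserved by leaving self.rotation unchanged (e.g., labels stay upright).

obj = CurvedDisplayObject(azimuth=10.0, elevation=20.0)
obj.drag(15.0, -5.0)
print(obj.azimuth, obj.elevation, obj.rotation)
```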
Abstract:
The claimed subject matter provides a system and/or a method for simulating grasping of a virtual object. Virtual 3D objects receive simulated user input forces via a 2D input surface adjacent to them. An exemplary method comprises receiving a user input corresponding to a grasping gesture that includes at least two simulated contacts with the virtual object. The grasping gesture is modeled as a simulation of frictional forces on the virtual object. A simulated physical effect on the virtual object by the frictional forces is determined. At least one microprocessor is used to display a visual image of the virtual object moving according to the simulated physical effect.
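A rough sketch of treating a two-contact grasp as friction forces on a rigid body is shown below. The friction coefficient, the contact layout, and the force model are illustrative assumptions, not values or claims from the disclosure.

```python
import numpy as np

MU = 0.6  # assumed Coulomb friction coefficient

def grasp_effect(contacts, center_of_mass):
    """Sum friction forces and torques from simulated contacts.

    Each contact is (position, normal_force, drag_velocity): the contact point,
    the pressing force into the object, and the finger's sliding velocity on the
    2D input surface, which drags the object via friction.
    """
    net_force = np.zeros(2)
    net_torque = 0.0
    for pos, normal_force, drag_velocity in contacts:
        speed = np.linalg.norm(drag_velocity)
        if speed > 1e-9:
            friction = MU * normal_force * (drag_velocity / speed)
            net_force += friction
            r = np.asarray(pos) - np.asarray(center_of_mass)
            net_torque += r[0] * friction[1] - r[1] * friction[0]  # 2D cross product
    return net_force, net_torque

# Two contacts dragging in the same direction translate the object without spin.
contacts = [((0.0, 1.0), 2.0, np.array([0.3, 0.0])),
            ((0.0, -1.0), 2.0, np.array([0.3, 0.0]))]
print(grasp_effect(contacts, center_of_mass=(0.0, 0.0)))
```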
Abstract:
Virtual controllers for visual displays are described. In one implementation, a camera captures an image of hands against a background. The image is segmented into hand areas and background areas. Various hand and finger gestures isolate parts of the background into independent areas, which are then assigned control parameters for manipulating the visual display. Multiple control parameters can be associated with attributes of multiple independent areas formed by two hands, for advanced control including simultaneous functions of clicking, selecting, executing, horizontal movement, vertical movement, scrolling, dragging, rotational movement, zooming, maximizing, minimizing, executing file functions, and executing menu choices.
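The core of the technique can be sketched as finding background regions that are fully enclosed by hand pixels and using their attributes as controls. The code below is a minimal illustration under assumed inputs: a simple brightness threshold stands in for the segmentation, and the mapping from region attributes to control parameters is only suggested in a comment.

```python
import numpy as np
from scipy import ndimage

HAND_THRESHOLD = 100   # assumed threshold separating hand pixels from background

def independent_areas(frame):
    """Find background regions fully enclosed by hand/finger pixels."""
    hand = frame > HAND_THRESHOLD
    background_labels, n = ndimage.label(~hand)
    border = set(background_labels[0, :]) | set(background_labels[-1, :]) \
           | set(background_labels[:, 0]) | set(background_labels[:, -1])
    regions = []
    for label in range(1, n + 1):
        if label in border:
            continue  # touches the image edge, so not enclosed by the hand
        ys, xs = np.nonzero(background_labels == label)
        regions.append({"area": xs.size, "centroid": (xs.mean(), ys.mean())})
    return regions

frame = np.zeros((120, 160), dtype=np.uint8)
frame[40:80, 40:100] = 255          # a "hand" blob
frame[55:65, 60:80] = 0             # an enclosed hole formed by a gesture
for region in independent_areas(frame):
    # e.g., map the centroid to cursor movement and the area to a click or zoom control
    print(region)
```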