Abstract:
Motion of one or more physical objects relative to a display surface of a display system is detected and an optical flow determined from the motion is used to manipulate a graphical object presented on the display surface. The one or more physical objects are detected in response to light reflected from the object(s) and received by a video camera. The optical flow is determined from the video camera image by identifying flow vectors for points in one or more patches included in the image that correspond to the physical objects. A proximity of a physical object to the display surface can be determined based on an intensity of light reflected from the physical object(s), or using a touch sensor such as a capacitance, pressure, or electromagnetic sensor or the like. Based on the optical flow, the graphical object can be translated, rotated, and/or scaled in size.
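For illustration only (not part of the abstract), here is a minimal Python sketch of the manipulation step: recovering a translation, rotation, and scale from flow vectors already sampled at points in an object's patch. The function name and the least-squares similarity fit are assumptions, not the patent's method.

```python
import numpy as np

def transform_from_flow(points, flows):
    """Estimate the translation, rotation, and scale implied by optical-flow
    vectors sampled at patch points (hypothetical helper; a sketch only).

    points: (N, 2) positions of tracked points inside the object's patch.
    flows:  (N, 2) flow vectors (per-frame displacement) at those points.
    """
    src = np.asarray(points, dtype=float)
    dst = src + np.asarray(flows, dtype=float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    s0, d0 = src - sc, dst - dc
    # Least-squares rotation angle between the centered point sets.
    cos = (s0 * d0).sum()
    sin = (s0[:, 0] * d0[:, 1] - s0[:, 1] * d0[:, 0]).sum()
    angle = np.arctan2(sin, cos)
    # Scale from the ratio of RMS radii; translation from centroid motion.
    scale = np.sqrt((d0 ** 2).sum() / (s0 ** 2).sum())
    return dc - sc, angle, scale

# A pure 15-degree rotation about the patch centroid is recovered exactly.
pts = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], dtype=float)
c = pts.mean(axis=0)
a = np.radians(15)
R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
t, ang, s = transform_from_flow(pts, (pts - c) @ R.T + c - pts)
print(np.degrees(ang), s)  # ~15.0, ~1.0
```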
Abstract:
The claimed subject matter provides a system and/or a method that facilitates enhancing a game, game play or playability of a game. An experience component can collect a portion of data related to a game in which the portion of data indicates at least one of a tip or a tactic for the game. A game component can dynamically incorporate the portion of data into the game during game play to enhance playability of such game for a user with assistance provided by at least one of the tip or the tactic.
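As a rough illustration of the two components (all class and method names are hypothetical, not drawn from the claims), a tip collected by an experience component might be surfaced by a game component when the matching game situation occurs:

```python
from dataclasses import dataclass, field

@dataclass
class ExperienceComponent:
    """Collects tips and tactics keyed by game situation (names illustrative)."""
    tips: dict = field(default_factory=dict)

    def collect(self, situation, tip):
        self.tips.setdefault(situation, []).append(tip)

class GameComponent:
    """Surfaces any collected tip that matches the current game situation."""
    def __init__(self, experience):
        self.experience = experience

    def on_event(self, situation):
        for tip in self.experience.tips.get(situation, []):
            print(f"[hint] {tip}")

exp = ExperienceComponent()
exp.collect("boss_fight", "Dodge left when the boss raises its arm.")
GameComponent(exp).on_event("boss_fight")   # [hint] Dodge left when ...
```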
Abstract:
A light pointer is selectively activated to direct a light beam onto an interactive display surface, forming a pattern of light that is detected by a light sensor disposed within an interactive display table. The waveband of the light produced by the light pointer is selected to correspond to a waveband to which the light sensor responds, enabling the light sensor to detect the position of the pattern on the interactive display surface, as well as characteristics that enable the location and orientation of the light pointer to be determined. Specifically, the shape and size of the pattern, and the intensity of light forming the pattern are detected by the light sensor and are processed to determine the orientation of the light pointer and its distance from the interactive display surface. The pattern may comprise various shapes, such as circles, arrows, and crosshairs.
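A minimal sketch of how shape, size, and intensity could yield orientation and distance, under simplifying assumptions the abstract does not spell out: an oblique circular beam appears as an ellipse (tilt from the axis ratio), a diverging beam's spot grows roughly linearly with distance, and peak intensity falls off with distance squared. All parameters are illustrative calibration values.

```python
import numpy as np

def pointer_pose(major_px, minor_px, peak_intensity,
                 ref_diameter_px, ref_intensity, ref_distance):
    """Rough pose estimate from the detected beam spot (all parameters are
    illustrative and would come from a prior calibration step)."""
    tilt_deg = np.degrees(np.arccos(np.clip(minor_px / major_px, 0.0, 1.0)))
    # Minor axis approximates the beam diameter, which scales with distance.
    distance = ref_distance * (minor_px / ref_diameter_px)
    expected = ref_intensity * (ref_distance / distance) ** 2
    # Agreement between measured and predicted intensity as a sanity check.
    confidence = min(peak_intensity, expected) / max(peak_intensity, expected)
    return tilt_deg, distance, confidence

print(pointer_pose(major_px=40, minor_px=28, peak_intensity=55,
                   ref_diameter_px=20, ref_intensity=100, ref_distance=1.0))
```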
Abstract:
Effects of undesired infrared light are reduced in an imaging system using an infrared light source. The desired infrared light source is activated and a first set of image data is captured during a first image capture interval. The desired infrared light source is then deactivated, and a second set of image data is captured during a second image capture interval. A composite set of image data is then generated by subtracting from first values in the first set of image data corresponding second values in the second set of image data. The composite set of image data thus takes a set of image data in which all infrared signals are collected, including both signals resulting from the IR source and other IR signals, and subtracts from it image data in which no signals result from the IR source, leaving image data that includes signals resulting only from the IR source.
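The frame-differencing step is straightforward to sketch. Below is an illustrative NumPy version (the function name and 8-bit pixel format are assumptions); the clamped subtraction keeps only light contributed by the controlled IR source.

```python
import numpy as np

def composite_ir(frame_on, frame_off):
    """Subtract the source-off frame from the source-on frame.

    frame_on:  pixels captured while the IR source is active
               (controlled IR + ambient IR).
    frame_off: pixels captured while the IR source is off (ambient IR only).
    """
    diff = frame_on.astype(np.int16) - frame_off.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

on = np.array([[200, 80]], dtype=np.uint8)   # source + sunlight, sunlight only
off = np.array([[60, 75]], dtype=np.uint8)   # sunlight, sunlight
print(composite_ir(on, off))                 # [[140   5]]
```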
Abstract:
A dynamic projected user interface device is disclosed that includes a projector, a projection controller, and an imaging sensor. The projection controller is configured to receive instructions from a computing device and to provide display images via the projector onto one or more display surfaces. The display images are indicative of a first set of input controls when the computing device is in a first operating context, and of a second set of input controls when the computing device is in a second operating context. The imaging sensor is configured to optically detect physical contacts with the one or more display surfaces.
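A toy sketch of the context-to-controls mapping (layouts, class, and method names are all hypothetical): the controller swaps the projected control set whenever the computing device reports a new operating context.

```python
# Hypothetical control layouts keyed by operating context.
LAYOUTS = {
    "media_player": ["play", "pause", "skip", "volume"],
    "text_entry":   ["keyboard", "shift", "backspace"],
}

class ProjectionController:
    """Maps the device's operating context to the projected input controls."""
    def __init__(self, project_fn):
        self.project_fn = project_fn  # callable that draws a control layout

    def on_context_change(self, context):
        self.project_fn(LAYOUTS.get(context, []))

ctrl = ProjectionController(lambda c: print("projecting:", c))
ctrl.on_context_change("media_player")   # projecting: ['play', 'pause', ...]
ctrl.on_context_change("text_entry")     # projecting: ['keyboard', ...]
```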
Abstract:
Techniques and technologies are provided which can allow for touch input with a touch screen device. In response to an attempt to select a target displayed on a screen, a callout can be rendered in a non-occluded area of the screen. The callout includes a representation of the area of the screen that is occluded by a selection entity when the attempt to select the target is made.
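One way to pick a non-occluded position for the callout, sketched with assumed geometry (a circular occluded region around the touch point, an offset toward the emptier half of the screen); none of these parameters come from the abstract itself.

```python
def place_callout(screen, touch, callout, occluded_r=40):
    """Pick a top-left position for the callout that keeps it on screen and
    outside the circle of radius occluded_r hidden by the selection entity.

    screen:  (width, height) of the display.
    touch:   (x, y) of the attempted selection.
    callout: (width, height) of the callout to render.
    """
    sw, sh = screen
    tx, ty = touch
    cw, ch = callout
    # Offset toward the half of the screen with more free space.
    x = tx + occluded_r if tx < sw / 2 else tx - occluded_r - cw
    y = ty - occluded_r - ch if ty > sh / 2 else ty + occluded_r
    # Clamp so the callout stays fully visible.
    return max(0, min(x, sw - cw)), max(0, min(y, sh - ch))

print(place_callout(screen=(800, 480), touch=(600, 400), callout=(120, 90)))
```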
Abstract:
Compensation of the effects of uncontrolled light in an imaging system using a controlled light source. Light from the controlled light source reflected by an object and uncontrolled light are detected in a plurality of frequency ranges. Intensity of the uncontrolled light is determined based on the varying sensitivity of an image sensor to light in the different frequency ranges and known emission characteristics of the controlled light source in the frequency ranges. Once the intensity of the uncontrolled light is determined, the total light detected at each point is adjusted to reduce the effects of the uncontrolled light in the resulting imaging data produced by the imaging system.
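For intuition, the per-band measurements can be treated as a small linear system. The sketch below assumes a simple model (measured = sensitivity × (reflected fraction × source emission + ambient), with ambient flat across bands), which is an illustration rather than the patent's stated model.

```python
import numpy as np

def ambient_intensity(measured, sensitivity, emission):
    """Solve for the uncontrolled (ambient) light level at one pixel.

    Per-band model (illustrative): measured_k = sensitivity_k * (r * emission_k + u)
    where r is the unknown reflected fraction of the controlled source and
    u is the unknown ambient intensity. With two or more frequency ranges
    this is a linear least-squares problem in (r, u).
    """
    m = np.asarray(measured) / np.asarray(sensitivity)  # undo sensor response
    A = np.column_stack([emission, np.ones_like(emission)])
    (r, u), *_ = np.linalg.lstsq(A, m, rcond=None)
    return r, u

# Two frequency ranges: the controlled source emits strongly in band 0 only.
r, u = ambient_intensity(measured=[0.9, 0.35], sensitivity=[1.0, 0.7],
                         emission=[1.0, 0.0])
print(f"reflected={r:.2f}, ambient={u:.2f}")   # reflected=0.40, ambient=0.50
```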
Abstract:
Described is a multi-view display provided by combining spatial multiplexing (e.g., using a parallax barrier or lenslet) and temporal multiplexing (e.g., using a directed backlight). A scheduling algorithm generates different views by determining which light sources are illuminated at a particular time. Via the temporal multiplexing, different views may occupy the same spatial viewing angle (spatial zone). Two of the views may correspond to the two eyes of a person, with different video data sent to each eye to provide an autostereoscopic display for that person. Eye (head) tracking may be used to move the view or views with a person as that person moves.
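A minimal sketch of such a scheduler, under assumptions of my own (views expressed as sets of backlight source IDs, and a greedy first-fit policy): views whose sources overlap, i.e., that share a spatial zone, are forced into different time slots, which is the temporal half of the multiplexing.

```python
def schedule_views(views, n_slots):
    """Greedy temporal scheduler: each view goes in the first time slot whose
    already-lit light sources don't overlap its own (names illustrative).

    views: list of (view_name, set_of_light_source_ids).
    """
    slots = [{"views": [], "srcs": set()} for _ in range(n_slots)]
    for name, srcs in views:
        for slot in slots:
            if slot["srcs"].isdisjoint(srcs):
                slot["views"].append(name)
                slot["srcs"] |= srcs
                break
        else:
            raise ValueError("not enough time slots for conflict-free views")
    return slots

# Persons A and B sit so that their left eyes share a spatial zone.
views = [("A-left", {3, 4}), ("A-right", {5, 6}),
         ("B-left", {3, 4}), ("B-right", {7, 8})]
for i, slot in enumerate(schedule_views(views, n_slots=2)):
    print(f"slot {i}: {slot['views']}")   # slot 0: A's eyes; slot 1: B's eyes
```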
Abstract:
Touch interaction with a curved display (e.g., a sphere, a hemisphere, a cylinder, etc.) is facilitated by preserving a predetermined orientation for objects. In an example embodiment, a curved display is monitored to detect a touch input on an object. If a touch input on an object is detected based on the monitoring, then one or more locations of the touch input are determined. The object may be manipulated responsive to the determined one or more locations of the touch input. While manipulation of the object is permitted, a predetermined orientation is preserved.
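A small sketch of the "manipulate while preserving orientation" idea, using assumed spherical coordinates and an assumed roll convention (top of the object pointing toward the pole); the abstract does not prescribe this representation.

```python
import math

class SphereObject:
    """Object on a spherical display, positioned by azimuth/elevation in
    radians. Dragging moves it across the surface, while its roll about the
    surface normal is pinned so its top keeps pointing toward the pole."""
    def __init__(self, azimuth, elevation):
        self.azimuth = azimuth
        self.elevation = elevation
        self.roll = 0.0   # the preserved, predetermined orientation

    def drag(self, d_azimuth, d_elevation):
        self.azimuth = (self.azimuth + d_azimuth) % (2 * math.pi)
        self.elevation = max(-math.pi / 2,
                             min(math.pi / 2, self.elevation + d_elevation))
        # self.roll is deliberately untouched: the object can be moved
        # freely, but its predetermined orientation is preserved.

obj = SphereObject(0.0, 0.3)
obj.drag(1.2, -0.1)
print(obj.azimuth, obj.elevation, obj.roll)   # moved, roll still 0.0
```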
Abstract:
The claimed subject matter provides a system and/or a method for simulating grasping of a virtual object. Virtual 3D objects receive simulated user input forces via a 2D input surface adjacent to them. An exemplary method comprises receiving a user input corresponding to a grasping gesture that includes at least two simulated contacts with the virtual object. The grasping gesture is modeled as a simulation of frictional forces on the virtual object. A simulated physical effect on the virtual object by the frictional forces is determined. At least one microprocessor is used to display a visual image of the virtual object moving according to the simulated physical effect.
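To make the friction model concrete, here is an illustrative 2D sketch with assumed physics (a spring-like normal force and a Coulomb friction cap); the constants, names, and force law are mine, not the claims'. Two fingers squeezing opposite sides and sliding upward produce a net lifting force with zero torque.

```python
import numpy as np

def grasp_effect(contacts, normals, motions, mu=0.6, k=50.0):
    """Model a grasp as friction at simulated contact points.

    contacts: (N, 2) contact positions on the virtual object (2D slice).
    normals:  (N, 2) outward unit surface normals at each contact.
    motions:  (N, 2) finger displacement vectors from the 2D input surface.
    Each contact presses inward (spring force k) and drags tangentially,
    capped by Coulomb friction (mu * normal force). Returns the net force
    and the torque about the object's centroid.
    """
    contacts, normals, motions = map(np.asarray, (contacts, normals, motions))
    centroid = contacts.mean(axis=0)
    force, torque = np.zeros(2), 0.0
    for p, n, m in zip(contacts, normals, motions):
        fn = k * max(0.0, -m @ n)           # normal (squeezing) force magnitude
        tangent = m - (m @ n) * n           # finger motion along the surface
        t_norm = np.linalg.norm(tangent)
        if t_norm > 0:
            ft = min(k * t_norm, mu * fn)   # friction-limited tangential force
            f = ft * tangent / t_norm
            force += f
            r = p - centroid
            torque += r[0] * f[1] - r[1] * f[0]
    return force, torque

# Two contacts squeeze opposite sides and slide upward: the object lifts.
f, t = grasp_effect(contacts=[[-1, 0], [1, 0]],
                    normals=[[-1, 0], [1, 0]],
                    motions=[[0.1, 0.2], [-0.1, 0.2]])
print(f, t)   # net upward force, zero torque
```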