Abstract:
Described herein is an apparatus that includes a curved display surface that has an interior and an exterior. The curved display surface is configured to display images thereon. The apparatus also includes an emitter that emits light through the interior of the curved display surface. A detector component analyzes light reflected from the curved display surface to detect a position on the curved display surface where a first member is in physical contact with the exterior of the curved display surface.
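As a rough illustration of the detection step only, the sketch below (not taken from the patent; the FTIR-style thresholding, camera resolution, and the linear pixel-to-angle mapping are all assumptions) finds the brightest reflection in a simulated infrared camera frame and converts that pixel to angular coordinates on a hemispherical surface.

```python
import numpy as np

TOUCH_THRESHOLD = 200          # hypothetical IR intensity cutoff (0-255)
FRAME_W, FRAME_H = 640, 480    # hypothetical camera resolution

def detect_touch(frame: np.ndarray):
    """Return (azimuth, inclination) of the brightest reflection above
    the threshold, or None if no touch is present.

    `frame` is a grayscale image of light reflected back through the
    interior of the curved surface; a fingertip pressed against the
    exterior reflects noticeably more light.
    """
    y, x = np.unravel_index(np.argmax(frame), frame.shape)
    if frame[y, x] < TOUCH_THRESHOLD:
        return None
    # Map the camera pixel to angular coordinates on a hemisphere.
    # This linear mapping stands in for a real calibration.
    azimuth = (x / FRAME_W) * 2 * np.pi
    inclination = (y / FRAME_H) * (np.pi / 2)
    return azimuth, inclination

if __name__ == "__main__":
    frame = np.zeros((FRAME_H, FRAME_W), dtype=np.uint8)
    frame[240, 320] = 255      # simulated fingertip reflection
    print(detect_touch(frame))
```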
Abstract:
Touch interaction with a curved display (e.g., a sphere, a hemisphere, a cylinder, etc.) is facilitated by preserving a predetermined orientation for objects. In an example embodiment, a curved display is monitored to detect a touch input on an object. If a touch input on an object is detected based on the monitoring, then one or more locations of the touch input are determined. The object may be manipulated responsive to the determined one or more locations of the touch input. While manipulation of the object is permitted, a predetermined orientation is preserved.
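A minimal sketch of orientation-preserving manipulation might look like the following; the spherical coordinates, the `DisplayObject` fields, and the choice of `rotation = 0` as the predetermined orientation are illustrative assumptions, not the patented method.

```python
import math
from dataclasses import dataclass

@dataclass
class DisplayObject:
    azimuth: float       # position around the display's vertical axis (radians)
    inclination: float   # angle down from the top pole (radians)
    rotation: float      # orientation of the object about its own center

def apply_drag(obj, d_azimuth, d_inclination, preserve_orientation=True):
    """Move an object in response to a touch drag.

    If `preserve_orientation` is set, the object's own rotation is held
    at a predetermined value (here: upright, rotation = 0) no matter how
    the touch manipulates its position on the curved surface.
    """
    obj.azimuth = (obj.azimuth + d_azimuth) % (2 * math.pi)
    obj.inclination = min(max(obj.inclination + d_inclination, 0.0), math.pi / 2)
    if preserve_orientation:
        obj.rotation = 0.0   # predetermined orientation, e.g. "north up"
    return obj

print(apply_drag(DisplayObject(0.0, 0.3, 1.2), d_azimuth=0.5, d_inclination=0.1))
```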
Abstract:
A range of unified software authoring tools for creating a talking paper application for integration in an end user platform is described herein. The authoring tools are easy to use and are interoperable to provide an easy and cost-effective method of creating a talking paper application. The authoring tools provide a framework for creating audio content and image content and interactively linking the audio content and the image content. The authoring tools also provide for verifying the interactively linked audio and image content and for reviewing the audio content, the image content, and the interactive linking on a display device. Finally, the authoring tools provide for saving the audio content, the image content, and the interactive linking for publication to a manufacturer for integration in an end user platform or talking paper platform.
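One plausible way to picture the linking, verification, and saving steps is a small project container like the one below; the `TalkingPaperProject` class, its field names, and the JSON output format are hypothetical, chosen only to illustrate linking image regions to audio clips, verifying the links, and saving the result for publication.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class TalkingPaperProject:
    """Minimal container for authored audio, images, and their links."""
    audio_clips: dict = field(default_factory=dict)   # clip_id -> file path
    images: dict = field(default_factory=dict)        # image_id -> file path
    links: list = field(default_factory=list)         # image/region/audio records

    def link(self, image_id, region, clip_id):
        """Interactively link a region of an image to an audio clip."""
        if image_id not in self.images or clip_id not in self.audio_clips:
            raise ValueError("link refers to unknown image or audio clip")
        self.links.append({"image": image_id, "region": region, "audio": clip_id})

    def verify(self):
        """Check that every link points at existing content."""
        return all(l["image"] in self.images and l["audio"] in self.audio_clips
                   for l in self.links)

    def save(self, path):
        """Serialize the project for publication to a manufacturer."""
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)
```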
Abstract:
This document relates to multiple mouse character entry. More particularly, the document relates to multiple mouse character entry tools for use on a common or shared graphical user interface (GUI). In some implementations, the multiple mouse character entry tools (MMCE tools) can generate a GUI that includes multiple distinctively identified cursors. Individual cursors can be controlled by individual users via a corresponding mouse. The MMCE tools can associate a set of characters with an individual cursor such that an individual user can use the mouse's scroll wheel to scroll to specific characters of the set. The user can select an individual character by clicking a button of the mouse.
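The scroll-and-click entry scheme could be sketched roughly as follows; the `MouseCursor` class, the fixed character set, and the color field are illustrative assumptions rather than the MMCE tools' actual design.

```python
from dataclasses import dataclass

CHARSET = "abcdefghijklmnopqrstuvwxyz"

@dataclass
class MouseCursor:
    user_id: str
    color: str                 # distinctive identification in the shared GUI
    index: int = 0             # position within the assigned character set
    text: str = ""             # characters entered so far

    def on_scroll(self, steps: int):
        """Scroll wheel moves through the character set (wraps around)."""
        self.index = (self.index + steps) % len(CHARSET)
        return CHARSET[self.index]

    def on_click(self):
        """A mouse button click selects the currently highlighted character."""
        self.text += CHARSET[self.index]
        return self.text

# Two users sharing one GUI, each with a distinctively colored cursor.
alice, bob = MouseCursor("alice", "red"), MouseCursor("bob", "blue")
alice.on_scroll(7); alice.on_click()     # selects 'h'
bob.on_scroll(4); bob.on_click()         # selects 'e'
print(alice.text, bob.text)
```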
Abstract:
The invention provides a method to stabilize a pointer displayed within an output image. The method enables the user to magnify selected areas within the output image. This allows the user to 'zoom' in on areas of interest in the output image, and to make accurate selections with the stabilized pointer. Design features of the method enable pixel- and sub-pixel-accurate pointing, which is not possible with most conventional direct pointing devices. The invention can be applied to both 2D and 3D pointers.
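A simple way to picture stabilization plus magnification is an exponential moving average combined with zoom-scaled displacement, as in the sketch below; the smoothing filter, zoom factor, and mapping are assumptions, not the patented design.

```python
class StabilizedPointer:
    """Smooths raw pointer samples and scales motion by the zoom factor.

    With magnification, a large hand movement produces only a small
    displacement inside the zoomed region, which is what yields pixel
    and sub-pixel pointing accuracy.
    """
    def __init__(self, alpha=0.2, zoom=4.0):
        self.alpha = alpha       # smoothing strength (0 < alpha <= 1)
        self.zoom = zoom         # magnification of the selected area
        self.x = self.y = 0.0    # stabilized pointer position

    def update(self, raw_x, raw_y):
        # Exponential moving average suppresses jitter in the raw samples.
        self.x += self.alpha * (raw_x - self.x)
        self.y += self.alpha * (raw_y - self.y)
        return self.x, self.y

    def to_magnified(self, cx, cy):
        # Displacement from the zoom center is divided by the zoom factor,
        # so each screen pixel of hand motion maps to a sub-pixel step.
        return cx + (self.x - cx) / self.zoom, cy + (self.y - cy) / self.zoom

p = StabilizedPointer()
p.update(100.0, 100.0)
print(p.to_magnified(cx=96.0, cy=96.0))
```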
Abstract:
The present invention is a system that manages a volumetric display using volume windows. The volume windows have the typical functions, such as minimize, resize, etc., which operate in a volume. When initiated by an application, a volume window is assigned to the application in a volume window data structure. Application data produced by an application is assigned to the windows according to which applications are assigned to which windows in the volume window data structure. Input events are assigned to the windows responsive to whether they are spatial or non-spatial. Spatial events are assigned to the window surrounding the event or cursor, with a policy resolving situations in which more than one window surrounds the cursor. Non-spatial events are assigned to the active or working window.
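The window assignment and event routing could be sketched as below; the axis-aligned bounds, the "smallest enclosing window wins" rule, and all identifiers are assumptions made for illustration, since the abstract only says that a policy resolves overlapping windows.

```python
class VolumeWindowManager:
    """Tracks which application owns which volume window and routes events."""

    def __init__(self):
        self.windows = {}        # window_id -> {"app": ..., "bounds": (min, max)}
        self.active = None       # window receiving non-spatial events

    def create_window(self, window_id, app, bounds):
        """Assign a volume window to the requesting application."""
        self.windows[window_id] = {"app": app, "bounds": bounds}
        self.active = window_id
        return window_id

    def _contains(self, bounds, point):
        (x0, y0, z0), (x1, y1, z1) = bounds
        x, y, z = point
        return x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

    def dispatch(self, event):
        """Spatial events go to the window surrounding the event position,
        with overlaps resolved by a simple policy (smallest window wins).
        Non-spatial events go to the active window."""
        if "position" in event:
            hits = [wid for wid, w in self.windows.items()
                    if self._contains(w["bounds"], event["position"])]
            if not hits:
                return None
            def volume(wid):
                (x0, y0, z0), (x1, y1, z1) = self.windows[wid]["bounds"]
                return (x1 - x0) * (y1 - y0) * (z1 - z0)
            return min(hits, key=volume)
        return self.active

wm = VolumeWindowManager()
wm.create_window("w1", "viewer", ((0, 0, 0), (10, 10, 10)))
wm.create_window("w2", "editor", ((2, 2, 2), (5, 5, 5)))
print(wm.dispatch({"position": (3, 3, 3)}))   # spatial: smallest enclosing window
print(wm.dispatch({"key": "a"}))              # non-spatial: active window
```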
Abstract:
The present invention is a system that allows a number of 3D volumetric display or output configurations, such as dome, cubical and cylindrical volumetric displays, to interact with a number of different input configurations, such as a three-dimensional position sensing system having a volume sensing field, a planar position sensing system having a digitizing tablet, and a non-planar position sensing system having a sensing grid formed on a dome. The user interacts via the input configurations, such as by moving a digitizing stylus on the sensing grid formed on the dome enclosure surface. This interaction affects the content of the volumetric display by mapping positions and corresponding vectors of the stylus to a moving cursor within the 3D display space of the volumetric display that is offset from a tip of the stylus along the vector.
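In its simplest reading, the mapping from stylus to offset cursor amounts to adding a scaled stylus vector to the tip position; the sketch below assumes a unit dome and a fixed offset, both hypothetical.

```python
import numpy as np

def cursor_from_stylus(tip, direction, offset=0.1):
    """Map a stylus tip position and pointing vector on the dome surface
    to a cursor inside the 3D display volume, offset from the tip along
    the (normalized) stylus vector."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    return np.asarray(tip, dtype=float) + offset * d

# Stylus resting at the apex of a unit dome, pointing straight down into
# the display volume: the cursor appears 0.1 units below the tip.
print(cursor_from_stylus(tip=(0.0, 0.0, 1.0), direction=(0.0, 0.0, -1.0)))
```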
Abstract:
A system that provides a bimanual user interface in which an input device is provided for each of the user's hands, a left hand (LH) device and a right hand (RH) device. The input devices are used in conjunction with a large-format, upright, human-scale display at which the user can stand and upon which the input devices are moved. The positions of the input devices on the display are marked by displayed cursors. The system detects the position of the input devices relative to the display and draws a vector, corresponding to unfastened tape, between the cursors of the corresponding input devices, pointing from the LH device to the RH device. By changing the state of the LH input device, the unfastened tape can be fastened or pinned along the vector as the user moves the LH device toward the RH device. By changing the state of the RH device, the tape can be unfastened by moving the LH device away from the RH device. Straight lines are drawn by holding the RH device fixed while the LH device pins the tape. Curves are drawn by moving the RH device while the LH device pins the tape. The switch between straight and curved lines occurs without an explicit mode switch, simply by keeping the RH device fixed or moving it. The radius of curvature of curved lines corresponds to the separation between the LH and RH devices.
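One way to see why no explicit mode switch is needed is the toy update function below: it only checks whether the RH device moved while the tape is pinned, and reports the LH-RH separation as the radius of curvature. The function name, the movement threshold, and the return format are assumptions for illustration, not the system's actual logic.

```python
import math

def tape_step(lh, rh, prev_rh, pinned):
    """One update of the tape-drawing interaction.

    Returns ("straight", separation) when the RH device is held fixed
    while the LH device pins the tape, or ("curved", radius) when the
    RH device moves, with the radius of curvature equal to the LH-RH
    separation. The mode falls out of whether the RH device moved.
    """
    if not pinned:
        return ("unfastened", None)
    separation = math.dist(lh, rh)
    if math.dist(rh, prev_rh) < 1e-6:       # RH held fixed -> straight line
        return ("straight", separation)
    return ("curved", separation)           # RH moving -> arc of that radius

print(tape_step(lh=(0, 0), rh=(5, 0), prev_rh=(5, 0), pinned=True))   # straight
print(tape_step(lh=(0, 0), rh=(5, 1), prev_rh=(5, 0), pinned=True))   # curved
```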
Abstract:
An input system for controlling the position or motion of a cursor in three dimensions that uses x, z position for inputting two coordinates and tilt in a plane (x-y or z-y) to input a third (and possibly a fourth) coordinate. The invention is moved about on a surface for inputting two of the dimensions and tilted to input the third. The amount or degree of tilt and the direction of tilt control the input of the third dimension. The base of the hand-held device is curved so that the device can be tilted even while it is moved in two dimensions along the surface of the tablet. Tilting can be along two orthogonal axes, allowing the device to input four coordinates if desired. The coil can also have switched resistors, controlled by mouse buttons, connected to it; the tablet can sense when these are activated, allowing clutching and selection operations like those of a conventional mouse.
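A rough sketch of how position plus tilt could yield three or four coordinates follows; the tilt range, the linear mapping, and all parameter names are assumptions, not the device's actual calibration.

```python
def decode_input(x, z, tilt_xy_deg, tilt_zy_deg=0.0,
                 tilt_range_deg=45.0, depth_range=1.0):
    """Convert one tablet sample into up to four cursor coordinates.

    x, z         -- position of the device on the tablet surface
    tilt_xy_deg  -- tilt in the x-y plane, mapped to the third coordinate
    tilt_zy_deg  -- optional tilt in the z-y plane, a fourth coordinate
    The amount of tilt sets the magnitude; its sign gives the direction.
    """
    def tilt_to_coord(angle):
        clamped = max(-tilt_range_deg, min(tilt_range_deg, angle))
        return (clamped / tilt_range_deg) * depth_range

    y = tilt_to_coord(tilt_xy_deg)   # third coordinate from amount/direction of tilt
    w = tilt_to_coord(tilt_zy_deg)   # optional fourth coordinate
    return x, y, z, w

print(decode_input(x=0.3, z=0.7, tilt_xy_deg=22.5))   # -> (0.3, 0.5, 0.7, 0.0)
```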