Abstract:
A range of unified software authoring tools for creating a talking paper application for integration in an end user platform are described herein. The authoring tools are easy to use and interoperable, providing an easy and cost-effective method of creating a talking paper application. The authoring tools provide a framework for creating audio content and image content and for interactively linking the audio content and the image content. The authoring tools also provide for verifying the interactively linked audio and image content and for reviewing the audio content, the image content and the interactive linking on a display device. Finally, the authoring tools provide for saving the audio content, the image content and the interactive linking for publication to a manufacturer for integration in an end user platform or talking paper platform.
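A minimal sketch of how such interactive linking might be represented in an authoring tool is shown below; the class and field names (AudioClip, ImageRegion, TalkingPaperProject) are illustrative assumptions, not the described tools themselves.

from dataclasses import dataclass, field
import json

# Illustrative sketch only: names and structure are assumptions,
# not the authoring tools described in the abstract.

@dataclass
class AudioClip:
    clip_id: str
    path: str            # location of the recorded audio file

@dataclass
class ImageRegion:
    region_id: str
    image_path: str
    bounds: tuple        # (x, y, width, height) of the touch-sensitive area
    linked_clip_id: str  # audio clip played when this region is activated

@dataclass
class TalkingPaperProject:
    clips: list = field(default_factory=list)
    regions: list = field(default_factory=list)

    def verify_links(self):
        """Check that every image region points at an existing audio clip."""
        clip_ids = {c.clip_id for c in self.clips}
        return [r for r in self.regions if r.linked_clip_id not in clip_ids]

    def save_for_publication(self, out_path):
        """Save the linked content so a manufacturer could integrate it."""
        with open(out_path, "w") as f:
            json.dump({
                "clips": [c.__dict__ for c in self.clips],
                "regions": [r.__dict__ for r in self.regions],
            }, f, indent=2)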
Abstract:
A method maps positions of a direct input device to locations of a pointer displayed on a surface. The method performs absolute mapping between physical positions of the direct input device and virtual locations of the pointer on a display device when operating in an absolute mapping mode, and relative mapping between the physical positions of the input device and the locations of the pointer when operating in a relative mapping mode. Switching between the absolute mapping and the relative mapping occurs in response to control signals detected from the direct input device.
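A minimal sketch of the two mapping modes and a mode switch driven by a control signal follows; the scaling, clamping, and signal name are assumptions made for illustration.

# Illustrative sketch: the scaling, state handling, and signal names are
# assumptions, not the patented method itself.

class PointerMapper:
    def __init__(self, surface_size, display_size):
        self.surface_w, self.surface_h = surface_size    # input device surface
        self.display_w, self.display_h = display_size    # display in pixels
        self.mode = "absolute"
        self.pointer = (0.0, 0.0)
        self.last_pos = None

    def on_control_signal(self, signal):
        """Switch modes in response to a control signal from the device."""
        if signal == "toggle_mode":
            self.mode = "relative" if self.mode == "absolute" else "absolute"
            self.last_pos = None

    def map(self, device_pos):
        x, y = device_pos
        if self.mode == "absolute":
            # Absolute: device position maps directly to a display location.
            self.pointer = (x / self.surface_w * self.display_w,
                            y / self.surface_h * self.display_h)
        else:
            # Relative: only the change in device position moves the pointer.
            if self.last_pos is not None:
                dx, dy = x - self.last_pos[0], y - self.last_pos[1]
                px, py = self.pointer
                self.pointer = (min(max(px + dx, 0), self.display_w),
                                min(max(py + dy, 0), self.display_h))
        self.last_pos = (x, y)
        return self.pointer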
Abstract:
The present invention is directed to a two-handed input control system that dynamically changes the input-to-object mapping used to map movement of a graphical object in a display of a virtual scene as the viewpoint of the virtual scene changes. As input for changing the position of the graphical object occurs, the mapping is revised to reflect changes in the viewpoint so that the object moves as inherently expected. That is, changes to the viewpoint change the mapping so that a correspondence between the viewpoint and the input space is always maintained. During movement of the object, a screen cursor is visually suppressed so that the movement of the graphical object and the screen cursor do not split the attention of the user. The screen cursor is always kept within the visual display region of the virtual scene, even when the object moves out of that region, by moving the cursor to the center of the screen when it reaches an edge of the screen.
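A minimal sketch of a viewpoint-dependent input mapping and the edge re-centering behaviour described above; the rotation-only viewpoint model and 2D screen coordinates are simplifying assumptions.

import math

# Illustrative sketch: a rotation-only viewpoint and 2D coordinates are
# simplifying assumptions, not the system described in the abstract.

def map_input_to_object(dx, dy, viewpoint_angle):
    """Rotate the raw input delta so it matches the current viewpoint,
    keeping the correspondence between input space and view."""
    c, s = math.cos(viewpoint_angle), math.sin(viewpoint_angle)
    return (c * dx - s * dy, s * dx + c * dy)

def update_cursor(cursor, dx, dy, screen_w, screen_h):
    """Move the screen cursor; re-center it when it reaches a screen edge
    so it always stays within the visible display region."""
    x, y = cursor[0] + dx, cursor[1] + dy
    if x <= 0 or x >= screen_w or y <= 0 or y >= screen_h:
        return (screen_w / 2.0, screen_h / 2.0)
    return (x, y)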
Abstract:
Described herein is an apparatus that includes a curved display surface that has an interior and an exterior. The curved display surface is configured to display images thereon. The apparatus also includes an emitter that emits light through the interior of the curved display surface. A detector component analyzes light reflected from the curved display surface to detect a position on the curved display surface where a first member is in physical contact with the exterior of the curved display surface.
Abstract:
The present invention is a system that places 2D user interface widgets in optimal positions in a 3D volumetric display, where they can be easily used based on the knowledge users have about traditional 2D display systems. The widgets are placed on a shell or outer edge surface of the volumetric display, in a ring around the outside bottom of the display, in a vertical or horizontal plane within the display, and/or responsive to the user's focus of attention. Virtual 2D widgets are mapped to volumetric voxels of the 3D display system. This mapping includes any mapping between a 2D representation or virtual display map of the widget and the corresponding voxels. For example, a 2D texture map of the widget image may be mapped into voxels. Control actions in the 3D volume initiated by conventional control devices, such as a mouse or a touch-sensitive dome enclosure surface, are mapped to controls of the widgets and the appropriate control functions are performed.
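One way such a 2D-to-voxel mapping could look in code is sketched below; placing the widget's texture map on a single horizontal voxel plane is an assumption made for illustration.

# Illustrative sketch: copying the widget's 2D texture map onto a horizontal
# plane of voxels is one assumed placement, not the only one described.

def map_widget_to_voxels(texture, volume, origin, z_plane):
    """Copy a 2D widget texture (rows of pixel values) into the voxels of
    a volumetric display at a fixed height."""
    ox, oy = origin
    for v, row in enumerate(texture):
        for u, pixel in enumerate(row):
            volume[ox + u][oy + v][z_plane] = pixel
    return volume

# Example: a 2x2 widget texture mapped into a small 4x4x4 voxel volume.
volume = [[[0 for _ in range(4)] for _ in range(4)] for _ in range(4)]
texture = [[1, 2],
           [3, 4]]
volume = map_widget_to_voxels(texture, volume, origin=(1, 1), z_plane=0)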
Abstract:
The present invention is a system that creates a volumetric display and a user controllable volumetric pointer within the volumetric display. The user can point by aiming a beam which is vector, planar or tangent based, positioning a device in three-dimensions in association with the display, touching a digitizing surface of the display enclosure or otherwise inputting position coordinates. The cursor can take a number of different forms including a ray, a point, a volume and a plane. The ray can include a ring, a bead, a segmented wand, a cone and a cylinder. The user designates an input position and the system maps the input position to a 3D cursor position within the volumetric display. The system also determines whether any object has been designated by the cursor by determining whether the object is within a region of influence of the cursor. The system also performs any function activated in association with the designation.
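A minimal sketch of mapping an input position to a 3D cursor and testing whether an object falls within the cursor's region of influence; a point cursor with a spherical region of influence is just one of the cursor forms mentioned above, and the clamping used as the mapping is an assumption.

import math

# Illustrative sketch: a point cursor with a spherical region of influence
# is one of the cursor forms mentioned; the clamping mapping is an assumption.

def map_input_to_cursor(input_pos, display_bounds):
    """Clamp an input coordinate into the volumetric display's bounds to
    produce a 3D cursor position."""
    return tuple(min(max(p, lo), hi)
                 for p, (lo, hi) in zip(input_pos, display_bounds))

def designated_objects(cursor, objects, influence_radius):
    """Return the objects whose positions lie within the cursor's region
    of influence."""
    return [name for name, pos in objects.items()
            if math.dist(cursor, pos) <= influence_radius]

bounds = [(0, 10), (0, 10), (0, 10)]
cursor = map_input_to_cursor((4.0, 12.0, 5.0), bounds)   # clamped to (4, 10, 5)
objects = {"sphere": (4.5, 9.5, 5.0), "cube": (0.0, 0.0, 0.0)}
print(designated_objects(cursor, objects, influence_radius=1.0))  # ['sphere']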
Abstract:
The present invention is a system that allows a user to physically rotate a three-dimensional volumetric display enclosure with a corresponding rotation of the display contents. The rotation of the enclosure is sampled with an encoder, and the computer maintaining the scene virtually rotates the display contents by an amount corresponding to the physical rotation before rendering. This allows the user to remain in one position while viewing different parts of the displayed scene corresponding to different viewpoints. The display contents can be rotated in direct correspondence with the display enclosure or with a gain (positive or negative) that accelerates the rotation of the contents with respect to the physical rotation of the enclosure. Any display widgets in the scene, such as a virtual keyboard, can be kept stationary with respect to the user while the scene contents rotate by applying a negative rotational gain to the widgets. The rotation can also be controlled by a time value such that the rotation continues until a specified time is reached or expires.
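A minimal sketch of applying a rotational gain to the encoder reading, with a compensating negative gain that keeps widgets stationary; the specific gain arithmetic is an assumed reading of the abstract.

# Illustrative sketch: the gain arithmetic below is an assumed reading of
# the abstract, not the patented implementation.

def scene_rotation(encoder_angle, gain=1.0):
    """Virtual rotation of the scene for a physical enclosure rotation
    sampled by the encoder, accelerated (or reversed) by a gain."""
    return encoder_angle * gain

def widget_counter_rotation(encoder_angle, gain=1.0):
    """Negative rotational gain applied to widgets (e.g. a virtual keyboard)
    so they remain stationary with respect to the user."""
    return -scene_rotation(encoder_angle, gain)

# A 30-degree physical turn with gain 2.0 rotates the scene 60 degrees;
# the widgets are counter-rotated by 60 degrees and so appear fixed.
angle = 30.0
print(scene_rotation(angle, gain=2.0))             # 60.0
print(scene_rotation(angle, gain=2.0)
      + widget_counter_rotation(angle, gain=2.0))  # 0.0 -> widget stays put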
Abstract:
The present invention is a widget display system for a volumetric or true three-dimensional (3D) display that provides a volumetric or omni-viewable widget that can be viewed and interacted with from any location around the volumetric display. The widget can be viewed from any location by duplicating the widget such that all locations around the display are within the viewing range of the widget. A widget can be provided with multiple viewing surfaces or faces, making the widget omni-directional. A widget can be continuously rotated to face all of the possible locations of users over a period of time. User locations can also be determined and the widget oriented to face the users when selected.
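A minimal sketch of duplicating widget faces around the display and orienting a widget toward a determined user location; the 2D top-down layout and angle convention are assumptions for illustration.

import math

# Illustrative sketch: a 2D top-down layout and a simple yaw angle are
# assumptions used only to show the duplicate-and-orient idea.

def orient_widget_toward_user(widget_pos, user_pos):
    """Return the yaw angle (degrees) that turns the widget's face toward
    the user's location around the volumetric display."""
    dx = user_pos[0] - widget_pos[0]
    dy = user_pos[1] - widget_pos[1]
    return math.degrees(math.atan2(dy, dx))

def duplicate_angles(n_copies):
    """Angles at which to place duplicated widget faces so every location
    around the display falls within some face's viewing range."""
    return [i * 360.0 / n_copies for i in range(n_copies)]

print(orient_widget_toward_user((0.0, 0.0), (1.0, 1.0)))  # 45.0
print(duplicate_angles(4))  # [0.0, 90.0, 180.0, 270.0]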
Abstract:
A method, computer system and computer program are provided for using a suggestive modeling interface. The method is a computer-implemented method of rendering sketches, comprising the steps of: (1) a user activating a sketching application; (2) in response, the sketching application displaying on a screen a suggestive modeling interface; (3) the sketching application importing a sketch into the suggestive modeling interface; and (4) the sketching application retrieving from a database one or more suggestions based on the sketch. The method allows a user, interactively using the sketching application, to create a drawing guided by the imported sketch by selectively using one or more image-guided drawing tools provided by the sketching application. The present invention is well-suited for three-dimensional modeling applications.
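A minimal sketch of steps (3) and (4) listed above; the sketch representation, the retrieval feature (stroke count), and the in-memory dictionary standing in for the database are all assumptions, not the described application.

# Illustrative sketch: the feature used for retrieval (stroke count) and the
# in-memory "database" are assumptions, not the described application.

SUGGESTION_DB = {
    1: ["circle", "line"],
    2: ["cross", "corner"],
    4: ["rectangle", "window frame"],
}

def import_sketch(strokes):
    """Step 3: import a sketch as a list of strokes (each a list of points)."""
    return {"strokes": strokes}

def retrieve_suggestions(sketch):
    """Step 4: look up suggestions in the database based on the sketch."""
    return SUGGESTION_DB.get(len(sketch["strokes"]), [])

# Steps 1-2 (activating the application and displaying the interface) would
# be handled by the surrounding application; here we exercise steps 3-4.
sketch = import_sketch([[(0, 0), (1, 0)], [(1, 0), (1, 1)],
                        [(1, 1), (0, 1)], [(0, 1), (0, 0)]])
print(retrieve_suggestions(sketch))  # ['rectangle', 'window frame']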