Abstract:
Touch interaction with a curved display (e.g., a sphere, a hemisphere, a cylinder, etc.) is enabled through various user interface (UI) features. In an example embodiment, a curved display is monitored to detect a touch input. If a touch input is detected based on the act of monitoring, then one or more locations of the touch input are determined. Responsive to the determined one or more locations of the touch input, at least one UI feature is implemented. Example UI features include an orb-like invocation gesture feature, a rotation-based dragging feature, a send-to-dark-side interaction feature, and an object representation and manipulation by proxy representation feature.
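A minimal sketch of the monitor/detect/locate/dispatch flow described above, assuming touch locations reported in spherical coordinates; the `Touch` type, `choose_ui_feature` function, and the dispatch heuristics are illustrative assumptions, not the patent's method:

```python
from dataclasses import dataclass

@dataclass
class Touch:
    azimuth: float      # angle around the display's vertical axis (radians)
    inclination: float  # angle down from the top pole (radians)

def choose_ui_feature(touches):
    """Toy dispatch: map the detected touch locations to one of the
    abstract's example UI features. Real criteria would depend on the
    gesture's shape and motion, which the abstract does not specify."""
    if len(touches) >= 2:
        return "orb-like invocation gesture"
    if len(touches) == 1:
        return "rotation-based dragging"
    return None  # no touch input detected

# Usage: two simultaneous touches detected on a spherical display.
touches = [Touch(0.1, 1.2), Touch(2.3, 1.1)]
print(choose_ui_feature(touches))  # -> orb-like invocation gesture
```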
Abstract:
The present invention is a system that allows a number of 3D volumetric display or output configurations, such as dome, cubical and cylindrical volumetric displays, to interact with a number of different input configurations, such as a three-dimensional position sensing system having a volume sensing field, a planar position sensing system having a digitizing tablet, and a non-planar position sensing system having a sensing grid formed on a dome. The user interacts via the input configurations, such as by moving a digitizing stylus on the sensing grid formed on the dome enclosure surface. This interaction affects the content of the volumetric display by mapping positions and corresponding vectors of the stylus to a moving cursor within the 3D display space of the volumetric display, where the cursor is offset from the tip of the stylus along the stylus vector.
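A minimal sketch of the final mapping step, assuming the stylus reports a 3D tip position and a pointing direction; the function name and the fixed offset distance are hypothetical:

```python
import math

def cursor_from_stylus(tip, direction, offset):
    """Place the 3D cursor at a point offset from the stylus tip along the
    stylus vector, mapping surface input into the volumetric display space."""
    norm = math.sqrt(sum(c * c for c in direction))
    unit = tuple(c / norm for c in direction)
    return tuple(t + offset * u for t, u in zip(tip, unit))

# Stylus tip on the dome surface, pointing inward; cursor floats 5 units in.
print(cursor_from_stylus(tip=(0.0, 0.0, 10.0),
                         direction=(0.0, 0.0, -1.0),
                         offset=5.0))  # -> (0.0, 0.0, 5.0)
```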
Abstract:
A machine learning model is trained by instructing a user to perform prescribed gestures, sampling signals from EMG sensors arranged arbitrarily on the user's forearm with respect to locations of muscles in the forearm, extracting feature samples from the sampled signals, labeling the feature samples according to the corresponding gestures instructed to be performed, and training the machine learning model with the labeled feature samples. Subsequently, gestures may be recognized using the trained machine learning model by sampling signals from the EMG sensors, extracting from the signals unlabeled feature samples of the same type as those extracted during the training, passing the unlabeled feature samples to the machine learning model, and outputting from the machine learning model indicia of a gesture classified by the machine learning model.
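A minimal sketch of the training and recognition pipeline, assuming root-mean-square per-channel features and a scikit-learn SVC as the model; the abstract names neither a specific feature type nor a specific model, so both are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

def rms_features(window):
    """Root-mean-square per EMG channel: a placement-agnostic feature
    that works even with sensors arranged arbitrarily on the forearm."""
    return np.sqrt(np.mean(window ** 2, axis=0))

# Training: windows of sampled EMG signals, labeled by the instructed gesture.
rng = np.random.default_rng(0)
windows = rng.normal(size=(100, 64, 8))    # 100 windows, 64 samples, 8 sensors
labels = rng.integers(0, 3, size=100)      # ids of the gestures the user made
X = np.array([rms_features(w) for w in windows])
model = SVC().fit(X, labels)

# Recognition: extract the same feature type, unlabeled, and classify.
new_window = rng.normal(size=(64, 8))
print(model.predict([rms_features(new_window)]))  # indicia of the gesture
```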
Abstract:
A range of unified software authoring tools for creating a talking paper application for integration in an end user platform are described herein. The authoring tools are easy to use and are interoperable to provide an easy and cost-effective method of creating a talking paper application. The authoring tools provide a framework for creating audio content and image content and interactively linking the audio content and the image content. The authoring tools also provide for verifying the interactively linked audio and image content, and for reviewing the audio content, the image content and the interactive linking on a display device. Finally, the authoring tools provide for saving the audio content, the image content and the interactive linking for publication to a manufacturer for integration in an end user platform or talking paper platform.
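A minimal sketch of interactively linking audio and image content, verifying the links, and saving them for publication, assuming a JSON file format and rectangular image hotspots; the class names, region format, and file format are all hypothetical:

```python
from dataclasses import dataclass, field
import json

@dataclass
class TalkingPaperLink:
    """One interactive link: a region of an image that triggers an audio clip."""
    image_file: str
    region: tuple       # (x, y, width, height) hotspot on the printed page
    audio_file: str

@dataclass
class TalkingPaperApp:
    links: list = field(default_factory=list)

    def verify(self):
        # Minimal check that every link pairs image content with audio content.
        return all(l.image_file and l.audio_file for l in self.links)

    def save(self, path):
        # Persist for publication to a manufacturer / end user platform.
        with open(path, "w") as f:
            json.dump([l.__dict__ for l in self.links], f)

app = TalkingPaperApp([TalkingPaperLink("page1.png", (10, 10, 50, 20), "hello.wav")])
assert app.verify()
app.save("talking_paper.json")
```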
Abstract:
The present invention is a system that creates a volumetric display and a user controllable volumetric pointer within the volumetric display. The user can point by aiming a beam which is vector, planar or tangent based, positioning a device in three-dimensions in association with the display, touching a digitizing surface of the display enclosure or otherwise inputting position coordinates. The cursor can take a number of different forms including a ray, a point, a volume and a plane. The ray can include a ring, a bead, a segmented wand, a cone and a cylinder. The user designates an input position and the system maps the input position to a 3D cursor position within the volumetric display. The system also determines whether any object has been designated by the cursor by determining whether the object is within a region of influence of the cursor. The system also performs any function activated in association with the designation.
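A minimal sketch of the designation test, assuming a point cursor with a spherical region of influence (the abstract also allows ray, plane, and volume cursors, whose tests would differ):

```python
def within_influence(cursor, obj_center, radius):
    """Designation test: is the object inside the cursor's region of
    influence? Here the region is a sphere of the given radius around
    a point cursor in the volumetric display space."""
    dist2 = sum((c - o) ** 2 for c, o in zip(cursor, obj_center))
    return dist2 <= radius ** 2

# A point cursor at the origin with a 2-unit region of influence.
print(within_influence((0, 0, 0), (1.0, 1.0, 1.0), radius=2.0))  # -> True
```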
Abstract:
Touch interaction with a curved display (e.g., a sphere, a hemisphere, a cylinder, etc.) is facilitated by preserving a predetermined orientation for objects. In an example embodiment, a curved display is monitored to detect a touch input on an object. If a touch input on an object is detected based on the monitoring, then one or more locations of the touch input are determined. The object may be manipulated responsive to the determined one or more locations of the touch input. While manipulation of the object is permitted, a predetermined orientation is preserved.
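A minimal sketch of preserving a predetermined orientation while a drag manipulates an object's position, assuming spherical-coordinate positions and a fixed "upright" rotation; the types and constant are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class DisplayObject:
    azimuth: float      # position around the curved display (radians)
    inclination: float  # position down from the top pole (radians)
    rotation: float     # object's own orientation about its surface normal

PREDETERMINED_ROTATION = 0.0  # e.g., keep an object's text upright

def drag(obj, d_azimuth, d_inclination):
    """Move the object in response to the touch locations, but pin the
    rotation so the drag cannot spin the object out of its orientation."""
    obj.azimuth += d_azimuth
    obj.inclination += d_inclination
    obj.rotation = PREDETERMINED_ROTATION  # orientation preserved

photo = DisplayObject(0.0, 1.0, 0.7)
drag(photo, 0.5, -0.2)
print(photo)  # position changed; rotation pinned back to 0.0
```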
Abstract:
A method, system and computer program for organizing and visualizing display objects within a virtual environment are provided. In one aspect, attributes of display objects define the interaction between display objects according to pre-determined rules, including rules simulating real world mechanics, thereby enabling enriched user interaction. The present invention further provides for the use of piles as an organizational entity for desktop objects. The present invention further provides for fluid interaction techniques for committing actions on display objects in a virtual interface. A number of other interaction and visualization techniques are disclosed.
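A minimal sketch of a pile as an organizational entity, with browse and spread-out interactions; the methods shown are assumptions for illustration, not the patented techniques themselves:

```python
from dataclasses import dataclass, field

@dataclass
class Pile:
    """A pile: an ordered stack of desktop objects that can be browsed
    or spread out in place without formally filing the items."""
    items: list = field(default_factory=list)

    def add(self, item):
        self.items.append(item)            # new items land on top

    def browse(self):
        return list(reversed(self.items))  # leaf through, top first

    def spread(self):
        # Fluid interaction: fan the pile out for in-place viewing.
        return list(enumerate(self.items))

inbox = Pile()
inbox.add("report.pdf")
inbox.add("photo.png")
print(inbox.browse())  # -> ['photo.png', 'report.pdf']
```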
Abstract:
A method maps positions of a direct input device to locations of a pointer displayed on a surface. The method performs absolute mapping between physical positions of the direct input device and virtual locations of a pointer on a display device when operating in an absolute mapping mode, and relative mapping between the physical positions of the input device and the locations of the pointer when operating in a relative mapping mode. Switching between the absolute and relative mappings occurs in response to control signals detected from the direct input device.
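A minimal sketch of the two mapping modes and the mode switch, assuming normalized 2D coordinates and a gain parameter for relative mode; both assumptions are illustrative, not the patent's specifics:

```python
class PointerMapper:
    """Absolute mapping: device position becomes the pointer location.
    Relative mapping: only deltas of device position move the pointer."""
    def __init__(self, scale=1.0):
        self.mode = "absolute"
        self.scale = scale          # relative-mode gain (assumed parameter)
        self.pointer = (0.0, 0.0)
        self._last = None

    def set_mode(self, mode):
        # Switched in response to control signals from the input device.
        self.mode = mode
        self._last = None

    def update(self, device_pos):
        if self.mode == "absolute":
            self.pointer = device_pos
        else:
            if self._last is not None:
                dx = device_pos[0] - self._last[0]
                dy = device_pos[1] - self._last[1]
                self.pointer = (self.pointer[0] + self.scale * dx,
                                self.pointer[1] + self.scale * dy)
            self._last = device_pos
        return self.pointer

m = PointerMapper()
m.update((0.3, 0.4))            # absolute: pointer jumps to (0.3, 0.4)
m.set_mode("relative")
m.update((0.3, 0.4))
print(m.update((0.5, 0.4)))     # relative: pointer nudged to (0.5, 0.4)
```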
Abstract:
A method, computer system and computer program are provided for using a suggestive modeling interface. The method is a computer-implemented rendering of sketches comprising the steps of: (1) a user activating a sketching application; (2) in response, the sketching application displaying on a screen a suggestive modeling interface; (3) the sketching application importing a sketch to the suggestive modeling interface; and (4) the sketching application retrieving from a database one or more suggestions based on the sketch. The method allows a user, interacting with the sketching application, to create a drawing guided by the imported sketch by selectively using one or more image-guided drawing tools provided by the sketching application. The present invention is well-suited for three-dimensional modeling applications.
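A minimal sketch of step (4), retrieving suggestions based on an imported sketch, assuming a toy stroke-count/ink-length descriptor and nearest-descriptor matching; the descriptor and the database schema are hypothetical:

```python
import math

def sketch_descriptor(strokes):
    """Toy descriptor: stroke count and total ink length. A real system
    would use richer shape features, which the abstract does not specify."""
    length = sum(math.dist(a, b) for s in strokes for a, b in zip(s, s[1:]))
    return (len(strokes), length)

def retrieve_suggestions(sketch, database, k=3):
    """Return the k database entries whose descriptors best match the sketch."""
    n, ln = sketch_descriptor(sketch)
    def score(entry):
        en, eln = entry["descriptor"]
        return abs(en - n) + abs(eln - ln)
    return sorted(database, key=score)[:k]

db = [{"name": "teapot", "descriptor": (12, 40.0)},
      {"name": "chair",  "descriptor": (6, 18.0)},
      {"name": "mug",    "descriptor": (4, 9.0)}]
imported = [[(0, 0), (3, 4)], [(0, 0), (0, 5)]]   # two imported strokes
print([m["name"] for m in retrieve_suggestions(imported, db)])
# -> ['mug', 'chair', 'teapot']
```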