Abstract:
A method, computer system, and computer program are provided for using a suggestive modeling interface. The method is a computer-implemented method of rendering sketches, comprising the steps of: (1) a user activating a sketching application; (2) in response, the sketching application displaying on a screen a suggestive modeling interface; (3) the sketching application importing a sketch to the suggestive modeling interface; and (4) the sketching application retrieving from a database one or more suggestions based on the sketch. The method allows a user to interactively use the sketching application to create a drawing guided by the imported sketch by selectively using one or more image-guided drawing tools provided by the sketching application. The present invention is well suited to three-dimensional modeling applications.
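As one illustration of step (4), the retrieval might compare a simple descriptor of the imported sketch against precomputed descriptors stored in the database. A minimal sketch in Python, assuming sketches are lists of 2D stroke points; the descriptor (a stroke-direction histogram), the `SketchEntry` type, and the `suggest` function are illustrative assumptions, not the abstract's specified scheme:

```python
import math
from dataclasses import dataclass

@dataclass
class SketchEntry:
    name: str
    descriptor: list  # precomputed direction histogram

def direction_histogram(points, bins=8):
    """One plausible descriptor: histogram of stroke segment directions."""
    hist = [0.0] * bins
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        hist[int(angle / (2 * math.pi) * bins) % bins] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def suggest(sketch_points, database, k=3):
    """Return the k database entries whose descriptors best match the sketch."""
    query = direction_histogram(sketch_points)
    return sorted(
        database,
        key=lambda e: sum((a - b) ** 2 for a, b in zip(query, e.descriptor)),
    )[:k]

db = [
    SketchEntry("cube", direction_histogram([(0, 0), (1, 0), (1, 1), (0, 1)])),
    SketchEntry("circle", direction_histogram(
        [(math.cos(t / 10), math.sin(t / 10)) for t in range(63)])),
]
print([e.name for e in suggest([(0, 0), (1, 0), (1, 1)], db, k=2)])
```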
Abstract:
Touch interaction with a curved display (e.g., a sphere, a hemisphere, a cylinder, etc.) is enabled through various user interface (UI) features. In an example embodiment, a curved display is monitored to detect a touch input. If a touch input is detected based on the act of monitoring, then one or more locations of the touch input are determined. Responsive to the determined one or more locations of the touch input, at least one UI feature is implemented. Example UI features include an orb-like invocation gesture feature, a rotation-based dragging feature, a send-to-dark-side interaction feature, and an object representation and manipulation by proxy representation feature.
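One way to picture the monitor/locate/implement loop is the dispatch sketch below, assuming touch contacts arrive as 3D points on a unit sphere; the spherical mapping and the per-feature gesture tests are illustrative assumptions, not the embodiments themselves:

```python
import math

def to_spherical(x, y, z):
    """Map a touch point on the unit sphere to (azimuth, elevation)."""
    return math.atan2(y, x), math.asin(max(-1.0, min(1.0, z)))

def handle_touch(points):
    """Determine touch locations, then implement one UI feature."""
    locs = [to_spherical(*p) for p in points]
    if len(locs) >= 5:                     # many contacts: orb-like invocation
        return "orb_invocation", locs
    if len(locs) == 1 and locs[0][1] < 0:  # lower hemisphere: "dark side"
        return "send_to_dark_side", locs
    return "rotation_drag", locs           # default: rotation-based dragging

print(handle_touch([(0.0, 0.0, -1.0)])[0])  # -> 'send_to_dark_side'
```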
Abstract:
A machine learning model is trained by instructing a user to perform prescribed gestures, sampling signals from EMG sensors arranged arbitrarily on the user's forearm with respect to locations of muscles in the forearm, extracting feature samples from the sampled signals, labeling the feature samples according to the corresponding gestures instructed to be performed, and training the machine learning model with the labeled feature samples. Subsequently, gestures may be recognized using the trained machine learning model by sampling signals from the EMG sensors, extracting from the signals unlabeled feature samples of a same type as those extracted during the training, passing the unlabeled feature samples to the machine learning model, and outputting from the machine learning model indicia of a gesture classified by the machine learning model.
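The train-then-recognize pipeline might be sketched as follows, assuming each sampled signal is a window of raw EMG values per channel and taking RMS amplitude per channel as the extracted feature; the RMS feature and the nearest-centroid classifier are stand-ins, since the abstract does not commit to a particular feature type or model:

```python
import numpy as np

def extract_features(window):
    """RMS amplitude per EMG channel for one window of samples."""
    return np.sqrt(np.mean(np.square(window), axis=1))

def train(windows, labels):
    """Label feature samples by the instructed gesture; keep per-gesture centroids."""
    feats = np.array([extract_features(w) for w in windows])
    marks = np.array(labels)
    return {g: feats[marks == g].mean(axis=0) for g in set(labels)}

def recognize(model, window):
    """Classify an unlabeled feature sample; return the gesture's indicia."""
    f = extract_features(window)
    return min(model, key=lambda g: np.linalg.norm(f - model[g]))

rng = np.random.default_rng(0)
fist = [rng.normal(0, 1.0, (8, 256)) for _ in range(5)]  # 8 channels x 256 samples
rest = [rng.normal(0, 0.1, (8, 256)) for _ in range(5)]
model = train(fist + rest, ["fist"] * 5 + ["rest"] * 5)
print(recognize(model, rng.normal(0, 1.0, (8, 256))))    # -> 'fist'
```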
Abstract:
The system displays an image portraying a virtual space as viewed by a virtual camera at a first location on a spatially navigable camera surface within the virtual space. A user begins a drag operation. Based on the dragging, the virtual camera is spatially translated from the first location on the spatially navigable region to a second location on the spatially navigable region. The orientation of the virtual camera at the second location may be automatically set to point either towards a pre-defined look-at point or in a direction normal to the spatially navigable region at the second location. The system then displays an image portraying the virtual space in accordance with the location and orientation of the virtual camera at the second location on the spatially navigable camera surface. While the drag operation continues, the system determines that further translating the virtual camera would place it beyond the spatially navigable region. In response, the system begins displaying a transition, which may be an interpolated animation of the virtual camera, an animation semi-transparently blended with a slate, a pre-authored animation of the virtual camera, or another visual effect. As the drag continues, and based on that continued dragging, the system advances the display of the transition, reverses it, or otherwise temporally controls it.
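The drag logic reduces nicely to one dimension: parameterize the camera surface by t in [0, 1] and give the transition its own 0-to-1 progress that further dragging scrubs once the edge is passed. A minimal sketch with hypothetical names (`CameraDrag`, `on_drag`); real camera surfaces are of course two-dimensional:

```python
class CameraDrag:
    def __init__(self):
        self.t = 0.5            # camera position on the navigable surface
        self.transition = 0.0   # transition progress once the edge is passed

    def on_drag(self, delta):
        """Translate the camera; past the edge, the drag scrubs the transition."""
        if self.transition > 0.0:
            # Continued dragging advances the transition; reversing rewinds it.
            self.transition = min(1.0, max(0.0, self.transition + delta))
        elif self.t + delta > 1.0:
            self.transition = min(1.0, (self.t + delta) - 1.0)
            self.t = 1.0        # clamp to the edge of the navigable surface
        else:
            self.t = max(0.0, self.t + delta)

cam = CameraDrag()
cam.on_drag(0.6)    # passes the edge: t clamps to 1.0, the transition begins
cam.on_drag(-0.05)  # reversing the drag rewinds the transition
print(cam.t, round(cam.transition, 2))  # -> 1.0 0.05
```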
Abstract:
The present invention is a system that creates a volumetric display and a user-controllable volumetric pointer within the volumetric display. The user can point by aiming a beam that is vector, planar, or tangent based; by positioning a device in three dimensions in association with the display; by touching a digitizing surface of the display enclosure; or by otherwise inputting position coordinates. The cursor can take a number of different forms, including a ray, a point, a volume, and a plane. The ray can include a ring, a bead, a segmented wand, a cone, and a cylinder. The user designates an input position, and the system maps the input position to a 3D cursor position within the volumetric display. The system also determines whether any object has been designated by the cursor by determining whether the object is within a region of influence of the cursor. The system also performs any function activated in association with the designation.
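The designation test might be sketched as follows, assuming a ray-shaped cursor and point-like objects, with the region of influence modeled as a distance threshold around the ray; the shapes, threshold, and names are illustrative assumptions:

```python
import numpy as np

def designated(ray_origin, ray_dir, obj_center, influence=0.1):
    """True if the object lies within the cursor ray's region of influence."""
    d = np.asarray(ray_dir, float)
    d = d / np.linalg.norm(d)
    v = np.asarray(obj_center, float) - np.asarray(ray_origin, float)
    t = max(0.0, float(v @ d))               # closest point along the ray
    return float(np.linalg.norm(v - t * d)) <= influence

print(designated([0, 0, 0], [0, 0, 1], [0.05, 0, 2.0]))  # -> True
```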
Abstract:
The present invention is a system that allows a user to physically rotate a three-dimensional volumetric display enclosure with a corresponding rotation of the display contents. The rotation of the enclosure is sampled with an encoder, and the computer maintaining the scene virtually rotates the display by a corresponding amount before it is rendered. This allows the user to remain in one position while viewing different parts of the displayed scene corresponding to different viewpoints. The display contents can be rotated in direct correspondence with the display enclosure, or with a gain (positive or negative) that scales the rotation of the contents with respect to the physical rotation of the enclosure. Any display widgets in the scene, such as a virtual keyboard, can be kept stationary with respect to the user while the scene contents rotate by applying a negative rotational gain to the widgets. The rotation can also be controlled by a time value such that the rotation continues until a specified time is reached or expires.
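The gain relationship is a simple linear mapping: the scene rotates by the gain times the physical rotation, and giving a widget an equal-and-opposite gain cancels the scene rotation so the widget stays fixed for the user. A small sketch with illustrative names:

```python
def scene_angle(physical_deg, gain=1.0):
    """Virtual rotation of the scene for a physical enclosure rotation."""
    return gain * physical_deg

def widget_angle(physical_deg, scene_gain=1.0, widget_gain=None):
    """Net widget rotation; the default negative gain cancels the scene's."""
    if widget_gain is None:
        widget_gain = -scene_gain        # negative gain keeps the widget fixed
    return (scene_gain + widget_gain) * physical_deg

print(scene_angle(30, gain=2.0))  # gain accelerates the contents: 60 degrees
print(widget_angle(30))           # virtual keyboard stays at 0 degrees
```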
Abstract:
Assisting input from a keyboard is described. In an embodiment, a processor receives a plurality of key-presses from the keyboard comprising alphanumeric data for input to application software executed at the processor. The processor analyzes the plurality of key-presses to detect at least one predefined typing pattern, and, in response, controls a display device to display a representation of at least a portion of the keyboard in association with a user interface of the application software. In another embodiment, a computer device has a keyboard and at least one sensor arranged to monitor at least a subset of keys on the keyboard, and detect an object within a predefined distance of a selected key prior to activation of the selected key. The processor then controls the display device to display a representation of a portion of the keyboard comprising the selected key.
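The first embodiment's trigger might be sketched as below, assuming key-presses stream in as strings and that a burst of backspaces is one predefined typing pattern warranting display of part of the keyboard; the threshold, buffer size, and names are assumptions:

```python
from collections import deque

RECENT = deque(maxlen=10)

def on_key_press(key, show_keyboard):
    """Collect key-presses; on a detected pattern, display a keyboard region."""
    RECENT.append(key)
    if sum(1 for k in RECENT if k == "backspace") >= 3:
        show_keyboard(around=RECENT[-1])  # show keys near the last press
        RECENT.clear()

def show_keyboard(around):
    print(f"displaying keyboard portion around {around!r}")

for k in ["t", "h", "backspace", "e", "backspace", "backspace"]:
    on_key_press(k, show_keyboard)
```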
Abstract:
A method, system, and computer program for organizing and visualizing display objects within a virtual environment are provided. In one aspect, attributes of display objects define the interaction between display objects according to pre-determined rules, including rules simulating real-world mechanics, thereby enabling enriched user interaction. The present invention further provides for the use of piles as an organizational entity for desktop objects. The present invention further provides fluid interaction techniques for committing actions on display objects in a virtual interface. A number of other interaction and visualization techniques are disclosed.
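As one illustration of piles as an organizational entity, a pre-determined rule might be that an object dropped within a pile's extent is filed into it. A minimal sketch; the `Pile` class, the radius rule, and all names here are hypothetical:

```python
import math

class Pile:
    """A pile of desktop objects, treated as a single organizational entity."""
    def __init__(self, x, y, radius=50.0):
        self.x, self.y, self.radius = x, y, radius
        self.items = []

    def try_add(self, obj, x, y):
        """Rule: an object dropped within the pile's extent joins the pile."""
        if math.hypot(x - self.x, y - self.y) <= self.radius:
            self.items.append(obj)
            return True
        return False

pile = Pile(100, 100)
pile.try_add("report.pdf", 120, 90)  # near enough: filed into the pile
pile.try_add("photo.png", 400, 300)  # too far: remains a loose object
print(pile.items)                    # -> ['report.pdf']
```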
Abstract:
The present invention is a system that manages a volumetric display using volume windows. The volume windows have the typical window functions, such as minimize and resize, which operate in a volume. When initiated by an application, a volume window is assigned to the application in a volume window data structure. Application data produced by the application is assigned to the windows responsive to which applications are assigned to which windows in the volume window data structure. Input events are assigned to the windows responsive to whether they are spatial or non-spatial. Spatial events are assigned to the window surrounding the event or cursor, with a policy resolving situations in which more than one window surrounds the cursor. Non-spatial events are assigned to the active or working window.
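The routing rule might be sketched as follows, assuming axis-aligned box windows and using "smallest enclosing volume" as a stand-in for the unspecified overlap-resolution policy; the names and the tie-break are assumptions:

```python
from dataclasses import dataclass

@dataclass
class VolumeWindow:
    app: str
    lo: tuple  # (x, y, z) minimum corner
    hi: tuple  # (x, y, z) maximum corner

    def contains(self, p):
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

    def volume(self):
        dx, dy, dz = (h - l for l, h in zip(self.lo, self.hi))
        return dx * dy * dz

def route(event, windows, active):
    """Assign an input event to a volume window by the spatial/non-spatial rule."""
    if "pos" in event:  # spatial: a window surrounding the event or cursor
        hits = [w for w in windows if w.contains(event["pos"])]
        return min(hits, key=VolumeWindow.volume, default=None)
    return active       # non-spatial: the active or working window

a = VolumeWindow("modeler", (0, 0, 0), (10, 10, 10))
b = VolumeWindow("viewer", (2, 2, 2), (6, 6, 6))
print(route({"pos": (3, 3, 3)}, [a, b], a).app)  # overlap resolved -> 'viewer'
print(route({"key": "m"}, [a, b], a).app)        # non-spatial -> 'modeler'
```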