Abstract:
In one embodiment of the present invention, a hybrid software application transitions between stereoscopic imaging and non-stereoscopic imaging to optimize user interactions with a three-dimensional model of a three-dimensional object. Based on user input, the hybrid software application selects an operational mode as either stereoscopic mode or non-stereoscopic mode. The hybrid software application then performs operations on the three-dimensional model. If the operational mode is the stereoscopic mode, then the hybrid software application generates two offset images of the three-dimensional object: an image for the right eye and a separate image for the left eye. By contrast, if the operational mode is the non-stereoscopic mode, then the hybrid software application generates a single image of the three-dimensional object that is shared by both eyes. Advantageously, by judiciously transitioning between stereoscopic imaging and non-stereoscopic imaging, the user viewing experience may be tailored to optimize user productivity for each operation.
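To make the mode-dependent rendering concrete, here is a minimal Python sketch of the logic described above. The names (Camera, render_view, render) and the fixed 0.065 m eye separation are illustrative assumptions, not the claimed implementation:

```python
from dataclasses import dataclass

@dataclass
class Camera:
    x: float = 0.0  # horizontal camera offset in model space (hypothetical)

def render_view(model, camera):
    # Placeholder for a real projection/rasterization pass.
    return f"image of {model} from x={camera.x:+.3f}"

def render(model, stereoscopic, eye_separation=0.065):
    """Return one image per eye in stereoscopic mode, else a single shared image."""
    if stereoscopic:
        left = render_view(model, Camera(x=-eye_separation / 2))
        right = render_view(model, Camera(x=+eye_separation / 2))
        return left, right
    return (render_view(model, Camera()),)

print(render("teapot", stereoscopic=True))   # two offset images
print(render("teapot", stereoscopic=False))  # one shared image
```

In the stereoscopic branch the camera is displaced half the eye separation in each direction, yielding the two offset images; the non-stereoscopic branch produces a single centered view shared by both eyes.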
Abstract:
In one embodiment of the present invention, a method for multiple device interaction includes detecting an orientation of a first device relative to a second device. The method also includes detecting a first gesture performed with either the first device or the second device, wherein the first gesture causes a first action that is based at least in part on the orientation of the first device relative to the second device.
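A hedged sketch of how a (gesture, relative orientation) pair might map to an action; the orientation values, gesture names, and ACTIONS table are hypothetical stand-ins for real sensor fusion and policy logic:

```python
# Hypothetical action table keyed on (gesture, relative orientation).
ACTIONS = {
    ("swipe", "side_by_side"): "extend display across both devices",
    ("swipe", "stacked"):      "transfer file to lower device",
    ("tap",   "facing"):       "pair devices",
}

def detect_orientation(device_a, device_b):
    # In practice this would fuse IMU/sensor data; stubbed for illustration.
    return "side_by_side"

def handle_gesture(gesture, device_a, device_b):
    orientation = detect_orientation(device_a, device_b)
    action = ACTIONS.get((gesture, orientation), "no-op")
    print(f"{gesture} with devices {orientation}: {action}")

handle_gesture("swipe", "tablet", "phone")
```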
Abstract:
During a sampling stage, a system enables a user to capture samples of 3D digital components within an immersive environment. The 3D digital components can include a 3D object that is rendered and displayed within the immersive environment. The 3D digital components can also include object-property components used to render a 3D object, such as texture, color scheme, animation, motion path, or physical parameters. The samples of the 3D digital components are stored to a sample-palette data structure (SPDS) that organizes the samples. During a remix stage, the system enables a user to apply a sample stored to the SPDS to modify a 3D object and/or an immersive environment. The user can add a sampled object to an immersive environment to modify the immersive environment. The user can apply one or more object-based samples to a 3D object to modify one or more object properties of the 3D object.
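A minimal sketch of one way the SPDS might organize samples by kind and apply them during the remix stage; SamplePalette, capture, and apply are assumed names, not the patented structure:

```python
from collections import defaultdict

class SamplePalette:
    """Illustrative sample-palette data structure (SPDS): samples grouped by kind."""
    def __init__(self):
        self._samples = defaultdict(list)

    def capture(self, kind, sample):
        # Sampling stage: store a 3D object or object-property component.
        self._samples[kind].append(sample)

    def apply(self, kind, index, target):
        # Remix stage: apply a stored sample to modify a target 3D object.
        target[kind] = self._samples[kind][index]
        return target

palette = SamplePalette()
palette.capture("texture", "brushed-metal")
palette.capture("object", "chair_mesh")
chair = palette.apply("texture", 0, {"mesh": "cube"})
print(chair)  # {'mesh': 'cube', 'texture': 'brushed-metal'}
```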
Abstract:
A hybrid workstation enables a virtual reality (VR) interface, a traditional (TD) interface, and transitions between the interfaces. The VR interface comprises three-dimensional (3D)-based software and hardware components. The TD interface comprises two-dimensional (2D)-based software and hardware components. The state of the hybrid workstation is defined by three parameters comprising interface (VR interface or TD interface), position (seated or standing), and movement (stationary or room-scale). The hybrid workstation detects a transition from a current state to a next state upon determining that any of the three parameters have changed. The hybrid workstation then determines a transition response based on the particular transition that is detected. The transition response comprises a set of operations that are performed on the VR interface and/or the TD interface that mitigate the disruption and inefficiency caused when the particular transition occurs.
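The three-parameter state and the transition-response lookup might be sketched as follows; the State fields, the single entry in the RESPONSES table, and the operations it lists are illustrative assumptions:

```python
from typing import NamedTuple

class State(NamedTuple):
    interface: str  # "VR" or "TD"
    position: str   # "seated" or "standing"
    movement: str   # "stationary" or "room_scale"

# Hypothetical transition-response table; real responses would reconfigure
# hardware and software components on the VR and/or TD interfaces.
RESPONSES = {
    (State("TD", "seated", "stationary"), State("VR", "seated", "stationary")):
        ["dim physical monitors", "anchor virtual desk to physical desk"],
}

def on_state_change(current, nxt):
    if current != nxt:  # any of the three parameters changed
        for op in RESPONSES.get((current, nxt), ["default smoothing"]):
            print("perform:", op)

on_state_change(State("TD", "seated", "stationary"),
                State("VR", "seated", "stationary"))
```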
Abstract:
An automated robot design pipeline facilitates the overall process of designing robots that perform various desired behaviors. The disclosed pipeline includes four stages. In the first stage, a generative engine samples a design space to generate a large number of robot designs. In the second stage, a metric engine generates behavioral metrics indicating a degree to which each robot design performs the desired behaviors. In the third stage, a mapping engine generates a behavior predictor that can predict the behavioral metrics for any given robot design. In the fourth stage, a design engine generates a graphical user interface (GUI) that guides the user in performing behavior-driven design of a robot. One advantage of the disclosed approach is that the user need not have specialized skills in either graphic design or programming to generate designs for robots that perform specific behaviors or express various emotions.
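A toy, end-to-end sketch of the four stages under heavy simplifying assumptions: designs are 3-tuples of random parameters, the behavioral metric is a placeholder score, and the learned behavior predictor is replaced by a nearest-neighbor stand-in:

```python
import random

def generative_stage(n):
    # Stage 1: sample the design space (a design is a parameter tuple here).
    return [tuple(random.random() for _ in range(3)) for _ in range(n)]

def metric_stage(designs):
    # Stage 2: score how well each design performs the desired behavior.
    return {d: sum(d) / len(d) for d in designs}

def mapping_stage(metrics):
    # Stage 3: build a predictor from design -> behavioral metric
    # (nearest neighbor stands in for a learned model).
    def predictor(design):
        nearest = min(metrics,
                      key=lambda d: sum((a - b) ** 2 for a, b in zip(d, design)))
        return metrics[nearest]
    return predictor

designs = generative_stage(100)
predictor = mapping_stage(metric_stage(designs))
print(predictor((0.5, 0.5, 0.5)))  # Stage 4 would expose this through a GUI.
```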
Abstract:
In various embodiments, a task-based recommendation subsystem automatically recommends workflows for software-based tasks based on a trained machine-learning model that maps different sets of commands to different distributions of weights applied to a set of tasks. In operation, the task-based recommendation subsystem applies a first set of commands associated with a target user to the trained machine-learning model to determine a target distribution of weights applied to the set of tasks. The task-based recommendation subsystem then performs processing operation(s) based on at least two different distributions of weights applied to the set of tasks and the target distribution to determine a training item. Subsequently, the task-based recommendation subsystem generates a recommendation that specifies the training item. Finally, the task-based recommendation subsystem transmits the recommendation to a user to assist the user in performing a particular task.
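One plausible reading of the matching step, sketched under assumptions: each training item carries a known distribution of weights over tasks, and the item whose distribution is most similar to the target distribution is recommended. Cosine similarity is used here only for illustration; the abstract does not specify the comparison:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Hypothetical: each training item has a known task-weight distribution
# over a two-task set (sketching, rendering).
ITEM_DISTRIBUTIONS = {
    "sketching-tutorial": (0.8, 0.2),
    "rendering-tutorial": (0.1, 0.9),
}

def recommend(target_distribution):
    # Pick the training item whose task distribution best matches the target.
    best = max(ITEM_DISTRIBUTIONS,
               key=lambda k: cosine(ITEM_DISTRIBUTIONS[k], target_distribution))
    return f"recommended training item: {best}"

# The target distribution would come from applying the trained model to the
# target user's commands; a fixed vector stands in for it here.
print(recommend((0.7, 0.3)))
```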
Abstract:
Techniques for gradually transitioning a user from a first-person navigation scheme to a standard navigation scheme in a 3D design application that generates and displays a 3D virtual environment. The design application initially implements the first-person navigation scheme along with a set of function tools associated with the standard navigation scheme. The design application monitors for a set of patterns of navigation actions during use of the first-person navigation scheme, each pattern being performed more efficiently using the standard navigation scheme. Upon detecting such a pattern, the design application may switch to the standard navigation scheme. Also, upon detecting selection of a function tool, the design application may switch to the standard navigation scheme for the duration of the function tool's use. When the function tool is closed, the design application may switch back to the first-person navigation scheme.
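A minimal sketch of the scheme-switching behavior; the three-repeat "orbit" pattern and the DesignApp API are invented for illustration:

```python
class DesignApp:
    """Illustrative scheme-switching logic; pattern detection is stubbed."""
    def __init__(self):
        self.scheme = "first_person"
        self.recent_actions = []

    def record_action(self, action):
        self.recent_actions.append(action)
        # Hypothetical pattern: repeated orbit-like navigation is performed
        # more efficiently under the standard scheme, so switch.
        if self.recent_actions[-3:] == ["orbit"] * 3:
            self.scheme = "standard"

    def open_tool(self, tool):
        self.scheme = "standard"      # function tools run under the standard scheme

    def close_tool(self):
        self.scheme = "first_person"  # revert when the tool is closed

app = DesignApp()
for _ in range(3):
    app.record_action("orbit")
print(app.scheme)  # standard
```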
Abstract:
Approaches for generating virtual representations of smart objects in a 3D visual programming interface. The interface displays a 3D virtual environment containing virtual objects that represent a real-world environment containing smart objects. The 3D virtual environment displays the virtual objects in a manner that is spatially accurate relative to the physical objects in the real-world environment. For each virtual object representing a physical object, a logic node (port node) is displayed, the port node representing the set of functions associated with the physical object. The interface enables users to create, delete, or modify different types of logic nodes (representing functions) and to create, delete, or modify links (representing data connections) between logic nodes within the 3D virtual environment. The authoring of the logic nodes and links produces an executable program. Upon executing the program, data flows between the logic nodes are visually represented as particles moving between the nodes.
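The node-and-link execution model might look like the following sketch; PortNode, connect, and fire are assumed names, and the particle animation is reduced to a print statement:

```python
class PortNode:
    """Logic node exposing a smart object's functions (illustrative)."""
    def __init__(self, name, fn):
        self.name, self.fn, self.links = name, fn, []

    def connect(self, other):
        self.links.append(other)  # a link represents a data connection

    def fire(self, value):
        out = self.fn(value)
        for target in self.links:
            # A renderer would animate this hand-off as a moving particle.
            print(f"particle: {self.name} -> {target.name} carrying {out}")
            target.fire(out)

sensor = PortNode("thermostat", lambda v: v + 0.5)     # stand-in functions
lamp = PortNode("lamp", lambda v: f"brightness={v}")
sensor.connect(lamp)
sensor.fire(20.0)
```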
Abstract:
In one embodiment of the present invention, a gesture recognition application enables interactive entry via a touch pad. In operation, the gesture recognition application partitions the touch pad into multiple zones. Upon detecting a gesture via the touch pad, the gesture recognition application determines whether the gesture is zone-specific. If the gesture is zone-specific, then the gesture recognition application determines the zone based on the location of the gesture and then selects an input group based on the zone and the type of gesture. If the gesture is zone-agnostic, then the gesture recognition application selects an input group based on the type of gesture, irrespective of the location of the gesture. Advantageously, by providing zone-specific gesture recognition, the gesture recognition application increases the usability of touch pads with form factors that limit the type of gestures that can be efficiently performed via the touch pad.
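A compact sketch of the zone-specific versus zone-agnostic selection logic; the two-zone layout and the input-group tables are hypothetical examples:

```python
# Hypothetical zone layout and input-group tables.
ZONE_SPECIFIC = {("left", "tap"): "numbers", ("right", "tap"): "letters"}
ZONE_AGNOSTIC = {"two_finger_swipe": "punctuation"}

def zone_of(x, width=100):
    # Partition the touch pad into two zones by horizontal position.
    return "left" if x < width / 2 else "right"

def select_input_group(gesture, x):
    if gesture in ZONE_AGNOSTIC:                 # zone-agnostic: location ignored
        return ZONE_AGNOSTIC[gesture]
    return ZONE_SPECIFIC[(zone_of(x), gesture)]  # zone-specific lookup

print(select_input_group("tap", x=20))               # numbers
print(select_input_group("two_finger_swipe", x=20))  # punctuation
```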
Abstract:
One embodiment of the invention is a collage engine that generates informative viewpoints of a 3D model based upon the editing history of the 3D model. In operation, the collage engine processes an editing log to identify segments of the 3D model that include related vertices. For a given segment, the collage engine selects a viewpoint used by the end-user to edit the 3D model and a viewpoint used by the end-user to inspect the 3D model. The collage engine may then present these informative viewpoints to the end-user for inclusion in a collage of 2D renderings. Generally, the viewpoints used while editing and inspecting the 3D model are important to the overall presentation of the 3D model. Therefore, collages of 2D renderings based upon the informative viewpoints can be generated effectively.
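A minimal sketch of viewpoint selection from an editing log; the log schema (segment id, mode, viewpoint) and the one-edit/one-inspect policy per segment are assumptions for illustration:

```python
from itertools import groupby

# Hypothetical editing-log entries: (segment_id, mode, viewpoint).
LOG = [
    (1, "edit",    "front-left"),
    (1, "inspect", "top-down"),
    (2, "edit",    "rear"),
    (2, "inspect", "three-quarter"),
]

def informative_viewpoints(log):
    """Per segment, keep one editing viewpoint and one inspection viewpoint."""
    views = []
    for seg, entries in groupby(sorted(log), key=lambda e: e[0]):
        by_mode = {mode: vp for _, mode, vp in entries}
        views.append((seg, by_mode.get("edit"), by_mode.get("inspect")))
    return views

for seg, edit_vp, inspect_vp in informative_viewpoints(LOG):
    print(f"segment {seg}: render from {edit_vp!r} and {inspect_vp!r}")
```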