Abstract:
One embodiment of a computer-implemented method for automatically generating command recommendations for a software workflow comprises identifying a plurality of command sequences stored in a database based on a current command being interacted with in a graphical user interface; computing a score for each command sequence included in the plurality of command sequences based on one or more commands included in the command sequence and one or more commands included in a command history; determining at least one command sequence included in the plurality of command sequences to output based on the scores; and outputting the at least one command sequence for display.
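A minimal Python sketch of the recommendation flow described above, for illustration only: the in-memory SEQUENCE_DB dictionary stands in for the database of command sequences, and the function names and overlap-based scoring rule are assumptions rather than the claimed implementation.

import collections

# Hypothetical in-memory stand-in for the database of command sequences,
# keyed by the command currently being interacted with in the GUI.
SEQUENCE_DB = {
    "extrude": [["extrude", "fillet", "shell"], ["extrude", "chamfer"]],
    "sketch":  [["sketch", "extrude", "fillet"]],
}

def score_sequence(sequence, history):
    # Score a candidate sequence by how often its commands already appear
    # in the user's command history (an assumed scoring rule).
    counts = collections.Counter(history)
    return sum(counts[cmd] for cmd in sequence)

def recommend(current_command, history, top_k=1):
    # Identify candidate sequences for the current command, score each one,
    # and output the highest-scoring sequence(s) for display.
    candidates = SEQUENCE_DB.get(current_command, [])
    ranked = sorted(candidates, key=lambda s: score_sequence(s, history), reverse=True)
    return ranked[:top_k]

print(recommend("extrude", ["sketch", "fillet", "extrude"]))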
Abstract:
A hybrid workstation enables a virtual reality (VR) interface, a traditional (TD) interface, and transitions between the interfaces. The VR interface comprises three-dimensional (3D)-based software and hardware components. The TD interface comprises two-dimensional (2D)-based software and hardware components. The state of the hybrid workstation is defined by three parameters comprising interface (VR interface or TD interface), position (seated or standing), and movement (stationary or room-scale). The hybrid workstation detects a transition from a current state to a next state upon determining that any of the three parameters have changed. The hybrid workstation then determines a transition response based on the particular transition that is detected. The transition response comprises a set of operations performed on the VR interface and/or the TD interface to mitigate the disruption and inefficiency caused by the particular transition.
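A minimal Python sketch of the three-parameter state model and transition handling, for illustration only: the parameter values come from the abstract, while the response table, operation names, and function names are assumptions, not the disclosed workstation behavior.

from dataclasses import dataclass

@dataclass(frozen=True)
class WorkstationState:
    interface: str   # "VR" or "TD"
    position: str    # "seated" or "standing"
    movement: str    # "stationary" or "room-scale"

# Hypothetical transition responses: operations applied to the interfaces
# to soften the disruption of a particular parameter change.
TRANSITION_RESPONSES = {
    "interface": ["pause_active_tool", "mirror_viewport_to_new_interface"],
    "position":  ["recenter_view_height"],
    "movement":  ["resize_tracked_play_area"],
}

def detect_transition(current, nxt):
    # A transition occurs whenever any of the three parameters changes.
    return tuple(
        name for name in ("interface", "position", "movement")
        if getattr(current, name) != getattr(nxt, name)
    )

def transition_response(changed):
    # Collect the response operations for every parameter that changed.
    ops = []
    for name in changed:
        ops.extend(TRANSITION_RESPONSES.get(name, []))
    return ops

current = WorkstationState("TD", "seated", "stationary")
nxt = WorkstationState("VR", "seated", "room-scale")
changed = detect_transition(current, nxt)
if changed:
    print(transition_response(changed))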
Abstract:
An automated robot design pipeline facilitates the overall process of designing robots that perform various desired behaviors. The disclosed pipeline includes four stages. In the first stage, a generative engine samples a design space to generate a large number of robot designs. In the second stage, a metric engine generates behavioral metrics indicating a degree to which each robot design performs the desired behaviors. In the third stage, a mapping engine generates a behavior predictor that can predict the behavioral metrics for any given robot design. In the fourth stage, a design engine generates a graphical user interface (GUI) that guides the user in performing behavior-driven design of a robot. One advantage of the disclosed approach is that the user need not have specialized skills in either graphic design or programming to generate designs for robots that perform specific behaviors or express various emotions.
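A minimal Python sketch of the four-stage pipeline, for illustration only: the two-parameter design space, the made-up "waving" metric, the nearest-neighbor predictor, and all function names are assumptions standing in for the generative, metric, mapping, and design engines.

import random

def sample_designs(n=100):
    # Stage 1: the generative engine samples the design space
    # (a hypothetical two-parameter space is assumed here).
    return [{"arm_length": random.uniform(0.1, 1.0),
             "joint_speed": random.uniform(0.1, 5.0)} for _ in range(n)]

def behavioral_metric(design):
    # Stage 2: the metric engine scores how well a design performs the
    # desired behavior (a made-up "waving" score for illustration).
    return design["arm_length"] * design["joint_speed"]

def build_predictor(designs, metrics):
    # Stage 3: the mapping engine fits a behavior predictor from design to
    # metric (a 1-nearest-neighbor lookup is used purely as a stand-in).
    def predict(query):
        best = min(range(len(designs)), key=lambda i: sum(
            (designs[i][k] - query[k]) ** 2 for k in query))
        return metrics[best]
    return predict

designs = sample_designs()
metrics = [behavioral_metric(d) for d in designs]
predict = build_predictor(designs, metrics)

# Stage 4: a GUI would query the predictor interactively; here we just call it.
print(predict({"arm_length": 0.5, "joint_speed": 2.0}))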
Abstract:
In various embodiments, a scheduling application automatically determines the timing of linearly dependent events. In operation, the scheduling application detects that a first event included in an original scheduled sequence of events has not completed by a scheduled completion time based on a current time. The scheduling application then determines that a second event included in the original scheduled sequence of events has a dependency on the completion of the first event. Subsequently, the scheduling application updates one or more temporal properties associated with the second event based on the current time to generate a third event. The scheduling application then generates, via a processor, a modified scheduled sequence of events that includes the third event instead of the second event. Advantageously, automatically adjusting the timing of linearly dependent events based on the current time reduces inefficiencies associated with conventional scheduling techniques.
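A minimal Python sketch of the rescheduling behavior, for illustration only: the Event fields, the assumption that each event depends on the events before it in the sequence, and the function names are illustrative rather than the claimed scheduling application.

from dataclasses import dataclass, replace
from datetime import datetime, timedelta

@dataclass(frozen=True)
class Event:
    name: str
    start: datetime
    end: datetime
    completed: bool = False

def reschedule(events, now):
    # Walk the original scheduled sequence; when an event has not completed
    # by its scheduled completion time, shift every later (dependent) event
    # by the overrun, producing a modified scheduled sequence.
    modified = list(events)
    for i, event in enumerate(modified):
        if not event.completed and now > event.end:
            overrun = now - event.end
            for j in range(i + 1, len(modified)):
                dep = modified[j]
                modified[j] = replace(dep, start=dep.start + overrun,
                                      end=dep.end + overrun)
    return modified

now = datetime(2024, 1, 1, 10, 30)
events = [
    Event("setup", datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 10)),
    Event("run", datetime(2024, 1, 1, 10), datetime(2024, 1, 1, 11)),
]
for e in reschedule(events, now):
    print(e.name, e.start.time(), e.end.time())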
Abstract:
In various embodiments, a dataset generation application generates a new dataset based on an original dataset. The dataset generation application perturbs a first data item included in the original dataset to generate a second data item. The dataset generation application then generates a test dataset based on the original dataset and the second data item. The test dataset includes the second data item instead of the first data item. Subsequently, the dataset generation application determines that the test dataset is characterized by a first property value that is substantially similar to a second property value that characterizes the original dataset. The first property value and the second property value are associated with the same property. Finally, the dataset generation application generates a new dataset based on the test dataset. The new dataset conveys one or more aspects of the original dataset without revealing the first data item.
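A minimal Python sketch of the perturb-and-verify loop, for illustration only: the numeric dataset, the choice of the dataset mean as the preserved property, the Gaussian noise model, the tolerance, and the function names are assumptions, not the disclosed application.

import random
import statistics

def perturb(item, scale=0.1):
    # Generate a second data item by perturbing the first one
    # (an assumed Gaussian perturbation).
    return item + random.gauss(0.0, scale)

def generate_new_dataset(original, index, tolerance=0.05):
    # Build a test dataset with the perturbed item in place of the original
    # item, then keep it only if the chosen property value (here, the mean)
    # stays substantially similar to that of the original dataset.
    candidate = perturb(original[index])
    test = original[:index] + [candidate] + original[index + 1:]
    if abs(statistics.mean(test) - statistics.mean(original)) <= tolerance:
        return test      # new dataset conveys the original without the item
    return None          # property drifted too far; reject this perturbation

original = [1.0, 2.0, 3.0, 4.0]
print(generate_new_dataset(original, index=2))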
Abstract:
Approaches for generating virtual representations of smart objects in a 3D visual programming interface. The interface displays a 3D virtual environment that represents a real-world environment containing smart objects, where virtual objects within the 3D virtual environment represent the smart objects. The 3D virtual environment displays virtual objects in a manner that is spatially accurate relative to the physical objects in the real-world environment. For each virtual object representing a physical object, a logic node (port node) is displayed, the port node representing the set of functions associated with the physical object. The interface enables users to create, delete, or modify different types of logic nodes (representing functions) and create, delete, or modify links (representing data connections) between logic nodes within the 3D virtual environment. The authoring of the logic nodes and links produces an executable program. Upon executing the program, data flows between the logic nodes are visually represented as particles moving between the logic nodes.
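A minimal Python sketch of the node-and-link program model, for illustration only: the LogicNode and PortNode classes, the pull-based execution loop, and the example sensor and lamp nodes are assumptions about one way such a graph could be represented, not the disclosed interface.

from dataclasses import dataclass, field

@dataclass
class LogicNode:
    name: str
    func: callable            # the function this node represents
    inputs: list = field(default_factory=list)   # upstream nodes (links)

@dataclass
class PortNode(LogicNode):
    # A port node stands for the set of functions exposed by one smart object.
    smart_object: str = ""

def execute(node, cache=None):
    # Evaluate the graph by pulling data along the links; a rendering layer
    # could animate each value flowing over a link as a moving particle.
    cache = {} if cache is None else cache
    if node.name not in cache:
        args = [execute(up, cache) for up in node.inputs]
        cache[node.name] = node.func(*args)
    return cache[node.name]

# Example: a lamp port node whose brightness follows a sensor port node.
sensor = PortNode("sensor.read", lambda: 0.7, smart_object="light_sensor")
invert = LogicNode("invert", lambda x: 1.0 - x, inputs=[sensor])
lamp = PortNode("lamp.set", lambda level: f"lamp -> {level:.1f}",
                inputs=[invert], smart_object="desk_lamp")
print(execute(lamp))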
Abstract:
In one embodiment of the present invention, a gesture recognition application enables interactive entry via a touch pad. In operation, the gesture recognition application partitions the touch pad into multiple zones. Upon detecting a gesture via the touch pad, the gesture recognition application determines whether the gesture is zone-specific. If the gesture is zone-specific, then the gesture recognition application determines the zone based on the location of the gesture and then selects an input group based on the zone and the type of gesture. If the gesture is zone-agnostic, then the gesture recognition application selects an input group based on the type of gesture, irrespective of the location of the gesture. Advantageously, by providing zone-specific gesture recognition, the gesture recognition application increases the usability of touch pads with form factors that limit the type of gestures that can be efficiently performed via the touch pad.
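A minimal Python sketch of the zone-based selection logic, for illustration only: the three-zone layout, the gesture names, the input groups, and the function names are assumptions rather than the claimed gesture recognition application.

# Hypothetical partition of the touch pad into three horizontal zones,
# expressed as normalized coordinate ranges.
ZONES = {"left": (0.0, 0.33), "middle": (0.33, 0.66), "right": (0.66, 1.0)}
ZONE_SPECIFIC = {"tap"}               # gestures whose meaning depends on the zone
ZONE_AGNOSTIC_GROUPS = {"swipe": "navigation"}
ZONE_GROUPS = {("tap", "left"): "letters-a-i",
               ("tap", "middle"): "letters-j-r",
               ("tap", "right"): "letters-s-z"}

def zone_for(x):
    # Map a normalized horizontal touch coordinate to a zone.
    for name, (lo, hi) in ZONES.items():
        if lo <= x < hi or (name == "right" and x == 1.0):
            return name
    raise ValueError("coordinate outside touch pad")

def select_input_group(gesture, x):
    # Zone-specific gestures select an input group by (gesture, zone);
    # zone-agnostic gestures ignore the location entirely.
    if gesture in ZONE_SPECIFIC:
        return ZONE_GROUPS[(gesture, zone_for(x))]
    return ZONE_AGNOSTIC_GROUPS[gesture]

print(select_input_group("tap", 0.8))    # letters-s-z
print(select_input_group("swipe", 0.8))  # navigation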
Abstract:
One embodiment of the invention is a collage engine that generates informative viewpoints of a 3D model based upon the editing history of the 3D model. In operation, the collage engine processes an editing log to identify segments of the 3D model that include related vertices. For a given segment, the collage engine selects a viewpoint used by the end-user to edit the 3D model and a viewpoint used by the end-user to inspect the 3D model. The collage engine may then present these informative viewpoints to the end-user for inclusion in a collage of 2D renderings. Generally, the viewpoints used while editing and inspecting the 3D model are of importance in the overall presentation of the 3D model. Therefore, collages of 2D renderings based upon the informative viewpoints can be generated effectively.
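A minimal Python sketch of selecting informative viewpoints from an editing log, for illustration only: the log record layout, the segment labels, and the "most frequently used viewpoint" selection rule are assumptions rather than the disclosed collage engine.

from collections import defaultdict

# Hypothetical editing log: each entry records the model segment being
# worked on, whether the action was an edit or an inspection, and the
# camera viewpoint in use at the time.
LOG = [
    {"segment": "wing", "action": "edit",    "viewpoint": (10, 45)},
    {"segment": "wing", "action": "edit",    "viewpoint": (10, 45)},
    {"segment": "wing", "action": "inspect", "viewpoint": (80, 10)},
    {"segment": "tail", "action": "edit",    "viewpoint": (0, 90)},
    {"segment": "tail", "action": "inspect", "viewpoint": (180, 30)},
]

def informative_viewpoints(log):
    # For each segment of related vertices, select one viewpoint used while
    # editing and one used while inspecting; these seed the 2D collage.
    per_segment = defaultdict(lambda: {"edit": [], "inspect": []})
    for entry in log:
        per_segment[entry["segment"]][entry["action"]].append(entry["viewpoint"])
    return {
        segment: {
            action: max(set(views), key=views.count)  # most frequently used view
            for action, views in actions.items() if views
        }
        for segment, actions in per_segment.items()
    }

print(informative_viewpoints(LOG))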
Abstract:
Embodiments disclosed herein include a method, a non-transitory computer-readable medium, and a system for generating video clips for teaching how to apply tools in various application programs for editing documents. The method includes identifying one or more characteristic features of a video clip. The method also includes providing the one or more characteristic features to a trained machine learning analysis module. The method further includes evaluating the one or more characteristic features to generate a clip rating. The method also includes determining whether to discard the video clip based on the clip rating.
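A minimal Python sketch of the clip-filtering flow, for illustration only: the feature names, the stand-in linear scorer in place of the trained machine learning analysis module, the rating threshold, and the function names are assumptions rather than the disclosed system.

def characteristic_features(clip):
    # Identify a few characteristic features of the video clip
    # (hypothetical features chosen for illustration).
    return {
        "duration_s": clip["duration_s"],
        "tool_invocations": clip["tool_invocations"],
        "narration_ratio": clip["narration_ratio"],
    }

def rate_clip(features, weights=(0.01, 0.3, 0.5)):
    # Stand-in for the trained analysis module: combine the features
    # into a single clip rating between 0 and 1.
    raw = (weights[0] * features["duration_s"]
           + weights[1] * features["tool_invocations"]
           + weights[2] * features["narration_ratio"])
    return min(1.0, raw)

def keep_clip(clip, threshold=0.5):
    # Discard the clip when its rating falls below the threshold.
    return rate_clip(characteristic_features(clip)) >= threshold

clip = {"duration_s": 42, "tool_invocations": 3, "narration_ratio": 0.6}
print(keep_clip(clip))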
Abstract:
Techniques for managing authored views. The techniques include displaying a main window including a model, an authoring panel configured for displaying authored view indicators associated with authored views of the model, and a navigation panel configured for displaying thumbnail representations of authored views associated with the model. The techniques also include, based on a user input, accessing an authored view of the model, wherein the authored view includes one of a view-point, a view path, and a view surface. The techniques further include displaying the authored view in the main window, an authored view indicator associated with the authored view in the authoring panel, and a thumbnail representation based on the authored view in the navigation panel.
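A minimal Python sketch of the three authored-view kinds named above (view-point, view path, view surface), for illustration only: the field layouts and the thumbnail-labeling helper are assumptions about one possible data model, not the disclosed techniques.

from dataclasses import dataclass
from typing import List, Tuple, Union

Point = Tuple[float, float, float]

@dataclass
class ViewPoint:
    position: Point           # single camera placement
    target: Point             # what the camera looks at

@dataclass
class ViewPath:
    waypoints: List[Point]    # camera positions traversed in order

@dataclass
class ViewSurface:
    corners: List[Point]      # region the camera may move across

AuthoredView = Union[ViewPoint, ViewPath, ViewSurface]

def thumbnail_label(view: AuthoredView) -> str:
    # A navigation panel could label each thumbnail by the view kind.
    return type(view).__name__

print(thumbnail_label(ViewPoint((0, 0, 5), (0, 0, 0))))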