Abstract:
In various embodiments, a computer-implemented method for generating a design object comprises generating a prompt within a design space generated by a design exploration application, wherein the prompt has a prompt definition that includes at least design intent text, and a prompt volume that occupies a portion of the design space and exerts a sphere of influence within the prompt volume, executing a trained machine learning (ML) model on the prompt to generate the design object, and displaying the design object within the prompt volume.
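As a non-limiting illustration only, the following Python sketch shows one way the prompt described above could be represented and handed to a trained model; the names PromptVolume, PromptDefinition, Prompt, and the model.run(...) call are assumptions, not the disclosed implementation.

```python
from dataclasses import dataclass

# Illustrative structures only; all names here are assumed for the sketch.

@dataclass
class PromptVolume:
    """Axis-aligned box occupying a portion of the design space."""
    min_corner: tuple          # (x, y, z)
    max_corner: tuple          # (x, y, z)

@dataclass
class PromptDefinition:
    design_intent_text: str    # e.g. "lightweight bracket with two bolt holes"

@dataclass
class Prompt:
    definition: PromptDefinition
    volume: PromptVolume       # region within which the prompt exerts its influence

def generate_design_object(prompt: Prompt, model):
    """Execute a trained ML model on the prompt; the result is displayed in the volume."""
    return model.run(text=prompt.definition.design_intent_text,
                     bounds=(prompt.volume.min_corner, prompt.volume.max_corner))
```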
Abstract:
In various embodiments, a computer-implemented method provides sustainability insights to a user designing an object. The method includes determining a first value of a sustainability metric associated with a design of an object, displaying, via a graphical user interface (GUI), a visual indication of the first value of the sustainability metric, and detecting a change to the design of the object. The method further includes, in response to detecting the change to the design of the object, determining a second value of the sustainability metric and displaying, via the GUI, a visual indication of the second value of the sustainability metric.
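A minimal sketch of the recompute-on-change loop described above, assuming a hypothetical embodied-carbon metric; the Design class, the carbon factors, and the GUI callback are illustrative placeholders only.

```python
class Design:
    def __init__(self, material: str, mass_kg: float):
        self.material = material
        self.mass_kg = mass_kg

# Assumed per-material embodied-carbon factors (kg CO2e per kg); illustrative values.
CARBON_FACTORS = {"steel": 1.9, "aluminum": 8.2, "abs_plastic": 3.3}

def carbon_footprint(design: Design) -> float:
    """One possible sustainability metric: embodied carbon of the design."""
    return design.mass_kg * CARBON_FACTORS.get(design.material, 0.0)

def on_design_changed(design: Design, previous_value: float, show) -> float:
    """Recompute the metric after a detected change and refresh the GUI indication."""
    new_value = carbon_footprint(design)
    show(metric="embodied carbon (kg CO2e)", old=previous_value, new=new_value)
    return new_value

# Example flow
design = Design("aluminum", 2.0)
value = carbon_footprint(design)            # first value, displayed in the GUI
design.material = "steel"                   # user edits the design
value = on_design_changed(design, value, show=lambda **kw: print(kw))
```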
Abstract:
A feedback mechanism reports software issues from users of software applications to the developers of those applications. The feedback mechanism generates feedback logs that capture user frustration at the moment a user encounters an issue with a particular software application executing on a client device. The user triggers the feedback mechanism to generate a feedback log via a predetermined set of user inputs. Once generated, the feedback log captures an associated importance level, a user description, and/or context information (such as application and command activity information) for the particular software application and one or more other software applications that interacted with the particular software application executing on the client device. The feedback log can also capture multimedia content such as audio, images, and videos. The feedback log is then transmitted to a server of a developer of the particular software application.
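One possible shape for such a feedback log, sketched in Python; the field names, the build_feedback_log helper, and the JSON serialization step are assumptions for illustration, not the actual mechanism.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class FeedbackLog:
    application: str
    importance_level: int                              # e.g. 1 (low) to 5 (critical)
    user_description: str = ""
    context: dict = field(default_factory=dict)        # command/activity info, related apps
    attachments: list = field(default_factory=list)    # paths to audio/image/video content
    timestamp: float = field(default_factory=time.time)

def build_feedback_log(app: str, importance: int, description: str,
                       recent_commands: list) -> FeedbackLog:
    """Capture the moment of frustration: importance, description, and context."""
    return FeedbackLog(application=app,
                       importance_level=importance,
                       user_description=description,
                       context={"recent_commands": recent_commands})

log = build_feedback_log("cad_app", 4, "Crash when filleting an edge",
                         ["extrude", "fillet"])
payload = json.dumps(asdict(log))   # serialized before transmission to the developer server
```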
Abstract:
One embodiment of a computer-implemented method for automatically tracking how extensively software application commands have been investigated comprises identifying an interaction with a first command occurring within a graphical user interface, wherein the first command is associated with one or more command parameters; updating a command history associated with the first command based on the interaction with the first command; computing a progress level associated with the first command based on the command history, wherein the progress level indicates how many command parameters included in the one or more command parameters have been modified; determining a coverage level associated with the first command based on the command history; and outputting at least one of the coverage level or the progress level for display in the graphical user interface.
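A sketch of how the progress level named above might be derived from a command history, assuming progress is the fraction of a command's parameters that the user has modified; the CommandHistory class and its methods are illustrative assumptions.

```python
from collections import defaultdict

class CommandHistory:
    def __init__(self):
        self.modified_params = defaultdict(set)    # command -> set of touched parameters
        self.invocations = defaultdict(int)        # command -> number of uses

    def record_interaction(self, command: str, modified: list):
        self.invocations[command] += 1
        self.modified_params[command].update(modified)

    def progress_level(self, command: str, all_params: list) -> float:
        """How many of the command's parameters have been modified, as a ratio."""
        touched = self.modified_params[command] & set(all_params)
        return len(touched) / len(all_params) if all_params else 1.0

history = CommandHistory()
history.record_interaction("fillet", modified=["radius"])
history.record_interaction("fillet", modified=["radius", "tangency"])
print(history.progress_level("fillet", ["radius", "tangency", "symmetric"]))  # ~0.67
```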
Abstract:
One embodiment of a computer-implemented method for executing software application commands on practice data comprises identifying a command demonstration that is stored in a database based on a current command being interacted with in a graphical user interface, wherein the command demonstration is associated with sample application data; receiving a selection of whether to execute the command demonstration on the sample application data or current application data; causing the command demonstration to be executed on either the sample application data or a copy of current application data to generate modified data; and causing the modified data to be output within the graphical user interface.
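A brief sketch of the sample-versus-copy branch described above: the demonstration runs either on bundled sample data or on a copy of the current application data, never on the original. The demo object and its execute method are stand-ins.

```python
import copy

def run_demonstration(demo, sample_data, current_data, use_sample: bool):
    """Execute a stored command demonstration on practice data and return the result."""
    target = sample_data if use_sample else copy.deepcopy(current_data)
    modified = demo.execute(target)   # demonstration operates on practice data only
    return modified                   # rendered in the GUI; original data is untouched
```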
Abstract:
An automated robot design pipeline facilitates the overall process of designing robots that perform various desired behaviors. The disclosed pipeline includes four stages. In the first stage, a generative engine samples a design space to generate a large number of robot designs. In the second stage, a metric engine generates behavioral metrics indicating a degree to which each robot design performs the desired behaviors. In the third stage, a mapping engine generates a behavior predictor that can predict the behavioral metrics for any given robot design. In the fourth stage, a design engine generates a graphical user interface (GUI) that guides the user in performing behavior-driven design of a robot. One advantage of the disclosed approach is that the user need not have specialized skills in either graphic design or programming to generate designs for robots that perform specific behaviors or express various emotions.
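The four stages could be wired together roughly as follows; each argument is a stand-in callable for one of the engines, not the disclosed pipeline.

```python
def run_pipeline(sample_design, score_behaviors, fit_predictor, build_gui,
                 desired_behaviors, n_samples=1000):
    """Illustrative four-stage flow; every callable is an assumed stand-in."""
    designs = [sample_design() for _ in range(n_samples)]               # stage 1: generative engine
    metrics = [score_behaviors(d, desired_behaviors) for d in designs]  # stage 2: metric engine
    predictor = fit_predictor(designs, metrics)                         # stage 3: mapping engine
    return build_gui(predictor)                                         # stage 4: design engine / GUI
```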
Abstract:
An opacity engine for automatically and dynamically setting an opacity level for a scatterplot based on a predetermined value for a mean opacity level of utilized pixels (MOUP) in the scatterplot. The opacity engine may automatically set the opacity level for the scatterplot to produce the predetermined MOUP value in the scatterplot. A utilized pixel in the scatterplot comprises a pixel displaying at least one data point representing data. The MOUP value in the scatterplot may be equal to the sum of the final opacity levels of all utilized pixels in the scatterplot, divided by the number of utilized pixels in the scatterplot. The predetermined MOUP value may be between 35% and 45%, such as 40%. The opacity engine may adjust the determined opacity level for scatterplots having relatively low over-plotting factors.
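The stated MOUP definition and target lend themselves to a short sketch, assuming standard alpha compositing of overlapping equal-opacity points and a binary search for the per-point opacity; all helper names are illustrative.

```python
def mean_opacity_of_utilized_pixels(per_pixel_point_counts, point_opacity: float) -> float:
    """per_pixel_point_counts: number of data points rendered into each pixel."""
    final_opacities = []
    for count in per_pixel_point_counts:
        if count == 0:
            continue                                  # not a utilized pixel
        # Assumed compositing model: `count` overlapping points of equal opacity.
        final_opacities.append(1.0 - (1.0 - point_opacity) ** count)
    return sum(final_opacities) / len(final_opacities) if final_opacities else 0.0

def solve_point_opacity(per_pixel_point_counts, target_moup: float = 0.40) -> float:
    """Binary-search the per-point opacity that yields the target MOUP (e.g. 40%)."""
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = (lo + hi) / 2.0
        if mean_opacity_of_utilized_pixels(per_pixel_point_counts, mid) < target_moup:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```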
Abstract:
Disclosed is a technique for generating chronological event information. The technique involves receiving event data comprising a plurality of events, where each event is associated with a different position in a video stream. The technique further involves determining that a current playhead position in the video stream corresponds to a first position associated with a first event, and, in response, causing the first event to be displayed in an event list as a current event, causing a second event to be displayed in the event list as a previous event, where the second event is associated with a second position in the video stream that is before the first position, and causing a third event to be displayed in the event list as a next event, where the third event is associated with a third position in the video stream that is after the first position.
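A minimal sketch of selecting the previous, current, and next events relative to a playhead position, assuming events are sorted by their positions in the stream; the exact matching rule used by the technique may differ.

```python
import bisect

def classify_events(events, playhead: float):
    """events: list of (position_seconds, label) pairs sorted by position."""
    positions = [p for p, _ in events]
    i = bisect.bisect_right(positions, playhead) - 1     # last event at or before playhead
    current = events[i] if i >= 0 else None
    previous = events[i - 1] if i >= 1 else None
    nxt = events[i + 1] if 0 <= i + 1 < len(events) else None
    return previous, current, nxt

events = [(5.0, "kickoff"), (42.0, "goal"), (61.5, "substitution")]
print(classify_events(events, playhead=42.0))
# ((5.0, 'kickoff'), (42.0, 'goal'), (61.5, 'substitution'))
```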
Abstract:
A visualization engine is configured to generate a network visualization that represents the evolution of a network over time. The visualization engine generates the network visualization based on a network dataset that describes various nodes within the network, and links between those nodes, over a sequence of time intervals. Initially, the visualization engine generates a stable simulated network based on initial network data, and then subsequently animates changes to that simulated network that derive from differences between the initial network data and subsequent network data. The visualization engine visually indicates changes to different nodes in the network via color changes, size changes, and other changes to the appearance of nodes.
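A sketch of the diff-then-animate idea, assuming each time interval is summarized as a mapping from node identifier to an appearance-driving attribute such as degree; the rendering and animation steps themselves are omitted, and all names are illustrative.

```python
def diff_snapshots(nodes_t0: dict, nodes_t1: dict) -> dict:
    """Each snapshot maps node_id -> degree (or any attribute driving appearance)."""
    added = [n for n in nodes_t1 if n not in nodes_t0]
    removed = [n for n in nodes_t0 if n not in nodes_t1]
    changed = {n: (nodes_t0[n], nodes_t1[n])
               for n in nodes_t1 if n in nodes_t0 and nodes_t0[n] != nodes_t1[n]}
    return {"added": added, "removed": removed, "changed": changed}

frame_changes = diff_snapshots({"a": 2, "b": 1}, {"a": 3, "c": 1})
# {'added': ['c'], 'removed': ['b'], 'changed': {'a': (2, 3)}}
# An animation step would then grow/shrink or recolor the affected nodes.
```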
Abstract:
A video processing engine is configured to generate a graphical user interface (GUI) that allows an end-user of the video processing engine to select a specific video and search through the specific video to detect a desired target scene. The video processing engine provides a grid array of video thumbnails that are each configured to display a segment of the video so that multiple scenes may be visually scanned simultaneously. When the end-user identifies a scene within a video thumbnail that may be the desired target scene, the end-user may launch the content of the video thumbnail in full-screen mode to verify that the scene is in fact the desired target scene. An advantage of the approach described herein is that the video processing engine provides a sampled overview of the video in its entirety, thus enabling the end-user to more effectively scrub the video for the desired target scene.
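As an illustration of the sampled overview, a short sketch that divides a video's duration evenly across a rows-by-columns thumbnail grid; actual thumbnail playback and rendering are assumed to be handled by the engine.

```python
def thumbnail_segments(duration_s: float, rows: int, cols: int):
    """Return (start, end) times, one per grid cell, covering the whole video."""
    n = rows * cols
    seg = duration_s / n
    return [(i * seg, (i + 1) * seg) for i in range(n)]

# A 90-minute video in a 4x4 grid: each thumbnail loops a ~5.6-minute segment.
print(thumbnail_segments(90 * 60, 4, 4)[:2])   # [(0.0, 337.5), (337.5, 675.0)]
```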