Abstract:
In various embodiments, a sketching application generates models of three-dimensional (3D) objects. In operation, the sketching application generates a first virtual geometry based on a first free-form gesture. Subsequently, the sketching application generates a second virtual geometry based on a first constrained gesture associated with a two-dimensional (2D) physical surface. The sketching application then generates a model of a 3D object based on the first virtual geometry and the second virtual geometry. Advantageously, because the sketching application generates virtual geometries based on a combination of free-form and constrained gestures, the sketching application efficiently generates accurate models of detailed 3D objects.
Abstract:
In one embodiment, a banded slider application obtains values from users via a banded slider. In operation, the banded slider application generates a banded slider that includes multiple sections. Notably, the interior of a section included in the banded slider is visually distinguishable from an interior of another section that is adjacent to the section. Subsequently, the banded slider application performs operation(s) to display the banded slider and, in response, receives a user selection of a location along the banded slider. The banded slider application then computes a specified value based on the location. Advantageously, empirical evidence shows that the banded slider enables precise and/or repeatable specification of values without inducing bias associated with an inherent propensity for users to select locations that are at or near the decorations (e.g., tick marks) along conventional sliders.
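The value computation described above — mapping a user-selected location along the slider to a specified value, with the track divided into visually distinct bands — can be sketched as follows. This is a minimal illustration, not the patented implementation; the pixel-based parameters, the linear value mapping, and the equal-width bands are all assumptions.

```python
def slider_value(x_px, width_px, v_min, v_max):
    """Map a click location along the slider track to a value.

    x_px is the horizontal click offset in pixels, width_px the track
    length; a linear range [v_min, v_max] is assumed.
    """
    t = min(max(x_px / width_px, 0.0), 1.0)  # clamp to the track
    return v_min + t * (v_max - v_min)

def band_index(x_px, width_px, num_bands):
    """Return the 0-based index of the band the click falls in,
    assuming num_bands equal-width sections."""
    t = min(max(x_px / width_px, 0.0), 1.0)
    return min(int(t * num_bands), num_bands - 1)
```

For example, a click halfway along a 100-pixel track with a 0–10 range yields the value 5.0, and with four bands that click falls in band 2.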
Abstract:
Approaches for generating virtual representations of smart objects in a 3D visual programming interface. The interface displays a 3D virtual environment containing virtual objects that represent a real-world environment containing smart objects. The 3D virtual environment displays virtual objects in a manner that is spatially accurate relative to the physical objects in the real-world environment. For each virtual object representing a physical object, a logic node (port node) is displayed, the port node representing the set of functions associated with the physical object. The interface enables users to create, delete, or modify different types of logic nodes (representing functions) and create, delete, or modify links (representing data connections) between logic nodes within the 3D virtual environment. The authoring of the logic nodes and links produces an executable program. Upon executing the program, data flows between the logic nodes are visually represented as particles moving between the logic nodes.
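The node-and-link program described above can be sketched as a small dataflow graph: nodes wrap functions, links carry data between them, and executing the program evaluates the graph. The node names, the pull-based evaluation order, and the example smart-object functions are illustrative assumptions, not the patented design.

```python
class Node:
    """A logic node wrapping one function; inputs are upstream nodes."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
        self.inputs = []

def link(src, dst):
    """Create a data connection from src's output to dst's input."""
    dst.inputs.append(src)

def evaluate(node):
    """Pull-based evaluation: run upstream nodes first, feed results in."""
    return node.fn(*[evaluate(n) for n in node.inputs])

# Hypothetical port nodes for a smart light sensor and a smart lamp.
sensor = Node("light_sensor", lambda: 0.2)       # ambient light reading
invert = Node("invert", lambda v: 1.0 - v)       # darker room -> brighter lamp
lamp = Node("lamp_brightness", lambda v: v)      # drives the physical lamp
link(sensor, invert)
link(invert, lamp)
```

Evaluating the `lamp` node pulls the sensor reading through the `invert` node, mirroring how data would flow (and be animated as particles) along the authored links.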
Abstract:
A finger-mounted stylus for performing touch-based input on a touchscreen includes a fingertip case configured to attach to a user fingertip, an extension arm that is coupled to the fingertip case and includes a conductive tip, wherein the extension arm is configured to position the conductive tip away from the fingertip case, and control circuitry configured to apply an electric charge to the conductive tip when the conductive tip is in contact with or proximate to the touchscreen.
Abstract:
An opacity engine for automatically and dynamically setting an opacity level for a scatterplot based on a predetermined value for a mean opacity level of utilized pixels (MOUP) in the scatterplot. The opacity engine may automatically set the opacity level for the scatterplot to produce the predetermined MOUP value in the scatterplot. A utilized pixel in the scatterplot comprises a pixel displaying at least one data point representing data. The MOUP value in the scatterplot may be equal to the sum of the final opacity levels of all utilized pixels in the scatterplot, divided by the number of utilized pixels in the scatterplot. The predetermined MOUP value may be between 35% and 45%, such as 40%. The opacity engine may adjust the determined opacity level for scatterplots having relatively low over-plotting factors.
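The MOUP computation above, and a search for the per-point opacity that produces a target MOUP, can be sketched as follows. The compositing model (a pixel covered by k points of opacity a has final opacity 1 - (1 - a)^k) and the binary-search solver are assumptions for illustration; the abstract specifies only the mean-over-utilized-pixels formula and the 35%–45% target.

```python
def moup(alpha, counts):
    """Mean opacity over utilized pixels.

    counts[i] is the number of data points covering pixel i; pixels
    with count 0 are not utilized and are excluded from the mean.
    Final pixel opacity 1 - (1 - alpha)**k assumes standard alpha
    compositing of k overlapping points.
    """
    used = [k for k in counts if k > 0]
    return sum(1 - (1 - alpha) ** k for k in used) / len(used)

def solve_alpha(counts, target=0.40, iters=50):
    """Binary-search the per-point opacity whose MOUP hits the target;
    moup() is monotonically increasing in alpha, so bisection works."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if moup(mid, counts) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

With no over-plotting (every utilized pixel covered by exactly one point) the MOUP equals the per-point opacity, so the solver returns the 40% target directly; heavier over-plotting drives the solved opacity lower.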
Abstract:
One embodiment of the invention disclosed herein provides techniques for assisting with performing a task within a smart workspace environment. A smart workspace system includes a memory that includes a workspace management application. The smart workspace system further includes a processor that is coupled to the memory and, upon executing the workspace management application, is configured to perform various steps. The processor detects that a first step included in a plurality of steps associated with a task is being performed. The processor displays one or more information panels associated with performing the current step. The processor further communicates with augmented safety glasses, augmented tools, and an augmented toolkit to safely and efficiently guide the user through a series of steps to complete the task.
Abstract:
One embodiment of the present invention sets forth a technique for providing application command recommendations to a privacy-sensitive client device. The technique includes receiving a command log from each general client device included in a plurality of general client devices and analyzing the command logs to generate a command recommendation file. The command recommendation file may indicate a relationship between one or more application commands executed by at least one of the general client devices and one or more application commands that are available for execution by the privacy-sensitive client device. The technique further includes transmitting the command recommendation file to the privacy-sensitive client device.
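The aggregation step described above — analyzing command logs from general client devices to produce a recommendation file relating executed commands to commands available on the privacy-sensitive client — can be sketched as a co-occurrence count. The file format, the co-occurrence heuristic, and the example command names are hypothetical; the abstract does not specify the analysis method.

```python
from collections import Counter
from itertools import combinations

def build_recommendation_file(logs, available):
    """Map each command to a ranked list of co-occurring commands,
    restricted to commands the privacy-sensitive client can execute.

    logs: one list of executed commands per general client device.
    available: the set of commands available on the privacy-sensitive
    client.
    """
    pairs = Counter()
    for log in logs:
        for a, b in combinations(set(log), 2):  # co-occurrence within one log
            pairs[a, b] += 1
            pairs[b, a] += 1
    recs = {}
    for (a, b), n in pairs.items():
        if b in available:
            recs.setdefault(a, []).append((n, b))
    # Rank each command's candidates by co-occurrence count, descending.
    return {a: [b for n, b in sorted(cands, reverse=True)]
            for a, cands in recs.items()}
```

The server would transmit the resulting mapping to the privacy-sensitive client, which can then look up recommendations locally without ever sending its own command log.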
Abstract:
The disclosed pen-mouse is a tracking menu that tracks the position of the pen. A pen cursor that corresponds to the pen is moved about within the pen-mouse graphic by the pen while the pen-mouse remains stationary. The pen-mouse is moved when the location of the pen encounters a tracking boundary of the pen-mouse. The tracking boundary coincides with the graphic representing the mouse. While moving within the pen-mouse, the pen can select objects within the pen-mouse body, such as buttons, wheels, etc. The selection of a button or other virtual control causes a corresponding computer mouse button function to be executed. The execution is directed at any object designated by a pen-mouse tracking symbol, such as an arrow, that is part of the pen-mouse graphic. The pen-mouse emulates functions or operations of a mouse including single button clicks, double button clicks, finger wheels, track balls, etc.
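The tracking-menu behaviour above — the cursor moves freely inside the pen-mouse graphic until it reaches the tracking boundary, at which point the whole graphic is dragged along — can be sketched in one dimension. The coordinate convention (menu centre plus half-width boundary) is an assumption for illustration.

```python
def track(menu_x, pen_x, half_w):
    """One update step of a 1-D tracking menu.

    menu_x: centre of the pen-mouse graphic; half_w: distance from the
    centre to the tracking boundary; pen_x: new pen position.
    Returns the updated (menu_x, cursor_x).
    """
    cur_x = pen_x  # the cursor always follows the pen
    if cur_x > menu_x + half_w:    # pen pushed past the right boundary:
        menu_x = cur_x - half_w    # drag the menu right with it
    elif cur_x < menu_x - half_w:  # pen pushed past the left boundary:
        menu_x = cur_x + half_w    # drag the menu left with it
    return menu_x, cur_x
```

Movement within the boundary changes only the cursor, so the virtual buttons and wheels stay put under the pen; only boundary-crossing movement relocates the pen-mouse itself, which is what lets it emulate a physically stationary mouse.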
Abstract:
A computer-implemented method for traversing a video file includes populating a two-dimensional array with representative images corresponding to a portion of the video and causing the two-dimensional array to be displayed. The two-dimensional array includes a location indicator configured to traverse the two-dimensional array in a direction parallel with one dimension of the two-dimensional array in response to navigation information associated with the portion of the video. The location indicator is further configured to indicate a position in the video by highlighting one of the representative images populating the two-dimensional array. Because an end-user is provided with a large set of statically displayed representative images during navigation of a video timeline, the end-user can visually identify a desired target scene, even when traversing the timeline relatively quickly.
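The location-indicator mapping above — translating a playback position into the cell of the two-dimensional array to highlight — can be sketched as follows. The row-major layout and the uniform division of the timeline across cells are assumptions for illustration.

```python
def highlight_cell(position_s, duration_s, rows, cols):
    """Return the (row, col) of the representative image to highlight.

    The timeline of a duration_s-second video is divided uniformly
    across rows * cols representative images, laid out row-major.
    """
    n = rows * cols
    idx = min(int(position_s / duration_s * n), n - 1)  # clamp the end
    return divmod(idx, cols)
```

For a 120-second video shown as a 4x8 grid, each thumbnail covers 3.75 seconds; as playback (or scrubbing) advances, the indicator sweeps along each row in turn, while all 32 thumbnails remain statically visible for the viewer to scan.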