Abstract:
One embodiment of the present invention sets forth a technique for designing and generating a smart object. The technique includes receiving a first input indicating a smart object behavior of a smart object that includes a smart device embedded in a three-dimensional (3D) object; in response to the first input, generating computer instructions for the smart device, wherein the computer instructions, when executed by the smart device, cause the smart object to implement the smart object behavior; and transmitting the computer instructions to the smart device.
Abstract:
In one embodiment, an enclosure generator automatically generates an enclosure for a device based on a three-dimensional (3D) model of a target surface and component instances that are associated with different positions within the device. In operation, the enclosure generator computes a surface region based on the target surface and the component instances. Subsequently, the enclosure generator computes a front panel model and a back structure model based on the surface region. Notably, the back structure model includes support structure geometries. Together, the front panel model and the back structure model comprise an enclosure model. The enclosure generator then stores the enclosure model or transmits the enclosure model to a 3D fabrication device. Advantageously, unlike conventional, primarily manual approaches to enclosure generation, the enclosure generator does not rely on the user possessing any significant technical expertise.
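As an illustrative sketch only (not the disclosed implementation), the first step above, computing a surface region that accommodates the component instances, can be approximated in 2D as a bounding rectangle of the component positions plus a clearance margin; the function name and margin parameter are hypothetical:

```python
def surface_region(component_positions, margin=5.0):
    """Compute a rectangular surface region (xmin, ymin, xmax, ymax) that
    bounds all component positions plus a clearance margin -- a simplified
    stand-in for fitting a region to the scanned target surface."""
    xs = [x for x, _ in component_positions]
    ys = [y for _, y in component_positions]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```

A front panel model could then be extruded from this region, with the back structure adding support geometries behind it.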
Abstract:
In one embodiment, a system automatically generates a retrofit device based on a three-dimensional (3D) model of a legacy device. In operation, a physical design engine generates component instances based on legacy interface instances included in the legacy device. The physical design engine then generates an enclosure model that specifies an enclosure that houses the component instances. The physical design engine also generates computer code that is associated with a programmable instance as well as relatively simple assembly instructions for assembling a retrofit device that includes the enclosure, the legacy device, the component instances, and the programmable instance. Notably, a user may configure an automated fabrication tool to generate the enclosure. Consequently, the system provides an automated design process for retrofitting legacy devices that does not rely on the user possessing any significant technical expertise.
Abstract:
In one embodiment, a device generator automatically generates a circuit, firmware, and assembly instructions for a programmed electronic device based on behaviors that are specified via mappings between triggers and actions. In operation, the device generator generates a circuit based on the mappings. The circuit specifies instances of electronic components and interconnections between the instances. Subsequently, the device generator generates firmware based on code fragments associated with the triggers and actions included in the mappings. In addition, the device generator generates assembly instructions based on the interconnections between the instances. Advantageously, the device generator provides an automated, intuitive design process for programmed electronic devices that does not rely on designers possessing any significant technical expertise. By contrast, conventional design processes for programmed electronic devices typically only automate certain steps of the design process, require specialized knowledge, and/or are limited in applicability.
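As a minimal sketch of the mapping-driven flow described above (the catalog entries, component names, and code fragments below are hypothetical, not the disclosed catalog), each trigger/action pair can be resolved against a component catalog to yield component instances, interconnections, firmware lines, and assembly steps:

```python
# Hypothetical catalog mapping triggers/actions to components and code fragments.
CATALOG = {
    "button_pressed": {"component": "push_button", "fragment": "if read_pin(BTN): "},
    "turn_on_led":    {"component": "led",         "fragment": "set_pin(LED, HIGH)"},
}

def generate_device(mappings):
    """mappings: list of (trigger, action) pairs specifying behaviors."""
    instances, interconnections, firmware_lines = [], [], []
    for trigger, action in mappings:
        t, a = CATALOG[trigger], CATALOG[action]
        instances.extend([t["component"], a["component"]])
        # Each mapping implies a controller-mediated link between the two parts.
        interconnections.append((t["component"], "microcontroller", a["component"]))
        # Firmware is assembled from the trigger and action code fragments.
        firmware_lines.append(t["fragment"] + a["fragment"])
    assembly = [f"Wire {src} and {dst} to the {mcu}"
                for src, mcu, dst in interconnections]
    return instances, interconnections, "\n".join(firmware_lines), assembly

parts, wires, firmware, steps = generate_device([("button_pressed", "turn_on_led")])
```

The same loop generalizes to multiple mappings, with each pair contributing its components, wiring, and firmware fragment.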
Abstract:
In one embodiment of the present invention, a motion effect generator enables the creation of tangible representations of the motion of three-dimensional (3D) animated models for 3D printing. In operation, the motion effect generator receives a 3D animated model and animates the model through a configurable interval of time. As the motion effect generator animates the model, the motion effect generator applies a motion depiction technique to one or more selected components included in the model—explicitly portraying the motion of the 3D animated model as static motion effect geometries. Subsequently, based on the motion effect geometries, the motion effect generator creates a 3D motion sculpture model that is amenable to 3D printing. By automating the design of motion sculpture models, the motion effect generator reduces the time, sculpting expertise, and familiarity with 3D printer fabrication constraints typically required to create motion sculpture models using conventional, primarily manual design techniques.
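As a simplified, hypothetical sketch of the sampling step above (the function names are illustrative, not the patented method), animating a component through a configurable time interval and recording its positions yields static geometry, here a polyline, that portrays the motion:

```python
import math

def motion_effect_geometry(position_at, t_start, t_end, steps=20):
    """Sample an animated component's position over a time interval and
    return a static 'motion line' geometry (a polyline of sampled points)."""
    dt = (t_end - t_start) / steps
    return [position_at(t_start + i * dt) for i in range(steps + 1)]

# Example: a point orbiting the origin; the resulting motion line is an arc.
orbit = lambda t: (math.cos(t), math.sin(t), 0.0)
arc = motion_effect_geometry(orbit, 0.0, math.pi / 2, steps=4)
```

A real motion sculpture would thicken such polylines into printable solids that satisfy 3D-printer fabrication constraints.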
Abstract:
Disclosed is a technique for generating chronological event information. The technique involves receiving event data comprising a plurality of events, where each event is associated with a different position in a video stream. The technique further involves determining that a current playhead position in the video stream corresponds to a first position associated with a first event, and, in response, causing the first event to be displayed in an event list as a current event, causing a second event to be displayed in the event list as a previous event, where the second event is associated with a second position in the video stream that is before the first position, and causing a third event to be displayed in the event list as a next event, where the third event is associated with a third position in the video stream that is after the first position.
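The playhead-relative classification described above can be sketched as follows (a minimal illustration with hypothetical event data, assuming events are sorted by position):

```python
def classify_events(events, playhead):
    """events: list of (position, name) pairs sorted by position.
    Returns (previous, current, next) event names relative to the playhead."""
    current = prev = nxt = None
    for pos, name in events:
        if pos == playhead:
            current = name
        elif pos < playhead:
            prev = name            # last event before the playhead
        elif nxt is None:
            nxt = name             # first event after the playhead
    return prev, current, nxt

events = [(10, "kickoff"), (45, "goal"), (90, "final whistle")]
```

With the playhead at position 45, the event list would show "goal" as the current event, "kickoff" as the previous event, and "final whistle" as the next event.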
Abstract:
One embodiment of the invention disclosed herein provides techniques for assisting with performing a task within a smart workspace environment. A smart workspace system includes a memory that includes a workspace management application. The smart workspace system further includes a processor that is coupled to the memory and, upon executing the workspace management application, is configured to perform various steps. The processor detects that a first step included in a plurality of steps associated with a task is being performed. The processor displays one or more information panels associated with performing the first step. The processor further communicates with augmented safety glasses, augmented tools, and an augmented toolkit to guide the user safely and efficiently through a series of steps to complete the task.
Abstract:
A sketch-based interface within an animation engine provides an end-user with tools for creating emitter textures and oscillator textures. The end-user may create an emitter texture by sketching one or more patch elements and then sketching an emitter. The animation engine animates the sketch by generating a stream of patch elements that emanate from the emitter. The end-user may create an oscillator texture by sketching a patch that includes one or more patch elements, and then sketching a brush skeleton and an oscillation skeleton. The animation engine replicates the patch along the brush skeleton, and then interpolates the replicated patches between the brush skeleton and the oscillation skeleton, thereby causing those replicated patches to periodically oscillate between the two skeletons.
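As a hedged sketch of the oscillator-texture interpolation above (a 2D simplification with hypothetical names, not the engine's actual implementation), replicated patch anchor points can be interpolated between the brush skeleton and the oscillation skeleton with a periodic phase:

```python
import math

def oscillate(brush_pts, osc_pts, t, period=1.0):
    """Interpolate replicated patch anchor points between a brush skeleton
    and an oscillation skeleton with a sinusoidal phase, so that patches
    swing periodically between the two sketched skeletons."""
    # Phase in [0, 1]: 0 -> on the brush skeleton, 1 -> on the oscillation skeleton.
    phase = 0.5 * (1 - math.cos(2 * math.pi * t / period))
    return [
        (bx + phase * (ox - bx), by + phase * (oy - by))
        for (bx, by), (ox, oy) in zip(brush_pts, osc_pts)
    ]
```

At t = 0 the patches lie on the brush skeleton; at t = period/2 they reach the oscillation skeleton, producing the periodic swinging motion.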
Abstract:
A video processing engine is configured to generate a graphical user interface (GUI) that allows an end-user of the video processing engine to select a specific video and search through the specific video to detect a desired target scene. The video processing engine provides a grid array of video thumbnails, each configured to display a different segment of the video, so that multiple scenes can be visually scanned simultaneously. When the end-user identifies a scene within a video thumbnail that may be the desired target scene, the end-user may launch the content of the video thumbnail in full-screen mode to verify that the scene is in fact the desired target scene. An advantage of the approach described herein is that the video processing engine provides a sampled overview of the video in its entirety, thus enabling the end-user to more effectively scrub the video for the desired target scene.
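A minimal sketch of how the grid might partition the video (assuming, purely for illustration, equal-length segments assigned in row-major order):

```python
def thumbnail_segments(duration, rows, cols):
    """Split a video of `duration` seconds into rows*cols equal segments,
    one per grid thumbnail; returns (start, end) times in row-major order."""
    n = rows * cols
    seg = duration / n
    return [(i * seg, (i + 1) * seg) for i in range(n)]
```

For example, a 100-second video in a 2x5 grid yields ten 10-second segments, so the grid covers the video in its entirety.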
Abstract:
One embodiment of the invention disclosed herein provides techniques for controlling a movement training environment. A movement training system retrieves a movement object from a set of movement objects. The movement training system attains first motion capture data associated with a first user performing a movement based on the movement object. The movement training system generates a first articulable representation based on the first motion capture data. The movement training system compares at least one first joint position related to the first articulable representation with at least one second joint position related to a second articulable representation associated with the movement object. The movement training system calculates a first similarity score based on a difference between the at least one first joint position and the at least one second joint position.
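One way to sketch the joint comparison and scoring step above (an illustrative formula, not the disclosed scoring method) is to map the mean Euclidean difference between corresponding joint positions to a score in (0, 1]:

```python
import math

def similarity_score(joints_a, joints_b):
    """Compare corresponding joint positions of two articulable
    representations and map the mean Euclidean difference to a score
    in (0, 1], where identical poses score 1.0."""
    dists = [
        math.dist(a, b)            # per-joint positional difference
        for a, b in zip(joints_a, joints_b)
    ]
    mean = sum(dists) / len(dists)
    return 1.0 / (1.0 + mean)
```

Under this formulation, larger joint-position differences between the user's pose and the reference pose monotonically lower the score.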