Abstract:
A sketch-based interface within an animation engine provides an end-user with tools for creating emitter textures and oscillator textures. The end-user may create an emitter texture by sketching one or more patch elements and then sketching an emitter. The animation engine animates the sketch by generating a stream of patch elements that emanate from the emitter. The end-user may create an oscillator texture by sketching a patch that includes one or more patch elements, and then sketching a brush skeleton and an oscillation skeleton. The animation engine replicates the patch along the brush skeleton, and then interpolates the replicated patches between the brush skeleton and the oscillation skeleton, thereby causing those replicated patches to periodically oscillate between the two skeletons.
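The emitter behavior described above can be sketched in simplified form as follows. This is an illustrative assumption of how such an engine might work, not code from the patent; the class, field, and method names (`Emitter`, `patch_elements`, `step`) are invented for the sketch.

```python
import random

class Emitter:
    """Spawns copies of sketched patch elements and advances them each frame,
    so a stream of elements emanates from the sketched emitter position."""
    def __init__(self, position, patch_elements, rate=1):
        self.position = position          # (x, y) where the emitter was sketched
        self.patch_elements = patch_elements
        self.rate = rate                  # elements emitted per animation frame
        self.live = []                    # [element, x, y, vx, vy] per live element

    def step(self):
        # Emit new patch elements from the emitter's position.
        for _ in range(self.rate):
            elem = random.choice(self.patch_elements)
            vx, vy = random.uniform(-1, 1), random.uniform(0.5, 2.0)
            self.live.append([elem, self.position[0], self.position[1], vx, vy])
        # Advance every live element so the stream moves outward over time.
        for p in self.live:
            p[1] += p[3]
            p[2] += p[4]
        return [(p[0], p[1], p[2]) for p in self.live]
```

Each call to `step` both emits new elements and advances existing ones, which is the minimal loop needed to animate a continuous stream.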
Abstract:
A computing device for processing a video file. The video file comprises an audio track and contains at least one event comprising a scene of interest. One or more audio criteria that characterize the event are used to detect events using the audio track and an offset timestamp is recorded for each detected event. A set of offset timestamps may be produced for a set of detected events of the video file. The set of offset timestamps for the set of detected events may be used to time align and time adjust a set of real timestamps for a set of established events for the same video file. A user interface (UI) is provided that allows quick and easy search and playback of events of interest across multiple video files.
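The detection and alignment steps described above can be sketched as follows, under simplifying assumptions: the audio criterion is a windowed RMS energy threshold, and the time adjustment is a uniform shift computed from the mean difference between detected offsets and established real timestamps. All function and parameter names here are illustrative, not from the patent.

```python
def detect_events(samples, sample_rate, threshold=0.8, window=1024):
    """Return offset timestamps (seconds from file start) where the windowed
    RMS energy of the audio track exceeds the threshold."""
    offsets = []
    in_event = False
    for start in range(0, len(samples) - window, window):
        rms = (sum(s * s for s in samples[start:start + window]) / window) ** 0.5
        if rms > threshold and not in_event:
            offsets.append(start / sample_rate)   # offset timestamp of event onset
            in_event = True
        elif rms <= threshold:
            in_event = False
    return offsets

def align_real_timestamps(real_ts, offset_ts):
    """Shift a set of established real timestamps so they line up with the
    detected offset timestamps, using the mean difference as the correction."""
    if not real_ts or not offset_ts:
        return real_ts
    shift = sum(o - r for r, o in zip(real_ts, offset_ts)) / min(len(real_ts), len(offset_ts))
    return [r + shift for r in real_ts]
```

A real implementation could use any audio criterion (spectral features, keyword detection, crowd-noise models) in place of the RMS threshold; the offset-timestamp bookkeeping is the same.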
Abstract:
In various embodiments, a wearable object engine generates wearable objects. The wearable object engine represents a digital design of a wearable object as toolpaths. In operation, the wearable object engine generates visual guidance that indicates a portion of the design based on the toolpaths, a configuration associated with a nozzle of a fabrication device, and a configuration associated with a portion of a human body. The wearable object engine causes the visual guidance to be displayed on the portion of the human body. As the nozzle moves over the portion of the human body, the nozzle extrudes fabrication material that forms the portion of the wearable object directly on the portion of the human body. Advantageously, a designer may control the nozzle to fabricate the wearable object while receiving visual guidance based on the digital design.
Abstract:
One embodiment of the present invention sets forth a technique for playing sequential video streams. The technique involves initiating playback of a first video stream within a foreground of a display region and loading at least a portion of a second video stream during the playback of the first video stream. The technique further involves detecting an event associated with the playback of the first video stream and, in response, initiating playback of the second video stream within the foreground of the display region.
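The preloading and handoff logic described above can be sketched as follows. The `SequentialPlayer` class and its methods are hypothetical stand-ins for a real media pipeline, and the `"near_end"` event is one assumed example of an event that triggers the handoff.

```python
class SequentialPlayer:
    """Plays a list of streams in sequence, buffering the next stream
    while the current one plays so the transition has no load delay."""
    def __init__(self, streams):
        self.streams = streams
        self.index = 0
        self.buffered = set()

    def play_current(self):
        current = self.streams[self.index]
        # Begin foreground playback of the current stream, and load
        # (at least part of) the next stream in the background.
        nxt = self.index + 1
        if nxt < len(self.streams):
            self.buffered.add(self.streams[nxt])
        return current

    def on_event(self, event):
        # An event during playback (e.g. the stream nearing its end)
        # triggers playback of the already-buffered next stream.
        if event == "near_end" and self.index + 1 < len(self.streams):
            self.index += 1
            return self.play_current()
        return None
```

In a browser setting the same pattern is typically realized with Media Source Extensions, appending the second stream's segments to a buffer during the first stream's playback.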
Abstract:
A design engine for designing an article to be worn on a human body part (input canvas) in a virtual environment. A virtual model engine of the design engine is used to generate and modify a virtual model of the input canvas and a virtual model of the article based on skin-based gesture inputs detected by an input processing engine. The gesture inputs comprise contacts between an input tool and the input canvas at locations on the input canvas. The virtual model engine may implement different design modes for receiving and processing gesture inputs for designing the article, including direct manipulation, generative manipulation, and parametric manipulation modes. In all three modes, a resulting virtual model of the article is based on physical geometries of at least part of the input canvas. The resulting virtual model of the article is exportable to a fabrication device for physical fabrication of the article.
Abstract:
An apparatus for viewing a stereoscopic display comprises a frame chassis, a hinge mechanism, a left lens assembly, a right lens assembly, and a sensor array. The hinge mechanism allows the left lens assembly and the right lens assembly to switch from a first orientation to a second orientation. The left lens assembly is coupled to the frame chassis via the hinge mechanism and is configured to be transparent to a first image output by the stereoscopic display and opaque to a second image output from the stereoscopic display, while the right lens assembly is coupled to the frame chassis via the hinge mechanism and is configured to be transparent to the second image output and opaque to the first image output. The sensor array is positioned to detect a current orientation of the left lens and the right lens.
Abstract:
A computer-implemented method for stereoscopically displaying content includes determining a first position of an object within a region of display space proximate to a stereoscopic display device and calculating a second position of a virtual object in the region. The method further includes determining an occluded portion of the virtual object that is occluded by the object when the virtual object is disposed at the second position and causing the display device to stereoscopically render for display one or more portions of the virtual object that do not include the occluded portion. One advantage of the disclosed method is that a viewer can perform direct touch operations with stereoscopically displayed (3D) content with reduced visual discomfort.
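The occlusion test at the heart of the method above can be sketched with heavily simplified geometry: the virtual object is a set of points, the physical object (e.g. a fingertip) is a sphere, and a point is treated as occluded when it lies behind the object and inside its footprint from the viewer's perspective. Everything here, including the function name, is an illustrative assumption.

```python
def visible_points(virtual_points, object_pos, object_radius):
    """Keep virtual-object points NOT occluded by the physical object.

    A point is occluded when it falls inside the object's screen-space
    footprint and is farther from the viewer (larger z) than the object."""
    ox, oy, oz = object_pos
    visible = []
    for (x, y, z) in virtual_points:
        behind = z > oz
        inside = (x - ox) ** 2 + (y - oy) ** 2 <= object_radius ** 2
        if not (behind and inside):
            visible.append((x, y, z))
    return visible
```

Rendering only the surviving points approximates the effect described: the viewer's hand appears in front of the stereoscopic content rather than being drawn over, reducing the depth-cue conflict that causes visual discomfort.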
Abstract:
One embodiment of the present invention sets forth a technique for generating a status update message. The technique involves defining one or more status update criteria and monitoring user activity in a software application for the one or more status update criteria. The technique further involves determining, based on the user activity, that the one or more status update criteria have been met and generating, via a processing unit, a status update message. The status update message includes multimedia content related to a project associated with the software application.
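The criteria-monitoring loop described above can be sketched as follows. The criterion representation (plain activity names) and the `generate_message` helper are assumptions made for this sketch; in the patent the generated message includes multimedia content related to the project, for which a plain text summary stands in here.

```python
def monitor(activity_log, criteria):
    """Return a status update message once every defined criterion has been
    met by the monitored user activity; otherwise return None."""
    met = {c for c in criteria if c in activity_log}
    if met == set(criteria):
        return generate_message(activity_log)
    return None

def generate_message(activity_log):
    # Stand-in for assembling the multimedia status update message.
    return "Status update: " + ", ".join(sorted(set(activity_log)))
```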