Abstract:
In one embodiment of the present invention, a hybrid subsystem orchestrates animated transitions between stereoscopic imaging and non-stereoscopic imaging. In operation, the hybrid subsystem receives frames that represent a three-dimensional object over time. The hybrid subsystem renders the first frame based on a left eye position and then re-renders the first frame based on a right eye position. The left eye position and the right eye position are separated by a predetermined distance that is optimized for stereoscopic viewing. As part of rendering and re-rendering subsequent frames, the hybrid subsystem gradually decreases the distance between the left eye position and the right eye position. Upon receiving a final frame in the transition, the hybrid subsystem renders the frame only once, based on a single eye position. Advantageously, because the rendered image of the three-dimensional object gradually loses depth throughout the animated transition, the hybrid subsystem minimizes disruptions to the viewing experience.
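Below is a minimal Python sketch of the transition described above; the linear easing, the frame count, and all function and parameter names are hypothetical illustrations rather than details taken from the embodiment.

```python
# Hypothetical sketch of the stereo-to-mono transition described above.
# A real system would call its renderer at each yielded eye position.

def transition_eye_positions(center, max_separation, num_frames):
    """Yield (left, right) eye positions whose separation shrinks to zero.

    center:         x-coordinate midway between the two eyes
    max_separation: predetermined distance optimized for stereoscopic viewing
    num_frames:     length of the animated transition (at least 2)
    """
    for frame in range(num_frames):
        # Linearly decrease the separation; the final frame collapses
        # both eyes onto a single position, yielding one mono render.
        separation = max_separation * (1.0 - frame / (num_frames - 1))
        half = separation / 2.0
        yield (center - half, center + half)

for left, right in transition_eye_positions(center=0.0,
                                            max_separation=0.064,
                                            num_frames=5):
    if left == right:
        print(f"mono render at {left:.4f}")      # final frame: render once
    else:
        print(f"stereo render at {left:.4f} / {right:.4f}")
```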
Abstract:
Embodiments disclosed herein include a method, a non-transitory computer-readable medium, and a system for generating video clips that teach how to apply tools in various application programs for editing documents. The method includes identifying one or more characteristic features of a video clip. The method also includes providing the one or more characteristic features to a trained machine learning analysis module. The method further includes evaluating the characteristic features to generate a clip rating. The method also includes determining whether to discard the video clip based on the clip rating.
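The flow of the method might be sketched as follows; the feature extraction, the toy heuristic standing in for the trained machine learning analysis module, and the discard threshold are all assumptions made for illustration.

```python
# Hypothetical sketch of the clip-rating flow: identify features, rate
# them, then decide whether to keep or discard the clip.

def extract_features(clip):
    """Stand-in for identifying characteristic features of a video clip."""
    return {"duration_s": clip["duration_s"],
            "tool_invocations": clip["tool_invocations"]}

def rate_clip(features):
    """Stand-in for the trained machine learning analysis module."""
    # Toy heuristic in place of a real model: short clips that show at
    # least a few tool invocations rate highest.
    score = min(features["tool_invocations"], 3) / 3.0
    if features["duration_s"] > 120:
        score *= 0.5
    return score

DISCARD_THRESHOLD = 0.4   # assumed cutoff, not specified in the abstract

clip = {"duration_s": 45, "tool_invocations": 2}
rating = rate_clip(extract_features(clip))
keep = rating >= DISCARD_THRESHOLD
print(f"rating={rating:.2f}, keep={keep}")
```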
Abstract:
Techniques for managing authored views. The techniques include displaying a main window including a model, an authoring panel configured for displaying authored view indicators associated with authored views of the model, and a navigation panel configured for displaying thumbnail representations of authored views associated with the model. The techniques also include, based on a user input, accessing an authored view of the model, wherein the authored view includes one of a view point, a view path, and a view surface. The techniques further include displaying the authored view in the main window, an authored view indicator associated with the authored view in the authoring panel, and a thumbnail representation based on the authored view in the navigation panel.
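A possible data-model sketch for the described panels and authored views appears below; the class names, fields, and string-based indicators and thumbnails are illustrative assumptions, not the claimed implementation.

```python
# Hypothetical data model: accessing an authored view updates the main
# window, the authoring panel, and the navigation panel together.
from dataclasses import dataclass, field
from typing import Literal, Optional

@dataclass
class AuthoredView:
    name: str
    kind: Literal["view_point", "view_path", "view_surface"]
    camera_params: dict = field(default_factory=dict)

@dataclass
class ViewWorkspace:
    main_window_view: Optional[AuthoredView] = None
    authoring_panel: list = field(default_factory=list)   # indicators
    navigation_panel: list = field(default_factory=list)  # thumbnails

    def access_view(self, view):
        """On user input: show the view, its indicator, and its thumbnail."""
        self.main_window_view = view
        self.authoring_panel.append(f"indicator:{view.name}")
        self.navigation_panel.append(f"thumbnail:{view.name}")

ws = ViewWorkspace()
ws.access_view(AuthoredView("front", "view_point"))
print(ws.main_window_view.name, ws.authoring_panel, ws.navigation_panel)
```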
Abstract:
A computing device is configured to generate a graphical user interface (GUI) that allows an end-user to search for a particular document or documents within a large collection of documents. The GUI provides a view of the overall document collection and affords the end-user the ability to reduce the number of visual document thumbnails by means of keyword search. When the end-user identifies a candidate among the reduced number of thumbnails, the end-user may select the page view of the candidate document and conduct further review. If the selected candidate is not the target document, the end-user may select adjacent documents to seamlessly transition between reading and searching. An advantage of this approach is that the visual qualities of the documents, such as images, graphical layout, and color, among others, may be incorporated into the search process. Searching for a particular target document is, thus, expedited.
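One way to sketch the search-to-reading loop in Python is shown below; the document records, the keyword match, and the adjacent-document helper are assumed for illustration.

```python
# Hypothetical sketch: keyword filtering reduces the thumbnail set, and
# adjacent-document navigation lets the user keep reading from a candidate.

documents = [
    {"id": 0, "title": "Q3 budget",   "text": "quarterly revenue tables"},
    {"id": 1, "title": "Q3 memo",     "text": "budget narrative and charts"},
    {"id": 2, "title": "Design spec", "text": "page layout and color notes"},
]

def filter_thumbnails(docs, keyword):
    """Reduce the visible thumbnails to documents matching the keyword."""
    kw = keyword.lower()
    return [d for d in docs
            if kw in d["title"].lower() or kw in d["text"].lower()]

def adjacent(docs, doc_id, offset):
    """Open a neighboring document so the user can keep reading."""
    ids = [d["id"] for d in docs]
    return docs[(ids.index(doc_id) + offset) % len(docs)]

candidates = filter_thumbnails(documents, "budget")
print([d["title"] for d in candidates])              # reduced thumbnail set
page = candidates[0]                                 # open candidate's page view
print(adjacent(documents, page["id"], +1)["title"])  # browse its neighbor
```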
Abstract:
In one embodiment of the present invention, at least a portion of a keyboard is displayed on a touch-screen display. A first action performed via the touch-screen display is detected. Based on the detected first action, a region of the at least a portion of the keyboard is displayed on the touch-screen display. A second action performed via the touch-screen display is detected. Based on the second action, a character may be selected or the full keyboard may be re-displayed. The first action and the second action may be performed anywhere on the touch-screen display.
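A toy sketch of the two-action interaction might look like this; the region layout, the gesture names, and the class interface are assumptions rather than elements of the embodiment.

```python
# Hypothetical two-action keyboard sketch: a first touch zooms into a
# region of the keyboard, and a second touch either selects a character
# or restores the full keyboard.

FULL_KEYBOARD = ["qwert", "yuiop", "asdfg", "hjkl;"]

class ZoomKeyboard:
    def __init__(self):
        self.region = None          # None => full keyboard shown

    def first_action(self, row):
        """Zoom the display into one region of the keyboard."""
        self.region = FULL_KEYBOARD[row]

    def second_action(self, gesture, index=0):
        """Either select a character or re-display the full keyboard."""
        if gesture == "tap":
            char, self.region = self.region[index], None
            return char
        self.region = None          # e.g. a swipe cancels the zoom
        return None

kb = ZoomKeyboard()
kb.first_action(row=0)              # performed anywhere on the display
print(kb.second_action("tap", 2))   # -> 'e'
```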
Abstract:
A sketch-based interface within an animation engine provides an end-user with tools for creating emitter textures and oscillator textures. The end-user may create an emitter texture by sketching one or more patch elements and then sketching an emitter. The animation engine animates the sketch by generating a stream of patch elements that emanate from the emitter. The end-user may create an oscillator texture by sketching a patch that includes one or more patch elements, and then sketching a brush skeleton and an oscillation skeleton. The animation engine replicates the patch along the brush skeleton, and then interpolates the replicated patches between the brush skeleton and the oscillation skeleton, thereby causing those replicated patches to periodically oscillate between the two skeletons.
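The periodic interpolation between the two skeletons could be sketched as below, with the skeletons simplified to 2D point lists; the sinusoidal blend and the period are assumptions chosen to illustrate the oscillation.

```python
# Hypothetical sketch of the oscillator-texture animation: replicated
# patches are blended between the brush skeleton and the oscillation
# skeleton with a periodic factor, so they swing between the two.
import math

brush_skeleton = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
oscillation_skeleton = [(0.0, 1.0), (1.0, 1.2), (2.0, 1.0)]

def patch_positions(t, period=2.0):
    """Positions of the replicated patches at time t (seconds)."""
    # Blend factor swings between 0 and 1, so each patch oscillates
    # periodically between the two sketched skeletons.
    a = 0.5 * (1.0 + math.sin(2.0 * math.pi * t / period))
    return [((1 - a) * bx + a * ox, (1 - a) * by + a * oy)
            for (bx, by), (ox, oy) in zip(brush_skeleton,
                                          oscillation_skeleton)]

for t in (0.0, 0.5, 1.0):
    print(t, [(round(x, 2), round(y, 2)) for x, y in patch_positions(t)])
```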
Abstract:
One embodiment of the invention disclosed herein provides a system that includes a mirror apparatus with a first surface to which a half-silvered mirror film is applied, where the mirror apparatus transmits a transmitted image from a second surface to the first surface. The system further includes a servo-controlled dimmer that adjusts a level of ambient light associated with the mirror apparatus. The system further includes a motion sensing device that tracks positions of a plurality of points associated with an object, wherein the object is situated on the half-silvered mirror film side of the mirror apparatus. The system further includes a computing device including a memory that stores instructions that, when executed by a processor included in the computing device, cause the processor to control the servo-controlled dimmer to adjust the ambient light such that both the transmitted image and a reflected image are visible on the first surface.
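A rough sketch of how the computing device might steer the servo-controlled dimmer is given below; the sensor model, the gain, and the balancing objective are assumptions rather than the disclosed control law.

```python
# Hypothetical control-loop sketch: nudge the ambient light toward a level
# at which the transmitted and reflected images are both visible.

def dimmer_step(ambient, transmitted_brightness, reflected_brightness,
                gain=0.1):
    """Return an adjusted ambient-light level in [0, 1].

    Raising ambient light strengthens the reflection off the half-silvered
    film; lowering it lets the transmitted image dominate. This step steers
    toward the point where the two are balanced.
    """
    error = transmitted_brightness - reflected_brightness
    return max(0.0, min(1.0, ambient + gain * error))

ambient = 0.8
for _ in range(20):
    # Toy sensor model: reflected brightness scales with ambient light,
    # while the transmitted image has a fixed brightness.
    reflected = ambient * 0.9
    transmitted = 0.5
    ambient = dimmer_step(ambient, transmitted, reflected)
print(f"settled ambient level: {ambient:.2f}")
```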
Abstract:
A technique for remote mixed-reality interaction between users includes determining a first position of a first object within a first three-dimensional (3D) space; generating first information associated with the first 3D space for the first object based on the first position; transmitting the first information to a computing device that renders first video content for display within a second 3D space based on the first information; and while transmitting the first information to the computing device, receiving second information that is associated with the second 3D space and with a second object and generated based on a second position determined for the second object within the second 3D space.
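The simultaneous transmit-and-receive exchange could be sketched with two threads and in-memory queues standing in for the network transport; all names and the message format are assumptions.

```python
# Hypothetical sketch of the bidirectional exchange: each site streams
# position-derived information for its local object while receiving the
# remote site's stream for rendering.
import queue
import threading

def site(name, local_positions, outbound, inbound):
    for pos in local_positions:
        # Generate and transmit information for the local 3D space.
        outbound.put({"object": name, "position": pos})
        try:
            # While transmitting, receive the remote site's information
            # and (in a real system) render video content from it.
            msg = inbound.get(timeout=0.1)
            print(f"{name} renders remote {msg['object']} at {msg['position']}")
        except queue.Empty:
            pass

a_to_b, b_to_a = queue.Queue(), queue.Queue()
t1 = threading.Thread(target=site, args=("A", [(0, 0, 0), (0, 0, 1)],
                                         a_to_b, b_to_a))
t2 = threading.Thread(target=site, args=("B", [(1, 0, 0), (1, 0, 1)],
                                         b_to_a, a_to_b))
t1.start(); t2.start(); t1.join(); t2.join()
```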
Abstract:
An automated robot design pipeline facilitates the overall process of designing robots that perform various desired behaviors. The disclosed pipeline includes four stages. In the first stage, a generative engine samples a design space to generate a large number of robot designs. In the second stage, a metric engine generates behavioral metrics indicating a degree to which each robot design performs the desired behaviors. In the third stage, a mapping engine generates a behavior predictor that can predict the behavioral metrics for any given robot design. In the fourth stage, a design engine generates a graphical user interface (GUI) that guides the user in performing behavior-driven design of a robot. One advantage of the disclosed approach is that the user need not have specialized skills in either graphic design or programming to generate designs for robots that perform specific behaviors or express various emotions.
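An end-to-end toy sketch of the four stages follows; the random sampling, the scripted walking metric, the nearest-neighbor predictor, and a simple query in place of the GUI are all stand-ins chosen for illustration, not the disclosed engines.

```python
# Hypothetical sketch of the four-stage pipeline with toy stand-ins.
import random

random.seed(0)

# Stage 1: generative engine samples the design space.
designs = [{"leg_length": random.uniform(0.1, 1.0),
            "gait_freq": random.uniform(0.5, 3.0)} for _ in range(200)]

# Stage 2: metric engine scores how well each design performs a behavior.
def walk_metric(d):
    return d["leg_length"] * d["gait_freq"]

metrics = [walk_metric(d) for d in designs]

# Stage 3: mapping engine builds a behavior predictor for unseen designs.
def predict(design):
    """1-nearest-neighbor predictor over the sampled designs."""
    def dist(d):
        return ((d["leg_length"] - design["leg_length"]) ** 2
                + (d["gait_freq"] - design["gait_freq"]) ** 2)
    i = min(range(len(designs)), key=lambda i: dist(designs[i]))
    return metrics[i]

# Stage 4: a GUI would let the user steer by predicted behavior; here we
# simply query the predictor for a candidate design.
print(predict({"leg_length": 0.8, "gait_freq": 2.0}))
```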
Abstract:
A design engine for designing an article to be worn on a human body part (input canvas) in a virtual environment. A virtual model engine of the design engine is used to generate and modify a virtual model of the input canvas and a virtual model of the article based on skin-based gesture inputs detected by an input processing engine. The gesture inputs comprise contacts between an input tool and the input canvas at locations on the input canvas. The virtual model engine may implement different design modes for receiving and processing gesture inputs for designing the article, including direct manipulation, generative manipulation, and parametric manipulation modes. In all three modes, a resulting virtual model of the article is based on physical geometries of at least part of the input canvas. The resulting virtual model of the article is exportable to a fabrication device for physical fabrication of the article.
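A sketch of how a skin contact might be mapped onto the article's virtual model, assuming a cylindrical forearm canvas, is shown below; every name and dimension is a hypothetical illustration of the direct manipulation mode.

```python
# Hypothetical sketch: a contact between the input tool and the input
# canvas is mapped into the article's 3D geometry using the physical
# dimensions of the canvas.
import math

def canvas_to_model(contact_uv, canvas_radius=0.04, canvas_length=0.25):
    """Map a (u, v) contact on a cylindrical forearm canvas to 3D.

    u in [0, 1] runs along the forearm; v in [0, 1] wraps around it.
    The article's geometry inherits these physical canvas dimensions.
    """
    u, v = contact_uv
    angle = 2.0 * math.pi * v
    return (canvas_length * u,
            canvas_radius * math.cos(angle),
            canvas_radius * math.sin(angle))

bracelet_points = []
for contact in [(0.5, 0.0), (0.5, 0.25), (0.5, 0.5), (0.5, 0.75)]:
    # Each detected tool/skin contact adds a control point to the model.
    bracelet_points.append(canvas_to_model(contact))
print(bracelet_points)   # geometry exportable for physical fabrication
```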