Abstract:
One embodiment involves receiving, by a web page authoring tool, presentation information in a markup language corresponding to a static graphical object. In this embodiment, the web page authoring tool receives animation information in a data interchange format corresponding to an adjustment for the static graphical object. In this embodiment, the web page authoring tool receives a runtime engine. In this embodiment, the web page authoring tool stores the presentation information, the animation information, and the runtime engine within a web page. The runtime engine may be configured to cause a web browser displaying the web page to render an animation. The animation can be based at least in part on the presentation information and the animation information.
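As a rough illustration of the kind of runtime engine this abstract describes, the TypeScript sketch below reads animation information from a JSON block embedded in the page and adjusts a statically declared element over time. The AnimationInfo schema, the element ids, and the use of a script element with type "application/json" are assumptions made for this sketch, not details taken from the embodiment.

// Hypothetical animation information in a data interchange format (JSON).
interface AnimationInfo {
  targetId: string;                      // id of the static graphical object declared in the markup
  property: "opacity" | "left" | "top";  // which visual property the adjustment changes
  from: number;
  to: number;
  durationMs: number;
}

// Minimal runtime engine: interpolates the declared adjustment over time
// and applies it to the element produced by the presentation markup.
function runAnimation(info: AnimationInfo): void {
  const el = document.getElementById(info.targetId);
  if (!el) return;
  const start = performance.now();
  const step = (now: number): void => {
    const t = Math.min((now - start) / info.durationMs, 1);
    const value = info.from + (info.to - info.from) * t;
    if (info.property === "opacity") el.style.opacity = String(value);
    else if (info.property === "left") el.style.left = `${value}px`;
    else el.style.top = `${value}px`;
    if (t < 1) requestAnimationFrame(step);
  };
  requestAnimationFrame(step);
}

// The animation information could be stored within the web page itself, e.g. in
// <script type="application/json" id="anim-data">…</script>, and parsed here.
const raw = document.getElementById("anim-data")?.textContent ?? "{}";
runAnimation(JSON.parse(raw) as AnimationInfo);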
Abstract:
Digital content interaction and navigation techniques and systems in virtual and augmented reality are described. In one example, techniques are employed to aid user interaction within a physical environment in which the user is disposed while viewing a virtual or augmented reality environment. In another example, techniques are described to support a world relative field of view and a fixed relative field of view. The world relative field of view is configured to follow the motion of the user (e.g., movement of the user's head or mobile phone) within the environment to support navigation to different locations within the environment. The fixed relative field of view is configured to remain fixed during this navigation such that digital content disposed in this field of view remains at the same location relative to the user's field of view.
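A simplified way to see the distinction is to track, for each piece of digital content, whether its placement is anchored in the world or in the user's view. The yaw-only TypeScript sketch below reduces the described fields of view to a single angle, which is an assumption for illustration: world relative items shift as the user's head (or phone) turns, while fixed relative items keep the same offset within the field of view.

// Hypothetical 2D simplification: yaw-only head rotation, content placed by azimuth (radians).
interface ContentItem {
  label: string;
  anchor: "world" | "fixed"; // world relative vs. fixed relative field of view
  azimuth: number;           // world items: absolute direction; fixed items: offset within the view
}

// Direction of each item relative to the current view. World relative items
// move as the head turns (supporting navigation to different locations);
// fixed relative items keep the same offset within the user's field of view.
function viewAzimuth(item: ContentItem, headYaw: number): number {
  return item.anchor === "world" ? item.azimuth - headYaw : item.azimuth;
}

// Usage sketch: a world-anchored sign and a head-locked menu.
const items: ContentItem[] = [
  { label: "storefront", anchor: "world", azimuth: 0.8 },
  { label: "menu", anchor: "fixed", azimuth: -0.2 },
];
for (const headYaw of [0, 0.4, 0.8]) {
  console.log(items.map(i => `${i.label}: ${viewAzimuth(i, headYaw).toFixed(2)}`).join("  "));
}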
Abstract:
Digital content rendering coordination techniques in augmented reality are described. In one example, a user is provided with a first display device via which an augmented reality environment is to be viewed that includes at least a partial view of a physical environment. As part of this physical environment, a second display device (e.g., a desktop monitor, a mobile phone, and so forth) is also viewable by the user through the first display device, i.e., it is directly viewable. Techniques are described herein in which a view of digital content on the second display device is coordinated with a display of digital content on the first display device. This may be used to support a variety of usage scenarios that expand and share functionality associated with these different devices.
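One way such coordination could work, sketched below in TypeScript, is for the second display device to announce what it is currently showing so that the first display device can render related content anchored near the physical screen. The CoordinationBus, the ContentState fields, and the next-page scenario are hypothetical stand-ins for whatever transport and usage scenario an actual system would employ.

// Hypothetical message from the second display device describing its current content.
interface ContentState {
  deviceId: string;
  documentTitle: string;
  visiblePages: number[]; // pages already shown on the second display
}

// Simple in-process bus standing in for whatever transport (Wi-Fi, Bluetooth,
// WebSocket, and so forth) the two devices might actually use; an assumption here.
type Listener = (state: ContentState) => void;
class CoordinationBus {
  private listeners: Listener[] = [];
  publish(state: ContentState): void { this.listeners.forEach(l => l(state)); }
  subscribe(l: Listener): void { this.listeners.push(l); }
}

// First display device (e.g., an AR headset) reacts to what the second device
// shows, here by rendering the next page beside the monitor in augmented reality.
const bus = new CoordinationBus();
bus.subscribe(state => {
  const nextPage = Math.max(...state.visiblePages) + 1;
  console.log(`AR overlay: page ${nextPage} of "${state.documentTitle}" beside ${state.deviceId}`);
});

// Second display device announces its current view.
bus.publish({ deviceId: "desktop-monitor", documentTitle: "Quarterly Report", visiblePages: [1, 2] });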