Abstract:
The present disclosure relates to techniques for capturing and displaying partial motion in virtual and augmented reality (VAR) scenes. A VAR scene can include a plurality of images combined and oriented over any suitable geometry. Although such a scene may provide an immersive view, it is static: current systems do not generally support VAR scenes that include dynamic content (e.g., content that varies over time). Embodiments of the present invention can capture, generate, and/or share VAR scenes, and can efficiently add dynamic content to a VAR scene, allowing VAR scenes that include dynamic content to be uploaded, shared, or otherwise transmitted without prohibitive resource requirements. Dynamic content can be captured by a device and combined with a preexisting or simultaneously captured VAR scene, and the dynamic content may be played back upon selection.
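The capture-and-playback flow described above can be pictured as a simple data model: a static scene plus dynamic regions that begin playback when the viewer selects a direction near them. This is a minimal sketch; all names below (`DynamicRegion`, `VarScene`, `select`) are illustrative assumptions, not identifiers from the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class DynamicRegion:
    """A time-varying overlay (e.g., short video frames) anchored to a
    direction within an otherwise static VAR scene."""
    yaw_deg: float
    pitch_deg: float
    frames: list          # per-frame payloads (placeholders here)
    playing: bool = False

@dataclass
class VarScene:
    """Static spherical imagery plus optional dynamic regions."""
    static_images: list
    dynamic_regions: list = field(default_factory=list)

    def add_dynamic(self, region: DynamicRegion) -> None:
        self.dynamic_regions.append(region)

    def select(self, yaw_deg: float, pitch_deg: float, tol_deg: float = 5.0):
        """Start playback of the dynamic region nearest the selected
        direction, if one lies within the tolerance."""
        for r in self.dynamic_regions:
            if (abs(r.yaw_deg - yaw_deg) <= tol_deg
                    and abs(r.pitch_deg - pitch_deg) <= tol_deg):
                r.playing = True
                return r
        return None

scene = VarScene(static_images=["img0", "img1"])
scene.add_dynamic(DynamicRegion(yaw_deg=90.0, pitch_deg=0.0, frames=["f0", "f1"]))
hit = scene.select(yaw_deg=91.0, pitch_deg=1.0)  # selection near the region starts playback
```

Keeping the dynamic frames separate from the static imagery is what keeps transmission cheap: only the small overlay varies over time, while the bulk of the scene is sent once.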
Abstract:
A preferred method of acquiring virtual or augmented reality (VAR) scenes can include, at a plurality of locations of interest, providing one or more users with a predetermined pattern for image acquisition with an image capture device and, for each of the one or more users, in response to a user input, acquiring at least one image at the location of interest. The method of the preferred embodiment can also include, for each of the one or more users, in response to the acquisition of at least one image, providing the user with feedback to ensure a complete acquisition of the virtual or augmented reality scene; and receiving at a remote database, from each of the one or more users, one or more VAR scenes. One variation of the method of the preferred embodiment can include providing game mechanics to promote proper image acquisition and competition between users.
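The pattern-plus-feedback acquisition loop might look like the following sketch, which checks a set of captured view directions against a hypothetical target pattern and reports the targets still missing so the UI can guide the user. The pattern layout, tolerance, and function names are assumptions for illustration, not the disclosed pattern:

```python
def make_pattern(yaw_step=90, pitch_levels=(-45, 0, 45)):
    """Generate a coarse grid of (yaw, pitch) target directions covering the sphere."""
    return [(yaw, pitch) for pitch in pitch_levels for yaw in range(0, 360, yaw_step)]

def coverage_feedback(pattern, captured, tol=15.0):
    """Return the targets not yet covered by any captured direction,
    so the device can prompt the user toward a complete acquisition."""
    missing = []
    for ty, tp in pattern:
        covered = any(
            # yaw wraps at 360 degrees; pitch does not
            min(abs(cy - ty), 360 - abs(cy - ty)) <= tol and abs(cp - tp) <= tol
            for cy, cp in captured
        )
        if not covered:
            missing.append((ty, tp))
    return missing

pattern = make_pattern()
captured = [(0, 0), (92, 2), (180, -1), (268, 3)]  # equator ring captured so far
missing = coverage_feedback(pattern, captured)      # upper and lower rings remain
```

The same `missing` list is a natural hook for game mechanics: a score or completion percentage derived from it rewards proper, complete acquisition.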
Abstract:
One variation of a method for dynamically displaying multiple virtual and augmented reality scenes on a single display includes determining a set of global transform parameters from a combination of user-defined inputs, user-measured inputs, and device orientation and position derived from sensor inputs; calculating a projection from a configurable function of the global transform parameters, context provided by the user, and context specific to a virtual and augmented reality scene; rendering a virtual and augmented reality scene with the calculated projection on a subframe of the display; and repeating the previous two steps to render additional virtual and augmented reality scenes.
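One way to picture this render loop is a sketch that, for each scene, derives a projection from shared global parameters plus per-scene context and assigns the scene a subframe of the display. The side-by-side subframe layout, the zoom-scaled field of view, and all parameter names are assumptions for illustration, not the disclosed method:

```python
import math

def projection_matrix(fov_deg, aspect, near=0.1, far=100.0):
    """Standard 4x4 perspective projection (row lists)."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), (2 * far * near) / (near - far)],
        [0, 0, -1, 0],
    ]

def render_scenes(scenes, display_w, display_h, global_params):
    """For each scene, combine the global transform parameters with
    per-scene context into a projection, then assign a display subframe."""
    n = len(scenes)
    sub_w = display_w // n                     # equal side-by-side split (assumption)
    rendered = []
    for i, scene in enumerate(scenes):
        fov = global_params["base_fov"] * scene.get("zoom", 1.0)
        proj = projection_matrix(fov, sub_w / display_h)
        subframe = (i * sub_w, 0, sub_w, display_h)  # (x, y, w, h)
        rendered.append({"scene": scene["name"], "subframe": subframe, "proj": proj})
    return rendered

out = render_scenes(
    [{"name": "lobby"}, {"name": "roof", "zoom": 0.5}],
    display_w=1280, display_h=720,
    global_params={"base_fov": 90.0},
)
```

Because the global parameters are shared while the projection function is configurable per scene, all subframes respond coherently to device motion while each scene can still apply its own context (here, a zoom factor).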
Abstract:
A preferred method for dynamically displaying virtual and augmented reality scenes can include determining input parameters, calculating virtual photometric parameters, and rendering a VAR scene with a set of simulated photometric parameters.
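As a toy illustration of the three steps (determine inputs, calculate virtual photometric parameters, render with them), the sketch below derives a simulated exposure from the view direction, mimicking how a real camera's auto-exposure dims when pointed toward a bright region. The specific model, the bright-region placement, and all names are assumptions, not the disclosed calculation:

```python
def virtual_photometrics(inputs):
    """Map input parameters (here, just a view direction) to simulated
    photometric parameters such as exposure and vignetting."""
    # Hypothetical model: a bright region sits at yaw 0; exposure is lowest
    # when looking straight at it and highest when facing directly away.
    yaw = inputs["view_yaw_deg"] % 360
    angle_from_bright = min(yaw, 360 - yaw)           # 0..180 degrees
    exposure = 0.5 + 0.5 * (angle_from_bright / 180)  # 0.5 toward light, 1.0 away
    return {"exposure": round(exposure, 3),
            "vignette": inputs.get("vignette", 0.1)}

params = virtual_photometrics({"view_yaw_deg": 180})  # facing away from the light
```

A renderer would then apply `params` when drawing the VAR scene, so the static imagery appears to respond photometrically to where the viewer looks.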
Abstract:
A preferred method for sharing user-generated virtual and augmented reality scenes can include receiving at a server a virtual and/or augmented reality (VAR) scene generated by a user mobile device. Preferably, the VAR scene includes visual data and orientation data, the orientation data including a real orientation of the user mobile device relative to a projection matrix. The preferred method can also include compositing the visual data and the orientation data into a viewable VAR scene; locally storing the viewable VAR scene at the server; and in response to a request received at the server, distributing the viewable VAR scene to a viewer mobile device.
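The server-side flow (receive, composite, store, distribute) can be sketched with an in-memory stand-in. Real compositing would project the visual data onto scene geometry using the orientation data; here it merely pairs each image with its orientation, and the class and method names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class VarUpload:
    """Payload from a user mobile device."""
    visual_data: list       # captured images
    orientation_data: list  # per-image device orientation (e.g., quaternions)

class VarServer:
    """Minimal in-memory sketch of receive -> composite -> store -> distribute."""
    def __init__(self):
        self._store = {}

    def receive(self, scene_id, upload: VarUpload):
        # "Compositing" here just pairs each image with its orientation; a
        # real server would build viewable scene geometry from the pairs.
        viewable = list(zip(upload.visual_data, upload.orientation_data))
        self._store[scene_id] = viewable   # locally store the viewable scene
        return scene_id

    def distribute(self, scene_id):
        """Serve the stored scene in response to a viewer request."""
        return self._store.get(scene_id)

server = VarServer()
server.receive("scene-1", VarUpload(["imgA", "imgB"], ["q0", "q1"]))
scene = server.distribute("scene-1")
```

Compositing once at upload time, rather than on every viewer request, is what lets the stored scene be distributed cheaply to many viewer devices.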