Abstract:
A method comprising: acquiring depth image data; processing the image data into real-time three-dimensional (3D) reconstructed models of the environment; manipulating the models, textures, and images over a set of data; rendering the modified result for display; and supporting interaction with the display based on existing spatial and physical skills.
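The abstract leaves the reconstruction and manipulation steps open; the following is a minimal sketch of one such pipeline using only NumPy, in which the camera intrinsics, the synthetic depth frame, and the helper names (depth_to_points, manipulate, render) are illustrative assumptions rather than elements of the claimed method.

```python
# Hypothetical sketch: depth image -> point cloud -> rigid manipulation -> re-rendered view.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into a 3D point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def manipulate(points, transform):
    """Apply a 4x4 rigid transform to the reconstructed model."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homo @ transform.T)[:, :3]

def render(points, fx, fy, cx, cy, size):
    """Project the manipulated points back into a 2D depth buffer for display."""
    h, w = size
    img = np.full((h, w), np.inf)
    z = points[:, 2]
    valid = z > 0
    u = np.round(points[valid, 0] * fx / z[valid] + cx).astype(int)
    v = np.round(points[valid, 1] * fy / z[valid] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    np.minimum.at(img, (v[inside], u[inside]), z[valid][inside])
    return img

# Synthetic depth frame standing in for acquired sensor data.
depth = np.full((120, 160), 1.5)
pts = depth_to_points(depth, fx=100, fy=100, cx=80, cy=60)
shift = np.eye(4); shift[0, 3] = 0.1  # example manipulation: translate 10 cm along x
view = render(manipulate(pts, shift), 100, 100, 80, 60, (120, 160))
```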
Abstract:
An apparatus and method for hybrid rendering. For example, one embodiment of a method comprises: identifying left and right views of a user's eyes; generating at least one depth map for the left and right views; calculating depth clamping thresholds including a minimum depth value and a maximum depth value; transforming the depth map in accordance with the minimum depth value and maximum depth value; and performing view synthesis to render left and right views using the transformed depth map.
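The abstract does not specify how the depth clamping thresholds are calculated or how view synthesis is performed; the sketch below assumes percentile-based thresholds and a naive disparity-shift warp, with all function names and parameters (clamp_depth, synthesize_view, baseline, focal) introduced purely for illustration.

```python
# Hypothetical sketch: clamp the depth map, then warp the image into left/right views.
import numpy as np

def clamp_depth(depth, lo_pct=2.0, hi_pct=98.0):
    """Compute minimum/maximum thresholds (assumed here to be percentiles) and clamp."""
    d_min = np.percentile(depth, lo_pct)
    d_max = np.percentile(depth, hi_pct)
    return np.clip(depth, d_min, d_max), d_min, d_max

def synthesize_view(image, depth, baseline, focal, sign):
    """Naive view synthesis: shift each pixel by disparity = focal * baseline / depth."""
    h, w = depth.shape
    out = np.zeros_like(image)
    disparity = np.round(sign * focal * baseline / depth).astype(int)
    u = np.clip(np.arange(w)[None, :] + disparity, 0, w - 1)
    rows = np.arange(h)[:, None]
    out[rows, u] = image  # forward warp of the source pixels
    return out

# Synthetic center image and depth map standing in for real inputs.
image = np.random.rand(120, 160)
depth = np.random.uniform(0.5, 10.0, (120, 160))
clamped, d_min, d_max = clamp_depth(depth)
left = synthesize_view(image, clamped, baseline=0.03, focal=100, sign=+1)
right = synthesize_view(image, clamped, baseline=0.03, focal=100, sign=-1)
```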
Abstract:
Technologies for displaying graphical elements on a graphical user interface include a wearable computing device to generate a captured image. The wearable computing device analyzes the captured image to generate image location metric data for one or more prospective locations on the graphical user interface at which to place a graphical element. The image location metric data indicates one or more image characteristics of the corresponding prospective location. The wearable computing device determines appropriateness data for each prospective location based on the corresponding image location metric data. The appropriateness data indicates the relative suitability of the corresponding location for display of the graphical element. The wearable computing device selects one of the prospective locations based on the appropriateness data and displays the graphical element on the graphical user interface at the selected prospective location.
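The abstract leaves the image characteristics and the suitability ranking open; the following sketch assumes local brightness variance as the image location metric (lower variance treated as more suitable), and every name in it (location_metric, appropriateness, select_location, the candidate coordinates) is a hypothetical stand-in rather than part of the described apparatus.

```python
# Hypothetical sketch: score prospective overlay locations in a captured frame
# and pick the most suitable one for the graphical element.
import numpy as np

def location_metric(region):
    """Image characteristic for a prospective location: brightness variance."""
    return float(np.var(region))

def appropriateness(metric, metrics):
    """Relative suitability among candidates: lower variance scores higher."""
    worst = max(metrics) or 1.0
    return 1.0 - metric / worst

def select_location(frame, candidates, size=(40, 60)):
    """Pick the candidate (top-left y, x) whose region scores best."""
    regions = [frame[y:y + size[0], x:x + size[1]] for y, x in candidates]
    metrics = [location_metric(r) for r in regions]
    scores = [appropriateness(m, metrics) for m in metrics]
    return candidates[int(np.argmax(scores))]

# Synthetic captured frame and candidate overlay positions.
frame = np.random.rand(480, 640)
candidates = [(10, 10), (10, 560), (420, 10), (420, 560)]
best = select_location(frame, candidates)
```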