Abstract:
Methods, apparatus, and computer programs for controlling a view of a virtual scene with a handheld device are presented. In one method, images of a real-world scene are captured using a device. The method further includes operations for creating an augmented view for presentation on a display of the device by augmenting the images with virtual reality objects, and for detecting a hand in the images as extending into the real-world scene. In addition, the method includes operations for showing the hand on the screen as detected in the images, and for generating interaction data, based on an interaction of the hand with a virtual reality object, when the hand makes virtual contact in the augmented view with the virtual reality object. The augmented view is updated based on the interaction data, so that the screen shows the hand interacting with the virtual reality object.
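The contact test described above can be sketched as a screen-space overlap check between the detected hand region and the projected bounds of each virtual object. This is a minimal illustration, not the patented method: the `Box` type, the axis-aligned overlap test, and the shape of the interaction records are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned screen-space rectangle (pixels)."""
    x: float
    y: float
    w: float
    h: float

    def overlaps(self, other: "Box") -> bool:
        # True when the two rectangles share any screen area.
        return (self.x < other.x + other.w and other.x < self.x + self.w and
                self.y < other.y + other.h and other.y < self.y + self.h)

def interaction_data(hand: Box, objects: dict[str, Box]) -> list[dict]:
    """Emit one interaction record per virtual object the hand touches;
    the augmented view would then be updated from these records."""
    return [{"object": name, "type": "virtual_contact"}
            for name, obj in objects.items() if hand.overlaps(obj)]

# Hand at upper-left overlapping the cube but not the sphere.
hand = Box(100, 100, 60, 60)
scene = {"cube": Box(140, 120, 50, 50), "sphere": Box(400, 300, 40, 40)}
print(interaction_data(hand, scene))
```

A real system would use a segmented hand contour and the 3D pose of each object rather than 2D rectangles, but the control flow — detect, test for virtual contact, generate interaction data, re-render — is the same.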
Abstract:
A user interface evolves based on learned idiosyncrasies and collected data of a user. Learned idiosyncrasies and collected data of the user can be stored in a knowledge base. Information from the surrounding environment of the user can be obtained during learning of idiosyncrasies or collection of data. Thought-based statements can be generated based at least in part on the knowledge base and the information from the environment surrounding the user during learning of idiosyncrasies or collection of data. The thought-based statements serve to invoke or respond to subsequent actions of the user. The user interface can be presented so as to allow for interaction with the user based at least in part on the thought-based statements. Furthermore, personality nuances of the user interface can be developed that affect the interaction between the user and the user interface.
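The knowledge-base loop above can be sketched as follows. This is a toy illustration under loose assumptions: idiosyncrasies are stored as counted strings, the "environment" is a single label, and a thought-based statement is generated from the most frequently observed habit. The class and method names are invented for the example.

```python
class EvolvingInterface:
    """Minimal sketch: a knowledge base of learned user idiosyncrasies
    that produces thought-based statements conditioned on the current
    environment, to invoke or respond to user actions."""

    def __init__(self):
        self.knowledge: dict[str, int] = {}  # idiosyncrasy -> observation count

    def learn(self, idiosyncrasy: str) -> None:
        # Store (or reinforce) a learned idiosyncrasy in the knowledge base.
        self.knowledge[idiosyncrasy] = self.knowledge.get(idiosyncrasy, 0) + 1

    def thought(self, environment: str) -> str:
        # Generate a thought-based statement from the knowledge base plus
        # information from the user's surrounding environment.
        if not self.knowledge:
            return f"I notice it is {environment}. What would you like to do?"
        habit = max(self.knowledge, key=self.knowledge.get)  # strongest habit
        return f"It is {environment}; you usually {habit}. Shall I set that up?"

ui = EvolvingInterface()
ui.learn("play jazz in the evening")
ui.learn("play jazz in the evening")
print(ui.thought("evening"))
```

A production system would replace the counter with a richer model and add the personality-nuance layer the abstract mentions, but the flow — learn, consult environment, emit a statement, interact — is the one described.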
Abstract:
Methods, systems, and computer programs for generating an interactive space, viewable through at least first and second handheld devices, are presented. The method includes an operation for taking an image with a camera in the first device. In addition, the method includes an operation for determining a relative position of the second device with reference to the first device, based on image analysis of the taken image to identify a geometry of the second device. Furthermore, the method includes operations for identifying a reference point in a three-dimensional (3D) space based on the relative position, and for generating views of an interactive scene in corresponding displays of the first device and the second device. The interactive scene is tied to the reference point and includes virtual objects, and each view shows all or part of the interactive scene as observed from a current location of the corresponding device.
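One way the image-analysis step above could work is the standard pinhole-camera relation: if the second device's physical width is known, its apparent width in pixels gives its distance, and its horizontal offset from the image center gives its bearing. This sketch assumes that simplified model; the function names and the choice of placing the reference point midway between the two devices are assumptions for illustration, not details from the abstract.

```python
import math

def relative_position(apparent_width_px: float, true_width_m: float,
                      focal_px: float, device_cx_px: float,
                      image_cx_px: float) -> tuple[float, float]:
    """Estimate distance (m) and bearing (rad) of device 2 from device 1.
    Pinhole model: distance = focal * true_width / apparent_width."""
    distance = focal_px * true_width_m / apparent_width_px
    bearing = math.atan2(device_cx_px - image_cx_px, focal_px)
    return distance, bearing

def reference_point(distance: float, bearing: float) -> tuple[float, float, float]:
    """Place the shared 3D reference point midway between the devices,
    in device 1's camera frame (x right, y up, z forward)."""
    x = distance * math.sin(bearing) / 2
    z = distance * math.cos(bearing) / 2
    return (x, 0.0, z)

# A 0.2 m wide device seen 100 px wide, centered, with a 500 px focal length:
d, b = relative_position(100, 0.2, 500, 320, 320)
print(d, b, reference_point(d, b))  # 1 m straight ahead; midpoint 0.5 m out
```

Both devices can then anchor the interactive scene to this common reference point and render it from their own current locations.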
Abstract:
Methods, systems, and computer programs are provided for generating an interactive space. One method includes operations for associating a first device with a reference point in 3D space, and for calculating, by the first device, a position of the first device in the 3D space based on inertial information captured by the first device and utilizing dead reckoning. Further, the method includes operations for capturing images with a camera of the first device, and for identifying locations of one or more static features in the images. The position of the first device is corrected based on the identified locations of the one or more static features, and a view of an interactive scene is presented in a display of the first device, where the interactive scene is tied to the reference point and includes virtual objects.
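The two-stage tracking described above — integrate inertial data, then correct with camera observations — can be sketched with a dead-reckoning step and a simple complementary filter. This is a schematic, assuming the static features have already been resolved into a position estimate; real systems fuse these with a Kalman-style filter, and the `gain` parameter here is an invented tuning knob.

```python
def dead_reckon(position: list[float], velocity: list[float],
                accel: list[float], dt: float) -> tuple[list[float], list[float]]:
    """Propagate position by integrating inertial acceleration twice.
    Accumulates drift, which the camera-based correction removes."""
    velocity = [v + a * dt for v, a in zip(velocity, accel)]
    position = [p + v * dt for p, v in zip(position, velocity)]
    return position, velocity

def correct(position: list[float], feature_position: list[float],
            gain: float = 0.2) -> list[float]:
    """Pull the dead-reckoned estimate toward the position implied by
    static image features (a simple complementary filter)."""
    return [p + gain * (f - p) for p, f in zip(position, feature_position)]

# One second of 1 m/s^2 acceleration along x, then a camera correction.
pos, vel = dead_reckon([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 1.0)
pos = correct(pos, [0.5, 0.0, 0.0])
print(pos)
```

Because the interactive scene is tied to the fixed reference point, every correction to the device's position estimate directly corrects the rendered view of the virtual objects.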