Abstract:
The technology provides contextual personal information through a mixed reality display device system worn by a user. The user inputs person selection criteria, and the display system sends a request to a cloud-based application with access to user profile data for multiple users, asking for data identifying at least one person in the user's location who satisfies the person selection criteria. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view; if not, an identifier and a position indicator of the person in the location are output. Directional sensors on the display device may also be used to determine a position of the person. Cloud-based software can identify and track the positions of people based on image and non-image data from display devices in the location.
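To illustrate the request-and-output flow this abstract describes, here is a minimal Python sketch. The `cloud_service` and `display` interfaces, the `PersonMatch` record, and all field names are hypothetical, invented for illustration rather than taken from the source.

```python
# Hypothetical sketch of the person-finding flow; none of these names
# come from the source.
from dataclasses import dataclass

@dataclass
class PersonMatch:
    identifier: str        # e.g., a display name shared via the person's profile
    position: tuple        # (x, y, z) in the location's coordinate space
    in_field_of_view: bool

def request_matches(cloud_service, location_id, criteria):
    """Ask the cloud-based application for people in the user's location
    who satisfy the person selection criteria (e.g., 'attended my college')."""
    return cloud_service.find_people(location=location_id, criteria=criteria)

def present_matches(display, matches):
    for match in matches:
        if match.in_field_of_view:
            # Person is visible: output identifying data near the person.
            display.show_label(match.identifier, at=match.position)
        else:
            # Person is in the location but not in view: output an
            # identifier plus a position indicator (e.g., a directional arrow).
            display.show_position_indicator(match.identifier, toward=match.position)
```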
Abstract:
Methods for providing real-time feedback to an end user of a mobile device as they are interacting with or manipulating one or more virtual objects within an augmented reality environment are described. The real-time feedback may comprise visual feedback, audio feedback, and/or haptic feedback. In some embodiments, a mobile device, such as a head-mounted display device (HMD), may determine an object classification associated with a virtual object within an augmented reality environment, detect an object manipulation gesture performed by an end user of the mobile device, detect an interaction with the virtual object based on the object manipulation gesture, determine a magnitude of a virtual force associated with the interaction, and provide real-time feedback to the end user of the mobile device based on the interaction, the magnitude of the virtual force applied to the virtual object, and the object classification associated with the virtual object.
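As a rough sketch of how the three determinations named above (object classification, interaction, force magnitude) might drive feedback selection, consider the following snippet; the impulse-style force formula, the classification labels, and the feedback mapping are illustrative assumptions, not details from the abstract.

```python
# Illustrative sketch only; force model, classes, and thresholds are assumed.
def virtual_force(hand_speed_m_s, virtual_mass_kg, contact_scale=1.0):
    # Treat the gesture like an impulse: faster hand motion against a
    # "heavier" virtual object yields a larger force magnitude.
    return hand_speed_m_s * virtual_mass_kg * contact_scale

def feedback_for(interaction, object_class):
    force = virtual_force(interaction["hand_speed"], interaction["mass"])
    events = [("visual", f"highlight:{object_class}")]   # always show contact
    if object_class == "rigid":
        # Rigid objects get a haptic pulse whose strength scales with force.
        events.append(("haptic", min(1.0, force / 10.0)))
    elif object_class == "deformable":
        # Deformable objects get an audio cue instead.
        events.append(("audio", "squish"))
    return events

print(feedback_for({"hand_speed": 0.8, "mass": 5.0}, "rigid"))
# -> [('visual', 'highlight:rigid'), ('haptic', 0.4)]
```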
Abstract:
Methods for controlling the display of content as the content is being viewed by an end user of a head-mounted display device (HMD) are described. In some embodiments, an HMD may display the content using a virtual content reader for reading the content. The content may comprise text and/or images, such as text or images associated with an electronic book, an electronic magazine, a word processing document, a webpage, or an email. The virtual content reader may provide automated content scrolling based on the rate at which the end user reads a portion of the displayed content on the virtual content reader. In one embodiment, an HMD may combine automatic scrolling of content displayed on the virtual content reader with user-controlled scrolling (e.g., via head tracking of the end user of the HMD).
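A minimal sketch, assuming a words-per-minute reading-rate estimate and head pitch as the manual input, of how automated scrolling might be blended with user-controlled scrolling; the conversion formula, gain, and dead zone are invented for illustration.

```python
# Sketch only: reading-rate estimation and head-pitch input are assumed.
def scroll_velocity(words_per_minute, words_per_line, line_height_px):
    # Convert an estimated reading rate into a scroll speed in pixels/second.
    lines_per_second = (words_per_minute / 60.0) / words_per_line
    return lines_per_second * line_height_px

def combined_scroll(auto_px_s, head_pitch_deg, gain=4.0, dead_zone_deg=5.0):
    # Head tilt beyond a small dead zone adds user-controlled scrolling
    # on top of (or against) the automatic rate-based scrolling.
    manual = 0.0
    if head_pitch_deg > dead_zone_deg:
        manual = gain * (head_pitch_deg - dead_zone_deg)
    elif head_pitch_deg < -dead_zone_deg:
        manual = gain * (head_pitch_deg + dead_zone_deg)
    return auto_px_s + manual

auto = scroll_velocity(words_per_minute=230, words_per_line=10, line_height_px=28)
print(combined_scroll(auto, head_pitch_deg=12.0))   # auto-scroll plus head nudge
```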
Abstract:
Technology is described for three-dimensional (3D) space carving of a user environment based on the movement of one or more users wearing a near-eye display (NED) system through that environment. One or more sensors of the NED system provide sensor data from which a distance and direction of movement can be determined. Spatial dimensions for a navigable path can be represented based on user height data and user width data of the one or more users who have traversed the path. Space carving data identifying the carved-out space can be stored in a 3D space carving model of the user environment. The navigable paths can also be related to position data in another kind of 3D mapping, such as a 3D surface reconstruction mesh model of the user environment generated from depth images.
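To make the carving step concrete, here is a rough Python sketch: floor positions derived from the NED system's motion data are swept with a column sized by the user's height and width, and the carved voxels are stored in a set standing in for the 3D space carving model. The grid resolution and helper functions are hypothetical.

```python
# Rough sketch; voxel grid, helpers, and sensor interface are hypothetical.
def frange(start, stop, step):
    # Simple float range helper for sweeping the body-sized column.
    while start < stop:
        yield start
        start += step

def carve_path(voxels, floor_samples, user_height_m, user_width_m, voxel_m=0.25):
    """floor_samples: (x, y) positions along the traversed path, derived
    from the NED system's distance-and-direction-of-movement sensor data."""
    half_w = user_width_m / 2.0
    for x, y in floor_samples:
        # Mark a column of space the user's body just passed through as free.
        for dx in frange(-half_w, half_w, voxel_m):
            for dy in frange(-half_w, half_w, voxel_m):
                for z in frange(0.0, user_height_m, voxel_m):
                    voxels.add((round((x + dx) / voxel_m),
                                round((y + dy) / voxel_m),
                                round(z / voxel_m)))
    return voxels   # the carved-out set plays the role of the 3D carving model

model = carve_path(set(), [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)], 1.8, 0.6)
print(len(model), "voxels carved out as navigable space")
```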
Abstract:
A system is described for generating a virtual gaming environment based on features identified within a real-world environment, and for adapting the virtual gaming environment over time as those features change. Utilizing the technology described, a person wearing a head-mounted display device (HMD) may walk around a real-world environment and play a virtual game that is adapted to that environment. For example, the HMD may identify environmental features within a real-world environment, such as five grassy areas and two cars, and then spawn virtual monsters based on the location and type of the environmental features identified. The location and type of the environmental features identified may vary depending on the particular real-world environment in which the HMD exists, and therefore each virtual game may look different depending on the particular real-world environment.
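A toy sketch of the feature-driven spawning described above, assuming a simple lookup from detected feature type to monster type; the `SPAWN_TABLE` contents and the feature format are invented for illustration.

```python
# Illustrative sketch: the spawn table and feature format are assumptions.
SPAWN_TABLE = {
    "grass": "swamp_creature",   # e.g., grassy areas spawn ground monsters
    "car":   "metal_golem",      # e.g., cars spawn armored monsters
}

def spawn_monsters(features):
    """features: list of (feature_type, (x, y, z)) pairs identified in the
    real-world environment; returns monsters adapted to that environment."""
    monsters = []
    for feature_type, position in features:
        monster_type = SPAWN_TABLE.get(feature_type)
        if monster_type:
            monsters.append({"type": monster_type, "position": position})
    return monsters

# Five grassy areas and two cars (as in the example above) yield seven
# monsters whose types and positions depend on the detected features.
detected = [("grass", (i, 0, 0)) for i in range(5)] + \
           [("car", (10, 0, 0)), ("car", (12, 0, 0))]
print(len(spawn_monsters(detected)))
```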
Abstract:
Methods for enabling hands-free selection of virtual objects are described. In some embodiments, a gaze swipe gesture may be used to select a virtual object. The gaze swipe gesture may involve an end user of a head-mounted display device (HMD) performing head movements that are tracked by the HMD to detect whether a virtual pointer controlled by the end user has swiped across two or more edges of the virtual object. In some cases, the gaze swipe gesture may comprise the end user using their head movements to move the virtual pointer through two edges of the virtual object while the end user gazes at the virtual object. In response to detecting the gaze swipe gesture, the HMD may determine a second virtual object to be displayed on the HMD based on a speed of the gaze swipe gesture and a size of the virtual object.
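A simplified 2D sketch of detecting such a gesture: the head-controlled pointer's path is tested against the four edges of the gazed-at object's bounding rectangle, and a swipe registers when at least two edges are crossed. The bounding-box test and the threshold-based choice of the second object are simplifying assumptions; a real system would intersect the actual pointer trajectory with the object's edges.

```python
# Simplified 2D sketch; the geometry and selection rule are assumptions.
def crossed_edges(pointer_path, rect):
    """rect: (x_min, y_min, x_max, y_max) of the virtual object's face.
    Counts how many of the four edges the pointer path sweeps across."""
    x0, y0, x1, y1 = rect
    xs = [p[0] for p in pointer_path]
    ys = [p[1] for p in pointer_path]
    edges = 0
    if min(xs) < x0 < max(xs): edges += 1   # left edge
    if min(xs) < x1 < max(xs): edges += 1   # right edge
    if min(ys) < y0 < max(ys): edges += 1   # bottom edge
    if min(ys) < y1 < max(ys): edges += 1   # top edge
    return edges

def gaze_swipe_selects(pointer_path, rect, gazing):
    # A gaze swipe requires gaze on the object plus the pointer moving
    # through at least two of its edges.
    return gazing and crossed_edges(pointer_path, rect) >= 2

def follow_on_object(swipe_speed, object_size):
    # The abstract ties the second object to swipe speed and object size;
    # this threshold rule is invented purely for illustration.
    return "expanded_view" if swipe_speed / object_size > 2.0 else "preview"

path = [(-1.0, 0.5), (0.5, 0.5), (2.0, 0.5)]   # pointer sweeps left to right
if gaze_swipe_selects(path, (0.0, 0.0, 1.0, 1.0), gazing=True):
    print(follow_on_object(swipe_speed=3.0, object_size=1.0))
```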