Abstract:
In a virtual reality system, an optical tracking device may detect and track a user's eye gaze direction and/or movement, and/or sensors may detect and track a user's head gaze direction and/or movement, relative to virtual user interfaces displayed in a virtual environment. A processor may process the detected gaze direction and/or movement as a user input and translate that input into a corresponding interaction in the virtual environment. Gaze-directed swipes on a virtual keyboard displayed in the virtual environment may be detected, tracked, and translated into a corresponding text input, either alone or together with input(s) received from a handheld controller. The user may also interact with other types of virtual interfaces in the virtual environment using gaze direction and movement to provide an input, either alone or together with a controller input.
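As a rough, non-authoritative sketch of the swipe-to-text idea described above (not the patented implementation), the following Python maps a sequence of gaze points, projected onto the virtual keyboard's plane, to the keys the gaze passes over; the Key class, the keys_along_gaze_path function, and the 0.5 hit radius are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Key:
    char: str
    x: float  # horizontal center of the key in the virtual keyboard plane
    y: float  # vertical center of the key in the virtual keyboard plane

def keys_along_gaze_path(path, keyboard, radius=0.5):
    """Map a sequence of gaze points (projected onto the virtual
    keyboard plane) to the keys the gaze passed over."""
    selected = []
    for gx, gy in path:
        for key in keyboard:
            near = (gx - key.x) ** 2 + (gy - key.y) ** 2 <= radius ** 2
            # Record the key, skipping immediate repeats as the gaze dwells.
            if near and (not selected or selected[-1] != key.char):
                selected.append(key.char)
    return "".join(selected)

# Example: a one-row keyboard and a left-to-right gaze sweep.
row = [Key(c, x=float(i), y=0.0) for i, c in enumerate("abc")]
print(keys_along_gaze_path([(0.0, 0.1), (1.0, 0.0), (2.0, -0.1)], row))  # "abc"
```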
Abstract:
Systems, devices, methods, computer program products, and electronic apparatuses for aligning components in virtual reality environments are provided. An example method includes detecting a first input from a handheld controller of a virtual reality system; responsive to detecting the first input, instructing a user to orient the handheld controller in a designated direction; detecting a second input from the handheld controller; and, responsive to detecting the second input, storing alignment data representative of an alignment of the handheld controller.
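The two-input alignment flow might look like the hypothetical sketch below, which assumes the alignment data is a single yaw offset between the controller's reported orientation and the designated forward direction; the AlignmentCalibrator class and its method names are illustrative only.

```python
import math

class AlignmentCalibrator:
    """Two-step alignment: on the first input, prompt the user to
    point the controller in a designated direction; on the second,
    store the controller's current orientation as the alignment."""
    def __init__(self):
        self.alignment = None  # yaw offset (radians), None until calibrated

    def on_first_input(self):
        return "Point the controller straight ahead and pull the trigger."

    def on_second_input(self, controller_yaw):
        # Store the offset between the reported yaw and the designated
        # forward direction (yaw = 0) so later readings can be corrected.
        self.alignment = controller_yaw
        return self.alignment

    def corrected_yaw(self, raw_yaw):
        if self.alignment is None:
            return raw_yaw
        # Wrap the corrected angle back into [-pi, pi).
        return (raw_yaw - self.alignment + math.pi) % (2 * math.pi) - math.pi

cal = AlignmentCalibrator()
print(cal.on_first_input())
cal.on_second_input(controller_yaw=0.3)   # user was pointing "forward"
print(round(cal.corrected_yaw(0.3), 3))   # 0.0 after correction
```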
Abstract:
Embodiments may relate to intuitive user-interface features for a head-mountable device (HMD) in the context of a hybrid human and computer-automated response system. An illustrative method may involve an HMD that comprises a touchpad: (a) sending a speech-segment message to a hybrid response system, wherein the speech-segment message is indicative of a speech segment that is detected in audio data captured at the HMD, and wherein the speech segment is associated with a first user-account with the hybrid response system; (b) receiving a response message that includes a response to the speech-segment message and an indication of a next action corresponding to the response; (c) displaying a card interface that includes an indication of the response; and (d) while displaying the card interface, detecting a singular touch gesture and responsively initiating the next action.
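A minimal sketch of the message exchange, assuming a simple JSON encoding (the abstract does not specify a wire format); build_speech_segment_message, handle_response_message, and the field names are hypothetical.

```python
import json

def build_speech_segment_message(segment_text, user_account):
    """Package a detected speech segment for the hybrid response system."""
    return json.dumps({"speech_segment": segment_text, "user_account": user_account})

def handle_response_message(message, on_touch_gesture):
    """Display the response card, then initiate the indicated next
    action when a single touch gesture is detected on the touchpad."""
    payload = json.loads(message)
    print(f"[card] {payload['response']}")
    if on_touch_gesture():
        print(f"[action] initiating: {payload['next_action']}")

msg = build_speech_segment_message("call mom", user_account="user-1")
reply = json.dumps({"response": "Call Mom?", "next_action": "place_call"})
handle_response_message(reply, on_touch_gesture=lambda: True)
```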
Abstract:
Embodiments described herein may help to provide a wake-up mechanism for a computing device. An example method involves the computing device: (a) receiving head-movement data that is indicative of head movement; (b) detecting at least a portion of the head-movement data that is indicative of a head gesture; (c) receiving eye-position data that is indicative of eye position; (d) detecting at least a portion of the eye-position data that is indicative of an eye being directed towards a display of a head-mounted device (HMD); and (e) causing the HMD to switch from a first operating mode to a second operating mode in response to the detection of both: (i) the eye-position data that is indicative of an eye directed towards the display, and (ii) the head-movement data indicative of the head gesture.
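One way to read the two-signal wake-up condition is as a conjunction over recent sensor windows. The sketch below is a loose illustration under assumed units (degrees) and thresholds; detect_head_gesture, eye_on_display, and the mode names are invented for the example.

```python
def detect_head_gesture(head_movement):
    """A head gesture is approximated here as a pitch change above a
    threshold within the sampled window (e.g., a quick upward nod)."""
    return max(head_movement) - min(head_movement) > 15.0  # degrees

def eye_on_display(eye_positions, display_region=(10.0, 25.0)):
    """True if the most recent eye position falls within the angular
    region occupied by the HMD display."""
    lo, hi = display_region
    return lo <= eye_positions[-1] <= hi

def maybe_wake(head_movement, eye_positions, mode):
    # Switch from standby to active only when BOTH signals are present,
    # reducing accidental wake-ups from either signal alone.
    if mode == "standby" and detect_head_gesture(head_movement) and eye_on_display(eye_positions):
        return "active"
    return mode

print(maybe_wake([0.0, 8.0, 20.0], [5.0, 18.0], "standby"))  # "active"
```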
Abstract:
In a system for combining a gyromouse input with a touch surface input in an augmented reality (AR) environment and/or a virtual reality (VR) environment, a virtual display of virtual items and/or features may be adjusted in response to movement of the gyromouse combined with touch inputs, or touch-and-drag inputs, received on a touch surface of the gyromouse. Use of the gyromouse in the AR/VR environment may allow touch screen capabilities to be accurately projected into a three-dimensional virtual space, providing a controller having improved functionality and utility in the AR/VR environment and enhancing the user's experience.
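A toy illustration of combining the two input channels: the gyromouse orientation provides coarse pointing and the touch-and-drag offset provides fine adjustment. The combined_pointer function, the additive model, and the drag gain are assumptions, not the patented method.

```python
def combined_pointer(gyro_yaw_deg, gyro_pitch_deg, drag_dx, drag_dy, drag_gain=0.1):
    """Combine the gyromouse orientation (coarse pointing) with a
    touch-and-drag offset (fine adjustment) into one 2D cursor
    position on a virtual display, in degrees of visual angle."""
    x = gyro_yaw_deg + drag_gain * drag_dx
    y = gyro_pitch_deg + drag_gain * drag_dy
    return x, y

# Orientation puts the cursor near a virtual item; the drag nudges it.
print(combined_pointer(30.0, -5.0, drag_dx=12.0, drag_dy=-4.0))  # ~(31.2, -5.4)
```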
Abstract:
A method for aligning an image on a mobile device disposed within a head-mounted display (HMD) housing includes: detecting a request to align an image on a touchscreen of the mobile device; detecting, on the touchscreen, a first detected location corresponding to a first touchscreen input event; determining a first displacement of the first detected location with respect to a first target location of the first touchscreen input event; and transposing the image on the touchscreen based on the first displacement. A virtual reality system includes: a mobile device having a touchscreen configured to display an image; and an HMD housing having a first contact configured to generate a first input event at a first location on the touchscreen when the mobile device is disposed within the HMD housing.
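The displacement-and-transpose step might reduce to simple 2D arithmetic, as in the hypothetical sketch below (align_image and the pixel coordinates are illustrative):

```python
def align_image(detected, target, image_origin):
    """Shift the displayed image by the displacement between where the
    housing's contact actually touched the screen and where it should
    touch when the device is seated correctly."""
    dx = detected[0] - target[0]
    dy = detected[1] - target[1]
    # Transpose (translate) the image opposite to the displacement so
    # content lines up with the HMD housing's optics.
    return (image_origin[0] - dx, image_origin[1] - dy)

# Contact landed 6 px right of and 3 px below its target location.
print(align_image(detected=(506, 303), target=(500, 300), image_origin=(0, 0)))  # (-6, -3)
```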
Abstract:
Embodiments described herein may help to provide a lock-screen for a computing device. An example method involves: (a) displaying two or more rows of characters and an input region that is moveable over the rows of characters; (b) based on head-movement data, determining one or more movements of the input region with respect to the rows of characters; (c) determining an input sequence that includes one character from each of the rows of characters, where each character is selected based at least in part on the one or more movements of the input region with respect to the rows of characters; (d) determining whether or not the input sequence matches a predetermined unlock sequence; and (e) if the input sequence matches the predetermined unlock sequence, unlocking the computing device.
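A minimal sketch of the row-by-row selection and match check, assuming the input region's final column position over each row selects that row's character; unlock_attempt and the wrap-around behavior are assumptions for illustration.

```python
def unlock_attempt(rows, head_positions, unlock_sequence):
    """Select one character per row based on where the head-driven
    input region stops over that row, then compare the resulting
    sequence against the predetermined unlock sequence."""
    entered = []
    for row, col in zip(rows, head_positions):
        entered.append(row[col % len(row)])  # wrap at the row's edges
    return entered == list(unlock_sequence)

rows = ["ABCD", "EFGH", "IJKL"]
print(unlock_attempt(rows, head_positions=[1, 2, 0], unlock_sequence="BGI"))  # True
```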
Abstract:
An example method includes receiving, by a head-mountable device (HMD), data corresponding to an information event, and providing an indication corresponding to the information event in response to receiving the data. The method further includes determining a gaze direction of an eye and determining that the gaze direction of the eye is an upward direction that corresponds to a location of a display of the HMD. The display is located in an upper periphery of a forward-looking field of view of the eye when the HMD is worn. The method further includes, in response to determining that the gaze direction of the eye is the upward direction, displaying graphical content related to the information event in the display.
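As an illustrative reduction of the gaze test (not the actual implementation), one could gate display of the event content on the gaze pitch falling within the upward angular band where the display sits; should_display and the 15-30 degree band are hypothetical.

```python
def should_display(gaze_pitch_deg, display_pitch_range=(15.0, 30.0), pending_event=True):
    """Show the pending information event's content only when the eye's
    gaze pitch falls within the upward band where the display sits in
    the upper periphery of the forward-looking field of view."""
    lo, hi = display_pitch_range
    return pending_event and lo <= gaze_pitch_deg <= hi

print(should_display(20.0))  # True: gaze is directed up at the display
print(should_display(0.0))   # False: user is looking straight ahead
```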
Abstract:
Methods and systems are described herein for providing text to a head-mountable display (HMD) from a remote device. The remote device can receive a notification of an event related to the HMD. The remote device can determine whether the event corresponds to a text input for the HMD. After determining that the event does correspond to the text input, the remote device can: cause a display of a text-input interface on the HMD, receive text using a text-input component of the remote device, and send the text to the HMD.
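The remote-device flow could be sketched as below, with the HMD modeled as a simple message log; handle_event, the needs_text_input flag, and the text_input_component callable are all invented for the example.

```python
def handle_event(event, hmd, remote):
    """On the remote device: if the HMD event needs text input, show a
    text-input interface on the HMD, collect the text locally, and
    send it over to the HMD."""
    if event.get("needs_text_input"):
        hmd.append("show_text_input_interface")
        text = remote["text_input_component"]()  # e.g., the phone's keyboard
        hmd.append(f"text:{text}")

hmd_log = []
handle_event(
    {"needs_text_input": True},
    hmd=hmd_log,
    remote={"text_input_component": lambda: "hello"},
)
print(hmd_log)  # ['show_text_input_interface', 'text:hello']
```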