Abstract:
Foveated rendering based on user gaze tracking may be adjusted to account for the realities of human vision. Gaze tracking error and state parameters may be determined from gaze tracking data representing a user's gaze with respect to one or more images presented to the user. Adjusted foveation data representing an adjusted size and/or shape of one or more regions of interest in one or more images to be subsequently presented to the user may be generated based on the one or more gaze tracking error or state parameters. Foveated image data representing one or more foveated images may be generated with the adjusted foveation data. The foveated images are characterized by a higher level of detail within the one or more regions of interest and a lower level of detail outside the one or more regions of interest. The foveated images may then be presented to the user.
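The abstract above can be illustrated with a minimal sketch: growing the high-detail region of interest in proportion to the measured gaze-tracking error, then assigning a level of detail per pixel. The function names, the linear growth rule, and the specific detail values are illustrative assumptions, not taken from the patent text.

```python
# Hypothetical sketch: adjust a foveation region's size by gaze-tracking error.
# The linear margin rule and LOD values are assumptions for illustration.

def adjust_foveation_region(base_radius_px, tracking_error_deg,
                            px_per_degree, min_radius_px=50):
    """Grow the high-detail region to cover gaze-estimate uncertainty."""
    margin_px = tracking_error_deg * px_per_degree
    return max(min_radius_px, base_radius_px + margin_px)

def level_of_detail(pixel_xy, gaze_xy, region_radius_px):
    """Full detail inside the region of interest, reduced detail outside."""
    dx = pixel_xy[0] - gaze_xy[0]
    dy = pixel_xy[1] - gaze_xy[1]
    inside = (dx * dx + dy * dy) ** 0.5 <= region_radius_px
    return 1.0 if inside else 0.25  # e.g. quarter-rate shading outside
```

A larger tracking error thus widens the region, so the true gaze point stays inside the high-detail area despite estimation noise.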
Abstract:
A method for enhancing detection of a user's hand relative to a head-mounted display (HMD) is described. The method includes sensing a disrupted portion of energy by a plurality of sensors integrated within a pad device. The disrupted portion of the energy is generated when the hand of the user interferes with the energy. The plurality of sensors that sense the disrupted portion of the energy produce an energy image that mirrors a current position of the hand. The method includes repeating the sensing continuously to produce a stream of energy images. The method includes communicating the stream of energy images to a game console for processing each of the energy images to produce a model of the hand and movement of the model of the hand. The model of the hand is at least partially rendered as a virtual hand in a virtual environment that is displayed in the HMD.
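A rough sketch of the processing step described above, assuming each energy image is a 2D grid of sensor readings where truthy cells mark disrupted energy; the hand's position is then estimated per frame (here, as a simple centroid, an assumption for illustration).

```python
# Hypothetical sketch: derive a hand position from each energy image in a
# stream. Centroid-of-disrupted-cells is an illustrative choice, not the
# patent's actual modeling step.

def hand_centroid(energy_image):
    """Estimate the hand position as the centroid of disrupted sensor cells."""
    hits = [(r, c) for r, row in enumerate(energy_image)
                   for c, v in enumerate(row) if v]
    if not hits:
        return None  # no interference sensed in this frame
    n = len(hits)
    return (sum(r for r, _ in hits) / n, sum(c for _, c in hits) / n)

def track_stream(energy_images):
    """Process a stream of energy images into a per-frame movement trace."""
    return [hand_centroid(img) for img in energy_images]
```

The resulting trace of positions is what a console-side step could fit a hand model against.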
Abstract:
A method to identify positions of fingers of a hand is described. The method includes capturing images of a first hand using a plurality of cameras that are part of a wearable device. The wearable device is attached to a wrist of a second hand and the plurality of cameras of the wearable device is disposed around the wearable device. The method includes repeating capturing of additional images of the first hand, the images and the additional images captured to produce a stream of captured image data during a session of presenting a virtual environment in a head mounted display (HMD). The method includes sending the stream of captured image data to a computing device that is interfaced with the HMD. The computing device is configured to process the captured image data to identify changes in positions of the fingers of the first hand.
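The final processing step above, identifying changes in finger positions between frames, can be sketched as a per-finger displacement computation. The data layout (a dict of finger name to (x, y, z) position per processed frame) is an assumption for illustration.

```python
# Hypothetical sketch: compare finger positions between two processed frames.
# The finger-name/position-tuple layout is illustrative, not from the patent.

def finger_position_changes(prev_positions, curr_positions):
    """Report per-finger displacement between two frames.

    Positions are (x, y, z) tuples keyed by finger name; fingers not seen
    in the previous frame are skipped.
    """
    changes = {}
    for finger, curr in curr_positions.items():
        prev = prev_positions.get(finger)
        if prev is not None:
            changes[finger] = tuple(c - p for c, p in zip(curr, prev))
    return changes
```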
Abstract:
The disclosure provides methods and systems for warning a user of a head mounted display during gameplay of a video game. A game is executed causing interactive scenes of the game to be rendered on a display portion of a head mounted display (HMD) worn by a user. A change in position of the HMD worn by the user, while the user is interacting with the game, is detected. The change in position is evaluated, the evaluation causing a signal to be generated when the change exceeds a pre-defined threshold value. When the signal is generated, content is sent to interrupt the interactive scenes being rendered on the display portion of the HMD. The content sent provides descriptive context for the signal.
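The evaluation step above reduces to a threshold test on the magnitude of the position change. A minimal sketch, assuming 3D positions and a Euclidean distance metric (the metric is an assumption; the patent text only specifies a pre-defined threshold):

```python
# Hypothetical sketch: signal generation when the HMD's position change
# exceeds a pre-defined threshold. Euclidean distance is an assumed metric.

def evaluate_hmd_motion(prev_pos, curr_pos, threshold):
    """Return True (generate the signal) when the change in HMD position
    exceeds the threshold; the caller then interrupts the rendered scenes
    with explanatory content."""
    delta = sum((c - p) ** 2 for c, p in zip(curr_pos, prev_pos)) ** 0.5
    return delta > threshold
```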
Abstract:
Methods, systems, and computer programs are presented for managing the display of images on a head mounted device (HMD). One method includes an operation for tracking the gaze of a user wearing the HMD, where the HMD is displaying a scene of a virtual world. In addition, the method includes an operation for detecting that the gaze of the user is fixed on a predetermined area for a predetermined amount of time. In response to the detecting, the method fades out a region of the display in the HMD, while maintaining the scene of the virtual world in an area of the display outside the region. Additionally, the method includes an operation for fading in a view of the real world in the region as if the HMD were transparent to the user while the user is looking through the region. The fading in of the view of the real world includes maintaining the scene of the virtual world outside the region.
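The two mechanisms above, detecting a sustained gaze on a predetermined area and cross-fading between the virtual scene and a real-world view within that region, can be sketched as follows. The sampling-based dwell test and linear alpha blend are illustrative assumptions.

```python
# Hypothetical sketch: dwell detection plus per-region cross-fade.
# The sample-counting dwell test and linear blend are assumed details.

def gaze_dwell_exceeded(gaze_samples, area_contains, dwell_seconds, sample_dt):
    """True when the most recent gaze samples have stayed inside the
    predetermined area for at least dwell_seconds."""
    needed = int(dwell_seconds / sample_dt)
    recent = gaze_samples[-needed:]
    return len(recent) == needed and all(area_contains(g) for g in recent)

def blend_region(virtual_px, real_px, fade_alpha):
    """Cross-fade one pixel of the region: the virtual world fades out as
    the real-world camera view fades in (fade_alpha in [0, 1])."""
    return tuple((1 - fade_alpha) * v + fade_alpha * r
                 for v, r in zip(virtual_px, real_px))
```

Pixels outside the region skip the blend entirely, which preserves the virtual scene there, matching the "maintaining the scene ... outside the region" behavior.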
Abstract:
A method for tracking eye movement is provided. One embodiment of the method includes receiving a first measurement from a first sensor configured to detect a gaze location, determining an initial gaze location based at least on the first measurement, receiving at least one of eye motion amplitude and eye motion direction measurement from a second sensor, and determining an estimated gaze location based at least on the initial gaze location and the at least one of eye motion amplitude and eye motion direction. Systems perform similar steps, and non-transitory computer readable storage mediums each store one or more computer programs.
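The fusion described above, an absolute fix from the first sensor refined by relative amplitude/direction measurements from the second, amounts to dead reckoning on the gaze plane. A minimal sketch, assuming direction is an angle in radians and amplitude shares the gaze plane's units (both assumptions for illustration):

```python
# Hypothetical sketch: estimate gaze by dead reckoning from an initial fix
# plus (amplitude, direction) events from a second sensor.

import math

def estimate_gaze(initial_gaze, motion_events):
    """Accumulate relative eye-motion measurements onto an initial
    camera-based gaze fix to produce an estimated gaze location."""
    x, y = initial_gaze
    for amplitude, direction in motion_events:
        x += amplitude * math.cos(direction)
        y += amplitude * math.sin(direction)
    return (x, y)
```

In practice the first sensor's fixes would periodically re-anchor the estimate, bounding the drift that pure dead reckoning accumulates.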
Abstract:
Systems and methods include receiving an image for presenting on a display screen of a head mounted display (HMD). The image is provided by an application. The received image is pre-distorted to enable optics provided in an HMD to render the image. An alignment offset is identified for an eye of a user wearing the HMD by determining a position of the eye relative to an optical axis of at least one lens of the optics of the HMD. The pre-distorted image provided by the application is adjusted to define a corrected pre-distorted image that accounts for the alignment offset. The corrected pre-distorted image is forwarded to the display screen of the HMD for rendering, such that the image presented through the optics of the HMD removes aberrations caused by the alignment offset.
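The correction step above can be sketched as re-centering the pre-distorted image's sample points on the eye's actual position rather than the lens's optical axis. Modeling the correction as a pure 2D translation is an illustrative assumption; a real pipeline would also adjust the radial distortion terms.

```python
# Hypothetical sketch: apply an eye/optical-axis alignment offset to a
# pre-distorted image's sample points. A pure translation is assumed here.

def correct_predistortion(predistorted_points, alignment_offset):
    """Shift pre-distorted sample points by the measured offset between
    the eye position and the lens's optical axis, so the image viewed
    through the optics avoids offset-induced aberration."""
    ox, oy = alignment_offset
    return [(x - ox, y - oy) for x, y in predistorted_points]
```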
Abstract:
The disclosure provides methods and systems for warning a user of a head mounted display that the user approaches an edge of the field of view of a camera or one or more tangible obstacles. The warning includes presenting audio and/or displayable messages to the user, or moving the display(s) of the head mounted display away from the user's eyes. The determination that the user approaches the edge of the scene or a tangible obstacle is made by dynamically tracking motions of the user through analysis of images and/or depth data obtained from image sensor(s) and/or depth sensor(s) secured to the head mounted display, arranged outside of the scene and not secured to the head mounted display, or a combination of both.
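The edge-proximity determination above can be sketched as a margin test against the camera's tracked field of view. Modeling the field of view as an axis-aligned rectangle with a fixed warning margin is an assumption for illustration.

```python
# Hypothetical sketch: warn when the tracked user position nears the edge
# of the camera's field of view (modeled as an axis-aligned region).

def approaching_edge(user_xy, fov_min, fov_max, margin):
    """True when the user is within `margin` of any edge of the field of
    view; the caller then issues the audio/visual warning."""
    x, y = user_xy
    return (x - fov_min[0] < margin or fov_max[0] - x < margin or
            y - fov_min[1] < margin or fov_max[1] - y < margin)
```

The same margin test, applied to distances from known obstacle positions, would cover the tangible-obstacle case.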
Abstract:
A handheld device is provided, comprising: a sensor configured to generate sensor data for determining and tracking a position and orientation of the handheld device during an interactive session of an interactive application presented on a main display, the interactive session being defined for interactivity between a user and the interactive application; a communications module configured to send the sensor data to a computing device, the communications module being further configured to receive from the computing device a spectator video stream of the interactive session that is generated based on a state of the interactive application and the tracked position and orientation of the handheld device, the state of the interactive application being determined based on the interactivity between the user and the interactive application; and a display configured to render the spectator video stream.
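The data flow above pairs each tracked pose of the handheld device with the current application state so the computing device can render a spectator frame from that viewpoint. A minimal sketch of that pairing; the message layout and field names are assumptions for illustration.

```python
# Hypothetical sketch: describe the viewpoint for each spectator frame by
# pairing tracked handheld poses with application states. The dict layout
# is illustrative, not a real protocol.

def spectator_stream(sensor_samples, app_states):
    """Build per-frame render descriptors from (position, orientation)
    pose samples and the interactive application's state sequence."""
    frames = []
    for (position, orientation), state in zip(sensor_samples, app_states):
        frames.append({
            "state": state,                     # drives scene content
            "camera_position": position,        # handheld's tracked position
            "camera_orientation": orientation,  # e.g. a quaternion
        })
    return frames
```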
Abstract:
Systems and methods for operating a screen of a head mounted display (HMD) include executing a program. The execution of the program causes rendering of images on the screen of the HMD. The screen renders the images using a first optical setting. A first image is presented on the screen. The first image has a first size and is presented at a distance. Input is received identifying a clarity level for the first image. A second image is presented on the screen. The second image has a second size and is presented at the same distance. Input is received identifying a clarity level for the second image. Based on the clarity levels received for the first and second images, the first optical setting for the screen is changed to a second optical setting.
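The selection step above, choosing a new optical setting from the user's clarity responses, can be sketched as picking the candidate setting with the best average rating across the test images. Averaging numeric ratings is an illustrative assumption; the abstract specifies only that the clarity inputs drive the change.

```python
# Hypothetical sketch: pick the optical setting whose test images the user
# rated clearest. Mean-of-ratings is an assumed selection rule.

def choose_optical_setting(settings, clarity_by_setting):
    """Return the setting with the highest average clarity rating.

    clarity_by_setting maps each candidate setting to the list of clarity
    ratings (higher is clearer) collected for its test images.
    """
    return max(settings,
               key=lambda s: sum(clarity_by_setting[s]) /
                             len(clarity_by_setting[s]))
```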