Abstract:
Gaze tracking data representing a viewer's gaze with respect to one or more images presented to the viewer is used to generate foveated image data representing one or more foveated images characterized by a higher level of detail within one or more regions of interest and a lower level of detail outside the regions of interest. The image data for portions outside the one or more regions of interest is selectively filtered to reduce visual artifacts due to contrast resulting from the lower level of detail before compositing foveated images for presentation.
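The selective filtering described above can be sketched as follows. This is a minimal illustration, not the patented method: a 3x3 box blur stands in for the artifact-reducing filter, and the region of interest is modeled as a circle around the gaze point (all names and the pixel format are hypothetical):

```python
def foveate(image, gaze_x, gaze_y, radius):
    """Return a foveated copy of `image` (a 2D list of pixel values):
    pixels inside the circular region of interest around the gaze point
    keep full detail; pixels outside it are blurred with a 3x3 box
    filter to soften the contrast step created by the lower level of
    detail."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if (x - gaze_x) ** 2 + (y - gaze_y) ** 2 <= radius ** 2:
                continue  # inside the region of interest: keep detail
            # outside the region of interest: average the 3x3 neighborhood
            total, count = 0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += image[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out
```

A real implementation would operate on GPU textures and use a perceptually tuned filter, but the structure (test against the region of interest, then filter only the outside) is the same.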
Abstract:
Methods, systems, and computer programs for interfacing a user with a Graphical User Interface (GUI) are provided. One method includes operations for identifying a point of gaze (POG) of a user, and for detecting initiation of an action by the user to move a position of a selector for selecting objects presented on a graphical user interface (GUI). In addition, the method includes an operation for determining the distance between the current position of the selector and the POG. The displacement speed of the selector is adjusted based on the distance between the current position of the selector and the POG, where the displacement speed is reduced as the distance between the current position of the selector and the POG becomes smaller.
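The distance-dependent speed adjustment can be sketched as below. This is one plausible reading of the abstract, assuming a linear ramp inside a slow-down radius (the radius and the linear profile are illustrative assumptions, not taken from the source):

```python
import math

def adjusted_speed(base_speed, selector_pos, gaze_pos, slow_radius):
    """Scale the selector's displacement speed by its distance to the
    point of gaze (POG): full speed when the selector is far from the
    POG, linearly reduced as it closes in."""
    dx = selector_pos[0] - gaze_pos[0]
    dy = selector_pos[1] - gaze_pos[1]
    distance = math.hypot(dx, dy)
    if distance >= slow_radius:
        return base_speed          # far from the POG: no slow-down
    return base_speed * distance / slow_radius  # nearer means slower
```

Any monotonically decreasing profile (e.g. quadratic easing) would satisfy the stated behavior; the linear one is simply the shortest to write down.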
Abstract:
A method for tracking eye movement is provided. One embodiment of the method includes receiving a first measurement from a first sensor configured to detect a gaze location, determining an initial gaze location based at least on the first measurement, receiving at least one of eye motion amplitude and eye motion direction measurement from a second sensor, and determining an estimated gaze location based at least on the initial gaze location and the at least one of eye motion amplitude and eye motion direction. Systems perform similar steps, and non-transitory computer readable storage mediums each store one or more computer programs.
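The two-sensor fusion step can be sketched as a dead-reckoning update: an initial fix from the first sensor plus an amplitude/direction delta from the second. The planar coordinate model and the degree-valued direction are assumptions for illustration:

```python
import math

def estimate_gaze(initial_gaze, motion_amplitude, motion_direction_deg):
    """Combine an initial gaze location from a first sensor with a
    relative eye-motion measurement (amplitude and direction) from a
    second sensor to produce an updated gaze estimate."""
    theta = math.radians(motion_direction_deg)
    return (initial_gaze[0] + motion_amplitude * math.cos(theta),
            initial_gaze[1] + motion_amplitude * math.sin(theta))
```

In practice the second sensor (e.g. an EOG or inertial sensor) runs faster than the first, so several such deltas would typically be accumulated between absolute fixes.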
Abstract:
Gaze tracking data representing a user's gaze with respect to one or more images transmitted to a user is analyzed to determine one or more regions of interest. Compression of the one or more transmitted images is adjusted so that fewer bits are needed to transmit data for portions of an image outside the one or more regions of interest than for portions of the image within the one or more regions of interest.
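The per-region bit allocation can be sketched as a per-block quality decision, as below. The JPEG-style quality factors and the circular region of interest are illustrative assumptions:

```python
def quality_for_block(block_center, roi_center, roi_radius,
                      high_quality=90, low_quality=30):
    """Pick a compression quality factor for one image block: blocks
    whose centers fall inside the gaze region of interest get the high
    setting (more bits), blocks outside it get the low one (fewer
    bits)."""
    dx = block_center[0] - roi_center[0]
    dy = block_center[1] - roi_center[1]
    inside = dx * dx + dy * dy <= roi_radius * roi_radius
    return high_quality if inside else low_quality
```

A codec would then apply this factor when quantizing each block, so the bitstream spends most of its budget where the viewer is actually looking.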
Abstract:
Gaze tracking data may be analyzed to determine the onset and duration of a vision interrupting event, such as a blink or saccade. Presentation of images to a viewer may then be suspended during the vision interrupting event and resumed in sufficient time to ensure that the viewer sees the image at the time the vision interrupting event has concluded.
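The suspend/resume timing can be sketched as below. The millisecond units and the fixed resume lead time are assumptions; the key point is that presentation resumes before the predicted end of the event, so an image is already on screen when vision returns:

```python
def presentation_schedule(onset_ms, duration_ms, resume_lead_ms):
    """Given the detected onset and expected duration of a
    vision-interrupting event (blink or saccade), return
    (suspend_at, resume_at) timestamps: suspend at onset, resume
    `resume_lead_ms` before the event is expected to end."""
    suspend_at = onset_ms
    resume_at = max(suspend_at, onset_ms + duration_ms - resume_lead_ms)
    return suspend_at, resume_at
```

The `max` guards the degenerate case where the lead time exceeds the event duration, in which case presentation is simply never suspended for longer than zero milliseconds.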
Abstract:
A method for executing computer instructions for presenting an interactive environment in a head-mounted display (HMD) is described. The method includes identifying content associated with the interactive environment to be presented on the HMD for a user and determining whether an interactive object within the identified content satisfies a threshold for presentation to the user. The method includes augmenting the interactive object with augmentation data. The augmentation data acts to change a characteristic of the interactive object. The operation of augmenting the interactive object is performed after determining that the interactive object does not satisfy the threshold for presentation to the user. The augmentation data modifies the interactive object to conform the interactive object to be within the threshold.
Abstract:
Methods and systems are provided for warning a user of a head mounted display during gameplay of a video game. A game is executed causing interactive scenes of the game to be rendered on a display portion of a head mounted display (HMD) worn by a user. Images of a physical environment in a vicinity of the user are received from a forward facing camera of the HMD, while the game is being executed. The images are analyzed to determine if the user is approaching one or more objects in the physical environment. Data is sent to be rendered on the display portion of the HMD while rendering the interactive scenes of the game. The data provides information to avoid interaction with the one or more objects in the physical environment.
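The warning step can be sketched as a threshold test on per-object distance estimates. The distance values, the threshold, and the message format are all hypothetical; in the described system the distances would come from analyzing the forward-facing camera images:

```python
def proximity_warnings(object_distances_m, warning_threshold_m=0.5):
    """Given estimated distances (in meters) to objects in the user's
    physical environment, return warning messages for objects the user
    is approaching too closely, to be overlaid on the HMD while the
    interactive scenes continue to render."""
    return ["{}: {:.2f} m ahead".format(name, d)
            for name, d in sorted(object_distances_m.items())
            if d < warning_threshold_m]
```

A production system would also use the object's rate of approach, not just its instantaneous distance, before interrupting the user.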
Abstract:
A method for presenting text information on a head-mounted display is provided, comprising: rendering a view of a virtual environment to the head-mounted display; tracking an orientation of the head-mounted display; tracking a gaze of a user of the head-mounted display; processing the gaze of the user and the orientation of the head-mounted display, to identify a gaze target in the virtual environment towards which the gaze of the user is directed; receiving text information for rendering on the head-mounted display; and presenting the text information in the virtual environment in a vicinity of the gaze target.
Abstract:
A handheld device is provided, comprising: a sensor configured to generate sensor data for determining and tracking a position and orientation of the handheld device during an interactive session of an interactive application presented on a main display, the interactive session being defined for interactivity between a user and the interactive application; a communications module configured to send the sensor data to a computing device, the communications module being further configured to receive from the computing device a spectator video stream of the interactive session that is generated based on a state of the interactive application and the tracked position and orientation of the handheld device, the state of the interactive application being determined based on the interactivity between the user and the interactive application; and a display configured to render the spectator video stream.
Abstract:
Methods, systems, and computer programs are presented for rendering images on a head mounted display (HMD). One method includes operations for tracking, with one or more first cameras inside the HMD, the gaze of a user and for tracking motion of the HMD. The motion of the HMD is tracked by analyzing images of the HMD taken with a second camera that is not in the HMD. Further, the method includes an operation for predicting the motion of the gaze of the user based on the gaze and the motion of the HMD. Rendering policies for a plurality of regions, defined on a view rendered by the HMD, are determined based on the predicted motion of the gaze. The images are rendered on the view based on the rendering policies.
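The policy assignment can be sketched as below. The rectangular region layout, the policy names, and the distance cutoff are illustrative assumptions; the predicted gaze position would come from combining the in-HMD gaze tracking with the externally tracked HMD motion:

```python
def rendering_policies(regions, predicted_gaze, near_cutoff=250.0):
    """Assign a level-of-detail policy to each screen region based on
    the predicted gaze position: the region containing the predicted
    gaze renders at full detail, nearby regions at medium, and the
    rest at low.  `regions` maps names to (x0, y0, x1, y1) rectangles."""
    gx, gy = predicted_gaze
    policies = {}
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= gx < x1 and y0 <= gy < y1:
            policies[name] = "full"
        else:
            # distance from the predicted gaze to the region's nearest edge
            dx = max(x0 - gx, 0, gx - x1)
            dy = max(y0 - gy, 0, gy - y1)
            near = (dx * dx + dy * dy) ** 0.5 < near_cutoff
            policies[name] = "medium" if near else "low"
    return policies
```

The renderer would then pick resolution, shading rate, or update frequency per region from these policies each frame.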