Abstract:
A method includes determining, at a first time, a representation of a first head rotation of a head mounted display (HMD) using a first inertial sensor sample stream and rendering, at an application processor, a texture based on the first head rotation. The method further includes determining, at a second time subsequent to the first time, a representation of a second head rotation of the HMD using a second inertial sensor sample stream having a higher sampling rate than the first inertial sensor sample stream, and generating, at a compositor, a rotated representation of the texture based on a difference between the first head rotation and the second head rotation.
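This is the pattern commonly known as asynchronous reprojection or timewarp: the compositor corrects an already-rendered texture by the head rotation that accrued between render time and display time. Below is a minimal numpy sketch of the delta-rotation step, assuming quaternion poses; the helper names are hypothetical, and a real compositor would apply the delta as a GPU homography rather than rotating rays on the CPU.

```python
import numpy as np

def quat_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def delta_rotation(q_render, q_display):
    """Rotation taking the render-time head pose to the display-time pose."""
    return quat_to_matrix(q_display) @ quat_to_matrix(q_render).T

def reproject_ray(ray, q_render, q_display):
    """Rotate a view ray by the head-rotation delta (one reprojected sample)."""
    return delta_rotation(q_render, q_display) @ np.asarray(ray, dtype=float)
```

Reserving the high-rate sensor stream for this small correction is the point of the design: the expensive render uses the older pose, while the displayed frame still tracks the latest head motion.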
Abstract:
In a system for teleporting and scaling in a virtual reality environment, a user may teleport from a first virtual location, being experienced at a first scale, to a second virtual location, to be experienced at a second scale. The user may select the new, second virtual location and the new, second scale with a single external input via a handheld electronic device so that, upon release of a triggering action of the electronic device, the user may teleport to the newly selected second virtual location at the newly selected scale.
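A sketch of the commit-on-release interaction described above; the state machine, the `User` fields, and the callback names are all illustrative assumptions, not the claimed implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    position: tuple = (0.0, 0.0, 0.0)
    scale: float = 1.0

@dataclass
class TeleportState:
    target: Optional[tuple] = None  # candidate second virtual location
    scale: float = 1.0              # candidate second scale
    trigger_held: bool = False

def on_trigger_press(state: TeleportState):
    state.trigger_held = True

def on_aim(state: TeleportState, location: tuple, scale: float):
    # While the trigger is held, a single controller input adjusts both the
    # target location and the scale at which it will be experienced.
    if state.trigger_held:
        state.target = location
        state.scale = scale

def on_trigger_release(state: TeleportState, user: User):
    # Releasing the trigger commits location and scale in one action.
    if state.trigger_held and state.target is not None:
        user.position = state.target
        user.scale = state.scale
    state.trigger_held = False
```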
Abstract:
A system includes a head mounted display (HMD) device comprising at least one display and at least one sensor to provide pose information for the HMD device. The system further includes a sensor integrator module coupled to the at least one sensor, the sensor integrator module to determine a motion vector for the HMD device based on the pose information, and an application processor to render a first texture based on a pose of the HMD device determined from the pose information. The system further includes a motion analysis module to determine a first velocity field having a pixel velocity for at least a subset of pixels of the first texture, and a compositor to render a second texture based on the first texture, the first velocity field, and the motion vector for the HMD device, and to provide the second texture to the display of the HMD device.
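One plausible reading of the compositor step is a forward extrapolation: each pixel of the last fully rendered texture is advanced by its own scene velocity plus the screen-space motion induced by the HMD. The sketch below assumes screen-space velocities and a naive forward splat; a real compositor resolves occlusion and fills holes, which this toy version ignores.

```python
import numpy as np

def extrapolate_texture(texture, velocity_field, hmd_motion, dt):
    """Forward-project pixels by per-pixel velocity plus the HMD motion vector.

    texture:        (H, W, 3) color image from the last full render
    velocity_field: (H, W, 2) per-pixel velocity in pixels/second
    hmd_motion:     (2,) screen-space velocity induced by head motion
    dt:             seconds elapsed since the full render
    """
    h, w = texture.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Combine per-pixel scene motion with the global head-motion term.
    tx = np.clip((xs + (velocity_field[..., 0] + hmd_motion[0]) * dt).astype(int), 0, w - 1)
    ty = np.clip((ys + (velocity_field[..., 1] + hmd_motion[1]) * dt).astype(int), 0, h - 1)
    out = np.zeros_like(texture)
    out[ty, tx] = texture[ys, xs]  # naive splat: later writes win on conflicts
    return out
```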
Abstract:
In a system for moving, or dragging, a virtual reality environment, a user wearing a head mounted display (HMD) device may be at a first physical position in a physical space, corresponding to a first virtual position in the virtual environment. The user may select a second virtual position in the virtual environment by, for example, manipulation of a handheld electronic device operably coupled to the HMD. The system may construct a three dimensional complex proxy surface based on the first and second virtual positions, and may move the virtual elements of the virtual environment along the proxy surface. This movement of the virtual environment may be perceived by the user as a move from the first virtual position to the second virtual position, although the user may remain at the first physical position within the physical space.
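The abstract leaves the shape of the "three dimensional complex proxy surface" open; the sketch below assumes a simple spherical proxy centered on the user purely to show the mechanic, namely that the environment, not the user, is translated along the surface. The interpolation helpers are illustrative only.

```python
import numpy as np

def slerp(d1, d2, t):
    """Spherical interpolation between two unit direction vectors."""
    dot = np.clip(np.dot(d1, d2), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < 1e-6:
        return d1
    return (np.sin((1 - t) * theta) * d1 + np.sin(t * theta) * d2) / np.sin(theta)

def drag_world_offset(p_first, p_second, t):
    """World-space offset at parameter t in [0, 1], with the user at the origin.

    The selected point slides along a spherical proxy around the user; shifting
    every virtual element by the returned offset brings the second virtual
    position to the user, who never moves in physical space.
    """
    p1, p2 = np.asarray(p_first, float), np.asarray(p_second, float)
    r1, r2 = np.linalg.norm(p1), np.linalg.norm(p2)
    point_on_proxy = ((1 - t) * r1 + t * r2) * slerp(p1 / r1, p2 / r2, t)
    return p1 - point_on_proxy
```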
Abstract:
Aspects of the disclosure relate to rendering three-dimensional (3D) models to increase visual palatability. One or more computing devices may render an image of a 3D model. This rendering may occur in one or more stages. At an interim stage, the one or more computing devices determine an error value for a rendering of a partially-loaded version of the image. The error value is compared to a threshold. Based on the comparison, the one or more computing devices generate an at least partially blurred rendering based at least in part on the rendering of the partially-loaded version of the image. The one or more computing devices provide the at least partially blurred rendering and subsequently provide for display a completely loaded version of the image.
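A toy version of the interim-stage decision, assuming the error value is the fraction of not-yet-loaded pixels (the abstract does not specify the metric) and using a box blur as a stand-in for whatever smoothing the renderer applies.

```python
import numpy as np

def coverage_error(loaded_mask):
    """Error proxy: fraction of pixels whose model data has not loaded yet."""
    return 1.0 - loaded_mask.mean()

def box_blur(image, k=5):
    """Box blur as a stand-in for any smoothing kernel."""
    pad = k // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    out = np.zeros(image.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def interim_frame(partial_render, loaded_mask, threshold=0.1):
    # Blur the interim rendering only when too much of the model is missing;
    # the sharp, completely loaded image replaces it once loading finishes.
    if coverage_error(loaded_mask) > threshold:
        return box_blur(partial_render)
    return partial_render
```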
Abstract:
Aspects of the disclosure relate generally to providing a user with an image navigation experience. For instance, a first image of a multidimensional space is provided with an overlay line indicating a direction in which the space extends into the first image, such that a second image is connected to the first image along the direction of the overlay line. User input indicating a swipe across a portion of the display is received. When the swipe occurs at least partially within an interaction zone, defining an area around the overlay line at which the user can interact with the space, the swipe indicates a request to display an image different from the first image. The second image is selected and provided for display based on the swipe and a connection graph connecting the first image and the second image along the direction of the overlay line.
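A minimal sketch of the zone test and graph lookup, with assumed data shapes: the connection graph maps each image id to its neighbors along the overlay line, and the 40-pixel zone half-width and the swipe sign convention are arbitrary illustrative choices.

```python
import numpy as np

def in_interaction_zone(point, line_start, line_end, half_width=40.0):
    """True if a touch point lies within a band around the overlay line."""
    p, a, b = (np.asarray(v, dtype=float) for v in (point, line_start, line_end))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab)) <= half_width

def select_next_image(graph, current, swipe_start, swipe_end, line_start, line_end):
    """Pick the neighbor connected along the overlay line's direction.

    graph maps image id -> {'forward': id, 'backward': id} along the line.
    """
    if not (in_interaction_zone(swipe_start, line_start, line_end) or
            in_interaction_zone(swipe_end, line_start, line_end)):
        return current  # swipe outside the zone: no navigation request
    line_dir = np.asarray(line_end, float) - np.asarray(line_start, float)
    swipe = np.asarray(swipe_end, float) - np.asarray(swipe_start, float)
    # Swiping against the direction the space extends pulls the viewer
    # forward (the sign convention is an assumption).
    key = 'forward' if np.dot(swipe, line_dir) < 0 else 'backward'
    return graph[current].get(key, current)
```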
Abstract:
In a system having a user-portable display device, a method includes maintaining a lightfield data structure representing at least a portion of a four-dimensional (4D) lightfield for a three-dimensional (3D) world in association with a first pose of the user-portable display device relative to the 3D world. The method further includes determining a second pose of the user-portable display device relative to the 3D world, the second pose comprising an updated pose of the user-portable display device. The method additionally includes generating a display frame from the lightfield data structure based on the second pose, the display frame representing a field of view of the 3D world from the second pose.
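As a rough sketch of the idea, the cache below stores rendered views keyed by pose and synthesizes a frame for an updated pose by distance-weighted blending of the nearest stored views. Orientation is ignored for brevity, and a real lightfield renderer would resample rays rather than blend whole images; the class and method names are hypothetical.

```python
import numpy as np

class LightfieldCache:
    """Toy stand-in for a 4D lightfield: rendered views keyed by pose position."""

    def __init__(self):
        self.views = []  # list of (position, image) pairs

    def add_view(self, position, image):
        self.views.append((np.asarray(position, dtype=float), image))

    def frame_for_pose(self, position, k=2):
        """Blend the k stored views nearest the updated pose."""
        position = np.asarray(position, dtype=float)
        ranked = sorted(self.views,
                        key=lambda v: np.linalg.norm(v[0] - position))[:k]
        weights = np.array([1.0 / (np.linalg.norm(p - position) + 1e-6)
                            for p, _ in ranked])
        weights /= weights.sum()
        # The weighted average approximates the field of view from the new pose.
        return sum(w * img.astype(float) for w, (p, img) in zip(weights, ranked))
```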
Abstract:
In one aspect, one or more computing devices may determine a plurality of fragments for a three-dimensional (3D) model of a geographical location. Each fragment of the plurality of fragments may correspond to a pixel of a blended image and may have a fragment color from the 3D model. The one or more computing devices may determine geospatial location data for each fragment based at least in part on latitude information, longitude information, and altitude information associated with the 3D model. For each fragment of the plurality of fragments, the one or more computing devices may identify a pixel color and an image based at least in part on the geospatial location data, determine a blending ratio based on at least one of a position and an orientation of a virtual camera, and generate the blended image based on at least the blending ratio, the pixel color, and the fragment color.
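The per-fragment blend might look like the sketch below; the specific ratio formula (favoring the photographic pixel when the virtual camera is close and looking head-on) is an assumption, since the abstract only says the ratio depends on camera position and orientation.

```python
import numpy as np

def blending_ratio(camera_pos, camera_dir, fragment_pos):
    """Weight the imagery pixel more when the camera views the fragment
    head-on from nearby; weight the 3D-model color otherwise."""
    to_frag = np.asarray(fragment_pos, float) - np.asarray(camera_pos, float)
    dist = max(np.linalg.norm(to_frag), 1e-9)
    alignment = max(np.dot(to_frag / dist, np.asarray(camera_dir, float)), 0.0)
    return alignment / (1.0 + 0.01 * dist)

def blend_fragment(pixel_color, fragment_color, ratio):
    """Linear blend between the imagery pixel color and the fragment color."""
    p = np.asarray(pixel_color, dtype=float)
    f = np.asarray(fragment_color, dtype=float)
    return ratio * p + (1.0 - ratio) * f
```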