Abstract:
A virtual eyeglass set may include a frame, a first virtual lens and second virtual lens, and a processor. The frame may mount onto a user's head and hold the first virtual lens in front of the user's left eye and the second virtual lens in front of the user's right eye. A first side of each lens may face the user and a second side of each lens may face away from the user. Each of the first virtual lens and the second virtual lens may include a light field display on the first side, and a light field camera on the second side. The processor may construct, for display on each of the light field displays based on image data received via each of the light field cameras, an image from a perspective of the user's respective eye.
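As a rough illustration of the last step, the sketch below shows one way a processor might re-project rays captured by an outward-facing light field camera so they are drawn from the wearer's eye point. The VirtualLens class, the eye_offset parameter, and the pinhole projection onto a display plane at z = 0 are assumptions for illustration, not details from the abstract.

    import numpy as np

    class VirtualLens:
        """Hypothetical per-eye lens: light field camera on the outer side,
        light field display on the inner side."""

        def __init__(self, eye_offset):
            # eye_offset: assumed 3D offset from the lens plane to the user's pupil.
            self.eye_offset = np.asarray(eye_offset, dtype=float)

        def render_for_eye(self, lightfield_samples):
            """lightfield_samples: iterable of (origin, direction, color) rays
            captured by the outward-facing light field camera."""
            image = {}
            for origin, direction, color in lightfield_samples:
                # Express each captured ray relative to the eye position so the
                # displayed image matches the eye's perspective.
                ray_origin = np.asarray(origin, dtype=float) - self.eye_offset
                pixel = self._project(ray_origin, np.asarray(direction, dtype=float))
                image[pixel] = color
            return image

        def _project(self, origin, direction):
            # Simple pinhole projection onto an assumed display plane at z = 0.
            t = -origin[2] / direction[2]
            hit = origin + t * direction
            return (round(float(hit[0]), 3), round(float(hit[1]), 3))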
Abstract:
A system for detecting and tracking a hover position of a manual pointing device, such as finger(s), on a handheld electronic device may include overlaying a rendered monochromatic keying screen, or green screen, on a user interface, such as a keyboard, of the handheld electronic device. A position of the finger(s) relative to the keyboard may be determined based on the detection of the finger(s) on the green screen and a known arrangement of the keyboard. An image of the keyboard and the position of the finger(s) may be rendered and displayed, for example, on a head mounted display, to facilitate user interaction via the keyboard with a virtual immersive experience generated by the head mounted display.
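A minimal sketch of the detection step, assuming an RGB capture of the screen region where a solid green keying layer has been rendered over the keyboard: pixels that differ from the key color are treated as the occluding finger(s), and their centroid is mapped to the known key layout. The key color, tolerance, and key_layout format are illustrative assumptions.

    import numpy as np

    KEY_COLOR = np.array([0, 255, 0])   # assumed chroma-key green
    TOLERANCE = 60                      # assumed per-channel tolerance

    def finger_mask(frame):
        """frame: HxWx3 uint8 image of the rendered keying screen region."""
        diff = np.abs(frame.astype(int) - KEY_COLOR).max(axis=-1)
        return diff > TOLERANCE         # True where a finger occludes the green

    def hovered_key(frame, key_layout):
        """key_layout: dict mapping key label -> (row0, row1, col0, col1) pixel
        bounds, standing in for the known arrangement of the keyboard."""
        mask = finger_mask(frame)
        if not mask.any():
            return None
        rows, cols = np.nonzero(mask)
        r, c = rows.mean(), cols.mean()     # fingertip estimate: mask centroid
        for label, (r0, r1, c0, c1) in key_layout.items():
            if r0 <= r < r1 and c0 <= c < c1:
                return label
        return None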
Abstract:
In a system for moving, or dragging, a virtual reality environment, a user wearing a head mounted display (HMD) device may be at a first physical position in a physical space, corresponding to a first virtual position in the virtual environment. The user may select a second virtual position in the virtual environment by, for example, manipulation of a handheld electronic device operably coupled to the HMD. The system may construct a three-dimensional complex proxy surface based on the first and second virtual positions, and may move the virtual elements of the virtual environment along the proxy surface. This movement of the virtual environment may be perceived by the user as a move from the first virtual position to the second virtual position, although the user may remain at the first physical position within the physical space.
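The abstract does not specify the shape of the proxy surface, so the sketch below approximates it with a simple vertical arc between the two virtual positions and translates the environment along it. The lift parameter and the sign convention (moving the world opposite to the apparent user motion) are assumptions.

    import numpy as np

    def drag_offsets(start, end, lift=2.0, steps=60):
        """start, end: first and second virtual positions (3D). Returns the
        world-space offsets to apply to the environment at each animation step."""
        start, end = np.asarray(start, float), np.asarray(end, float)
        up = np.array([0.0, 1.0, 0.0])
        offsets = []
        for t in np.linspace(0.0, 1.0, steps):
            point = (1 - t) * start + t * end              # straight-line component
            point = point + up * (lift * 4 * t * (1 - t))  # arc lift, peaking mid-way
            # Moving the world by -(point - start) is perceived by the stationary
            # user as moving from the first toward the second virtual position.
            offsets.append(-(point - start))
        return offsets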
Abstract:
A computer-implemented method for dynamically adjusting rendering parameters based on user movements may include determining viewpoint movement data for a user viewing a rendering of a 3D model at a first time, determining a first level-of-detail at which to render the 3D model based at least in part on the viewpoint movement data at the first time, and rendering the 3D model at the first level-of-detail. The method may also include determining viewpoint movement data for the user at a second time, wherein the viewpoint movement data at the second time differs from the viewpoint movement data at the first time. In addition, the method may include determining a second level-of-detail at which to render the 3D model based at least in part on the viewpoint movement data at the second time, and rendering the 3D model at the second level-of-detail, wherein the second level-of-detail differs from the first level-of-detail.
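A minimal sketch of the selection step, assuming the viewpoint movement data reduces to a single speed value and that faster movement warrants a coarser level-of-detail. The thresholds, LOD indices, and renderer interface are illustrative, not from the abstract.

    # (max viewpoint speed, LOD index); smaller index = finer detail.
    LOD_THRESHOLDS = [(0.1, 0), (1.0, 1), (5.0, 2)]

    def level_of_detail(viewpoint_speed):
        for max_speed, lod in LOD_THRESHOLDS:
            if viewpoint_speed <= max_speed:
                return lod
        return 3    # coarsest detail for very fast movement

    def render_frame(model, viewpoint_speed, renderer):
        # Re-evaluated every frame, so the chosen LOD changes when the movement
        # data at a later time differs from the movement data at an earlier time.
        renderer.draw(model, lod=level_of_detail(viewpoint_speed))  # hypothetical renderer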
Abstract:
In a system having a user-portable display device, a method includes maintaining a lightfield data structure representing at least a portion of a four-dimensional (4D) lightfield for a three-dimensional (3D) world in association with a first pose of the user-portable display device relative to the 3D world. The method further includes determining a second pose of the user-portable display device relative to the 3D world, the second pose comprising an updated pose of the user-portable display device. The method additionally includes generating a display frame from the lightfield data structure based on the second pose, the display frame representing a field of view of the 3D world from the second pose.
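One way to picture the data structure and the frame-generation step is sketched below, assuming the lightfield is stored as a set of rendered viewpoints (pose plus view data) near the first pose, and that a frame for the updated pose is produced by weighting the stored views by proximity to that pose. The class names, the distance metric, and the blend callable are all assumptions.

    from dataclasses import dataclass, field

    def pose_distance(pose_a, pose_b):
        # Assumed: a pose is reduced to a 3-tuple of translation; rotation omitted.
        return sum((a - b) ** 2 for a, b in zip(pose_a, pose_b)) ** 0.5

    @dataclass
    class LightfieldStore:
        viewpoints: list = field(default_factory=list)   # [(pose, view_data), ...]

        def add_viewpoint(self, pose, view_data):
            # Views rendered in association with the first pose of the device.
            self.viewpoints.append((pose, view_data))

        def frame_for_pose(self, target_pose, blend):
            """blend: callable that warps and combines weighted views; it stands
            in for the actual lightfield interpolation toward the updated pose."""
            weights = [1.0 / (1e-6 + pose_distance(pose, target_pose))
                       for pose, _ in self.viewpoints]
            total = sum(weights)
            return blend([(w / total, data)
                          for w, (_, data) in zip(weights, self.viewpoints)])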
Abstract:
Systems and methods to transition between viewpoints in a three-dimensional environment are provided. One example method includes obtaining data indicative of an origin position and a destination position of a virtual camera. The method includes determining a distance between the origin position and the destination position of the virtual camera. The method includes determining a peak visible distance based at least in part on the distance between the origin position and the destination position of the virtual camera. The method includes identifying a peak position at which the viewpoint of the virtual camera corresponds to the peak visible distance. The method includes determining a parabolic camera trajectory that traverses the origin position, the peak position, and the destination position. The method includes transitioning the virtual camera from the origin position to the destination position along the parabolic camera trajectory. An example system includes a user computing device and a geographic information system.
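A minimal sketch of the trajectory construction, assuming 2D ground positions with altitude as the third coordinate, and assuming the peak visible distance is simply proportional to the travel distance (the abstract does not give the actual relation). The view_factor and the mapping from visible distance to peak altitude are illustrative.

    import numpy as np

    def parabolic_trajectory(origin, destination, steps=100, view_factor=0.5):
        """origin, destination: (x, y, altitude) camera positions."""
        origin = np.asarray(origin, float)
        destination = np.asarray(destination, float)
        travel = np.linalg.norm(destination[:2] - origin[:2])
        peak_visible = view_factor * travel   # assumed peak-visible-distance mapping
        peak_altitude = peak_visible          # assumed: apex altitude ~ visible distance

        positions = []
        for t in np.linspace(0.0, 1.0, steps):
            ground = (1 - t) * origin[:2] + t * destination[:2]
            # Parabola through the origin, a peak above the midpoint, and the
            # destination: the 4*t*(1-t) term is 0 at the endpoints and 1 at t=0.5.
            altitude = ((1 - t) * origin[2] + t * destination[2]
                        + 4 * peak_altitude * t * (1 - t))
            positions.append(np.array([ground[0], ground[1], altitude]))
        return positions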
Abstract:
In a control system for navigating in a virtual reality environment, a user may select a virtual feature in the virtual environment and set an anchor point on the selected feature. The user may then move, or adjust position, relative to the feature, and/or move and/or scale the feature in the virtual environment, while maintaining the portion of the feature at the set anchor point within the user's field of view of the virtual environment.
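A minimal sketch of two of these operations, assuming the anchor is a world-space point on the selected feature: scaling the feature about the anchor keeps the anchored portion fixed, and re-aiming the virtual camera at the anchor keeps it in the field of view. The function names and the look-at construction are assumptions, not from the abstract.

    import numpy as np

    def scale_feature_about_anchor(vertices, anchor, scale):
        """Scale the feature's vertices about the anchor point so the anchored
        portion of the feature stays fixed in the virtual environment."""
        vertices = np.asarray(vertices, float)
        anchor = np.asarray(anchor, float)
        return anchor + scale * (vertices - anchor)

    def look_at(camera_position, anchor):
        """Unit view direction that keeps the anchor centered in the field of view."""
        direction = np.asarray(anchor, float) - np.asarray(camera_position, float)
        return direction / np.linalg.norm(direction)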
Abstract:
In a system for moving and scaling in a virtual reality environment, a user may move from a first virtual position in the virtual environment toward a selected feature at a second virtual position in the virtual environment. While moving from the first position toward the second position, the user's scale, or perspective, relative to the user's surroundings in the virtual environment may be adjusted via manipulation of a user interface provided on a handheld electronic device.
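One simple reading of the combined move-and-scale is sketched below, assuming the user's scale is driven by a value from the handheld device's user interface and interpolated alongside the position. All names and the linear interpolation are illustrative assumptions.

    import numpy as np

    def move_and_scale(start, target, start_scale, target_scale, t):
        """t in [0, 1]: progress of the move from the first to the second virtual
        position. Returns the interpolated position and the user's scale."""
        start, target = np.asarray(start, float), np.asarray(target, float)
        position = (1 - t) * start + t * target
        user_scale = (1 - t) * start_scale + t * target_scale
        return position, user_scale

    # Example: halfway through the move, the user has grown from 1x to 5.5x scale.
    pos, scale = move_and_scale([0, 0, 0], [10, 0, 5], 1.0, 10.0, t=0.5)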