Abstract:
A system for tracking an object in an ambient environment with respect to a head mounted reference frame may allow the ambient object to be rendered in a virtual display, at a virtual position corresponding to its position in the ambient environment, in response to head movement. The system may detect a position of a head mounted device with respect to a fixed frame of reference in the ambient environment, and may detect a position of the ambient object with respect to the fixed frame of reference in the ambient environment. The system may then translate the detected position of the ambient object to the frame of reference of the head mounted device, or head mounted reference frame, to determine a position of the ambient object relative to the head mounted device. The ambient object may then be rendered at this newly determined position in the virtual display generated by the head mounted device.
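The core operation described here is a change of coordinate frames: subtract the headset's world-frame position from the object's world-frame position, then rotate the offset into the headset's orientation. A minimal sketch in Python, assuming the head pose is available as a position and a 3x3 rotation matrix (the abstract does not specify a pose representation):

    import numpy as np

    def world_to_head_frame(head_position, head_rotation, object_position_world):
        """Express an ambient object's world-frame position in the head frame.

        head_rotation is the headset's orientation in the world frame; its
        transpose (inverse) maps world-frame vectors into the head frame.
        """
        offset = np.asarray(object_position_world) - np.asarray(head_position)
        return head_rotation.T @ offset

    # Example: headset at the origin, yawed 90 degrees; object 1 m ahead on world x.
    yaw90 = np.array([[0.0, -1.0, 0.0],
                      [1.0,  0.0, 0.0],
                      [0.0,  0.0, 1.0]])
    print(world_to_head_frame([0, 0, 0], yaw90, [1, 0, 0]))  # -> [0, -1, 0]

Re-running this transform on every head-tracking update is what keeps the rendered position of the ambient object consistent as the head moves.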
Abstract:
A first digital map is displayed in a viewport at an initial position. When a user gesture that communicates motion to the viewport is detected, a trajectory of the viewport from the initial position to a target position is determined based on kinematic quantities of the communicated motion. Map data for displaying a second digital map in the viewport at the target position is retrieved from a first memory, prior to the viewport reaching the target position. The retrieved map data is stored in a second memory having a higher speed of access than the first memory. When the viewport reaches the target position, the map data is retrieved from the second memory for display via the user interface.
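The abstract combines two steps: predicting where a flick gesture will bring the viewport to rest from its kinematic quantities, and prefetching the map data for that target into faster memory before the viewport arrives. A minimal sketch in Python, assuming an exponential velocity-decay model and a hypothetical fetch_tile callable standing in for the slower first memory (neither is specified by the abstract):

    TILE_SIZE = 256   # pixels per map tile (assumed constant)
    tile_cache = {}   # the second, faster memory

    def tile_key(pos):
        """Map a viewport position to the tile that contains it."""
        x, y = pos
        return (int(x // TILE_SIZE), int(y // TILE_SIZE))

    def predict_target(initial_pos, velocity, decay=0.95, dt=1.0 / 60.0):
        """Integrate the flick velocity under exponential decay until the
        viewport comes to rest (a hypothetical kinematic model)."""
        x, y = initial_pos
        vx, vy = velocity
        while abs(vx) > 1e-3 or abs(vy) > 1e-3:
            x += vx * dt
            y += vy * dt
            vx *= decay
            vy *= decay
        return (x, y)

    def prefetch(target_pos, fetch_tile):
        """Pull the target tile from the slower first memory (fetch_tile)
        into the cache before the viewport arrives."""
        key = tile_key(target_pos)
        if key not in tile_cache:
            tile_cache[key] = fetch_tile(key)

    # Usage: a flick at (0, 0) with velocity 2000 px/s comes to rest near x = 667.
    target = predict_target((0.0, 0.0), (2000.0, 0.0))
    prefetch(target, fetch_tile=lambda key: f"tile bytes for {key}")

By the time the viewport animation reaches the target, the lookup hits the cache instead of the slower first memory.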
Abstract:
A system for tracking a first electronic device, such as a handheld smartphone, in a virtual reality environment generated by a second electronic device, such as a head mounted display, may include detection, by a camera included in one of the first electronic device or the second electronic device, of at least one visual marker included on the other of the first electronic device or the second electronic device. Features detected within the field of view corresponding to known features of the visual markers may be used to locate and track movement of the first electronic device relative to the second electronic device, so that movement of the first electronic device may be translated into an interaction in a virtual experience generated by the second electronic device.
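Recovering one device's pose from known marker features seen by the other device's camera is a standard perspective-n-point problem. A hedged sketch using OpenCV's solver (the abstract does not name a solver; the marker geometry and camera intrinsics are assumed to be known):

    import numpy as np
    import cv2

    def relative_pose_from_marker(marker_corners_3d, detected_corners_2d,
                                  camera_matrix, dist_coeffs):
        """Estimate the marker-bearing device's pose relative to the
        camera-bearing device from known marker geometry."""
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(marker_corners_3d, dtype=np.float32),
            np.asarray(detected_corners_2d, dtype=np.float32),
            camera_matrix, dist_coeffs)
        if not ok:
            return None
        rotation, _ = cv2.Rodrigues(rvec)  # marker orientation in the camera frame
        return rotation, tvec              # tvec: marker origin in the camera frame

Calling this on each camera frame and differencing successive poses yields the tracked movement that the system translates into an interaction in the virtual experience.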
Abstract:
Systems and methods to transition between viewpoints in a three-dimensional environment are provided. One example method includes obtaining data indicative of an origin position and a destination position of a virtual camera. The method includes determining a distance between the origin position and the destination position of the virtual camera. The method includes determining a peak visible distance based at least in part on the distance between the origin position and the destination position of the virtual camera. The method includes identifying a peak position at which the viewpoint of the virtual camera corresponds to the peak visible distance. The method includes determining a parabolic camera trajectory that traverses the origin position, the peak position, and the destination position. The method includes transitioning the virtual camera from the origin position to the destination position along the parabolic camera trajectory. An example system includes a user computing device and a geographic information system.
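A sketch of the final two steps in Python: given the origin, the destination, and a peak altitude (which the method derives from the origin-to-destination distance), interpolate camera positions along a parabola that passes through all three. The symmetric midpoint peak and the linear ground path are simplifying assumptions, not taken from the abstract:

    import numpy as np

    def parabolic_trajectory(origin, destination, peak_altitude, steps=60):
        """Yield camera positions along a parabola whose altitude matches the
        origin at t=0, the peak at t=0.5, and the destination at t=1."""
        origin = np.asarray(origin, dtype=float)
        destination = np.asarray(destination, dtype=float)
        mean_endpoint_alt = 0.5 * (origin[2] + destination[2])
        for t in np.linspace(0.0, 1.0, steps):
            pos = (1 - t) * origin + t * destination   # straight-line ground path
            lift = 4 * t * (1 - t)                     # 0 at endpoints, 1 at midpoint
            altitude = ((1 - t) * origin[2] + t * destination[2]
                        + lift * (peak_altitude - mean_endpoint_alt))
            yield np.array([pos[0], pos[1], altitude])

Raising the camera toward the peak in the middle of the transition keeps a larger visible distance on screen, so both endpoints of a long jump remain in view during the flight.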
Abstract:
A hover touch compensation system and method may detect and track a hover position of a pointing/selecting device, such as a user's finger, relative to an input surface of a user interface, and may detect a point at which the pointing/selecting device initiates a movement toward the input surface of the user interface. The system may identify an intended contact point on the user interface based on the hover position of the pointing/selecting device relative to the input surface of the user interface at the point at which the movement toward the user interface is detected.
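In other words, the intended target is latched from the hover position at the instant the downward motion begins, rather than from where the finger eventually lands. A minimal sketch in Python, with an assumed descent-rate threshold for detecting the start of movement toward the input surface:

    class HoverTouchCompensator:
        """Latch the hovered (x, y) when the finger starts moving toward the
        surface and report it as the intended contact point (a minimal
        sketch; the threshold and units are assumptions)."""

        def __init__(self, descent_threshold=5.0):
            self.descent_threshold = descent_threshold  # mm per sample, assumed
            self.last_z = None
            self.latched_xy = None

        def update(self, x, y, z):
            """Feed one hover sample; z is the height above the input surface."""
            if self.last_z is not None and self.latched_xy is None:
                if self.last_z - z > self.descent_threshold:
                    # Downward movement detected: remember where the finger was
                    # hovering when the tap began, not where it lands.
                    self.latched_xy = (x, y)
            self.last_z = z

        def intended_contact(self):
            return self.latched_xy

Latching at the onset of the tap compensates for the lateral drift a finger typically picks up on its way down to the surface.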