Abstract:
One disclosed example provides a head-mounted device comprising a logic device configured to execute instructions to control a plurality of light sources of a handheld object and acquire image data comprising a sequence of environmental tracking exposures, in which the plurality of light sources are controlled to have a lower integrated intensity, and handheld object tracking exposures, in which the plurality of light sources are controlled to have a higher integrated intensity. The instructions are further executable to detect, via an environmental tracking exposure, one or more features of the surrounding environment; determine a pose of the head-mounted device based upon the one or more detected features; detect, via a handheld object tracking exposure, the plurality of light sources of the handheld object; determine a pose of the handheld object relative to the head-mounted device based upon the detected light sources; and output the pose of the handheld object.
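A minimal runnable sketch of the interleaved exposure scheme described above, in Python. The alternating even/odd schedule, the intensity values, and all names (ExposureType, build_schedule) are illustrative assumptions, not details taken from the patent.

from dataclasses import dataclass
from enum import Enum

class ExposureType(Enum):
    ENVIRONMENT = "environment"   # LEDs dimmed: lower integrated intensity
    CONTROLLER = "controller"     # LEDs bright: higher integrated intensity

@dataclass
class Exposure:
    frame_index: int
    kind: ExposureType
    led_intensity: float  # relative drive level for the controller LEDs

def build_schedule(n_frames: int, low: float = 0.1, high: float = 1.0):
    """Alternate environment and controller exposures, dimming the controller
    LEDs during environment frames so they do not wash out the background
    features used for head-pose tracking."""
    return [
        Exposure(i,
                 ExposureType.ENVIRONMENT if i % 2 == 0 else ExposureType.CONTROLLER,
                 low if i % 2 == 0 else high)
        for i in range(n_frames)
    ]

if __name__ == "__main__":
    for e in build_schedule(4):
        print(e)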
Abstract:
A near-eye display device comprises right and left display projectors, expansion optics, and inertial measurement units (IMUs), in addition to a plurality of angle-sensitive pixel (ASP) elements and a computer. The right and left expansion optics are configured to receive respective display images from the right and left display projectors and to release expanded forms of the display images. The right IMU is fixedly coupled to the right display projector, and the left IMU is fixedly coupled to the left display projector. Each ASP element is responsive to an angle of light of one of the respective display images as received into the right or left expansion optic. The computer is configured to receive output from the right IMU, the left IMU and the plurality of ASP elements, and render display data for the right and left display projectors based in part on the output.
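As a rough illustration of how the IMU and ASP outputs might feed the renderer, the sketch below averages ASP angle errors and folds in an IMU-sensed projector roll to produce a per-eye correction. The fusion rule, function names, and numbers are assumptions for illustration only.

import numpy as np

def angle_error(measured_deg, expected_deg):
    """Deviation of the display-ray angle seen by one ASP element."""
    return measured_deg - expected_deg

def estimate_eye_correction(asp_measured, asp_expected, imu_roll_deg):
    """Average ASP angle error plus IMU-sensed projector roll yields a
    small-angle correction to fold into the render transform."""
    asp_err = np.mean([angle_error(m, e) for m, e in zip(asp_measured, asp_expected)])
    return asp_err + imu_roll_deg

right_corr = estimate_eye_correction([0.31, 0.28], [0.0, 0.0], imu_roll_deg=-0.05)
print(f"apply {right_corr:.3f} deg pre-rotation to the right display image")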
Abstract:
One disclosed example provides a head-mounted device including a stereo camera arrangement, a logic device configured to execute instructions, and a storage device storing instructions executable by the logic device to, for each camera in the stereo camera arrangement, receive image data of a field of view of the camera, detect light sources of a handheld object in the image data, and based upon the light sources detected, determine a pose of the handheld object. The instructions are executable to, based upon the pose of the handheld object determined for each camera in the stereo camera arrangement, calibrate the stereo camera arrangement.
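One way to read this: the handheld object acts as a shared calibration target, so the camera-to-camera extrinsic falls out of the two independently solved object poses. A sketch under that assumption (the exact calibration method is not specified in the abstract), with made-up transform values:

import numpy as np

def rigid(rx_deg=0.0, t=(0, 0, 0)):
    """4x4 rigid transform: rotation about x followed by translation."""
    a = np.radians(rx_deg)
    T = np.eye(4)
    T[1:3, 1:3] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = t
    return T

T_left_obj = rigid(rx_deg=10.0, t=(0.1, 0.0, 0.5))    # object pose per left camera
T_right_obj = rigid(rx_deg=10.0, t=(0.04, 0.0, 0.5))  # object pose per right camera

# Extrinsic calibration: transform from left-camera frame to right-camera frame.
T_right_left = T_right_obj @ np.linalg.inv(T_left_obj)
print(np.round(T_right_left, 3))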
Abstract:
A display system includes a display alignment tracker configured to track the position of a first signal in a first waveguide and the position of a second signal in a second waveguide. The display alignment tracker optically multiplexes a portion of the first signal and a portion of the second signal into a combined optical signal and measures a differential between the first signal and the second signal. The differential is used to adjust the position, dimensions, or a color attribute of the first signal relative to the second signal.
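A tiny sketch of how the measured differential could drive the adjustment, assuming the differential is a 2-D positional offset in pixels; the gain term and function name are illustrative.

def correct_second_signal(pos1, pos2, gain=1.0):
    """Shift the second display signal by the measured positional differential
    so it registers with the first signal."""
    dx, dy = pos1[0] - pos2[0], pos1[1] - pos2[1]
    return (pos2[0] + gain * dx, pos2[1] + gain * dy)

print(correct_second_signal((640.0, 360.0), (642.5, 359.0)))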
Abstract:
Examples are disclosed that relate to calibration data describing a determined alignment of sensors on a wearable display device. One example provides a wearable display device comprising a frame, a first sensor and a second sensor, one or more displays, a logic system, and a storage system. The storage system comprises calibration data related to a determined alignment of the sensors with the frame in a bent configuration, and instructions executable by the logic system. The instructions are executable to obtain first sensor data and second sensor data respectively from the first and second sensors, determine a distance from the wearable display device to a feature based at least upon the first and second sensor data using the calibration data, obtain a stereo image to display based upon the distance from the wearable display device to the feature, and output the stereo image via the one or more displays.
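For intuition, the sketch below computes distance with the standard pinhole stereo relation z = f*B/d, modeling the bent-frame calibration as a corrected baseline and a disparity offset; the actual form of the calibration data is not given in the abstract, so those parameters are assumptions.

def distance_to_feature(disparity_px, focal_px, baseline_m,
                        baseline_scale=1.0, disparity_bias_px=0.0):
    """Pinhole stereo depth with a bend-corrected baseline and disparity."""
    b = baseline_m * baseline_scale           # calibration-corrected baseline
    d = disparity_px - disparity_bias_px      # calibration-corrected disparity
    return focal_px * b / d

z = distance_to_feature(disparity_px=24.0, focal_px=600.0, baseline_m=0.064,
                        baseline_scale=0.98, disparity_bias_px=0.5)
print(f"feature distance ≈ {z:.2f} m")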
Abstract:
A MEMS scanning device ("Device") includes at least (1) laser projector(s) controlled by a laser drive to project a laser beam, (2) MEMS scanning mirror(s) controlled by a MEMS drive to scan the laser beam to generate a raster scan, (3) a display configured to receive the raster scan, (4) a thermometer configured to detect a current temperature, (5) a display observing camera configured to capture an image of a predetermined area of the display, and (6) a computer-readable media that stores temperature model(s), each of which is custom-built using machine learning. The device uses the display observing camera to capture image(s) of predetermined pattern(s), which are then used to extract feature(s). The extracted feature(s) are compared with ideal feature(s) to identify a discrepancy. When the identified discrepancy is greater than a threshold, the temperature model(s) are updated accordingly.
Abstract:
A display system includes a display alignment tracker configured to track the position of a first signal and the position of a second signal. The display alignment tracker optically multiplexes a portion of the first signal and a portion of the second signal into a combined optical signal and measures a differential between the first signal and the second signal. A system for rendering optical signals on a head-mounted display comprises: a first waveguide configured to guide a first signal therethrough; a second waveguide configured to guide a second signal therethrough; an optical multiplexer in optical communication with the first and second waveguides and configured to combine at least a portion of the first signal and a portion of the second signal; and an optical sensor in optical communication with the optical multiplexer and configured to receive a combined optical signal including at least a portion of the first signal and a portion of the second signal.
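The claim structure maps naturally onto a few small types; the sketch below models the waveguides, multiplexer, and sensor path as simple intensity bookkeeping, with class names and tap ratios invented for illustration.

from dataclasses import dataclass

@dataclass
class Waveguide:
    signal: float  # stand-in for the guided optical signal's intensity

@dataclass
class OpticalMultiplexer:
    tap: float = 0.05  # fraction of each signal diverted toward the sensor
    def combine(self, wg1: Waveguide, wg2: Waveguide) -> float:
        """Combined optical signal delivered to the optical sensor."""
        return self.tap * wg1.signal + self.tap * wg2.signal

sensor_reading = OpticalMultiplexer().combine(Waveguide(1.00), Waveguide(0.92))
print(f"combined signal at sensor: {sensor_reading:.3f}")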
Abstract:
A system for continuous image alignment of separate cameras identifies a reference camera transformation matrix between a base reference camera pose and an updated reference camera pose. The system also identifies a match camera transformation matrix between a base match camera pose and an updated match camera pose and an alignment matrix based on visual correspondences between one or more reference frames captured by the reference camera and one or more match frames captured by the match camera. The system also generates a motion model configured to facilitate mapping of a set of pixels of a reference frame captured by the reference camera to a corresponding set of pixels of a match frame captured by the match camera based on the reference camera transformation matrix, the match camera transformation matrix, and the alignment matrix.
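In homogeneous 2-D coordinates, one plausible composition of the three matrices is updated-match after alignment after inverse(updated-reference); that ordering is an assumption consistent with the abstract, not taken from the patent text, and the matrix values below are made up.

import numpy as np

def motion_model(T_ref, T_match, A):
    """3x3 homogeneous map from reference-frame pixels to match-frame pixels."""
    return T_match @ A @ np.linalg.inv(T_ref)

def map_pixel(M, xy):
    p = M @ np.array([xy[0], xy[1], 1.0])
    return p[:2] / p[2]

A = np.eye(3); A[0, 2] = 4.0              # alignment: 4 px horizontal offset
T_ref = np.eye(3)                          # reference camera did not move
T_match = np.eye(3); T_match[1, 2] = 2.0   # match camera shifted 2 px down
print(map_pixel(motion_model(T_ref, T_match, A), (100.0, 50.0)))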
Abstract:
One disclosed example provides a computing device configured to receive from an image sensor of a head-mounted device environmental tracking exposures and handheld object tracking exposures, determine a pose of the handheld object with respect to the head-mounted device based upon the handheld object tracking exposures, determine a pose of the head-mounted device with respect to a surrounding environment based upon the environmental tracking exposures, derive a pose of the handheld object relative to the surrounding environment based upon the pose of the handheld object with respect to the head-mounted device and the pose of the head-mounted device with respect to the surrounding environment, and output the pose of the handheld object relative to the surrounding environment for controlling a user interface displayed on the head-mounted device.
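The pose derivation here is a straightforward composition of rigid transforms: the handheld object's pose in the surrounding environment is the head-mounted device's world pose composed with the object's HMD-relative pose. A sketch with made-up transforms:

import numpy as np

def compose(world_T_hmd, hmd_T_obj):
    """World pose of the handheld object from the two tracked poses."""
    return world_T_hmd @ hmd_T_obj

world_T_hmd = np.eye(4); world_T_hmd[:3, 3] = [0.0, 1.6, 0.0]  # head 1.6 m up
hmd_T_obj = np.eye(4); hmd_T_obj[:3, 3] = [0.1, -0.4, 0.3]     # controller offset
world_T_obj = compose(world_T_hmd, hmd_T_obj)
print(world_T_obj[:3, 3])  # controller position in world coordinates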
Abstract:
In embodiments of augmenting a moveable entity with a hologram, an alternate reality device (100) includes a tracking system (108) that can recognize an entity in an environment and track movement of the entity in the environment. The alternate reality device can also include a detection algorithm (128) implemented to identify the entity recognized by the tracking system based on identifiable characteristics of the entity. A hologram positioning application (124) is implemented to receive motion data from the tracking system, receive entity characteristic data from the detection algorithm, and determine a position and an orientation of the entity in the environment based on the motion data and the entity characteristic data. The hologram positioning application can then generate a hologram that appears associated with the entity as the entity moves in the environment.
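A minimal sketch of the positioning flow described above: motion data supplies the entity's position, characteristic data supplies its orientation, and the hologram is anchored at an offset from the entity. All field names and the offset are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class EntityState:
    position: tuple      # from the tracking system's motion data
    heading_deg: float   # orientation inferred from entity characteristic data

def hologram_pose(state: EntityState, offset=(0.0, 0.2, 0.0)):
    """Anchor the hologram slightly above the tracked entity so it appears
    associated with the entity as the entity moves."""
    x, y, z = state.position
    ox, oy, oz = offset
    return (x + ox, y + oy, z + oz), state.heading_deg

print(hologram_pose(EntityState(position=(1.0, 0.0, 2.0), heading_deg=90.0)))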