Abstract:
Technologies are described herein for eye tracking that may be employed by devices and systems such as head-mounted display (HMD) devices. Light reflected from a user's eye may be specular or scattered. The specular light has an intensity or magnitude that may saturate the detection electronics. The presently disclosed techniques mitigate saturation by generating detected signals from an optical detector, evaluating the signal levels of the detected signals, and selectively gating out the detected signals that have saturated. The remaining scattered signals can be combined into a single signal that can be converted to digital form without saturating the electronics and then processed to form an image of the eye for identification purposes, for tracking eye movement, and for other uses. The described technologies provide a clear image in which neither ambient light reflections nor specular light interfere.
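As a rough illustration of the gating step described above, the following Python sketch evaluates per-detector signal levels, gates out samples that have saturated, and combines the remainder. The function name and the saturation threshold are assumptions for illustration, not the patent's implementation.

```python
SATURATION_LEVEL = 0.95  # assumed full-scale fraction at which a sample saturates

def combine_detector_signals(samples):
    """Gate out saturated (specular) samples and combine the rest.

    `samples` holds per-detector signal levels normalized to full scale
    (0.0 .. 1.0). Samples at or above SATURATION_LEVEL are treated as
    specular reflections and excluded; the remaining scattered signals are
    summed into one combined value suitable for analog-to-digital conversion.
    """
    scattered = [s for s in samples if s < SATURATION_LEVEL]
    if not scattered:
        return 0.0  # every sample saturated; nothing usable this frame
    return sum(scattered)

# Example: the two specular samples (1.0) are gated; only scattered light remains.
print(combine_detector_signals([0.12, 1.0, 0.08, 1.0, 0.15]))  # 0.35
```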
Abstract:
Motion vector estimation is provided for generating and displaying images at a frame rate that is greater than a rendering frame rate. The displayed images may include late-stage graphical adjustments of pre-rendered scenes that incorporate motion vector estimates. A head-mounted display (HMD) device may determine a predicted pose associated with a future position and orientation of the HMD, render a current frame based on the predicted pose, determine a set of motion vectors based on the current frame and a previous frame, generate an updated image based on the set of motion vectors and the current frame, and display the updated image on the HMD. In one embodiment, the HMD may determine an updated pose associated with the HMD subsequent to or concurrent with generating the current frame, and generate the updated image based on the updated pose and the set of motion vectors.
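A minimal sketch of the motion-vector step, assuming classic block matching between the previous and current frames followed by a forward extrapolation of each block; the block size, search window, and function names are hypothetical rather than taken from the disclosure.

```python
import numpy as np

BLOCK = 8   # assumed block size in pixels
SEARCH = 4  # assumed search radius in pixels

def estimate_motion_vectors(prev, curr):
    """Exhaustive block matching between two grayscale float frames.

    For each BLOCK x BLOCK block of `curr`, find the (dy, dx) offset into
    `prev` with minimum sum of absolute differences (SAD). Returns an array
    of shape (H // BLOCK, W // BLOCK, 2) of per-block motion vectors.
    """
    h, w = curr.shape
    mv = np.zeros((h // BLOCK, w // BLOCK, 2), dtype=int)
    for by in range(h // BLOCK):
        for bx in range(w // BLOCK):
            y, x = by * BLOCK, bx * BLOCK
            block = curr[y:y + BLOCK, x:x + BLOCK]
            best = (np.inf, 0, 0)
            for dy in range(-SEARCH, SEARCH + 1):
                for dx in range(-SEARCH, SEARCH + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - BLOCK and 0 <= xx <= w - BLOCK:
                        sad = np.abs(prev[yy:yy + BLOCK, xx:xx + BLOCK] - block).sum()
                        if sad < best[0]:
                            best = (sad, dy, dx)
            mv[by, bx] = best[1], best[2]
    return mv

def extrapolate_frame(curr, mv):
    """Continue each block's prev->curr motion one step forward to
    synthesize an extra displayed frame between rendered frames."""
    h, w = curr.shape
    out = curr.copy()
    for by in range(mv.shape[0]):
        for bx in range(mv.shape[1]):
            # mv points from curr back into prev, so forward motion is -mv.
            dy, dx = -mv[by, bx, 0], -mv[by, bx, 1]
            y, x = by * BLOCK, bx * BLOCK
            yy = min(max(y + dy, 0), h - BLOCK)
            xx = min(max(x + dx, 0), w - BLOCK)
            out[yy:yy + BLOCK, xx:xx + BLOCK] = curr[y:y + BLOCK, x:x + BLOCK]
    return out
```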
Abstract:
The technology provides contextual personal information through a mixed reality display device system worn by a user. The user inputs person selection criteria, and the display system sends a request for data identifying at least one person in the user's location who satisfies the criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view; if not, it outputs an identifier and a position indicator for the person in the location. Directional sensors on the display device may also be used to determine a position of the person. Cloud-based software can identify and track the positions of people based on image and non-image data from display devices in the location.
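The request/response exchange might be sketched as below, assuming a simple profile record and subset matching of interest tags; the dataclass fields and matching rule are illustrative assumptions, not the actual service contract.

```python
from dataclasses import dataclass

@dataclass
class PersonRecord:
    user_id: str      # identifier drawn from shared user profile data
    location: str     # coarse location, e.g. a venue or room
    interests: set    # profile tags the selection criteria are matched against
    position: tuple   # (x, y) position indicator within the location

def find_matching_people(records, user_location, selection_criteria):
    """Return (identifier, position) pairs for people in the user's
    location whose profile satisfies all of the selection criteria."""
    return [
        (r.user_id, r.position)
        for r in records
        if r.location == user_location and selection_criteria <= r.interests
    ]

# Example: find people in the same hall who share two interests with the user.
records = [
    PersonRecord("alice", "hall-b", {"robotics", "hiking"}, (3.0, 4.5)),
    PersonRecord("bob", "hall-b", {"robotics"}, (7.2, 1.1)),
]
print(find_matching_people(records, "hall-b", {"robotics", "hiking"}))
# -> [('alice', (3.0, 4.5))]
```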
Abstract:
A display device includes a display substrate and a backplane substrate. The display substrate includes an array of micro-LEDs forming individual pixels. The backplane substrate includes a plurality of pixel logic hardware modules. Each pixel logic hardware module includes a local memory element configured to store a multi-bit pixel intensity value of a corresponding micro-LED for an image frame. The backplane substrate is bonded to a backside of the display substrate such that the pixel logic hardware modules are physically aligned behind the array of micro-LEDs and each pixel logic hardware module is electrically connected to a micro-LED of the corresponding pixel.
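One way to picture the per-pixel logic is the following sketch, which assumes each module latches an n-bit intensity and drives its micro-LED by pulse-width modulation; the bit depth and PWM scheme are assumptions for illustration, since the abstract does not specify the drive circuit.

```python
class PixelLogic:
    """One backplane module: a local memory element holding a multi-bit
    intensity value, electrically connected to one micro-LED of the
    corresponding pixel on the display substrate."""

    def __init__(self, bits=8):
        self.bits = bits
        self.intensity = 0  # local memory element, stores value for one frame

    def store(self, value):
        """Latch the pixel's multi-bit intensity for the current image frame."""
        self.intensity = value & ((1 << self.bits) - 1)

    def led_on(self, tick, period=256):
        """Assumed PWM drive: the micro-LED stays on for `intensity` of the
        `period` ticks in each frame."""
        return (tick % period) < self.intensity

# The pixel array on the display substrate maps 1:1 onto aligned backplane modules.
backplane = [[PixelLogic() for _ in range(4)] for _ in range(4)]
backplane[0][0].store(128)                # half brightness for this pixel
print(backplane[0][0].led_on(tick=100))   # True: within the on-portion
print(backplane[0][0].led_on(tick=200))   # False: past the duty cycle
```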
Abstract:
The techniques disclosed herein detect sensor misalignments in a display device through the use of sensors operating under different modalities. In some configurations, a near-to-eye display device can include a number of sensors that track movement of the device relative to the surrounding environment. The device can utilize multiple sensors operating under multiple modalities. Each sensor has a set of intrinsic and extrinsic properties that are calibrated. The device is also configured to determine refined estimations of the intrinsic and extrinsic properties at runtime. These refined estimations can then be used to determine how the device has deformed over time. The device can then use the refined estimations, and/or any other resulting data that quantifies the deformation, to adjust rendered images at runtime.
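A simplified sketch of how refined estimates might quantify deformation and feed an image adjustment, assuming a 2-D rigid model of each sensor's pose on the device frame; the pose representation and the correction are illustrative simplifications, not the disclosed calibration math.

```python
import math

def deformation(calibrated, refined):
    """Each pose is (x, y, theta): a sensor's assumed position and
    orientation on the device frame. Returns the translation and rotation
    drift between the factory calibration and the refined runtime estimate."""
    return (refined[0] - calibrated[0],
            refined[1] - calibrated[1],
            refined[2] - calibrated[2])

def adjust_point(px, py, deform):
    """Apply the inverse of the estimated deformation to a rendered image
    point so it stays registered despite the bent device frame."""
    dx, dy, dtheta = deform
    c, s = math.cos(-dtheta), math.sin(-dtheta)
    return (c * px - s * py - dx, s * px + c * py - dy)

# Example: the runtime refinement shows a sensor drifted 1 mm in x and
# rotated 0.5 degrees relative to its factory-calibrated extrinsics.
d = deformation((0.0, 0.0, 0.0), (0.001, 0.0, math.radians(0.5)))
print(adjust_point(100.0, 50.0, d))
```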
Abstract:
A head-mounted display device including one or more position sensors and a processor. The processor may receive a rendered image of a current frame. The processor may receive position data from the one or more position sensors and determine an updated device pose based on the position data. The processor may apply a first spatial correction to color information in pixels of the rendered image at least in part by reprojecting the rendered image based on the updated device pose. The head-mounted display device may further include a display configured to apply a second spatial correction to the color information in the pixels of the rendered image at least in part by applying wobulation to the reprojected rendered image to thereby generate a sequence of wobulated pixel subframes for the current frame. The display may display the current frame by displaying the sequence of wobulated pixel subframes.
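The two-stage correction could be sketched as follows, with whole-pixel shifts standing in both for the pose-based reprojection and for the sub-pixel optical displacements of true wobulation; the offsets and subframe count are assumptions for illustration.

```python
import numpy as np

def reproject(frame, pose_delta):
    """First spatial correction: shift the rendered color image by the
    whole-pixel offset implied by the updated device pose (a stand-in for
    a full pose-based reprojection)."""
    dy, dx = pose_delta
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))

def wobulate(frame, offsets=((0, 0), (0, 1), (1, 0), (1, 1))):
    """Second spatial correction: emit a sequence of wobulated subframes,
    each the reprojected frame displaced by a small offset; shown in rapid
    succession, they compose the displayed current frame."""
    return [np.roll(frame, shift=o, axis=(0, 1)) for o in offsets]

rendered = np.arange(16, dtype=float).reshape(4, 4)
subframes = wobulate(reproject(rendered, pose_delta=(1, 0)))
print(len(subframes))  # 4 wobulated pixel subframes for the current frame
```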
Abstract:
Methods are described for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate. The rendering frame rate may correspond to the minimum time required to render images associated with a pose of a head-mounted display (HMD) device. In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image by circuitry within the display.
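As a hedged sketch of the late-stage adjustment, the following code warps a pre-rendered image with a homography and also shows the cheaper pixel-offset path; the 3x3 matrix is a placeholder rather than one derived from actual HMD optics, and the function names are assumptions.

```python
import numpy as np

def apply_homography(image, H):
    """Warp `image` by the 3x3 homography H using inverse mapping with
    nearest-neighbor sampling; out-of-bounds pixels stay zero."""
    h, w = image.shape
    Hinv = np.linalg.inv(H)
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            sx, sy, sw = Hinv @ np.array([x, y, 1.0])
            sx, sy = int(round(sx / sw)), int(round(sy / sw))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = image[sy, sx]
    return out

def pixel_offset(image, dx, dy):
    """Cheaper late-stage adjustment for small pose updates: shift every
    pixel by a whole-pixel offset (edges wrap in this toy version)."""
    return np.roll(image, shift=(dy, dx), axis=(0, 1))

pre_rendered = np.arange(25, dtype=float).reshape(5, 5)
H = np.array([[1.0, 0.0, 1.0],   # pure +1 pixel shift in x, as a homography
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
# Away from the wrapped/zeroed edge column, both paths agree.
assert np.allclose(apply_homography(pre_rendered, H)[:, 1:],
                   pixel_offset(pre_rendered, 1, 0)[:, 1:])
```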
Abstract:
An audio/visual system (e.g., an entertainment console or other computing device) plays a base audio track, such as a portion of a pre-recorded song or notes from one or more instruments. Using a depth camera or other sensor, the system automatically detects that a user (or a portion of the user) has entered a first collision volume of a plurality of collision volumes, each of which is associated with a different audio stem. An audio stem may be, for example, a sound from a subset of instruments playing a song, a portion of a vocal track for a song, or notes from one or more instruments. In response to automatically detecting that the user (or a portion of the user) entered the first collision volume, the audio stem associated with the first collision volume is added to or removed from the base audio track.
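The collision-volume logic might look like the sketch below, assuming spherical volumes and a mixer that toggles a volume's stem into or out of the mix; edge-triggered entry detection (so a stem does not re-toggle every frame while the user remains inside) is omitted for brevity, and all names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CollisionVolume:
    center: tuple   # (x, y, z) center of the volume in the play space
    radius: float
    stem: str       # name of the audio stem tied to this volume

    def contains(self, point):
        return sum((p - c) ** 2 for p, c in zip(point, self.center)) <= self.radius ** 2

@dataclass
class Mixer:
    base_track: str
    active_stems: set = field(default_factory=set)

    def on_user_entered(self, point, volumes):
        """When the tracked user (or body part) enters a volume, toggle its
        stem into or out of the mix played over the base track."""
        for v in volumes:
            if v.contains(point):
                if v.stem in self.active_stems:
                    self.active_stems.discard(v.stem)  # remove from base track
                else:
                    self.active_stems.add(v.stem)      # add to base track

volumes = [CollisionVolume((0, 0, 0), 1.0, "drums"),
           CollisionVolume((3, 0, 0), 1.0, "vocals")]
mix = Mixer(base_track="song-intro")
mix.on_user_entered((0.2, 0.1, 0.0), volumes)  # user steps into the drums volume
print(mix.active_stems)  # {'drums'}
```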