Abstract:
The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed.
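The abstract leaves the geometry unspecified, but one common way to obtain a point of gaze from two per-eye gaze vectors is to treat each vector as a ray from the eye and take the midpoint of the rays' closest approach. The Python sketch below is a minimal illustration of that idea under assumed inputs (eye positions and gaze directions already expressed in the gaze detection coordinate system); the function name and the numpy-based math are illustrative assumptions, not the claimed method.

import numpy as np

def point_of_gaze(origin_l, dir_l, origin_r, dir_r):
    """Midpoint of the shortest segment between the two gaze rays.

    origin_* : assumed 3D eye positions in the gaze detection coordinate system.
    dir_*    : gaze direction vectors (need not be unit length).
    """
    d1 = dir_l / np.linalg.norm(dir_l)
    d2 = dir_r / np.linalg.norm(dir_r)
    w0 = origin_l - origin_r
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:            # rays are (nearly) parallel
        t1, t2 = 0.0, e / c
    else:
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
    p1 = origin_l + t1 * d1          # closest point on the left gaze ray
    p2 = origin_r + t2 * d2          # closest point on the right gaze ray
    return (p1 + p2) / 2.0

# Example: eyes ~6 cm apart, both converging on a point ~50 cm ahead.
left_eye  = np.array([-0.03, 0.0, 0.0])
right_eye = np.array([ 0.03, 0.0, 0.0])
target    = np.array([ 0.0, 0.0, 0.5])
print(point_of_gaze(left_eye, target - left_eye, right_eye, target - right_eye))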
Abstract:
Technology is described for providing a personalized sport performance experience with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. A physical movement recommendation is determined for a user performing a sport based on the user's skills data for the sport, physical characteristics of the user, and 3D space positions of one or more sport objects. 3D virtual data depicting one or more visual guides for assisting the user in performing the physical movement recommendation may be displayed from a user perspective associated with a display field of view of the near-eye AR display. An avatar performing a sport may also be displayed by the near-eye AR display. The avatar may perform the sport interactively with the user or may be displayed replaying a prior performance of the individual it represents.
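As a rough illustration of a "physical movement recommendation" derived from skills data and 3D object positions, the hypothetical Python sketch below picks a golf club from per-user carry distances and the ball-to-hole distance. Every name, number, and the toy wind correction are assumptions for illustration; the abstract does not specify golf or this logic.

import math

def recommend_club(ball_pos, hole_pos, carry_by_club, headwind_mps=0.0):
    """Pick the club whose typical carry best matches the remaining distance.

    ball_pos, hole_pos : (x, y, z) positions from an assumed 3D object mapping.
    carry_by_club      : hypothetical per-user skills data, e.g. {"7-iron": 140.0} in meters.
    headwind_mps       : crude, illustrative environmental adjustment.
    """
    distance = math.dist(ball_pos, hole_pos)
    adjusted = distance * (1.0 + 0.01 * headwind_mps)   # toy wind correction
    return min(carry_by_club.items(), key=lambda kv: abs(kv[1] - adjusted))[0]

skills = {"pitching wedge": 100.0, "7-iron": 140.0, "5-iron": 165.0, "driver": 210.0}
print(recommend_club((0.0, 0.0, 0.0), (120.0, 0.0, 60.0), skills, headwind_mps=3.0))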
Abstract:
A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction.
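A minimal sketch of how such context-dependent content selection might look is shown below in Python. The UserContext fields, object names, and thresholds are all hypothetical; the abstract only establishes that the selection may depend on queue state, prior experiences, and physiological feedback.

from dataclasses import dataclass

@dataclass
class UserContext:
    in_line: bool                      # waiting in line vs. on/in the attraction
    ride_count: int = 0                # previous experiences with this attraction
    heart_rate_bpm: float = 80.0       # physiological feedback from HMD sensors

def select_virtual_objects(ctx: UserContext) -> list[str]:
    """Choose which virtual objects the supplemental information provider returns."""
    if ctx.heart_rate_bpm > 140:                 # e.g. a scared child: tone things down
        return ["calming_mascot", "soft_lighting_overlay"]
    if ctx.in_line:
        # Vary queue entertainment so repeat visitors see something new.
        story_chapters = ["chapter_1", "chapter_2", "chapter_3"]
        return [story_chapters[ctx.ride_count % len(story_chapters)], "wait_time_sign"]
    return ["on_ride_character", "scene_effects"]

print(select_virtual_objects(UserContext(in_line=True, ride_count=4)))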
Abstract:
Technology is disclosed for enhancing the experience of a user wearing a see-through, near-eye, mixed reality display device. Based on an arrangement of gaze detection elements on each display optical system for each eye of the display device, a respective gaze vector is determined for each eye, and a current user focal region is determined from the gaze vectors. Virtual objects are displayed at their respective focal regions in a user field of view for a natural sight view. Additionally, one or more objects of interest to a user may be identified. The identification may be based on a user intent to interact with the object; for example, the intent may be determined based on a gaze duration. Augmented content may be projected over or next to an object, real or virtual. Further, a real or virtual object intended for interaction may be zoomed in or out.
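As an illustration of inferring intent from gaze duration, the Python sketch below accumulates dwell time on whatever object the gaze currently falls on and flags the object once a threshold is exceeded. The class, the threshold value, and the use of a monotonic clock are assumptions for illustration, not the disclosed implementation.

import time

DWELL_THRESHOLD_S = 1.5   # illustrative gaze-duration threshold

class IntentDetector:
    """Flags an object as intended for interaction after sustained gaze."""

    def __init__(self):
        self._current = None
        self._since = None

    def update(self, gazed_object_id, now=None):
        now = time.monotonic() if now is None else now
        if gazed_object_id != self._current:
            # Gaze moved to a different object (or to nothing): restart the timer.
            self._current, self._since = gazed_object_id, now
            return None
        if gazed_object_id is not None and now - self._since >= DWELL_THRESHOLD_S:
            return gazed_object_id        # intent inferred: e.g. augment or zoom it
        return None

detector = IntentDetector()
detector.update("menu_board", now=0.0)
print(detector.update("menu_board", now=2.0))   # -> "menu_board"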
Abstract:
A system and method are disclosed for recognizing and tracking a user's skeletal joints with a natural user interface (NUI) system and, further, for recognizing and tracking only some skeletal joints, such as, for example, those of a user's upper body. The system may include a limb identification engine which may use various methods to evaluate, identify, and track positions of body parts of one or more users in a scene. In examples, further processing efficiency may be achieved by segmenting the field of view into smaller zones and focusing on one zone at a time. Moreover, each zone may have its own set of predefined gestures that are recognized within it.
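The zone-based idea can be pictured with the small Python sketch below: the field of view is split into zones, and only the active zone's predefined gesture set is evaluated against the joint data. The three-zone split and the gesture names are hypothetical; the abstract does not specify how zones or gesture sets are defined.

# Each zone of the field of view has its own, smaller set of recognized gestures,
# so only that subset has to be evaluated per frame.
ZONE_GESTURES = {
    "left":   ["swipe_left", "grab"],
    "center": ["push", "wave", "grab"],
    "right":  ["swipe_right", "grab"],
}

def zone_for(x_normalized: float) -> str:
    """Map a normalized horizontal hand position (0..1) to a zone."""
    if x_normalized < 1/3:
        return "left"
    if x_normalized < 2/3:
        return "center"
    return "right"

def candidate_gestures(hand_x: float) -> list[str]:
    """Only the active zone's predefined gestures are run against the joint data."""
    return ZONE_GESTURES[zone_for(hand_x)]

print(candidate_gestures(0.8))   # -> ['swipe_right', 'grab']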
Abstract:
A system and method are disclosed for synthesizing information received from multiple audio and visual sources focused on a single scene. The system may determine the positions of the capture devices based on a common set of cues identified in their image data. Because users and objects may often move into and out of the scene, data from the multiple capture devices may be time-synchronized to ensure that the audio and visual sources are providing data of the same scene at the same time. Audio and/or visual data from the multiple sources may be reconciled and assimilated together to improve the system's ability to interpret audio and/or visual aspects of the scene.
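One simple way to time-synchronize two capture streams, assuming a clock offset has already been estimated from a common cue seen by both devices, is nearest-timestamp pairing within a tolerance. The Python sketch below illustrates that assumed approach; it is not the method claimed in the abstract.

import bisect

def synchronize(frames_a, frames_b, offset_s=0.0, tolerance_s=1/60):
    """Pair frames from two capture devices that depict the scene at the same time.

    frames_* : lists of (timestamp_seconds, frame_data) sorted by timestamp.
    offset_s : assumed clock offset of device B relative to device A, e.g.
               estimated from a common cue (a flash or sound) seen by both.
    """
    times_b = [t for t, _ in frames_b]
    pairs = []
    for t_a, data_a in frames_a:
        i = bisect.bisect_left(times_b, t_a + offset_s)
        for j in (i - 1, i):
            if 0 <= j < len(times_b) and abs(times_b[j] - (t_a + offset_s)) <= tolerance_s:
                pairs.append((data_a, frames_b[j][1]))
                break
    return pairs

a = [(0.000, "A0"), (0.033, "A1"), (0.066, "A2")]
b = [(0.012, "B0"), (0.045, "B1"), (0.079, "B2")]
print(synchronize(a, b, offset_s=0.012))   # -> [('A0','B0'), ('A1','B1'), ('A2','B2')]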
Abstract:
Technology is described for representing a physical location at a previous time period with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. The personal A/V apparatus is identified as being within the physical location, and one or more objects in a display field of view of the near-eye, augmented reality display are automatically identified based on a three dimensional mapping of objects in the physical location. User input, which may be natural user interface (NUI) input, indicates a previous time period, and one or more 3D virtual objects associated with the previous time period are displayed from a user perspective associated with the display field of view. An object may be erased from the display field of view, and a camera effect may be applied when changing between display fields of view.
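A minimal sketch of selecting era-appropriate virtual objects for the real objects currently identified in the display field of view might look like the Python below. The catalog contents, era ranges, and anchor identifiers are invented for illustration only.

from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    era: tuple[int, int]      # (start_year, end_year) the object existed
    anchor_object_id: str     # real object in the 3D mapping it attaches to

# Hypothetical content catalog for one physical location.
CATALOG = [
    VirtualObject("gas_street_lamp",  (1850, 1920), "lamp_post_3"),
    VirtualObject("horse_drawn_tram", (1870, 1905), "street_segment_1"),
    VirtualObject("neon_sign",        (1930, 1980), "storefront_7"),
]

def objects_for_period(year: int, visible_object_ids: set[str]) -> list[VirtualObject]:
    """Pick 3D virtual objects for the requested time period that anchor to
    real objects currently identified in the display field of view."""
    return [o for o in CATALOG
            if o.era[0] <= year <= o.era[1] and o.anchor_object_id in visible_object_ids]

print(objects_for_period(1890, {"lamp_post_3", "storefront_7"}))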
Abstract:
Technology is described for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display. A position volume of an event object participating in an event, expressed in a first 3D coordinate system for a first location, is received and mapped to a second position volume in a second 3D coordinate system at a second location remote from where the event is occurring. A display field of view of the near-eye AR display at the second location is determined, and real-time 3D virtual data representing the one or more event objects positioned within the display field of view is displayed in the near-eye AR display. A user may select a viewing position from which to view the event. Additionally, virtual data of a second user may be displayed at a position relative to a first user.
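The mapping and field-of-view steps can be pictured with the short Python sketch below: event-site positions are transformed into the spectator's local coordinate system and then tested against a simplified conical display field of view. The rigid transform, the scale factor, and the cone test are assumptions for illustration; the abstract does not disclose these specifics.

import numpy as np

def map_to_local(points_event, rotation, translation, scale=1.0):
    """Map an event object's position volume (Nx3 points) from the event site's
    3D coordinate system into the spectator's local 3D coordinate system."""
    return scale * (np.asarray(points_event) @ np.asarray(rotation).T) + translation

def in_display_fov(point_local, cam_pos, cam_forward, half_angle_deg=20.0):
    """True if a point falls inside a simplified conical display field of view."""
    v = point_local - cam_pos
    d = np.linalg.norm(v)
    if d == 0:
        return True
    cos_angle = np.dot(v / d, cam_forward / np.linalg.norm(cam_forward))
    return cos_angle >= np.cos(np.radians(half_angle_deg))

# Example: a 1:100 scale "tabletop" rendering of a remote event in a living room.
R = np.eye(3)                               # assume axes already aligned
player_volume = [[10.0, 0.0, 25.0]]         # meters, event coordinate system
local = map_to_local(player_volume, R, translation=np.array([0.0, 0.7, 1.0]), scale=0.01)
print(local, in_display_fov(local[0], cam_pos=np.zeros(3), cam_forward=np.array([0.0, 0.5, 1.0])))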