GAZE DETECTION IN A SEE-THROUGH, NEAR-EYE, MIXED REALITY DISPLAY
    31. Invention Application (In Force)

    Publication No.: US20130286178A1

    Publication Date: 2013-10-31

    Application No.: US13844453

    Filing Date: 2013-03-15

    IPC Classification: G06F3/01

    Abstract: The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user's eye and gaze detection elements, such as illuminators and at least one light sensor, positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows a gaze vector to be determined for each eye based on data representing glints on the user's eye, or on a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked, and a change in the boundaries of the gaze detection coordinate system may trigger a re-calibration of training data sets.
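
    Illustrative note: the abstract describes deriving a gaze vector per eye and then determining a 3D point of gaze. Below is a minimal Python sketch of that last geometric step only, under the assumption that per-eye origins and gaze directions are already available from an upstream glint-based tracker; it is not the patent's claimed method, and all names are hypothetical.

        import numpy as np

        def point_of_gaze(origin_l, dir_l, origin_r, dir_r):
            """Approximate the 3D fixation point as the midpoint of the
            shortest segment between the left and right gaze rays
            (two gaze rays rarely intersect exactly in 3D)."""
            d1 = dir_l / np.linalg.norm(dir_l)
            d2 = dir_r / np.linalg.norm(dir_r)
            w0 = origin_l - origin_r
            a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
            d, e = d1 @ w0, d2 @ w0
            denom = a * c - b * b
            if abs(denom) < 1e-9:        # near-parallel rays: no stable estimate
                return None
            s = (b * e - c * d) / denom  # parameter along the left gaze ray
            t = (a * e - b * d) / denom  # parameter along the right gaze ray
            return (origin_l + s * d1 + origin_r + t * d2) / 2.0

    A vergence-based estimate like this is one common way a point of gaze can be placed in depth among real and virtual objects in the user's field of view.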

    ENHANCING AN OBJECT OF INTEREST IN A SEE-THROUGH, MIXED REALITY DISPLAY DEVICE
    35. Invention Application (In Force)

    Publication No.: US20130050432A1

    Publication Date: 2013-02-28

    Application No.: US13221770

    Filing Date: 2011-08-30

    IPC Classification: G09G5/00; H04N13/02

    Abstract: Technology is disclosed for enhancing the experience of a user wearing a see-through, near-eye, mixed reality display device. Based on an arrangement of gaze detection elements on the display optical system for each eye of the display device, a respective gaze vector is determined for each eye, and a current user focal region is determined from the gaze vectors. Virtual objects are displayed at their respective focal regions in the user field of view for a natural view. Additionally, one or more objects of interest to the user may be identified. The identification may be based on a user's intent to interact with the object; for example, the intent may be determined from gaze duration. Augmented content may be projected over or next to an object, real or virtual, and a real or virtual object intended for interaction may be zoomed in or out.
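
    Illustrative note: the abstract mentions inferring a user's intent to interact from gaze duration. The following is a minimal Python sketch of such a dwell-time check, not the patent's method; the class name, threshold, and per-frame object-id input are all assumptions for illustration.

        import time

        class DwellDetector:
            """Flags an object as an object of interest once the gaze
            point has rested on it for longer than dwell_seconds."""

            def __init__(self, dwell_seconds=1.5):   # assumed dwell threshold
                self.dwell_seconds = dwell_seconds
                self._current = None   # object id currently under gaze
                self._since = None     # time when gaze first landed on it

            def update(self, gazed_object_id, now=None):
                """Call once per frame with the id of the object under the
                gaze point (or None). Returns the id while its gaze duration
                exceeds the threshold, otherwise None."""
                now = time.monotonic() if now is None else now
                if gazed_object_id != self._current:
                    self._current, self._since = gazed_object_id, now
                    return None
                if self._current is not None and now - self._since >= self.dwell_seconds:
                    return self._current
                return None

    Once an object is flagged this way, a renderer could project augmented content over or next to it, or zoom it in or out, as the abstract describes.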

    SYNTHESIS OF INFORMATION FROM MULTIPLE AUDIOVISUAL SOURCES
    38. Invention Application (In Force)

    Publication No.: US20110300929A1

    Publication Date: 2011-12-08

    Application No.: US12792961

    Filing Date: 2010-06-03

    IPC Classification: A63F13/00; G06K9/40; G06K9/36

    Abstract: A system and method are disclosed for synthesizing information received from multiple audio and visual sources focused on a single scene. The system may determine the positions of the capture devices based on a common set of cues identified in their image data. Because users and objects often move into and out of a scene, data from the multiple capture devices may be time-synchronized to ensure that the audio and visual sources provide data of the same scene at the same time. Audio and/or visual data from the multiple sources may be reconciled and assimilated to improve the system's ability to interpret audio and/or visual aspects of the scene.
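
    Illustrative note: the abstract calls for time-synchronizing data from multiple capture devices observing the same scene. One common way to estimate the clock offset between two devices is to cross-correlate their audio tracks; the short Python sketch below shows that approach as an illustrative stand-in, not the patent's algorithm.

        import numpy as np

        def estimate_lag_seconds(audio_a, audio_b, sample_rate):
            """Estimate how many seconds audio_a lags audio_b: positive
            means the same event appears later in audio_a. Inputs are
            1-D sample arrays recorded at the same sample_rate."""
            a = (audio_a - audio_a.mean()) / (audio_a.std() + 1e-12)
            b = (audio_b - audio_b.mean()) / (audio_b.std() + 1e-12)
            corr = np.correlate(a, b, mode="full")   # score every relative shift
            lag = np.argmax(corr) - (len(b) - 1)     # best-scoring shift in samples
            return lag / sample_rate

    Shifting one stream by the estimated lag would let the system line up audio and visual data from both devices before reconciling and assimilating them.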
