ENHANCING AN OBJECT OF INTEREST IN A SEE-THROUGH, MIXED REALITY DISPLAY DEVICE
    53.
    Invention Application
    ENHANCING AN OBJECT OF INTEREST IN A SEE-THROUGH, MIXED REALITY DISPLAY DEVICE (In Force)

    Publication Number: US20130050432A1

    Publication Date: 2013-02-28

    Application Number: US13221770

    Filing Date: 2011-08-30

    IPC Classification: G09G5/00 H04N13/02

    Abstract: Technology is disclosed for enhancing the experience of a user wearing a see-through, near-eye, mixed reality display device. Based on an arrangement of gaze detection elements on the display optical system for each eye of the display device, a respective gaze vector is determined, and a current user focal region is determined from the gaze vectors. Virtual objects are displayed at their respective focal regions in the user's field of view for a natural sight view. Additionally, one or more objects of interest to the user may be identified. The identification may be based on the user's intent to interact with the object; for example, the intent may be determined from gaze duration. Augmented content may be projected over or next to an object, real or virtual. Additionally, a real or virtual object intended for interaction may be zoomed in or out.

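The focal-region step in the abstract — deriving a single focal point from the two per-eye gaze vectors — can be sketched as a closest-point-of-approach computation between the two gaze rays. This is an illustrative assumption, not the method claimed in the patent; the function name and parameters are hypothetical:

```python
import numpy as np

def focal_region(origin_l, dir_l, origin_r, dir_r):
    """Estimate the user's focal point as the midpoint of the closest
    approach between the left-eye and right-eye gaze rays."""
    d_l = dir_l / np.linalg.norm(dir_l)
    d_r = dir_r / np.linalg.norm(dir_r)
    w0 = origin_l - origin_r
    b = d_l @ d_r            # cosine of the angle between the gaze rays
    d = d_l @ w0
    e = d_r @ w0
    denom = 1.0 - b * b      # unit directions, so |d_l|^2 = |d_r|^2 = 1
    if denom < 1e-9:         # (near-)parallel rays: gaze at infinity
        return None
    t_l = (b * e - d) / denom
    t_r = (e - b * d) / denom
    p_l = origin_l + t_l * d_l   # closest point on the left gaze ray
    p_r = origin_r + t_r * d_r   # closest point on the right gaze ray
    return (p_l + p_r) / 2.0
```

With both eyes converging on a point directly ahead, the two rays intersect and the midpoint is that intersection; when the gaze is near-parallel (focus at infinity), no finite focal region is returned.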

    SYNTHESIS OF INFORMATION FROM MULTIPLE AUDIOVISUAL SOURCES
    56.
    Invention Application
    SYNTHESIS OF INFORMATION FROM MULTIPLE AUDIOVISUAL SOURCES (In Force)

    Publication Number: US20110300929A1

    Publication Date: 2011-12-08

    Application Number: US12792961

    Filing Date: 2010-06-03

    IPC Classification: A63F13/00 G06K9/40 G06K9/36

    Abstract: A system and method are disclosed for synthesizing information received from multiple audio and visual sources focused on a single scene. The system may determine the positions of the capture devices from a common set of cues identified in their image data. Because users and objects may frequently move into and out of the scene, data from the multiple capture devices may be time-synchronized to ensure that the audio and visual sources are providing data of the same scene at the same time. Audio and/or visual data from the multiple sources may then be reconciled and assimilated to improve the system's ability to interpret audio and/or visual aspects of the scene.

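The time-synchronization step in the abstract can be sketched as nearest-timestamp frame matching across devices: each frame from a reference device is paired with the closest-in-time frame from another device, and pairs outside a tolerance window are dropped. This is a minimal illustration under assumed inputs (sorted timestamp lists), not the patent's specified algorithm:

```python
from bisect import bisect_left

def synchronize(ref_times, other_times, tolerance):
    """Pair each reference-device timestamp with the nearest timestamp
    from another capture device, dropping pairs farther apart than
    `tolerance`.  Both timestamp lists must be sorted ascending.
    Returns a list of (ref_index, other_index) pairs."""
    pairs = []
    for i, t in enumerate(ref_times):
        j = bisect_left(other_times, t)
        # Only the neighbors at positions j-1 and j can be nearest to t.
        candidates = [k for k in (j - 1, j) if 0 <= k < len(other_times)]
        if not candidates:
            continue
        best = min(candidates, key=lambda k: abs(other_times[k] - t))
        if abs(other_times[best] - t) <= tolerance:
            pairs.append((i, best))
    return pairs
```

For example, with two ~30 fps devices whose clocks differ by a couple of milliseconds, a tolerance of half a frame interval pairs corresponding frames while discarding frames one device missed.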