Integrated interactive space
    2.
    Granted invention patent
    Integrated interactive space (in force)

    Publication (announcement) number: US09584766B2

    Publication (announcement) date: 2017-02-28

    Application number: US14730061

    Application date: 2015-06-03

    CPC classification number: H04N7/157 H04N7/144

    Abstract: Techniques for implementing an integrative interactive space are described. In implementations, video cameras that are positioned to capture video at different locations are synchronized such that aspects of the different locations can be used to generate an integrated interactive space. The integrated interactive space can enable users at the different locations to interact, such as via video interaction, audio interaction, and so on. In at least some embodiments, techniques can be implemented to adjust an image of a participant during a video session such that the participant appears to maintain eye contact with other video session participants at other locations. Techniques can also be implemented to provide a virtual shared space that can enable users to interact with the space, and can also enable users to interact with one another and/or objects that are displayed in the virtual shared space.

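The abstract's first step, synchronizing cameras at different locations so their views can be combined into one integrated space, can be illustrated by pairing frames whose capture timestamps agree within a tolerance. This is an illustrative sketch only; the patent does not specify this algorithm, and the `Frame` type and tolerance value are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp_ms: int   # capture time on a shared clock
    location: str       # which site's camera produced the frame
    pixels: bytes       # stand-in for image data

def pair_synchronized(frames_a, frames_b, tolerance_ms=20):
    """Pair each frame from location A with the nearest-in-time frame
    from location B, keeping only pairs captured close enough together
    to composite into one integrated interactive space."""
    fb = sorted(frames_b, key=lambda f: f.timestamp_ms)
    pairs = []
    for fa in sorted(frames_a, key=lambda f: f.timestamp_ms):
        nearest = min(fb, key=lambda f: abs(f.timestamp_ms - fa.timestamp_ms))
        if abs(nearest.timestamp_ms - fa.timestamp_ms) <= tolerance_ms:
            pairs.append((fa, nearest))
    return pairs
```

In practice such pairing presumes a shared clock (e.g. NTP-synchronized capture timestamps); the paired frames would then feed the compositing and eye-contact-adjustment stages the abstract describes.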

    Interacting with user interface via avatar
    4.
    Granted invention patent
    Interacting with user interface via avatar (in force)

    Publication (announcement) number: US09292083B2

    Publication (announcement) date: 2016-03-22

    Application number: US14290749

    Application date: 2014-05-29

    CPC classification number: G06F3/011 G06F3/017 G06T13/40

    Abstract: Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.

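The step of "mapping a physical space in front of the person to a screen space of a display device" amounts to a normalized, clamped coordinate transform. The helper below is hypothetical (the names and the clamping choice are not from the patent) but shows the mapping the abstract describes.

```python
def map_to_screen(px, py, region, screen_w, screen_h):
    """Map a tracked position (px, py) inside a physical region in front
    of the person (region = (x0, y0, x1, y1) in depth-camera coordinates)
    to a pixel on the display.  Positions outside the region are clamped
    so the avatar's hand stays on screen."""
    x0, y0, x1, y1 = region
    u = (px - x0) / (x1 - x0)          # normalize to 0..1 horizontally
    v = (py - y0) / (y1 - y0)          # normalize to 0..1 vertically
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    return round(u * (screen_w - 1)), round(v * (screen_h - 1))
```

With this mapping, the person's motion detected in the depth data can drive the avatar toward the on-screen user interface control, as in the claimed method.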

    Mixed reality interactions
    9.
    Granted invention patent

    Publication (announcement) number: US10510190B2

    Publication (announcement) date: 2019-12-17

    Application number: US15694476

    Application date: 2017-09-01

    Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. A selected interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
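The profile-query and mode-selection steps can be sketched as a simple lookup: each physical object's profile maps interaction contexts to interaction modes, and a mode is selected programmatically from the current context. The profile contents and names below are hypothetical, invented only to illustrate the flow the abstract describes.

```python
# Hypothetical object profiles: context -> interaction mode.
PROFILES = {
    "whiteboard": {"office": "draw", "game": "score_display"},
    "table": {"office": "document_surface", "game": "game_board"},
}

def select_interaction_mode(object_id, context, default="inspect"):
    """Query the physical object's profile for its interaction modes and
    programmatically select one based on the interaction context."""
    modes = PROFILES.get(object_id, {})
    return modes.get(context, default)
```

A subsequent step would interpret user input through the selected mode (e.g. a gesture at the whiteboard becomes a "draw" virtual action) and update the associated virtual object's appearance.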
