MODELING AN OBJECT FROM IMAGE DATA
    11.
    Invention Application (In Force)

    Publication No.: US20120154618A1

    Publication Date: 2012-06-21

    Application No.: US12969427

    Filing Date: 2010-12-15

    IPC Classification: H04N5/228 G06T13/00 G06K9/00

    CPC Classification: G06T15/205

    Abstract: A method for modeling an object from image data comprises identifying, in an image from the video, a set of reference points on the object and, for each reference point identified, observing a displacement of that reference point in response to a motion of the object. The method further comprises grouping together those reference points for which a common translational or rotational motion of the object results in the observed displacement, and fitting the grouped-together reference points to a shape.

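    The grouping step above lends itself to a small illustration. Below is a minimal Python sketch of one way to group reference points whose observed displacements are explained by a single common translational/rotational (rigid) motion: it fits a 2-D rigid transform by least squares (Kabsch) and iteratively discards the point least consistent with it. The function names and the pixel tolerance are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares 2-D rigid transform (R, t) such that dst ~ src @ R.T + t (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - src_c @ R.T
    return R, t

def group_by_common_motion(points, displaced, tol=1.0):
    """Indices of reference points whose displacement is explained by one common rigid motion."""
    idx = np.arange(len(points))
    while len(idx) >= 3:
        R, t = fit_rigid_2d(points[idx], displaced[idx])
        residuals = np.linalg.norm(points[idx] @ R.T + t - displaced[idx], axis=1)
        worst = residuals.argmax()
        if residuals[worst] < tol:                 # every remaining point fits the motion
            break
        idx = np.delete(idx, worst)                # drop the least consistent point, refit
    return idx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 100, size=(20, 2))        # reference points in the first image
    theta = np.deg2rad(5.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    moved = pts @ R_true.T + np.array([3.0, -2.0]) # common rotation + translation
    moved[:5] += rng.uniform(10, 20, size=(5, 2))  # five points move with a different object
    print(group_by_common_motion(pts, moved))      # prints indices 5..19
```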

    AUTOMATIC DEPTH CAMERA AIMING
    12.
    Invention Application (In Force)

    Publication No.: US20110299728A1

    Publication Date: 2011-12-08

    Application No.: US12794388

    Filing Date: 2010-06-04

    IPC Classification: G06K9/00 H04N13/02

    Abstract: Automatic depth camera aiming is provided by a method that includes receiving, from the depth camera, one or more observed depth images of a scene. The method further includes, if a point of interest of a target is found within the scene, determining whether the point of interest is within a far range relative to the depth camera. The method further includes operating the depth camera with a far logic if the point of interest of the target is within the far range, or operating the depth camera with a near logic if it is not.

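    As a rough illustration of the far/near decision the abstract describes, here is a minimal sketch. It assumes the point of interest's depth is already measured in metres and that a single hypothetical threshold (far_range_min_m, 2.0 m here) separates the near range from the far range; the patent itself does not specify these values.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class AimingLogic(Enum):
    FAR = auto()    # e.g. settings tuned for a distant, full-body target
    NEAR = auto()   # e.g. settings tuned for a close, upper-body target

@dataclass
class AimingConfig:
    far_range_min_m: float = 2.0   # hypothetical near/far boundary, not from the patent

def choose_aiming_logic(poi_depth_m: Optional[float],
                        cfg: Optional[AimingConfig] = None) -> Optional[AimingLogic]:
    """Pick the aiming logic from the measured depth of the point of interest.

    Returns None when no point of interest was found in the scene, mirroring the
    abstract's precondition that a target must first be located.
    """
    cfg = cfg or AimingConfig()
    if poi_depth_m is None:
        return None
    return AimingLogic.FAR if poi_depth_m >= cfg.far_range_min_m else AimingLogic.NEAR

if __name__ == "__main__":
    for depth in (None, 0.9, 3.4):
        print(depth, "->", choose_aiming_logic(depth))
```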

    Gesture Shortcuts
    13.
    Invention Application (In Force)

    Publication No.: US20100306714A1

    Publication Date: 2010-12-02

    Application No.: US12474781

    Filing Date: 2009-05-29

    IPC Classification: G06F3/01

    Abstract: Systems, methods, and computer-readable media are disclosed for gesture shortcuts. A user's movement or body position is captured by a capture device of a system and is used as input to control the system. For a system-recognized gesture, there may be a full version of the gesture and a shortcut of the gesture. When the system recognizes that either the full version of the gesture or the shortcut of the gesture has been performed, it sends to a corresponding application an indication that the system-recognized gesture was observed. Where the shortcut comprises a subset of the full version of the gesture, and both the shortcut and the full version are recognized as the user performs the full version, the system recognizes that only a single performance of the gesture has occurred and indicates as much to the application.

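    The single-indication behaviour is the key detail here, so a small sketch may help. It assumes gestures can be modelled as sequences of named poses, with the shortcut as a prefix of the full version, and shows one way to emit exactly one indication per performance; the pose names and class layout are invented for the example.

```python
from typing import Iterable, List

class GestureRecognizer:
    """Recognizes a full gesture and a shortcut that is a prefix of it,
    emitting a single indication per performance even when both match."""

    def __init__(self, name: str, full: List[str], shortcut: List[str]):
        assert full[:len(shortcut)] == shortcut, "shortcut must be a prefix of the full gesture"
        self.name, self.full, self.shortcut = name, full, shortcut
        self._progress = 0          # how many poses of the full gesture have matched so far
        self._already_sent = False  # set once the indication fired for this performance

    def feed(self, pose: str) -> List[str]:
        """Consume one observed pose; return any gesture indications to send to the app."""
        events: List[str] = []
        if pose == self.full[self._progress]:
            self._progress += 1
        else:
            self._progress, self._already_sent = 0, False    # performance broken off
            return events
        if self._progress >= len(self.shortcut) and not self._already_sent:
            events.append(self.name)                         # shortcut observed: notify once
            self._already_sent = True
        if self._progress == len(self.full):                 # full version completed:
            self._progress, self._already_sent = 0, False    # no second indication
        return events

def run(recognizer: GestureRecognizer, poses: Iterable[str]) -> List[str]:
    out: List[str] = []
    for p in poses:
        out.extend(recognizer.feed(p))
    return out

if __name__ == "__main__":
    wave = GestureRecognizer("wave", full=["raise", "left", "right", "left"],
                             shortcut=["raise", "left"])
    # Full performance: the shortcut fires mid-gesture, completing the full version adds nothing.
    print(run(wave, ["raise", "left", "right", "left"]))   # -> ['wave']
    # A shortcut-only performance also yields exactly one indication.
    print(run(wave, ["raise", "left"]))                    # -> ['wave']
```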

    Shared collaboration using display device
    17.
    Invention Grant (In Force)

    Publication No.: US09063566B2

    Publication Date: 2015-06-23

    Application No.: US13308350

    Filing Date: 2011-11-30

    Abstract: Various embodiments are provided for a shared collaboration system and related methods for enabling an active user to interact with one or more additional users and with collaboration items. In one embodiment, a head-mounted display device is operatively connected to a computing device that includes a collaboration engine program. The program receives observation information of a physical space from the head-mounted display device, along with a collaboration item. The program visually augments the appearance of the physical space, as seen through the head-mounted display device, to include an active user collaboration item representation of the collaboration item. The program populates the active user collaboration item representation with additional user collaboration item input from an additional user.

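    Purely as an illustration of the moving parts named in the abstract (collaboration item, active user collaboration item representation, input from an additional user), here is a toy data-model sketch. The class names, fields, and anchoring scheme are assumptions made for the example, not the patent's design.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class CollaborationItem:
    """A shared artifact (document, sketch, model) that users collaborate on."""
    item_id: str
    content: str

@dataclass
class ItemRepresentation:
    """How the active user's head-mounted view renders a collaboration item,
    anchored at a position observed in the physical space."""
    item: CollaborationItem
    anchor_xyz: Tuple[float, float, float]
    contributions: List[str] = field(default_factory=list)

class CollaborationEngine:
    """Toy stand-in for the collaboration engine program described in the abstract."""

    def __init__(self) -> None:
        self._representations: Dict[str, ItemRepresentation] = {}

    def augment(self, observed_surface_xyz: Tuple[float, float, float],
                item: CollaborationItem) -> ItemRepresentation:
        """Place a representation of the item into the observed physical space."""
        rep = ItemRepresentation(item=item, anchor_xyz=observed_surface_xyz)
        self._representations[item.item_id] = rep
        return rep

    def populate(self, item_id: str, additional_user: str, item_input: str) -> None:
        """Fold an additional user's input into the active user's representation."""
        self._representations[item_id].contributions.append(f"{additional_user}: {item_input}")

if __name__ == "__main__":
    engine = CollaborationEngine()
    whiteboard = CollaborationItem("wb-1", "project timeline")
    rep = engine.augment(observed_surface_xyz=(0.4, 1.2, 2.0), item=whiteboard)
    engine.populate("wb-1", additional_user="user-b", item_input="move milestone to June")
    print(rep.anchor_xyz, rep.contributions)
```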

    Modeling an object from image data
    18.
    Invention Grant (In Force)

    Publication No.: US08884968B2

    Publication Date: 2014-11-11

    Application No.: US12969427

    Filing Date: 2010-12-15

    CPC Classification: G06T15/205

    Abstract: A method for modeling an object from image data comprises identifying, in an image from the video, a set of reference points on the object and, for each reference point identified, observing a displacement of that reference point in response to a motion of the object. The method further comprises grouping together those reference points for which a common translational or rotational motion of the object results in the observed displacement, and fitting the grouped-together reference points to a shape.

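    This granted patent shares its abstract with application US20120154618A1 above. Rather than repeat the grouping sketch, here is a hedged illustration of the remaining step, fitting the grouped-together reference points to a shape. The patent does not name a particular shape or fitting method, so the sketch assumes a circle and uses an algebraic (Kasa) least-squares fit.

```python
import numpy as np

def fit_circle(points: np.ndarray):
    """Algebraic (Kasa) least-squares circle fit: returns (center, radius).

    Solves x^2 + y^2 + D*x + E*y + F = 0 for D, E, F in the least-squares sense.
    """
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = np.array([-D / 2.0, -E / 2.0])
    radius = np.sqrt(center @ center - F)
    return center, radius

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    angles = rng.uniform(0, 2 * np.pi, 40)
    true_center, true_radius = np.array([10.0, -4.0]), 7.5
    pts = true_center + true_radius * np.column_stack([np.cos(angles), np.sin(angles)])
    pts += rng.normal(scale=0.05, size=pts.shape)          # small measurement noise
    center, radius = fit_circle(pts)
    print(np.round(center, 2), round(float(radius), 2))    # close to (10, -4) and 7.5
```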

    Interacting with user interface via avatar
    19.
    Invention Grant (In Force)

    Publication No.: US08749557B2

    Publication Date: 2014-06-10

    Application No.: US12814237

    Filing Date: 2010-06-11

    CPC Classification: G06F3/011 G06F3/017 G06T13/40

    Abstract: Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.
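    The mapping from the physical space in front of the person to the screen space of the display is the most self-contained step, so a small sketch follows. It assumes a fixed hypothetical interaction box centred on the person and a simple linear map with clamping; the box dimensions and coordinate conventions are illustrative, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class InteractionBox:
    """Hypothetical physical region in front of the person, in metres,
    expressed relative to the person's shoulder centre."""
    width_m: float = 0.8
    height_m: float = 0.6

def map_to_screen(hand_x_m: float, hand_y_m: float,
                  box: InteractionBox, screen_w: int, screen_h: int) -> tuple:
    """Map a hand position inside the interaction box to screen-space pixels.

    (0, 0) is the box centre; +x is the person's right, +y is up. The result is
    clamped so the avatar's hand always lands somewhere on the display.
    """
    u = (hand_x_m + box.width_m / 2) / box.width_m            # 0..1 across the box
    v = 1.0 - (hand_y_m + box.height_m / 2) / box.height_m    # screen y grows downward
    px = min(max(u, 0.0), 1.0) * (screen_w - 1)
    py = min(max(v, 0.0), 1.0) * (screen_h - 1)
    return round(px), round(py)

if __name__ == "__main__":
    box = InteractionBox()
    print(map_to_screen(0.0, 0.0, box, 1920, 1080))    # centre of box -> centre of screen
    print(map_to_screen(0.4, 0.3, box, 1920, 1080))    # top-right of box -> top-right pixel
```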

    摘要翻译: 公开了涉及通过由化身提供的反馈与用户界面交互的实施例。 一个实施例提供了一种方法,包括接收深度数据,将人定位在深度数据中,以及将人的前面的物理空间映射到显示设备的屏幕空间。 该方法还包括形成表示人物的化身的图像,向显示器输出包括交互式用户界面控制的用户界面的图像,并向显示设备输出化身的图像,使得化身面向用户界面 控制。 所述方法还包括:经由所述深度数据检测所述人的运动,基于所述人的运动形成与所述用户界面控制相互作用的所述化身的动画表示,以及输出与所述控件交互的所述化身的动画表示。