Inferring spatial object descriptions from spatial gestures

    Publication number: US09613261B2

    Publication date: 2017-04-04

    Application number: US14457048

    Filing date: 2014-08-11

    Abstract: Three-dimensional (3-D) spatial image data may be received that is associated with at least one arm motion of an actor based on free-form movements of at least one hand of the actor, based on natural gesture motions of the at least one hand. A plurality of sequential 3-D spatial representations that each include 3-D spatial map data corresponding to a 3-D posture and position of the hand at sequential instances of time during the free-form movements may be determined, based on the received 3-D spatial image data. An integrated 3-D model may be generated, via a spatial object processor, based on incrementally integrating the 3-D spatial map data included in the determined sequential 3-D spatial representations and comparing a threshold time value with model time values indicating numbers of instances of time spent by the hand occupying a plurality of 3-D spatial regions during the free-form movements.
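The incremental integration described in this abstract, in which hand positions are binned into 3-D spatial regions and per-region dwell counts are compared against a threshold, could be sketched roughly as follows. The voxel size, grid extent, and threshold value here are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Illustrative sketch (assumed parameters): hand positions sampled at
# sequential instances of time are binned into 3-D spatial regions
# (voxels), per-voxel dwell counts accumulate across frames, and voxels
# whose count meets a threshold are kept in the integrated model.
VOXEL_SIZE = 0.05          # 5 cm voxels (assumed)
GRID_SHAPE = (40, 40, 40)  # a 2 m cube of tracked space (assumed)

def integrate_frames(hand_points_per_frame, threshold=3):
    """Accumulate counts of instances of time the hand occupied each
    3-D region, then keep regions at or above the threshold."""
    counts = np.zeros(GRID_SHAPE, dtype=int)
    for points in hand_points_per_frame:            # one frame per instant
        idx = np.floor(np.asarray(points) / VOXEL_SIZE).astype(int)
        idx = np.unique(idx, axis=0)                # count each voxel once per frame
        for i, j, k in idx:
            if 0 <= i < GRID_SHAPE[0] and 0 <= j < GRID_SHAPE[1] and 0 <= k < GRID_SHAPE[2]:
                counts[i, j, k] += 1
    return counts >= threshold                      # boolean occupancy model

# A hand dwelling in one region for 4 instants passes threshold=3;
# a region visited only once does not.
frames = [[(0.12, 0.12, 0.12)]] * 4 + [[(1.0, 1.0, 1.0)]]
model = integrate_frames(frames, threshold=3)
```

Thresholding on dwell time, as the abstract describes, filters out regions the hand merely passed through, so only deliberately "traced" surfaces enter the integrated model.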

    Projectors and depth cameras for deviceless augmented reality and interaction

    Publication number: US09509981B2

    Publication date: 2016-11-29

    Application number: US14281885

    Filing date: 2014-05-19

    CPC classification number: H04N13/275 G06F3/011 G06F3/04815

    Abstract: Architecture that combines multiple depth cameras and multiple projectors to cover a specified space (e.g., a room). The cameras and projectors are calibrated, allowing the development of a multi-dimensional (e.g., 3D) model of the objects in the space, as well as the ability to project graphics in a controlled fashion on the same objects. The architecture incorporates the depth data from all depth cameras, as well as color information, into a unified multi-dimensional model in combination with calibrated projectors. In order to provide visual continuity when transferring objects between different locations in the space, the user's body can provide a canvas on which to project this interaction. As the user moves body parts in the space, without any other object, the body parts can serve as temporary “screens” for “in-transit” data.

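As a rough illustration of how calibration allows graphics to be projected onto objects in the unified model, a 3-D point can be mapped into a projector's image plane using that projector's extrinsic and intrinsic matrices. The matrix values below are assumed for illustration and are not taken from the patent:

```python
import numpy as np

# Illustrative sketch (assumed calibration values): a point from the
# unified 3-D model is transformed into a projector's coordinate frame
# by its extrinsics (R, t), then projected to pixels by its intrinsics K.
K = np.array([[800.0,   0.0, 640.0],   # assumed projector intrinsics
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # assumed extrinsic rotation
t = np.array([0.0, 0.0, 0.0])          # assumed extrinsic translation

def project_point(world_xyz):
    """Map a world-space point (meters) to projector pixel coordinates."""
    cam = R @ np.asarray(world_xyz) + t   # world frame -> projector frame
    uvw = K @ cam                          # perspective projection
    return uvw[:2] / uvw[2]                # normalize to pixel coordinates

# A point 2 m straight ahead lands at the assumed principal point.
px = project_point([0.0, 0.0, 2.0])
```

With every depth camera and projector calibrated into a common world frame, the same mapping works for any surface in the room, including a user's body serving as a temporary "screen" for "in-transit" data.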

    Non-verbal engagement of a virtual assistant

    Publication number: US11221669B2

    Publication date: 2022-01-11

    Application number: US15849160

    Filing date: 2017-12-20

    Abstract: Systems and methods related to engaging with a virtual assistant via ancillary input are provided. Ancillary input may refer to non-verbal, non-tactile input based on eye-gaze data and/or eye-gaze attributes, including but not limited to, facial recognition data, motion or gesture detection, eye-contact data, head-pose or head-position data, and the like. Thus, to initiate and/or maintain interaction with a virtual assistant, a user need not articulate an attention word or words. Rather the user may initiate and/or maintain interaction with a virtual assistant more naturally and may even include the virtual assistant in a human conversation with multiple speakers. The virtual assistant engagement system may utilize at least one machine-learning algorithm to more accurately determine whether a user desires to engage with and/or maintain interaction with a virtual assistant. Various hardware configurations associated with a virtual assistant device may allow for both near-field and/or far-field engagement.
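A minimal sketch of the engagement decision, substituting a simple hand-tuned linear score over eye-gaze and head-pose attributes for the machine-learning model the patent describes. The feature names, weights, and threshold are illustrative assumptions:

```python
# Illustrative sketch (assumed features and weights): engagement is
# decided from ancillary (non-verbal, non-tactile) inputs rather than
# an attention word. The patent describes a machine-learning model;
# this linear score merely stands in for it.

def engagement_score(gaze_angle_deg, head_yaw_deg, face_detected):
    """Score how strongly gaze and head pose point at the device."""
    if not face_detected:
        return 0.0
    gaze_term = max(0.0, 1.0 - abs(gaze_angle_deg) / 30.0)  # gaze near device
    head_term = max(0.0, 1.0 - abs(head_yaw_deg) / 60.0)    # head toward device
    return 0.7 * gaze_term + 0.3 * head_term

def should_engage(gaze_angle_deg, head_yaw_deg, face_detected, threshold=0.6):
    """Initiate or maintain interaction when the score crosses a threshold."""
    return engagement_score(gaze_angle_deg, head_yaw_deg, face_detected) >= threshold

engaged = should_engage(5.0, 10.0, True)    # user looking at the device
ignored = should_engage(45.0, 80.0, True)   # user looking well away
```

Evaluating such a score continuously is what lets a user maintain engagement through a multi-speaker conversation without repeating an attention word.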
