Multiple device sensor input based avatar

    Publication number: US11620780B2

    Publication date: 2023-04-04

    Application number: US16951339

    Filing date: 2020-11-18

    Abstract: Examples are disclosed that relate to utilizing image sensor inputs from different devices having different perspectives in physical space to construct an avatar of a first user in a video stream. The avatar comprises a three-dimensional representation of at least a portion of a face of the first user texture mapped onto a three-dimensional body simulation that follows actual physical movement of the first user. The three-dimensional body simulation of the first user is generated based on image data received from an imaging device and image sensor data received from a head-mounted display device, both associated with the first user. The three-dimensional representation of the face of the first user is generated based on the image data received from the imaging device. The resulting video stream is sent, via a communication network, to a display device associated with a second user.
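The data flow described above can be sketched minimally. In this illustrative sketch (all function and field names are assumptions, not from the patent), the body pose fuses data from both devices, while the face texture comes from the imaging device alone:

```python
from dataclasses import dataclass

@dataclass
class AvatarFrame:
    body_pose: tuple     # 3-D body simulation driven by both devices' data
    face_texture: bytes  # 3-D face representation from the imaging device only

def compose_frame(imaging: dict, hmd: dict) -> AvatarFrame:
    """Compose one avatar frame from two device perspectives.

    The abstract specifies only *which* inputs feed each part; the
    element-wise average used here is a placeholder for the unspecified
    sensor-fusion step.
    """
    body_pose = tuple((a + b) / 2 for a, b in zip(imaging["pose"], hmd["pose"]))
    return AvatarFrame(body_pose=body_pose, face_texture=imaging["face"])
```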

    Wearable behavior-based vision system (invention granted, in force)

    Publication number: US09395543B2

    Publication date: 2016-07-19

    Application number: US13740165

    Filing date: 2013-01-12

    Abstract: A see-through display apparatus includes a see-through, head-mounted display and sensors on the display that detect audible and visual data in the field of view of the apparatus. A processor cooperates with the display to provide information to a wearer of the device using a behavior-based real-object mapping system. At least a global zone and an egocentric behavioral zone relative to the apparatus are established, and real objects are assigned behaviors mapped to the respective zones they occupy. The behaviors assigned to the objects can be used by applications that provide services to the wearer, with the behaviors serving as the basis for evaluating the type of feedback to provide in the apparatus.


    Location-based entity selection using gaze tracking

    Publication number: US11429186B2

    Publication date: 2022-08-30

    Application number: US16951940

    Filing date: 2020-11-18

    Abstract: One example provides a computing device comprising instructions executable to receive information regarding one or more entities in a scene, to receive a plurality of eye tracking samples, each eye tracking sample corresponding to a gaze direction of a user, and, based at least on the eye tracking samples, to determine a time-dependent attention value for each entity of the one or more entities at different locations in a use environment, the time-dependent attention value determined using a leaky integrator. The instructions are further executable to receive a user input indicating an intent to perform a location-dependent action, associate the user input with a selected entity based at least upon the time-dependent attention value for each entity, and perform the location-dependent action based at least upon a location of the selected entity.
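A leaky integrator is a standard construct: each gaze sample that hits an entity adds to its attention value, and the value decays ("leaks") exponentially between samples, so recent attention outweighs stale attention. A minimal sketch, assuming one integrator per entity and a decay rate chosen for illustration:

```python
import math

class LeakyIntegrator:
    """Time-dependent attention value for one entity (illustrative sketch)."""

    def __init__(self, leak_rate: float = 0.5):
        self.leak_rate = leak_rate  # decay constant per second (assumed value)
        self.value = 0.0
        self.last_t = 0.0

    def update(self, t: float, hit: bool) -> float:
        """Advance to time t; hit=True if this gaze sample lands on the entity."""
        dt = t - self.last_t
        self.value *= math.exp(-self.leak_rate * dt)  # leak since last sample
        if hit:
            self.value += 1.0  # integrate the new gaze sample
        self.last_t = t
        return self.value
```

At the moment of the user input, the entity with the highest current value would be the one associated with the location-dependent action.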

    Radial selection by vestibulo-ocular reflex fixation (invention granted, in force)

    Publication number: US09552060B2

    Publication date: 2017-01-24

    Application number: US14166778

    Filing date: 2014-01-28

    Abstract: Methods are described for enabling hands-free selection of objects within an augmented reality environment. In some embodiments, an object may be selected by an end user of a head-mounted display device (HMD) based on detecting a vestibulo-ocular reflex (VOR) in the end user's eyes while the end user gazes at the object and performs a particular head movement for selecting the object. The selected object may comprise a real object or a virtual object. The end user may select the object by gazing at it for a first time period and then performing a particular head movement during which the VOR is detected for one or both of the end user's eyes. In one embodiment, the particular head movement may involve the end user moving their head away from the direction of the object at a particular head speed while continuing to gaze at the object.
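The signature of a VOR fixation is that the head rotates while the eyes counter-rotate, so the world-space gaze direction stays nearly fixed on the target. A minimal detector sketch, assuming 1-D yaw angles in degrees and illustrative thresholds (the patent's actual detection criteria are not specified in the abstract):

```python
def vor_fixation(gaze_yaws, head_yaws,
                 gaze_tol_deg: float = 2.0, head_min_deg: float = 10.0) -> bool:
    """Flag a vestibulo-ocular-reflex fixation over a window of samples.

    gaze_yaws: world-space gaze yaw per sample (stays put during a VOR)
    head_yaws: head yaw per sample (sweeps during the selection movement)
    Returns True when the gaze spread stays within gaze_tol_deg while the
    head sweeps at least head_min_deg.
    """
    gaze_spread = max(gaze_yaws) - min(gaze_yaws)
    head_sweep = abs(head_yaws[-1] - head_yaws[0])
    return gaze_spread <= gaze_tol_deg and head_sweep >= head_min_deg
```

If the eyes move together with the head (an ordinary head turn, not a VOR), the gaze spread exceeds the tolerance and no selection is triggered.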
