-
Publication number: US11620780B2
Publication date: 2023-04-04
Application number: US16951339
Filing date: 2020-11-18
Abstract: Examples are disclosed that relate to utilizing image sensor inputs from different devices having different perspectives in physical space to construct an avatar of a first user in a video stream. The avatar comprises a three-dimensional representation of at least a portion of a face of the first user texture mapped onto a three-dimensional body simulation that follows the actual physical movement of the first user. The three-dimensional body simulation of the first user is generated based on image data received from an imaging device and image sensor data received from a head-mounted display device, both associated with the first user. The three-dimensional representation of the face of the first user is generated based on the image data received from the imaging device. The resulting video stream is sent, via a communication network, to a display device associated with a second user.
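For illustration, a minimal Python sketch of the data flow described in this abstract; every class, function, and field name here is a hypothetical placeholder, not the patented implementation.

```python
# Minimal sketch of the described avatar pipeline; all names are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Frame:
    rgb: bytes          # image data from the external imaging device
    hmd_pose: tuple     # head pose reported by the head-mounted display

def build_body_simulation(frame: Frame) -> dict:
    # Fuse external camera imagery with HMD sensor data so the simulated
    # body follows the first user's actual physical movement.
    return {"joints": [], "head_pose": frame.hmd_pose}

def build_face_mesh(frame: Frame) -> dict:
    # Reconstruct a 3D representation of (part of) the user's face
    # from the imaging-device data alone.
    return {"vertices": [], "texture": frame.rgb}

def compose_avatar_frame(frame: Frame) -> dict:
    # Texture-map the face representation onto the body simulation.
    return {"body": build_body_simulation(frame), "face": build_face_mesh(frame)}

def stream_to_peer(avatar_frame: dict, peer: str) -> None:
    # Stand-in for sending the rendered video stream over a network
    # to the second user's display device.
    print(f"sending avatar frame to {peer}")

if __name__ == "__main__":
    stream_to_peer(compose_avatar_frame(Frame(rgb=b"", hmd_pose=(0, 0, 0))), "peer-display")
```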
-
Publication number: US11340707B2
Publication date: 2022-05-24
Application number: US16888562
Filing date: 2020-05-29
Inventors: Julia Schwarz, Michael Harley Notter, Jenny Kam, Sheng Kai Tang, Kenneth Mitchell Jakubzak, Adam Edwin Behringer, Amy Mun Hong, Joshua Kyle Neff, Sophie Stellmach, Mathew J. Lamb, Nicholas Ferianc Kamuda
Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides, on a display device, a method comprising receiving hand tracking data representing a pose of a hand in a coordinate system, recognizing a hand gesture based on the hand tracking data, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device, and sending an instruction to one or more other display devices to present the emoji.
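For illustration, a minimal sketch of the described flow, assuming a simple lookup from recognized gestures to emojis; the gesture labels, mapping, and the present/send_instruction helpers are hypothetical.

```python
# Minimal sketch of the gesture-to-emoji flow; labels, mapping, and helpers are hypothetical.
GESTURE_TO_EMOJI = {
    "thumbs_up": "👍",
    "open_palm": "👋",
    "heart_hands": "❤️",
}

def recognize_gesture(hand_tracking_data: dict):
    # Placeholder classifier: a real system would evaluate the hand pose
    # (joint positions in a coordinate system) against known gestures.
    return hand_tracking_data.get("gesture_label")

def handle_hand_tracking(hand_tracking_data: dict, local_display, peer_displays):
    gesture = recognize_gesture(hand_tracking_data)
    emoji = GESTURE_TO_EMOJI.get(gesture)
    if emoji is None:
        return
    local_display.present(emoji)                    # show the emoji locally
    for peer in peer_displays:                      # instruct other displays to show it too
        peer.send_instruction({"type": "present_emoji", "emoji": emoji})
```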
-
Publication number: US09395543B2
Publication date: 2016-07-19
Application number: US13740165
Filing date: 2013-01-12
CPC classification: G02B27/017, G01C21/206, G02B27/0093, G02B2027/0138, G02B2027/014, G02B2027/0178, G06F3/012
Abstract: A see-through display apparatus includes a see-through, head-mounted display and sensors on the display which detect audible and visual data in a field of view of the apparatus. A processor cooperates with the display to provide information to a wearer of the device using a behavior-based real-object mapping system. At least a global zone and an egocentric behavioral zone relative to the apparatus are established, and real objects are assigned behaviors that are mapped to the respective zones occupied by the objects. The behaviors assigned to the objects can be used by applications that provide services to the wearer, with the behaviors serving as the basis for evaluating the type of feedback to provide in the apparatus.
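For illustration, a minimal sketch of mapping behaviors to zone-occupying objects, assuming a distance-based egocentric zone; the zone radius and behavior names are illustrative, not from the patent.

```python
# Minimal sketch of zone-based behavior mapping; radii and behavior names are illustrative.
import math

GLOBAL_BEHAVIORS = {"landmark": "show_navigation_hint"}
EGOCENTRIC_BEHAVIORS = {"obstacle": "warn_wearer", "person": "show_name_tag"}

EGOCENTRIC_RADIUS_M = 2.0  # assumed extent of the egocentric zone around the wearer

def behaviors_for_object(obj_type: str, obj_position, wearer_position):
    """Return the behaviors mapped to the zone the real object currently occupies."""
    distance = math.dist(obj_position, wearer_position)
    behaviors = []
    if obj_type in GLOBAL_BEHAVIORS:
        behaviors.append(GLOBAL_BEHAVIORS[obj_type])          # global zone applies everywhere
    if distance <= EGOCENTRIC_RADIUS_M and obj_type in EGOCENTRIC_BEHAVIORS:
        behaviors.append(EGOCENTRIC_BEHAVIORS[obj_type])      # egocentric zone is wearer-relative
    return behaviors

print(behaviors_for_object("person", (1.0, 0.5, 0.0), (0.0, 0.0, 0.0)))  # -> ['show_name_tag']
```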
-
Publication number: US11960790B2
Publication date: 2024-04-16
Application number: US17332092
Filing date: 2021-05-27
Inventors: Austin S. Lee, Jonathan Kyle Palmer, Anthony James Ambrus, Mathew J. Lamb, Sheng Kai Tang, Sophie Stellmach
IPC classification: G06F3/04817, G06F3/01, G06F3/04815, G06F3/0484, G06F3/16
CPC classification: G06F3/167, G06F3/013, G06F3/017, G06F3/04815, G06F3/0484
Abstract: A computer-implemented method includes detecting user interaction with content displayed in a mixed reality system. User focus is determined as a function of the user interaction using a spatial intent model. A length of time for extending voice engagement with the mixed reality system is modified based on the determined user focus. Detecting user interaction with the displayed content may include tracking eye movements to determine objects in the displayed content at which the user is looking and determining a context of a user dialog during the voice engagement.
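For illustration, a minimal sketch of extending the voice-engagement window from a focus estimate; the focus score range, base duration, and maximum are illustrative assumptions.

```python
# Minimal sketch of focus-dependent voice-engagement extension; durations are assumed values.
BASE_ENGAGEMENT_S = 8.0
MAX_ENGAGEMENT_S = 30.0

def engagement_window(focus_score: float) -> float:
    """Map a 0..1 focus score (derived from eye tracking and dialog context
    via a spatial intent model) to how long voice engagement stays open."""
    focus_score = max(0.0, min(1.0, focus_score))
    return BASE_ENGAGEMENT_S + focus_score * (MAX_ENGAGEMENT_S - BASE_ENGAGEMENT_S)

# If the user is looking at the object under discussion, keep listening longer.
print(engagement_window(0.9))   # ~27.8 s
print(engagement_window(0.1))   # ~10.2 s
```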
-
Publication number: US09524081B2
Publication date: 2016-12-20
Application number: US14687877
Filing date: 2015-04-15
Inventors: Brian E. Keane, Ben J. Sugden, Robert L. Crocco, Jr., Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Mathew J. Lamb, Alex Aben-Athar Kipman
IPC classification: G06F3/0483, G10L21/10, G06F3/01, G06T19/00, G06F3/14, G10L15/26, G10L25/03, G09B5/06, G03H1/22
CPC classification: G06F3/0483, G03H1/2294, G03H2001/2284, G03H2227/02, G03H2270/55, G06F3/011, G06F3/1415, G06T19/006, G09B5/062, G10L15/26, G10L21/10, G10L25/03, G10L2021/105
Abstract: A system is described for generating and displaying holographic visual aids associated with a story to an end user of a head-mounted display device while the end user is reading the story or perceiving the story being read aloud. The story may be embodied within a reading object (e.g., a book) in which words of the story may be displayed to the end user. The holographic visual aids may include a predefined character animation that is synchronized to the portion of the story corresponding with the character being animated. A reading pace of a portion of the story may be used to control the playback speed of the predefined character animation in real time such that the character is perceived to be lip-syncing the story being read aloud. In some cases, an existing book without predetermined AR tags may be augmented with holographic visual aids.
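For illustration, a minimal sketch of pacing the character animation to an observed reading pace; the nominal pace and clamping bounds are illustrative assumptions.

```python
# Minimal sketch of pacing a character animation to the reader; constants are assumed values.
NOMINAL_WORDS_PER_SECOND = 2.5   # pace the animation was authored for

def playback_speed(observed_words_per_second: float) -> float:
    """Scale animation playback so the character appears to lip-sync the
    story at the pace it is actually being read."""
    speed = observed_words_per_second / NOMINAL_WORDS_PER_SECOND
    return max(0.5, min(2.0, speed))   # clamp to keep the animation plausible

print(playback_speed(3.0))   # faster reader -> 1.2x playback
print(playback_speed(1.0))   # slower reader -> 0.5x (clamped from 0.4)
```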
-
Publication number: US11755122B2
Publication date: 2023-09-12
Application number: US17664520
Filing date: 2022-05-23
Inventors: Julia Schwarz, Michael Harley Notter, Jenny Kam, Sheng Kai Tang, Kenneth Mitchell Jakubzak, Adam Edwin Behringer, Amy Mun Hong, Joshua Kyle Neff, Sophie Stellmach, Mathew J. Lamb, Nicholas Ferianc Kamuda
CPC classification: G06F3/017, G06F3/012, G06F3/013, G06F3/167, G06T13/40, G06V20/20, G06V40/107, G06V40/28
Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides, on a display device, a method comprising receiving hand tracking data representing a pose of a hand in a coordinate system, recognizing a hand gesture based on the hand tracking data, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device, and sending an instruction to one or more other display devices to present the emoji.
-
Publication number: US11429186B2
Publication date: 2022-08-30
Application number: US16951940
Filing date: 2020-11-18
Inventors: Austin S. Lee, Mathew J. Lamb, Anthony James Ambrus, Amy Mun Hong, Jonathan Palmer, Sophie Stellmach
Abstract: One example provides a computing device comprising instructions executable to receive information regarding one or more entities in the scene, to receive a plurality of eye tracking samples, each eye tracking sample corresponding to a gaze direction of a user, and, based at least on the eye tracking samples, to determine a time-dependent attention value for each entity of the one or more entities at different locations in a use environment, the time-dependent attention value determined using a leaky integrator. The instructions are further executable to receive a user input indicating an intent to perform a location-dependent action, associate the user input with a selected entity based at least upon the time-dependent attention value for each entity, and perform the location-dependent action based at least upon a location of the selected entity.
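For illustration, a minimal sketch of a per-entity leaky integrator over gaze samples; the exponential-decay update and time constant are assumptions, not the patented formulation.

```python
# Minimal sketch of per-entity attention via a leaky integrator; the update form is assumed.
import math

class AttentionTracker:
    def __init__(self, tau_s: float = 2.0):
        self.tau_s = tau_s              # decay time constant of the leaky integrator
        self.attention = {}             # entity id -> time-dependent attention value

    def update(self, gazed_entity, dt_s: float, entities):
        decay = math.exp(-dt_s / self.tau_s)
        for entity in entities:
            value = self.attention.get(entity, 0.0) * decay   # leak toward zero
            if entity == gazed_entity:
                value += dt_s                                  # integrate gaze on the target
            self.attention[entity] = value

    def most_attended(self):
        # Entity to associate with a location-dependent user input.
        return max(self.attention, key=self.attention.get) if self.attention else None

tracker = AttentionTracker()
for _ in range(20):                                  # twenty ~30 ms gaze samples on "lamp"
    tracker.update("lamp", dt_s=0.033, entities=["lamp", "table"])
print(tracker.most_attended())                       # -> "lamp"
```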
-
Publication number: US09552060B2
Publication date: 2017-01-24
Application number: US14166778
Filing date: 2014-01-28
Inventors: Anthony J. Ambrus, Adam G. Poulos, Lewey Alec Geselowitz, Dan Kroymann, Arthur C. Tomlin, Roger Sebastian-Kevin Sylvan, Mathew J. Lamb, Brian J. Mount
IPC classification: G09G5/00, G06F3/01, G06F3/0484, G06F3/0482
CPC classification: G06F3/013, G06F3/012, G06F3/0482, G06F3/04842
Abstract: Methods are described for enabling hands-free selection of objects within an augmented reality environment. In some embodiments, an object may be selected by an end user of a head-mounted display device (HMD) based on detecting a vestibulo-ocular reflex (VOR) in the end user's eyes while the end user is gazing at the object and performing a particular head movement for selecting the object. The object selected may comprise a real object or a virtual object. The end user may select the object by gazing at the object for a first time period and then performing a particular head movement during which the VOR is detected for one or both of the end user's eyes. In one embodiment, the particular head movement may involve the end user moving their head away from the direction of the object at a particular head speed while gazing at the object.
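For illustration, a minimal sketch of the selection condition described in this abstract; the dwell time, head-speed threshold, and gaze tolerance are illustrative assumptions.

```python
# Minimal sketch of VOR-based selection; all thresholds are assumed values.
DWELL_S = 1.0                 # gaze at the object for a first time period
HEAD_SPEED_THRESH_DEG_S = 30  # head must move at a particular speed
GAZE_TOLERANCE_DEG = 2.0      # gaze stays on the object (VOR keeps the eyes locked)

def vor_selection(gaze_dwell_s: float,
                  head_speed_deg_s: float,
                  gaze_error_deg: float) -> bool:
    """True when the head moves away from the object while the
    vestibulo-ocular reflex keeps the gaze pinned to it."""
    return (gaze_dwell_s >= DWELL_S
            and head_speed_deg_s >= HEAD_SPEED_THRESH_DEG_S
            and gaze_error_deg <= GAZE_TOLERANCE_DEG)

print(vor_selection(1.2, 45.0, 1.0))   # head turns, gaze holds -> select (True)
print(vor_selection(1.2, 45.0, 8.0))   # gaze slips off with the head -> False
```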