-
Publication No.: US11340707B2
Publication Date: 2022-05-24
Application No.: US16888562
Filing Date: 2020-05-29
Applicant: Microsoft Technology Licensing, LLC
Inventor: Julia Schwarz, Michael Harley Notter, Jenny Kam, Sheng Kai Tang, Kenneth Mitchell Jakubzak, Adam Edwin Behringer, Amy Mun Hong, Joshua Kyle Neff, Sophie Stellmach, Mathew J. Lamb, Nicholas Ferianc Kamuda
Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides a method on a display device, the method comprising: receiving hand tracking data representing a pose of a hand in a coordinate system; recognizing a hand gesture based on the hand tracking data; identifying an emoji corresponding to the hand gesture; presenting the emoji on the display device; and sending an instruction to one or more other display devices to present the emoji.
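The flow in this abstract — recognize a gesture, map it to an emoji, present it locally, and instruct peer displays to present the same emoji — can be sketched as follows. This is an illustrative sketch only: the gesture names, the emoji table, and the instruction format are assumptions, not the patent's actual implementation.

```python
# Hypothetical gesture-to-emoji table; the real mapping is not given in the abstract.
GESTURE_TO_EMOJI = {
    "thumbs_up": "\U0001F44D",   # thumbs-up emoji
    "heart_hands": "\U0001FAF6", # heart-hands emoji
    "wave": "\U0001F44B",        # waving-hand emoji
}

def presentation_instructions(recognized_gesture, other_device_ids):
    """Map a recognized hand gesture to an emoji and build the instructions
    that tell each other display device to present the same emoji."""
    emoji = GESTURE_TO_EMOJI.get(recognized_gesture)
    if emoji is None:
        return None, []  # unrecognized gesture: present nothing, send nothing
    return emoji, [(device_id, emoji) for device_id in other_device_ids]
```

In use, the local device would render the first return value and transmit the (device, emoji) pairs to the other displays.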
-
Publication No.: US20150149288A1
Publication Date: 2015-05-28
Application No.: US14612169
Filing Date: 2015-02-02
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventor: Cesare John Saretto, Peter Tobias Kinnebrew, Nicholas Ferianc Kamuda, Henry Hooper Somuah, Matthew John McCloskey, Douglas C. Hebenthal, Kathleen P. Mulcahy
CPC classification number: G06F17/30867, G06Q30/02, G06Q30/0201, G06Q30/0259, G06Q30/0261, G06Q30/0269, G06Q50/01, H04N21/2668, H04N21/44222, H04N21/6582, H04W4/023, H04W4/185, H04W4/21
Abstract: A system automatically and continuously finds and aggregates the most relevant and current information about the people and things that a user cares about. The information gathering is based on current context (e.g., where the user is, what the user is doing, what the user is saying/typing, etc.). The result of the context-based information gathering is presented ubiquitously on user interfaces of any of the various physical devices operated by the user.
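The context-based gathering this abstract describes can be sketched as a scoring-and-ranking step: items of gathered information are scored against the user's current context and the best matches are surfaced. The scoring weights, field names, and tag scheme below are assumptions for illustration, not the patented method.

```python
def relevance(item, context):
    """Score one gathered item against the user's current context
    (location, activity, and keywords from what the user is saying/typing)."""
    score = 0
    if item.get("location") == context["location"]:
        score += 2  # assumed weight: being at the same place matters most
    if item.get("activity") == context["activity"]:
        score += 1
    score += len(set(item.get("tags", ())) & context["keywords"])
    return score

def aggregate(items, context, top_n=3):
    """Return the most context-relevant items, best first, dropping
    anything with no contextual match at all."""
    relevant = [it for it in items if relevance(it, context) > 0]
    return sorted(relevant, key=lambda it: relevance(it, context), reverse=True)[:top_n]
```

The ranked result would then be rendered on whichever of the user's devices is currently in use.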
-
Publication No.: US11703994B2
Publication Date: 2023-07-18
Application No.: US17661087
Filing Date: 2022-04-28
Applicant: Microsoft Technology Licensing, LLC
Inventor: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok
IPC: G06F3/04815, G06F3/01, G02B27/00, G02B27/01
CPC classification number: G06F3/04815, G02B27/0093, G02B27/017, G06F3/017, G02B2027/0178
Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
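The mode logic in this abstract — far mode when control points exceed a threshold distance, and a trigger input that brings a proxy interaction object within reach — can be sketched as below. The threshold value, the proxy placement rule, and the position format are assumptions, not values from the patent.

```python
import math

REACH_THRESHOLD = 1.5  # meters; an assumed value for illustration

def interaction_mode(user_pos, control_points, threshold=REACH_THRESHOLD):
    """Invoke the far interaction mode when one or more of the virtual
    object's control points lie beyond the threshold distance."""
    if any(math.dist(user_pos, cp) > threshold for cp in control_points):
        return "far"
    return "near"

def on_trigger_input(mode, user_pos, threshold=REACH_THRESHOLD):
    """In far mode, a trigger input invokes near mode and places a virtual
    interaction object within reach (here: 60% of threshold, straight ahead)."""
    if mode != "far":
        return mode, None
    proxy_pos = (user_pos[0], user_pos[1], user_pos[2] + 0.6 * threshold)
    return "near", proxy_pos
```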
-
Publication No.: US11294472B2
Publication Date: 2022-04-05
Application No.: US16365114
Filing Date: 2019-03-26
Applicant: Microsoft Technology Licensing, LLC
Inventor: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Chuan Qin, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Joshua Kyle Neff, Jamie Bryant Kirschenbaum, Neil Richard Kronlage
IPC: G06F3/01, G06N3/02, G06F3/04815
Abstract: A method for augmenting a two-stage hand gesture input comprises receiving hand tracking data for a hand of a user. A gesture recognition machine recognizes that the user has performed a first-stage gesture based on one or more parameters derived from the received hand tracking data satisfying first-stage gesture criteria. An affordance cueing a second-stage gesture is provided to the user responsive to recognizing the first-stage gesture. The gesture recognition machine recognizes that the user has performed the second-stage gesture based on one or more parameters derived from the received hand tracking data satisfying second-stage gesture criteria. A graphical user interface element is displayed responsive to recognizing the second-stage gesture.
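The two-stage flow described above maps naturally onto a small state machine: satisfying the first-stage criteria shows an affordance cueing the second-stage gesture, and satisfying the second-stage criteria then displays the GUI element. The sketch below assumes the criteria are simple predicates on tracking parameters; the real criteria are not specified in the abstract.

```python
def make_two_stage_recognizer(first_stage_ok, second_stage_ok):
    """Build a minimal recognizer: idle -> (first-stage criteria met) ->
    affordance cueing the second-stage gesture -> (second-stage criteria
    met) -> GUI element displayed."""
    state = {"stage": "idle", "show_affordance": False, "show_ui": False}

    def step(params):
        if state["stage"] == "idle" and first_stage_ok(params):
            state["stage"] = "cueing"
            state["show_affordance"] = True   # cue the second-stage gesture
        elif state["stage"] == "cueing" and second_stage_ok(params):
            state["stage"] = "done"
            state["show_affordance"] = False
            state["show_ui"] = True           # display the GUI element
        return dict(state)

    return step
```

Each call to `step` feeds one frame of parameters derived from the hand tracking data.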
-
Publication No.: US11277652B2
Publication Date: 2022-03-15
Application No.: US16921878
Filing Date: 2020-07-06
Applicant: Microsoft Technology Licensing, LLC
Inventor: Cesare John Saretto, Peter Tobias Kinnebrew, Nicholas Ferianc Kamuda, Henry Hooper Somuah, Matthew John McCloskey, Douglas C. Hebenthal, Kathleen P. Mulcahy
IPC: H04N21/2668, H04W4/21, G06F16/9535, G06Q50/00, H04N21/442, H04N21/658, H04W4/02, H04W4/18, G06Q30/02
Abstract: A system finds and aggregates the most relevant and current information about the people and things that a user cares about. The information gathering is based on current context (e.g., where the user is, what the user is doing, what the user is saying/typing, etc.). The result of the context-based information gathering is presented ubiquitously on user interfaces of any of the various physical devices operated by the user.
-
Publication No.: US10735796B2
Publication Date: 2020-08-04
Application No.: US15976769
Filing Date: 2018-05-10
Applicant: Microsoft Technology Licensing, LLC
Inventor: Cesare John Saretto, Peter Tobias Kinnebrew, Nicholas Ferianc Kamuda, Henry Hooper Somuah, Matthew John McCloskey, Douglas C. Hebenthal, Kathleen P. Mulcahy
IPC: H04N21/2668, G06F16/9535, G06Q50/00, H04W4/02, H04W4/21, H04N21/442, H04N21/658, H04W4/18, G06Q30/02
Abstract: A system finds and aggregates the most relevant and current information about the people and things that a user cares about. The information gathering is based on current context (e.g., where the user is, what the user is doing, what the user is saying/typing, etc.). The result of the context-based information gathering is presented ubiquitously on user interfaces of any of the various physical devices operated by the user.
-
Publication No.: US20170244996A1
Publication Date: 2017-08-24
Application No.: US15590986
Filing Date: 2017-05-09
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventor: Cesare John Saretto, Peter Tobias Kinnebrew, Nicholas Ferianc Kamuda, Henry Hooper Somuah, Matthew John McCloskey, Douglas C. Hebenthal, Kathleen P. Mulcahy
IPC: H04N21/2668, H04N21/442, G06Q30/02, G06F17/30, H04W4/18, H04W4/20, H04W4/02, H04N21/658, G06Q50/00
CPC classification number: G06F17/30867, G06Q30/02, G06Q30/0201, G06Q30/0259, G06Q30/0261, G06Q30/0269, G06Q50/01, H04N21/2668, H04N21/44222, H04N21/6582, H04W4/023, H04W4/185, H04W4/21
Abstract: A system automatically and continuously finds and aggregates the most relevant and current information about the people and things that a user cares about. The information gathering is based on current context (e.g., where the user is, what the user is doing, what the user is saying/typing, etc.). The result of the context-based information gathering is presented ubiquitously on user interfaces of any of the various physical devices operated by the user.
-
Publication No.: US09230368B2
Publication Date: 2016-01-05
Application No.: US13901342
Filing Date: 2013-05-23
Applicant: Microsoft Technology Licensing, LLC
Inventor: Brian E. Keane, Ben J. Sugden, Robert L. Crocco, Jr., Daniel Deptford, Tom G. Salter, Laura K. Massey, Alex Aben-Athar Kipman, Peter Tobias Kinnebrew, Nicholas Ferianc Kamuda
IPC: G06T19/00, G06F3/0487, G06F3/01, G06F3/0481
CPC classification number: G06T19/006, G06F3/011, G06F3/04815, G06F3/0487
Abstract: A system and method are disclosed for displaying virtual objects in a mixed reality environment in a way that is optimal and comfortable for a user interacting with them. When the user is moving through the mixed reality environment, the virtual objects may remain world-locked, so that the user can move around and explore the virtual objects from different perspectives. When the user is motionless in the mixed reality environment, the virtual objects may rotate to face the user so that the user can easily view and interact with them.
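The two behaviors in this abstract — world-locked while the user moves, rotating to face the user when still — reduce to a simple per-frame update. The sketch below assumes y-up coordinates with +z as the zero-yaw forward direction and a plain dict for object state; none of this detail comes from the patent.

```python
import math

def facing_yaw(object_pos, user_pos):
    """Yaw (radians, about the vertical axis) that turns the object to face
    the user; +z is taken as the zero-yaw forward direction."""
    dx = user_pos[0] - object_pos[0]
    dz = user_pos[2] - object_pos[2]
    return math.atan2(dx, dz)

def update_virtual_object(obj, user_pos, user_is_moving):
    """World-locked while the user moves through the environment; when the
    user is motionless, rotate the object to face the user."""
    if not user_is_moving:
        obj["yaw"] = facing_yaw(obj["pos"], user_pos)
    return obj
```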
-
Publication No.: US11755122B2
Publication Date: 2023-09-12
Application No.: US17664520
Filing Date: 2022-05-23
Applicant: Microsoft Technology Licensing, LLC
Inventor: Julia Schwarz, Michael Harley Notter, Jenny Kam, Sheng Kai Tang, Kenneth Mitchell Jakubzak, Adam Edwin Behringer, Amy Mun Hong, Joshua Kyle Neff, Sophie Stellmach, Mathew J. Lamb, Nicholas Ferianc Kamuda
CPC classification number: G06F3/017, G06F3/012, G06F3/013, G06F3/167, G06T13/40, G06V20/20, G06V40/107, G06V40/28
Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides a method on a display device, the method comprising: receiving hand tracking data representing a pose of a hand in a coordinate system; recognizing a hand gesture based on the hand tracking data; identifying an emoji corresponding to the hand gesture; presenting the emoji on the display device; and sending an instruction to one or more other display devices to present the emoji.
-
Publication No.: US11107265B2
Publication Date: 2021-08-31
Application No.: US16299052
Filing Date: 2019-03-11
Applicant: Microsoft Technology Licensing, LLC
Inventor: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
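The targeting geometry in this abstract — a ray cast from the hand along the inferred arm-joint-to-hand line, with a hit when it intersects a control point — can be sketched with basic vector math. The tolerance radius and the choice of shoulder as the inferred joint are assumptions for illustration.

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cast_hand_ray(inferred_shoulder, hand_pos):
    """Cast a ray from the hand, directed along the line from the inferred
    arm joint (here the shoulder) through the hand."""
    direction = _normalize(tuple(h - s for s, h in zip(inferred_shoulder, hand_pos)))
    return hand_pos, direction

def ray_targets_point(origin, direction, control_point, radius=0.05):
    """True if the ray passes within `radius` meters of a control point,
    i.e. the virtual object owning that point should indicate targeting."""
    to_point = tuple(p - o for o, p in zip(origin, control_point))
    t = sum(a * b for a, b in zip(to_point, direction))
    if t < 0:
        return False  # control point lies behind the ray origin
    closest = tuple(o + t * d for o, d in zip(origin, direction))
    miss = math.sqrt(sum((c - p) ** 2 for c, p in zip(closest, control_point)))
    return miss <= radius
```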