-
Publication No.: US11960790B2
Publication Date: 2024-04-16
Application No.: US17332092
Filing Date: 2021-05-27
Applicant: Microsoft Technology Licensing, LLC
Inventor: Austin S. Lee , Jonathan Kyle Palmer , Anthony James Ambrus , Mathew J. Lamb , Sheng Kai Tang , Sophie Stellmach
IPC: G06F3/04817 , G06F3/01 , G06F3/04815 , G06F3/0484 , G06F3/16
CPC classification number: G06F3/167 , G06F3/013 , G06F3/017 , G06F3/04815 , G06F3/0484
Abstract: A computer-implemented method includes detecting user interaction with mixed reality displayed content in a mixed reality system. User focus is determined from the user interaction using a spatial intent model. A length of time for extending voice engagement with the mixed reality system is modified based on the determined user focus. Detecting user interaction with the displayed content may include tracking eye movements to determine objects in the displayed content at which the user is looking and determining a context of a user dialog during the voice engagement.
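The abstract does not say how the spatial intent model maps focus onto the engagement window; the following is a minimal Python sketch of one way it could look, assuming hypothetical names (focus_score, engagement_timeout) and a toy heuristic in place of the actual model.

    def focus_score(gazed_object, dialog_context):
        """Toy heuristic standing in for the spatial intent model."""
        return 1.0 if gazed_object in dialog_context.get("referenced_objects", []) else 0.2

    def engagement_timeout(base_timeout_s, gazed_object, dialog_context, max_extension_s=10.0):
        """Scale the voice-engagement window by the determined user focus."""
        return base_timeout_s + focus_score(gazed_object, dialog_context) * max_extension_s

    # The user keeps looking at the hologram they were just talking about,
    # so the system keeps listening longer before ending voice engagement.
    context = {"referenced_objects": ["hologram_menu"]}
    print(engagement_timeout(5.0, "hologram_menu", context))  # 15.0 seconds
    print(engagement_timeout(5.0, "wall_clock", context))     # 7.0 seconds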
-
Publication No.: US11270672B1
Publication Date: 2022-03-08
Application No.: US17087435
Filing Date: 2020-11-02
Applicant: Microsoft Technology Licensing, LLC
Inventor: Austin S. Lee , Anthony James Ambrus , Mathew Julian Lamb , Sophie Stellmach , Keiichi Matsuda
Abstract: Examples are disclosed herein relating to displaying a virtual assistant. One example provides an augmented reality display device comprising a see-through display, a logic subsystem, and a storage subsystem storing instructions executable by the logic subsystem to display via the see-through display a virtual assistant associated with a location in a real-world environment, detect a change in a field of view of the see-through display, and when the virtual assistant is out of the field of view of the see-through display after the change in the field of view, display the virtual assistant in a virtual window on the see-through display.
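A minimal sketch of the field-of-view check and fallback described above, using hypothetical names (Camera, in_field_of_view, render_assistant) and a simple yaw-only field-of-view test rather than the device's real view frustum:

    from dataclasses import dataclass

    @dataclass
    class Camera:
        yaw_deg: float          # current viewing direction of the display
        fov_deg: float = 52.0   # assumed horizontal field of view

    def in_field_of_view(camera, assistant_yaw_deg):
        # Wrap the angular difference into [-180, 180) before comparing.
        delta = (assistant_yaw_deg - camera.yaw_deg + 180) % 360 - 180
        return abs(delta) <= camera.fov_deg / 2

    def render_assistant(camera, assistant_yaw_deg):
        if in_field_of_view(camera, assistant_yaw_deg):
            return "assistant rendered at its world-locked location"
        return "assistant rendered inside a virtual window on the display"

    print(render_assistant(Camera(yaw_deg=0.0), assistant_yaw_deg=10.0))
    print(render_assistant(Camera(yaw_deg=0.0), assistant_yaw_deg=90.0))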
-
Publication No.: US12216832B2
Publication Date: 2025-02-04
Application No.: US18463906
Filing Date: 2023-09-08
Applicant: Microsoft Technology Licensing, LLC
Inventor: Julia Schwarz , Bugra Tekin , Sophie Stellmach , Erian Vazquez , Casey Leon Meekhof , Fabian Gobel
Abstract: A method for evaluating gesture input comprises receiving input data for sequential data frames, including hand tracking data for hands of a user. A first neural network is trained to recognize features indicative of subsequent gesture interactions and configured to evaluate input data for a sequence of data frames and to output an indication of a likelihood of the user performing gesture interactions during a predetermined window of data frames. A second neural network is trained to recognize features indicative of whether the user is currently performing one or more gesture interactions and configured to adjust parameters for gesture interaction recognition during the predetermined window based on the indicated likelihood. The second neural network evaluates the predetermined window for performed gesture interactions based on the adjusted parameters, and outputs a signal as to whether the user is performing one or more gesture interactions during the predetermined window.
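The two-network arrangement can be illustrated with plain functions standing in for the trained models; the names (predict_gesture_likelihood, recognize_gestures, evaluate_window) and the threshold-adjustment rule below are assumptions made for illustration, not the patented parameters.

    def predict_gesture_likelihood(frame_window):
        """Stand-in for the first network: likelihood of an upcoming gesture."""
        return sum(f["pinch_strength"] for f in frame_window) / len(frame_window)

    def recognize_gestures(frame_window, threshold):
        """Stand-in for the second network: per-frame gesture decisions."""
        return [f["pinch_strength"] >= threshold for f in frame_window]

    def evaluate_window(frame_window, base_threshold=0.8):
        likelihood = predict_gesture_likelihood(frame_window)
        # Loosen the recognition threshold for this window when a gesture looks likely.
        threshold = base_threshold - 0.3 * likelihood
        return any(recognize_gestures(frame_window, threshold))

    frames = [{"pinch_strength": s} for s in (0.5, 0.6, 0.7)]
    print(evaluate_window(frames))  # True: the adjusted threshold admits the pinch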
-
Publication No.: US11620000B1
Publication Date: 2023-04-04
Application No.: US17710940
Filing Date: 2022-03-31
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventor: Sophie Stellmach , Julia Schwarz , Erian Vazquez , Kristian Jose Davila , Thomas Matthew Gable , Adam Behringer
Abstract: The techniques disclosed herein provide systems that can control the invocation of a precision input mode. A system can initially utilize a first input device, such as a head-mounted display device monitoring the eye gaze direction of a user, to control the location of an input target. When one or more predetermined input gestures are detected, the system can invoke a precision mode that transitions control of the input target from the first input device to a second input device. The second input device can utilize a different input modality, such as a sensor detecting one or more hand gestures of the user. The predetermined input gestures can include a fixation input gesture, voice commands, or other gestures that may involve the user's hands or head. By controlling the invocation of the precision input mode using specific gestures, a system can mitigate device coordination issues.
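A minimal sketch of the mode hand-off, with hypothetical gesture names ("fixation", "voice_precision_command", "release") and an InputTargetController class invented for illustration:

    class InputTargetController:
        def __init__(self):
            self.precision_mode = False

        def on_gesture(self, gesture):
            if gesture in ("fixation", "voice_precision_command"):
                self.precision_mode = True   # hand control over to the hand sensor
            elif gesture == "release":
                self.precision_mode = False  # return control to eye gaze

        def target_position(self, gaze_point, hand_point):
            return hand_point if self.precision_mode else gaze_point

    ctrl = InputTargetController()
    print(ctrl.target_position((0.4, 0.5), (0.42, 0.51)))  # gaze-driven
    ctrl.on_gesture("fixation")
    print(ctrl.target_position((0.4, 0.5), (0.42, 0.51)))  # hand-driven, precision mode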
-
Publication No.: US11567633B2
Publication Date: 2023-01-31
Application No.: US17170696
Filing Date: 2021-02-08
Applicant: Microsoft Technology Licensing, LLC
Inventor: Julia Schwarz , Andrew D. Wilson , Sophie Stellmach , Erian Vazquez , Kristian Jose Davila , Adam Edwin Behringer , Jonathan Palmer , Jason Michael Ray , Mathew Julian Lamb
IPC: G06F3/0482 , G06F3/01 , G06F3/04815 , G06F3/16 , G06F3/0346
Abstract: A computer-implemented method for determining focus of a user is provided. User input is received. An intention image of a scene including a plurality of interactive objects is generated. The intention image includes pixels encoded with intention values determined based on the user input. An intention value indicates a likelihood that the user intends to focus on the pixel. An intention score is determined for each interactive object based on the intention values of pixels that correspond to the interactive object. An interactive object of the plurality of interactive objects is determined to be a focused object that has the user's focus based on the intention scores of the plurality of interactive objects.
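One way to picture the intention image and per-object scoring is the NumPy sketch below; the Gaussian fall-off around a gaze point and the names intention_image and focused_object are assumptions for illustration, not the encoding the patent claims.

    import numpy as np

    def intention_image(shape, gaze_xy, sigma=20.0):
        """Toy intention encoding: Gaussian fall-off around the gaze point."""
        ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
        d2 = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2
        return np.exp(-d2 / (2 * sigma ** 2))

    def focused_object(image, object_masks):
        """Return the object whose pixels carry the highest mean intention value."""
        scores = {name: image[mask].mean() for name, mask in object_masks.items()}
        return max(scores, key=scores.get), scores

    img = intention_image((100, 100), gaze_xy=(30, 40))
    masks = {
        "button_a": np.zeros((100, 100), bool),
        "button_b": np.zeros((100, 100), bool),
    }
    masks["button_a"][30:50, 20:40] = True   # near the gaze point
    masks["button_b"][70:90, 70:90] = True   # far from the gaze point
    print(focused_object(img, masks)[0])     # button_a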
-
Publication No.: US11340707B2
Publication Date: 2022-05-24
Application No.: US16888562
Filing Date: 2020-05-29
Applicant: Microsoft Technology Licensing, LLC
Inventor: Julia Schwarz , Michael Harley Notter , Jenny Kam , Sheng Kai Tang , Kenneth Mitchell Jakubzak , Adam Edwin Behringer , Amy Mun Hong , Joshua Kyle Neff , Sophie Stellmach , Mathew J. Lamb , Nicholas Ferianc Kamuda
Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides, on a display device, a method comprising receiving hand tracking data representing a pose of a hand in a coordinate system, based on the hand tracking data, recognizing a hand gesture, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device, and sending an instruction to one or more other display devices to present the emoji.
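A minimal sketch of the gesture-to-emoji mapping and the broadcast to other devices, with a hypothetical GESTURE_TO_EMOJI table, a stand-in recognizer, and a plain list standing in for a remote display:

    GESTURE_TO_EMOJI = {
        "thumbs_up": "👍",
        "heart_hands": "❤️",
        "ok_sign": "👌",
    }

    def recognize_gesture(hand_pose):
        """Stand-in for gesture recognition over hand tracking data."""
        return "thumbs_up" if hand_pose.get("thumb_extended") else "ok_sign"

    def present_emoji(hand_pose, other_devices):
        emoji = GESTURE_TO_EMOJI[recognize_gesture(hand_pose)]
        print(f"local display: {emoji}")
        for device in other_devices:
            device.append(emoji)      # stand-in for an instruction sent over the network
        return emoji

    remote_display = []
    present_emoji({"thumb_extended": True}, [remote_display])
    print(remote_display)             # ['👍']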
-
Publication No.: US11461955B2
Publication Date: 2022-10-04
Application No.: US17445704
Filing Date: 2021-08-23
Applicant: Microsoft Technology Licensing, LLC
Inventor: Sheng Kai Tang , Julia Schwarz , Jason Michael Ray , Sophie Stellmach , Thomas Matthew Gable , Casey Leon Meekhof , Nahil Tawfik Sharkasi , Nicholas Ferianc Kamuda , Ramiro S. Torres , Kevin John Appel , Jamie Bryant Kirschenbaum
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
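The ray cast and control-point test can be sketched with basic vector math; the shoulder position standing in for the inferred arm joint, the hit radius, and the names cast_hand_ray and targeted_points are assumptions for illustration.

    import numpy as np

    def cast_hand_ray(shoulder, hand):
        """Cast a ray from the hand along the shoulder-to-hand direction."""
        origin = np.asarray(hand, float)
        direction = origin - np.asarray(shoulder, float)
        return origin, direction / np.linalg.norm(direction)

    def targeted_points(origin, direction, control_points, radius=0.05):
        """Control points within `radius` metres of the ray count as targeted."""
        hits = []
        for name, p in control_points.items():
            v = np.asarray(p, float) - origin
            t = max(v @ direction, 0.0)              # closest point along the ray
            if np.linalg.norm(v - t * direction) <= radius:
                hits.append(name)
        return hits

    origin, direction = cast_hand_ray(shoulder=(0, 1.4, 0), hand=(0.2, 1.2, 0.4))
    points = {"slider_handle": (0.5, 0.9, 1.0), "corner_grab": (1.0, 1.4, -0.5)}
    print(targeted_points(origin, direction, points))  # ['slider_handle']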
-
Publication No.: US10890967B2
Publication Date: 2021-01-12
Application No.: US16030234
Filing Date: 2018-07-09
Applicant: Microsoft Technology Licensing, LLC
Inventor: Sophie Stellmach , Sheng Kai Tang , Casey Leon Meekhof , Julia Schwarz , Nahil Tawfik Sharkasi , Thomas Matthew Gable
IPC: G09G5/00 , G06F3/01 , G02B27/01 , G06F3/0481 , G06K9/00
Abstract: A method for improving user interaction with a virtual environment includes presenting the virtual environment to a user on a display, measuring a gaze location of a user's gaze relative to the virtual environment, casting an input ray from an input device, measuring an input ray location at a distal point of the input ray, and snapping a presented ray location to the gaze location when the input ray location is within a snap threshold distance of the gaze location.
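A minimal sketch of the snap rule, assuming a simple Euclidean distance test and the hypothetical name presented_ray_location:

    import math

    def presented_ray_location(input_ray_end, gaze_location, snap_threshold=0.15):
        """Snap the displayed ray endpoint to the gaze point when close enough."""
        distance = math.dist(input_ray_end, gaze_location)
        return gaze_location if distance <= snap_threshold else input_ray_end

    gaze = (0.50, 1.20, 2.00)
    print(presented_ray_location((0.55, 1.25, 2.05), gaze))  # snapped to gaze
    print(presented_ray_location((1.50, 1.20, 2.00), gaze))  # left unsnapped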
-
Publication No.: US10831030B2
Publication Date: 2020-11-10
Application No.: US15958632
Filing Date: 2018-04-20
Applicant: Microsoft Technology Licensing, LLC
Inventor: Sophie Stellmach
Abstract: A method for improving visual interaction with a virtual environment includes measuring a position of a user's gaze relative to a virtual element, presenting a visual cue when the user's gaze overlaps the virtual element, and guiding the user's gaze toward an origin of the virtual element with the visual cue.
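A minimal sketch of the overlap test and a cue that walks from the gaze point toward the element's origin; the bounds representation and the names gaze_overlaps and cue_position are assumptions for illustration.

    def gaze_overlaps(gaze_xy, element_bounds):
        x0, y0, x1, y1 = element_bounds
        return x0 <= gaze_xy[0] <= x1 and y0 <= gaze_xy[1] <= y1

    def cue_position(gaze_xy, element_origin, progress):
        """Interpolate the visual cue from the gaze point toward the element origin."""
        return tuple(g + progress * (o - g) for g, o in zip(gaze_xy, element_origin))

    gaze = (0.6, 0.4)
    element = {"bounds": (0.5, 0.3, 0.9, 0.7), "origin": (0.7, 0.5)}
    if gaze_overlaps(gaze, element["bounds"]):
        for p in (0.0, 0.5, 1.0):           # cue steps that guide the gaze inward
            print(cue_position(gaze, element["origin"], p))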
-
Publication No.: US20200225736A1
Publication Date: 2020-07-16
Application No.: US16297237
Filing Date: 2019-03-08
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventor: Julia Schwarz , Sheng Kai Tang , Casey Leon Meekhof , Nahil Tawfik Sharkasi , Sophie Stellmach
Abstract: Systems and methods are provided for selectively enabling or disabling control rays in mixed-reality environments. In some instances, a mixed-reality display device presents a mixed-reality environment to a user which includes one or more holograms. The display device then detects a user gesture input associated with a user control (which may include a part of the user's body) during presentation of the mixed-reality environment. In response to detecting the user gesture, the display device selectively generates and displays a corresponding control ray as a hologram rendered by the display device extending away from the user control within the mixed-reality environment. Gestures may also be detected for selectively disabling control rays so that they are no longer rendered.
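A minimal sketch of enabling and disabling a control ray from gestures; the gesture names ("point_and_hold", "palm_flip") and the ControlRay class are assumptions for illustration, not the gestures the application specifies.

    class ControlRay:
        def __init__(self):
            self.enabled = False

        def on_gesture(self, gesture):
            if gesture == "point_and_hold":
                self.enabled = True
            elif gesture == "palm_flip":
                self.enabled = False

        def render(self, hand_position, hand_forward):
            if not self.enabled:
                return None               # ray hologram not rendered
            return {"origin": hand_position, "direction": hand_forward}

    ray = ControlRay()
    ray.on_gesture("point_and_hold")
    print(ray.render((0.2, 1.1, 0.3), (0.0, 0.0, 1.0)))  # ray extends from the hand
    ray.on_gesture("palm_flip")
    print(ray.render((0.2, 1.1, 0.3), (0.0, 0.0, 1.0)))  # None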