-
Publication Number: US11768544B2
Publication Date: 2023-09-26
Application Number: US17649659
Filing Date: 2022-02-01
Applicant: Microsoft Technology Licensing, LLC
Inventor: Julia Schwarz , Bugra Tekin , Sophie Stellmach , Erian Vazquez , Casey Leon Meekhof , Fabian Gobel
Abstract: A method for evaluating gesture input comprises receiving input data for sequential data frames, including hand tracking data for hands of a user. A first neural network is trained to recognize features indicative of subsequent gesture interactions and configured to evaluate input data for a sequence of data frames and to output an indication of a likelihood of the user performing gesture interactions during a predetermined window of data frames. A second neural network is trained to recognize features indicative of whether the user is currently performing one or more gesture interactions and configured to adjust parameters for gesture interaction recognition during the predetermined window based on the indicated likelihood. The second neural network evaluates the predetermined window for performed gesture interactions based on the adjusted parameters, and outputs a signal as to whether the user is performing one or more gesture interactions during the predetermined window.
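As a rough illustration of the two-stage evaluation described above, the sketch below gates a single recognition threshold on the first network's anticipation score; the Frame layout, the anticipation_net/recognition_net callables, and the threshold rule are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

Frame = dict  # per-frame input data, including hand tracking data for the user's hands


@dataclass
class GestureDecision:
    gesture: str      # e.g. "pinch" or "air_tap" (assumed labels)
    performed: bool   # whether the gesture is being performed in the window


def evaluate_window(frames: Sequence[Frame],
                    anticipation_net: Callable[[Sequence[Frame]], float],
                    recognition_net: Callable[[Sequence[Frame], float], GestureDecision],
                    base_threshold: float = 0.8) -> GestureDecision:
    # First network: likelihood that the user performs a gesture interaction
    # during the predetermined window of data frames.
    likelihood = anticipation_net(frames)

    # Adjust a recognition parameter for the window based on that likelihood;
    # here a single detection threshold is relaxed when a gesture is anticipated.
    threshold = base_threshold * (1.0 - 0.5 * likelihood)

    # Second network evaluates the window with the adjusted parameter and
    # signals whether one or more gesture interactions are being performed.
    return recognition_net(frames, threshold)
```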
-
Publication Number: US11755122B2
Publication Date: 2023-09-12
Application Number: US17664520
Filing Date: 2022-05-23
Applicant: Microsoft Technology Licensing, LLC
Inventor: Julia Schwarz , Michael Harley Notter , Jenny Kam , Sheng Kai Tang , Kenneth Mitchell Jakubzak , Adam Edwin Behringer , Amy Mun Hong , Joshua Kyle Neff , Sophie Stellmach , Mathew J. Lamb , Nicholas Ferianc Kamuda
CPC classification number: G06F3/017 , G06F3/012 , G06F3/013 , G06F3/167 , G06T13/40 , G06V20/20 , G06V40/107 , G06V40/28
Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides, on a display device, a method comprising receiving hand tracking data representing a pose of a hand in a coordinate system, based on the hand tracking data, recognizing a hand gesture, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device, and sending an instruction to one or more other display devices to present the emoji.
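A minimal sketch of the gesture-to-emoji flow described above; the gesture recognizer, the emoji mapping, and the display/peer callbacks are hypothetical placeholders rather than components named in the patent.

```python
from typing import Callable, Optional

# Assumed mapping from recognized hand gestures to emoji; the abstract does not
# specify particular pairs.
GESTURE_TO_EMOJI = {
    "thumbs_up": "\U0001F44D",
    "wave": "\U0001F44B",
    "clap": "\U0001F44F",
}


def handle_hand_pose(hand_tracking_data: dict,
                     recognize_gesture: Callable[[dict], Optional[str]],
                     present_emoji: Callable[[str], None],
                     send_to_peers: Callable[[str], None]) -> None:
    gesture = recognize_gesture(hand_tracking_data)  # hand pose -> gesture label
    emoji = GESTURE_TO_EMOJI.get(gesture)
    if emoji is None:
        return
    present_emoji(emoji)   # present the emoji on the local display device
    send_to_peers(emoji)   # instruct other display devices to present it
```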
-
Publication Number: US11429186B2
Publication Date: 2022-08-30
Application Number: US16951940
Filing Date: 2020-11-18
Applicant: Microsoft Technology Licensing, LLC
Inventor: Austin S. Lee , Mathew J. Lamb , Anthony James Ambrus , Amy Mun Hong , Jonathan Palmer , Sophie Stellmach
Abstract: One example provides a computing device comprising instructions executable to receive information regarding one or more entities in the scene, to receive a plurality of eye tracking samples, each eye tracking sample corresponding to a gaze direction of a user, and, based at least on the eye tracking samples, determine a time-dependent attention value for each entity of the one or more entities at different locations in a use environment, the time-dependent attention value determined using a leaky integrator. The instructions are further executable to receive a user input indicating an intent to perform a location-dependent action, associate the user input with a selected entity based at least upon the time-dependent attention value for each entity, and perform the location-dependent action based at least upon a location of the selected entity.
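The leaky integrator mentioned in the abstract can be illustrated with a short sketch in which each eye tracking sample adds to the gazed entity's attention value while all values decay exponentially; the decay constant tau and the unit contribution per sample are assumptions.

```python
import math
from collections import defaultdict


def update_attention(attention, gazed_entity, dt, tau=0.5):
    """Decay all attention values, then credit the entity hit by the new gaze sample."""
    decay = math.exp(-dt / tau)          # leaky-integrator decay over the sample interval
    for entity in list(attention):
        attention[entity] *= decay
    if gazed_entity is not None:
        attention[gazed_entity] += 1.0   # contribution of the current eye tracking sample
    return attention


def select_entity(attention):
    """Associate a location-dependent command with the most-attended entity."""
    return max(attention, key=attention.get) if attention else None


# Example: two samples on a lamp, then one on a door, 0.2 s apart.
attention = defaultdict(float)
for entity in ("lamp", "lamp", "door"):
    update_attention(attention, entity, dt=0.2)
print(select_entity(attention))  # -> "lamp"
```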
-
Publication Number: US11107265B2
Publication Date: 2021-08-31
Application Number: US16299052
Filing Date: 2019-03-11
Applicant: Microsoft Technology Licensing, LLC
Inventor: Sheng Kai Tang , Julia Schwarz , Jason Michael Ray , Sophie Stellmach , Thomas Matthew Gable , Casey Leon Meekhof , Nahil Tawfik Sharkasi , Nicholas Ferianc Kamuda , Ramiro S. Torres , Kevin John Appel , Jamie Bryant Kirschenbaum
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
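A small sketch of the hand-ray targeting idea: the ray originates at the hand and points along the line from the inferred arm joint through the hand, and a control point counts as targeted when the ray passes within a small radius of it. The NumPy representation and the radius value are illustrative assumptions.

```python
import numpy as np


def cast_hand_ray(joint_pos: np.ndarray, hand_pos: np.ndarray):
    """Ray cast from the hand along the line from the inferred arm joint through the hand."""
    direction = hand_pos - joint_pos
    direction = direction / np.linalg.norm(direction)
    return hand_pos, direction                         # ray origin, unit direction


def targets_control_point(origin, direction, control_point, radius=0.02):
    """True if the ray passes within `radius` of a virtual object's control point."""
    to_point = control_point - origin
    t = max(float(np.dot(to_point, direction)), 0.0)   # closest approach along the ray
    closest = origin + t * direction
    return float(np.linalg.norm(control_point - closest)) <= radius


# Example: shoulder behind and below the hand, control point straight ahead of it.
origin, direction = cast_hand_ray(np.array([0.0, -0.3, -0.4]), np.array([0.0, 0.0, 0.0]))
print(targets_control_point(origin, direction, np.array([0.0, 0.6, 0.8])))  # True
```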
-
Publication Number: US10852816B2
Publication Date: 2020-12-01
Application Number: US15958708
Filing Date: 2018-04-20
Applicant: Microsoft Technology Licensing, LLC
Inventor: Sophie Stellmach
IPC: G06F3/01 , G02B27/01 , G02B27/00 , G06F3/0485
Abstract: A method for improving user interaction with a virtual environment includes presenting a virtual environment to a user, measuring a first position of the user's gaze relative to the virtual environment, receiving a magnification input, and changing a magnification of the virtual environment centered on the first position and based on the magnification input.
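The gaze-centered magnification amounts to scaling view coordinates about the measured gaze position, so the gazed-at point stays fixed while the rest of the view scales around it; the 2D tuple representation below is an assumption for illustration.

```python
def magnify_about_gaze(point, gaze, zoom):
    """Map a view-space point under a magnification change centered on the gaze position."""
    px, py = point
    gx, gy = gaze
    return (gx + (px - gx) * zoom, gy + (py - gy) * zoom)


# The gazed-at point is unchanged; other points move away from it as magnification grows.
assert magnify_about_gaze((0.5, 0.5), gaze=(0.5, 0.5), zoom=2.0) == (0.5, 0.5)
assert magnify_about_gaze((0.75, 0.5), gaze=(0.5, 0.5), zoom=2.0) == (1.0, 0.5)
```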
-
Publication Number: US10831265B2
Publication Date: 2020-11-10
Application Number: US15958686
Filing Date: 2018-04-20
Applicant: Microsoft Technology Licensing, LLC
Inventor: Sophie Stellmach , Casey Leon Meekhof , Julia Schwarz
IPC: G06F3/01
Abstract: A method for improving user interaction with a virtual environment includes measuring a first position of a user's gaze relative to a virtual element, selecting the virtual element in the virtual environment at an origin when the user's gaze overlaps the virtual element, measuring a second position of the user's gaze relative to the virtual element, presenting a visual placeholder at the second position when it is beyond a threshold distance from the origin, and moving the visual placeholder relative to a destination using a secondary input device.
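A minimal sketch of the placeholder flow described above, assuming 2D coordinates and an arbitrary threshold; the PlacementState class and nudge_placeholder helper are hypothetical names, not part of the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Point = Tuple[float, float]


@dataclass
class PlacementState:
    origin: Point                        # where the virtual element was selected
    placeholder: Optional[Point] = None  # visual placeholder, once presented


def update_placement(state: PlacementState, gaze_pos: Point,
                     threshold: float = 0.15) -> PlacementState:
    """Present the placeholder at the gaze position once it moves past the threshold distance."""
    dx = gaze_pos[0] - state.origin[0]
    dy = gaze_pos[1] - state.origin[1]
    if state.placeholder is None and (dx * dx + dy * dy) ** 0.5 > threshold:
        state.placeholder = gaze_pos
    return state


def nudge_placeholder(state: PlacementState, delta: Point) -> PlacementState:
    """Refine the placeholder toward its destination using a secondary input device."""
    if state.placeholder is not None:
        state.placeholder = (state.placeholder[0] + delta[0],
                             state.placeholder[1] + delta[1])
    return state
```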