-
Publication No.: US11320911B2
Publication Date: 2022-05-03
Application No.: US16296833
Filing Date: 2019-03-08
Applicant: Microsoft Technology Licensing, LLC
Inventor: Julia Schwarz , Jason Michael Ray , Casey Leon Meekhof
IPC: G06F3/0482 , G06F3/01 , G06T19/00
Abstract: Systems and methods are provided for detecting user-object interaction in mixed-reality environments. A mixed-reality system detects a controller gesture with an associated controller orientation in the mixed-reality environment. The mixed-reality system then determines an interaction region for the controller gesture and identifies one or more virtual objects within the interaction region. The virtual objects each have an associated orientation affinity. Subsequently, the mixed-reality system determines an orientation similarity score between the controller orientation and the orientation affinity for each virtual object within the interaction region. In response to determining that at least one orientation similarity score exceeds a predetermined threshold, the mixed-reality system executes an interaction between the controller and the virtual object that has the greatest orientation similarity score.
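The selection step the abstract describes can be sketched in a few lines. This is an illustrative reading only: the vectors, object names, cosine-similarity metric, and threshold value below are all assumptions for demonstration, not details taken from the patent.

```python
import math

def orientation_similarity(controller_dir, affinity_dir):
    """Cosine similarity between two 3D direction vectors."""
    dot = sum(c * a for c, a in zip(controller_dir, affinity_dir))
    norm = math.sqrt(sum(c * c for c in controller_dir)) * \
           math.sqrt(sum(a * a for a in affinity_dir))
    return dot / norm

def select_target(controller_dir, objects, threshold=0.8):
    """Return the in-region object whose orientation affinity best matches
    the controller orientation, if any similarity score exceeds threshold."""
    scored = [(orientation_similarity(controller_dir, obj["affinity"]), obj)
              for obj in objects]
    best_score, best_obj = max(scored, key=lambda pair: pair[0])
    return best_obj if best_score > threshold else None

# Hypothetical virtual objects already identified within the interaction region
objects = [
    {"name": "button", "affinity": (0.0, 0.0, 1.0)},  # faces the user
    {"name": "slider", "affinity": (0.0, 1.0, 0.0)},  # faces upward
]
target = select_target((0.1, 0.0, 0.99), objects)  # selects the "button"
```

A cosine score cleanly captures "the controller is oriented the way this object expects to be engaged", which is one plausible reading of the abstract's orientation similarity score.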
-
Publication No.: US11107265B2
Publication Date: 2021-08-31
Application No.: US16299052
Filing Date: 2019-03-11
Applicant: Microsoft Technology Licensing, LLC
Inventor: Sheng Kai Tang , Julia Schwarz , Jason Michael Ray , Sophie Stellmach , Thomas Matthew Gable , Casey Leon Meekhof , Nahil Tawfik Sharkasi , Nicholas Ferianc Kamuda , Ramiro S. Torres , Kevin John Appel , Jamie Bryant Kirschenbaum
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
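The hand-ray targeting described above can be sketched as follows. All positions, the tolerance radius, and the point-vs-sphere intersection test are illustrative assumptions; the patent describes the geometry only at the level of the abstract.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cast_ray(joint_pos, hand_pos):
    """Ray originates at the hand; its direction is inferred from the
    arm-joint position toward the hand position."""
    direction = normalize(tuple(h - j for h, j in zip(hand_pos, joint_pos)))
    return hand_pos, direction

def ray_hits_control_point(origin, direction, point, tolerance=0.05):
    """True if the ray passes within `tolerance` meters of a control point."""
    to_point = tuple(p - o for p, o in zip(point, origin))
    t = sum(d * v for d, v in zip(direction, to_point))
    if t < 0:
        return False  # control point is behind the ray origin
    closest = tuple(o + d * t for o, d in zip(origin, direction))
    dist = math.sqrt(sum((c - p) ** 2 for c, p in zip(closest, point)))
    return dist <= tolerance

# Hypothetical inferred elbow joint and tracked hand position (meters)
origin, direction = cast_ray(joint_pos=(0.0, 1.2, 0.0), hand_pos=(0.0, 1.2, 0.5))
targeted = ray_hits_control_point(origin, direction, point=(0.02, 1.2, 2.0))
```

Anchoring the ray direction to an inferred arm joint, rather than to raw fingertip orientation, tends to stabilize distant pointing, which is consistent with the abstract's use of both the joint and hand positions.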
-
Publication No.: US11567633B2
Publication Date: 2023-01-31
Application No.: US17170696
Filing Date: 2021-02-08
Applicant: Microsoft Technology Licensing, LLC
Inventor: Julia Schwarz , Andrew D. Wilson , Sophie Stellmach , Erian Vazquez , Kristian Jose Davila , Adam Edwin Behringer , Jonathan Palmer , Jason Michael Ray , Mathew Julian Lamb
IPC: G06F3/0482 , G06F3/01 , G06F3/04815 , G06F3/16 , G06F3/0346
Abstract: A computer-implemented method for determining focus of a user is provided. User input is received. An intention image of a scene including a plurality of interactive objects is generated. The intention image includes pixels encoded with intention values determined based on the user input. An intention value indicates a likelihood that the user intends to focus on the pixel. An intention score is determined for each interactive object based on the intention values of pixels that correspond to the interactive object. An interactive object of the plurality of interactive objects is determined to be a focused object that has the user's focus based on the intention scores of the plurality of interactive objects.
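The per-pixel scoring the abstract describes can be sketched with a small made-up example. The 4×4 "intention image", the object-ID map, and the sum-based aggregation are assumptions for illustration; the patent does not commit to this particular representation.

```python
# Per-pixel likelihood (from user input) that the user's focus is here
intention = [
    [0.0, 0.1, 0.1, 0.0],
    [0.0, 0.6, 0.7, 0.1],
    [0.0, 0.5, 0.8, 0.1],
    [0.0, 0.0, 0.1, 0.0],
]
# Which interactive object (if any) each pixel corresponds to
object_map = [
    [None, "A",  "A",  None],
    [None, "A",  "B",  "B"],
    [None, "A",  "B",  "B"],
    [None, None, "B",  None],
]

def focused_object(intention, object_map):
    """Accumulate intention values per object; the highest total is the
    object determined to have the user's focus."""
    scores = {}
    for row_vals, row_ids in zip(intention, object_map):
        for value, obj in zip(row_vals, row_ids):
            if obj is not None:
                scores[obj] = scores.get(obj, 0.0) + value
    return max(scores, key=scores.get), scores

focus, scores = focused_object(intention, object_map)  # object "B" wins here
```

Rendering intention into an image and scoring objects by their covered pixels lets differently sized and shaped objects compete for focus in one uniform pass, which matches the abstract's framing.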
-
Publication No.: US11461955B2
Publication Date: 2022-10-04
Application No.: US17445704
Filing Date: 2021-08-23
Applicant: Microsoft Technology Licensing, LLC
Inventor: Sheng Kai Tang , Julia Schwarz , Jason Michael Ray , Sophie Stellmach , Thomas Matthew Gable , Casey Leon Meekhof , Nahil Tawfik Sharkasi , Nicholas Ferianc Kamuda , Ramiro S. Torres , Kevin John Appel , Jamie Bryant Kirschenbaum
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
-
Publication No.: US20200225757A1
Publication Date: 2020-07-16
Application No.: US16296833
Filing Date: 2019-03-08
Applicant: Microsoft Technology Licensing, LLC
Inventor: Julia Schwarz , Jason Michael Ray , Casey Leon Meekhof
IPC: G06F3/01 , G06T19/00 , G06F3/0482
Abstract: Systems and methods are provided for detecting user-object interaction in mixed-reality environments. A mixed-reality system detects a controller gesture with an associated controller orientation in the mixed-reality environment. The mixed-reality system then determines an interaction region for the controller gesture and identifies one or more virtual objects within the interaction region. The virtual objects each have an associated orientation affinity. Subsequently, the mixed-reality system determines an orientation similarity score between the controller orientation and the orientation affinity for each virtual object within the interaction region. In response to determining that at least one orientation similarity score exceeds a predetermined threshold, the mixed-reality system executes an interaction between the controller and the virtual object that has the greatest orientation similarity score.
-