-
Publication No.: US20230341948A1
Publication Date: 2023-10-26
Application No.: US18307260
Filing Date: 2023-04-26
Applicant: Snap Inc.
Inventor: Daniel Colascione , Daniel Harris , Andrei Rybin , Anoosh Kruba Chandar Mahalingam , Pierre-Yves Santerre , Jennica Pounds
CPC classification number: G06F3/017 , G06T19/006 , G02B27/0172 , G02B2027/0178
Abstract: An AR system includes multiple input modalities. A hand-tracking pipeline supports Direct Manipulation of Virtual Objects (DMVO) and gesture input methodologies. In addition, a voice processing pipeline provides for speech inputs. Direct memory buffer access to preliminary hand-tracking data, such as skeletal models, allows for low-latency communication of that data for use by DMVO-based user interfaces. A system framework component routes higher-level hand-tracking data, such as gesture identifications and symbols generated from hand positions, via a Snips protocol to gesture-based user interfaces.
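The abstract describes two data paths: preliminary skeletal data read directly from a memory buffer by DMVO-based interfaces, and higher-level gesture symbols routed through a system framework component. The Python sketch below illustrates that split under assumed names and structures (SkeletalBuffer, GestureRouter, a 21-joint hand model); it is not the patented implementation.

```python
import struct
import time
from queue import Queue

NUM_JOINTS = 21                               # assumed joints per tracked hand
FRAME_FMT = "d" + "f" * (NUM_JOINTS * 3)      # timestamp + x, y, z per joint
FRAME_SIZE = struct.calcsize(FRAME_FMT)

class SkeletalBuffer:
    """Single-slot buffer the hand-tracking pipeline overwrites every frame;
    a DMVO user interface polls it directly rather than waiting on events."""
    def __init__(self):
        self._buf = bytearray(FRAME_SIZE)

    def write(self, timestamp, joints):
        coords = [c for joint in joints for c in joint]
        struct.pack_into(FRAME_FMT, self._buf, 0, timestamp, *coords)

    def read(self):
        values = struct.unpack_from(FRAME_FMT, self._buf, 0)
        timestamp, coords = values[0], values[1:]
        joints = [coords[i:i + 3] for i in range(0, len(coords), 3)]
        return timestamp, joints

class GestureRouter:
    """Framework-style routing of higher-level gesture symbols to the
    gesture-based user interfaces that registered interest in them."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, symbol, queue):
        self._subscribers.setdefault(symbol, []).append(queue)

    def publish(self, symbol, payload):
        for queue in self._subscribers.get(symbol, []):
            queue.put((symbol, payload))

if __name__ == "__main__":
    skeleton, router, ui_events = SkeletalBuffer(), GestureRouter(), Queue()
    router.subscribe("pinch", ui_events)

    # Hand-tracking pipeline side: publish raw joints and a recognized gesture.
    skeleton.write(time.time(), [(0.0, 0.0, 0.0)] * NUM_JOINTS)
    router.publish("pinch", {"hand": "right"})

    # Consumer side: DMVO UI reads joints directly; gesture UI gets routed events.
    timestamp, joints = skeleton.read()
    print("skeleton frame at", timestamp, "first joint:", joints[0])
    print("routed gesture event:", ui_events.get())
```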
-
Publication No.: US20230315208A1
Publication Date: 2023-10-05
Application No.: US17657911
Filing Date: 2022-04-04
Applicant: Snap Inc.
Inventor: Sharon Moll , Piotr Gurgul , Francis Patrick Sullivan , Andrei Rybin
IPC: G06F3/01 , G06F3/0482
CPC classification number: G06F3/017 , G06F3/0482
Abstract: A head-worn device system includes one or more cameras, one or more display devices, and one or more processors. The system also includes a memory storing instructions that, when executed by the one or more processors, configure the system to detect a gesture made by a user of the system, generate gesture data identifying the gesture, select an application or action from a set of registered applications and actions based on the gesture data, and invoke the selected application or action.
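As a concrete illustration of the flow in the abstract (detect a gesture, select from a set of registered applications and actions, invoke the selection), the sketch below uses an assumed GestureRegistry class, assumed gesture names, and an assumed confidence threshold; none of these names are taken from the patent.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class GestureData:
    name: str            # e.g. "swipe_left", as produced by the gesture detector
    confidence: float

class GestureRegistry:
    def __init__(self):
        self._entries: Dict[str, Callable[[], None]] = {}

    def register(self, gesture_name: str, target: Callable[[], None]) -> None:
        """Register an application launcher or action callback for a gesture."""
        self._entries[gesture_name] = target

    def dispatch(self, gesture: GestureData, threshold: float = 0.8) -> bool:
        """Select and invoke the registered application or action, if any."""
        target = self._entries.get(gesture.name)
        if target is None or gesture.confidence < threshold:
            return False
        target()
        return True

if __name__ == "__main__":
    registry = GestureRegistry()
    registry.register("swipe_left", lambda: print("launching camera application"))
    registry.register("pinch", lambda: print("capturing photo"))
    registry.dispatch(GestureData(name="swipe_left", confidence=0.93))
```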
-
Publication No.: US20240070995A1
Publication Date: 2024-02-29
Application No.: US17823814
Filing Date: 2022-08-31
Applicant: Snap Inc.
CPC classification number: G06T19/006 , G06T19/20 , G06T7/246 , G06V40/28 , G06V20/20 , G06F3/017 , G02B27/0172 , G02B2027/0178 , G06T2207/20044 , G06V2201/07
Abstract: An Augmented Reality (AR) system is provided. The AR system uses a combination of gesture and DMVO methodologies to provide for the user's selection and modification of a virtual object of an AR experience. The user indicates that they want to interact with a virtual object of the AR experience by moving their hand to overlap the virtual object. While keeping their hand in the overlapping position, the user rotates their wrist, and the virtual object rotates with it. To end the interaction, the user moves their hand so that it no longer overlaps the virtual object.
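A minimal sketch of the rotate interaction described above, assuming a spherical overlap test, a per-frame wrist angle supplied by the hand tracker, and a simple interaction state; the geometry, thresholds, and names are illustrative, not the patented method.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    center: tuple            # (x, y, z) position in the AR scene
    radius: float            # assumed spherical bounds for the overlap test
    rotation_deg: float = 0.0

def hand_overlaps(hand_pos, obj: VirtualObject) -> bool:
    dx, dy, dz = (h - c for h, c in zip(hand_pos, obj.center))
    return (dx * dx + dy * dy + dz * dz) ** 0.5 <= obj.radius

def update(obj: VirtualObject, hand_pos, wrist_angle_deg, state):
    """One tracking frame: start, continue, or end the rotate interaction."""
    if state is None:                                  # not interacting yet
        if hand_overlaps(hand_pos, obj):
            state = {"start_wrist": wrist_angle_deg,
                     "start_rotation": obj.rotation_deg}
    elif hand_overlaps(hand_pos, obj):                 # still overlapping: rotate object
        delta = wrist_angle_deg - state["start_wrist"]
        obj.rotation_deg = state["start_rotation"] + delta
    else:                                              # overlap lost: end interaction
        state = None
    return state

if __name__ == "__main__":
    obj = VirtualObject(center=(0.0, 0.0, 1.0), radius=0.2)
    state = None
    frames = [((0.0, 0.0, 1.0), 10.0), ((0.0, 0.0, 1.0), 40.0), ((1.0, 0.0, 1.0), 40.0)]
    for hand_pos, wrist in frames:
        state = update(obj, hand_pos, wrist, state)
        print(f"rotation={obj.rotation_deg:.1f} interacting={state is not None}")
```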
-
Publication No.: US20240070994A1
Publication Date: 2024-02-29
Application No.: US17823810
Filing Date: 2022-08-31
Applicant: Snap Inc.
CPC classification number: G06T19/006 , G02B27/0172 , G06F3/017 , G06T7/246 , G06T19/20 , G06V20/20 , G06V40/28 , G02B2027/0178 , G06T2207/20044 , G06V2201/07
Abstract: An Augmented Reality (AR) system is provided. The AR system uses a combination of gesture and DMVO methodologies to provide for the user's selection and modification of virtual objects of an AR experience. The user indicates that they want to interact with a virtual object of the AR experience by moving their hand to overlap the virtual object. While keeping their hand in an overlapping position, the user makes gestures that cause the user's viewpoint of the virtual object to either zoom in or zoom out. To end the interaction, the user moves their hand such that their hand is no longer overlapping the virtual object.
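The zoom interaction can be sketched the same way: while the hand overlaps the object, recognized gestures scale the user's viewpoint toward or away from it, and losing the overlap ends the interaction. The gesture names ("zoom_in", "zoom_out") and the zoom step below are assumptions, not details from the patent.

```python
def apply_zoom_gestures(frames, zoom_step=0.1, initial_zoom=1.0):
    """frames: iterable of (overlapping: bool, gesture: str or None) per tracking frame."""
    zoom, interacting = initial_zoom, False
    for overlapping, gesture in frames:
        if overlapping:
            interacting = True                  # hand overlaps: interaction is active
            if gesture == "zoom_in":
                zoom *= 1.0 + zoom_step
            elif gesture == "zoom_out":
                zoom *= 1.0 - zoom_step
        else:
            interacting = False                 # overlap lost: interaction ends
        yield zoom, interacting

if __name__ == "__main__":
    frames = [(True, None), (True, "zoom_in"), (True, "zoom_in"),
              (True, "zoom_out"), (False, None)]
    for zoom, interacting in apply_zoom_gestures(frames):
        print(f"zoom={zoom:.2f} interacting={interacting}")
```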
-
Publication No.: US20220375172A1
Publication Date: 2022-11-24
Application No.: US17742900
Filing Date: 2022-05-12
Applicant: Snap Inc.
Inventor: David Meisenholder , Kameron Sheffield , Joseph Timothy Fortier , Raymond Zeng , Andrei Rybin , Jonathan Geddes
IPC: G06T19/00 , G06V20/20 , G06V10/74 , G06V20/60 , G06F3/0482 , G06F3/04817 , G10L15/22 , G10L15/08 , G06F3/16 , G06V10/40 , G06V10/70
Abstract: Augmented reality features are selected for presentation on a display of an electronic eyewear device by using a camera of the electronic eyewear device to capture a scan image and processing the scan image to extract contextual signals. Simultaneously, voice data from the user is captured by a microphone of the electronic eyewear device, and voice-to-text conversion of the captured voice data is performed to identify keywords in the voice data. The extracted contextual signals and the identified keywords are then used to select at least one augmented reality feature that matches both, and the selected augmented reality feature is presented on the display for user selection. The contextual information thus refines the search results to provide the augmented reality feature best suited to the context of the scan image captured by the electronic eyewear device.
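A rough sketch of the final selection step, assuming the contextual signals and voice keywords are reduced to label sets that are matched against a tagged catalog of AR features; the catalog, tags, and scoring weights are illustrative assumptions, not the patented ranking.

```python
def select_ar_features(context_signals, voice_keywords, catalog, top_k=3):
    """Return the names of the top_k catalog entries matching both inputs."""
    context = {s.lower() for s in context_signals}
    keywords = {k.lower() for k in voice_keywords}
    scored = []
    for feature in catalog:
        tags = {t.lower() for t in feature["tags"]}
        # Assumed weighting: spoken keywords count double relative to scene context.
        score = 2 * len(tags & keywords) + len(tags & context)
        if score > 0:
            scored.append((score, feature["name"]))
    return [name for _, name in sorted(scored, reverse=True)[:top_k]]

if __name__ == "__main__":
    catalog = [
        {"name": "beach_lens", "tags": ["beach", "ocean", "sunset"]},
        {"name": "city_lens", "tags": ["street", "building", "night"]},
        {"name": "pet_lens", "tags": ["dog", "cat"]},
    ]
    picks = select_ar_features(
        context_signals=["beach", "sunset"],      # from the scan image
        voice_keywords=["show", "me", "ocean"],   # from voice-to-text
        catalog=catalog,
    )
    print(picks)   # e.g. ['beach_lens']
```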