-
Publication No.: US12061842B2
Publication Date: 2024-08-13
Application No.: US17657840
Filing Date: 2022-04-04
Applicant: Snap Inc.
Inventor: Ilteris Kaan Canberk , Shin Hwun Kang
CPC classification number: G06F3/167 , G06T3/40 , G06T7/70 , G06T19/006 , G10L15/22 , G10L2015/223 , G10L2015/228
Abstract: Disclosed are systems and methods for voice-based control of augmented reality (AR) objects on a wearable device. The systems and methods perform operations comprising: instructing a display element of the AR wearable device to present a visual indicator representing a cursor; receiving voice input representing a first virtual object; determining a real-world position within a real-world environment being viewed through the AR wearable device based on a current position of the visual indicator; and instructing the display element of the AR wearable device to present the first virtual object within the real-world environment at the real-world position.
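The operations recited in this abstract (present a cursor, receive voice input, resolve a real-world position, place the object) can be sketched as a minimal state machine. This is an illustrative reading of the claim, not Snap's implementation; the class name, cursor representation, and coordinate convention are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ARVoicePlacement:
    """Sketch of the claimed pipeline: a visual indicator (cursor) tracks a
    position in the viewed real-world environment; a voice command names a
    virtual object, which is then anchored at the cursor's current position."""
    cursor_position: tuple = (0.0, 0.0, 0.0)  # hypothetical world-space cursor
    placed_objects: list = field(default_factory=list)

    def update_cursor(self, position):
        # The display element would render the visual indicator here.
        self.cursor_position = position

    def handle_voice_input(self, object_name):
        # Resolve the spoken phrase to a virtual object and anchor it at the
        # real-world position determined from the cursor's current position.
        anchor = self.cursor_position
        self.placed_objects.append((object_name, anchor))
        return anchor

ar = ARVoicePlacement()
ar.update_cursor((1.0, 0.5, -2.0))
print(ar.handle_voice_input("dragon"))  # → (1.0, 0.5, -2.0)
```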
-
Publication No.: US20240202470A1
Publication Date: 2024-06-20
Application No.: US18082969
Filing Date: 2022-12-16
Applicant: Snap Inc.
Inventor: Ilteris Kaan Canberk , Shin Hwun Kang
CPC classification number: G06F40/58 , G06F3/0488 , G06T7/73 , G06T11/00 , G06V20/20 , G10L13/02 , G06T2200/24
Abstract: An augmented reality (AR) translation system is provided. The AR translation system may analyze camera data to determine objects included in a field of view of a camera of a user device. Augmented reality content may be provided that includes a visual translation of an object included in the field of view from a primary language of the user to an additional language. An audible version of the translation may also be provided as part of the augmented reality content. Users may also add an object in the field of view to a listing of translated objects associated with the user based on at least one of touch input, audio input, or gesture input.
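The translation flow this abstract describes (detect an object in view, overlay a translation, let the user save it via touch, audio, or gesture) can be sketched with a toy dictionary standing in for the detector and translation service. All names and the two-language table are illustrative assumptions, not the patented system.

```python
# Toy translation table; a real system would call a detector and a
# translation service rather than a hard-coded dictionary.
TRANSLATIONS = {"es": {"cup": "taza", "book": "libro"}}

class ARTranslator:
    """Sketch of the claimed flow from a primary language to an
    additional language, with a per-user listing of saved objects."""
    def __init__(self, primary_language="en", target_language="es"):
        self.primary = primary_language
        self.target = target_language
        self.saved_objects = []  # the user's listing of translated objects

    def translate_object(self, label):
        # Visual translation overlaid on the object in the field of view;
        # an audible version could be produced by any TTS engine.
        return TRANSLATIONS[self.target].get(label, label)

    def add_to_listing(self, label, input_mode):
        # The abstract allows touch, audio, or gesture input.
        if input_mode not in ("touch", "audio", "gesture"):
            raise ValueError("unsupported input mode")
        self.saved_objects.append((label, self.translate_object(label)))
```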
-
Publication No.: US20240193875A1
Publication Date: 2024-06-13
Application No.: US18078522
Filing Date: 2022-12-09
Applicant: Snap Inc.
Inventor: Ilteris Kaan Canberk , Bernhard Jung , Shin Hwun Kang , Daria Skrypnyk
IPC: G06T19/00 , H04L67/131
CPC classification number: G06T19/006 , H04L67/131
Abstract: Systems, methods, and computer readable media for an augmented reality (AR) shared screen space. Examples relate to a host augmented reality (AR) device sharing a screen, along with its location relative to that screen, with guest AR devices. Each guest AR device in turn shares its location relative to a copy of the screen shown on its own display, so users of the AR devices can see each other's locations as avatars around the shared screen and add augmentations to the shared screen. The yaw, roll, and pitch of each avatar's head track the head movements of the corresponding AR wearable device user.
-
Publication No.: US20240144611A1
Publication Date: 2024-05-02
Application No.: US18391879
Filing Date: 2023-12-21
Applicant: Snap Inc.
Inventor: Ilteris Canberk , Shin Hwun Kang , Dmytro Kucher
IPC: G06T19/00 , G02B27/01 , G06T7/73 , G06V40/10 , G10L15/26 , H04N13/111 , H04N13/207 , H04N13/332
CPC classification number: G06T19/006 , G02B27/0172 , G06T7/73 , G06V40/10 , G10L15/26 , H04N13/111 , H04N13/207 , H04N13/332 , G02B2027/0138 , G02B2027/0178 , G06T2207/30196 , H04N2213/008
Abstract: Eyewear presenting text corresponding to spoken words (e.g., in speech bubbles) and optionally translating from one language to another. In one example, an interactive augmented reality experience is provided between two users of eyewear devices to allow one user of an eyewear device to share a personal attribute with a second user. The personal attribute can be speech spoken by a remote second user of eyewear converted to text. The converted text can be displayed on a display of the first user's eyewear proximate the viewed second user. The personal attribute may be displayed in a speech bubble proximate the second user, such as near the second user's head or mouth. The language of the spoken speech can be recognized by the second user's eyewear and translated into a language understood by the first user. In another example, the spoken words of a remote person are captured by a user's eyewear, the position of the remote person is identified, the spoken words are converted to text, and the text is displayed (e.g., in a speech bubble) on an AR display of the eyewear adjacent to the remote person.
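The bubble-placement step described above (anchor converted text near the identified speaker, optionally after translation) can be sketched as a small helper. The vertical offset and the `translate` callable are hypothetical stand-ins for the position-identification and language-translation components the abstract describes.

```python
def make_speech_bubble(speaker_position, text, translate=None):
    """Sketch: place converted speech text in a bubble near the viewed
    speaker. `translate` stands in for the language recognition and
    translation step; when omitted, the text is shown as spoken."""
    if translate is not None:
        text = translate(text)
    x, y = speaker_position
    # Offset the bubble above the speaker's head (offset is an assumption).
    return {"anchor": (x, y + 0.3), "text": text}
```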
-
Publication No.: US20240107256A1
Publication Date: 2024-03-28
Application No.: US18532679
Filing Date: 2023-12-07
Applicant: Snap Inc.
Inventor: Ilteris Canberk , Shin Hwun Kang
CPC classification number: H04S7/303 , G02B27/017 , G02B2027/0138 , G02B2027/0178 , H04S2400/11
Abstract: Devices, media, and methods are presented for an immersive augmented reality (AR) experience using an eyewear device with spatial audio. The eyewear device has a processor, a memory, an image sensor, and a speaker system. The eyewear device captures image information for an environment surrounding the device and identifies an object location within the same environment. The eyewear device then associates a virtual object with the identified object location. The eyewear device monitors the position of the device with respect to the virtual object and presents audio signals to alert the user that the identified object is in the environment.
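The spatial-audio alert described here depends on the device's position and orientation relative to the virtual object. A toy left/right pan computed from that geometry illustrates the idea; the 2-D coordinate convention (x right, z forward) and the sine-based panning law are assumptions, not the patented speaker-system design.

```python
import math

def spatial_audio_pan(device_pos, device_yaw, object_pos):
    """Toy stereo pan for an object-location alert: returns a value in
    [-1, 1], positive when the object lies to the wearer's right and
    negative when it lies to the left."""
    dx = object_pos[0] - device_pos[0]
    dz = object_pos[1] - device_pos[1]
    # Bearing of the object relative to where the wearer is facing.
    bearing = math.atan2(dx, dz) - device_yaw
    return math.sin(bearing)
```

For example, an object directly to the wearer's right pans fully right (1.0), while an object straight ahead stays centered (0.0).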
-
Publication No.: US11934570B2
Publication Date: 2024-03-19
Application No.: US17988526
Filing Date: 2022-11-16
Applicant: Snap Inc.
Inventor: Shin Hwun Kang , Ilteris Canberk , James Powderly , Dmytro Kucher , Dmytro Hovorov
CPC classification number: G06F3/011 , A63F13/46 , A63F13/577 , G02B27/0101 , G02B27/0172 , G02B27/0179 , G06F3/167 , G06T7/73 , A63F2300/8082 , G02B2027/0138 , G02B2027/0178 , G02B2027/0187 , G06T2207/30208
Abstract: Interactive augmented reality experiences with an eyewear device including a position detection system and a display system. The eyewear device registers a first marker position for a user-controlled virtual game piece and a second marker position for an interaction virtual game piece. The eyewear device monitors its own position (e.g., location and orientation) and updates the position of the user-controlled virtual game piece accordingly. The eyewear device additionally monitors the position of the user-controlled virtual game piece with respect to the interaction virtual game piece for use in generating a score. Augmented reality examples include a “spheroidal balancing” augmented reality experience.
-
Publication No.: US11928306B2
Publication Date: 2024-03-12
Application No.: US17448169
Filing Date: 2021-09-20
Applicant: Snap Inc.
Inventor: Karen Stolzenberg , David Meisenholder , Mathieu Emmanuel Vignau , Joseph Timothy Fortier , Kaveh Anvaripour , Daniel Moreno , Kyle Goodrich , Ilteris Kaan Canberk , Shin Hwun Kang
IPC: G06F3/04815 , G02B27/01 , G06F3/04817 , G06F3/0485 , G06F3/0488 , G06T19/00
CPC classification number: G06F3/04815 , G02B27/0101 , G02B27/0172 , G06F3/04817 , G06F3/0485 , G06F3/0488 , G06T19/006 , G02B2027/0138 , G02B2027/0178
Abstract: Disclosed is a method of receiving and processing navigation inputs executed by one or more processors in a head-worn device system including one or more display devices, one or more cameras, and a generally vertically-arranged touchpad. The method comprises displaying a first carousel of AR effects icons, receiving a first horizontal input on the touchpad, rotating the first carousel of AR effects icons in response to the first horizontal input, receiving a first touch input on the touchpad to select a particular AR effects icon that is in a selection position in the first carousel, displaying a scene viewed by the one or more cameras, the scene being enhanced with AR effects corresponding to the particular AR effects icon, receiving content capture user input, and in response to the content capture user input, capturing a new content item corresponding to the scene.
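The carousel navigation recited in this method (horizontal swipe rotates the icons, a tap selects the icon in the selection position) can be modeled in a few lines. Treating index 0 as the selection position and one slot per unit of swipe are both assumptions for illustration.

```python
class ARCarousel:
    """Minimal model of the claimed navigation: horizontal swipes rotate a
    carousel of AR-effect icons; a tap selects the icon currently in the
    selection position (index 0 here, an assumption)."""
    def __init__(self, icons):
        self.icons = list(icons)

    def horizontal_input(self, delta):
        # Rotate by one slot per unit of swipe; sign sets the direction.
        shift = int(delta) % len(self.icons)
        self.icons = self.icons[shift:] + self.icons[:shift]

    def tap(self):
        # Select the AR effect in the selection position.
        return self.icons[0]

carousel = ARCarousel(["sparkles", "hearts", "rain"])
carousel.horizontal_input(1)
print(carousel.tap())  # → hearts
```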
-
Publication No.: US11914770B2
Publication Date: 2024-02-27
Application No.: US18098384
Filing Date: 2023-01-18
Applicant: Snap Inc.
Inventor: Ilteris Canberk , Shin Hwun Kang , Dmytro Kucher
CPC classification number: G06F3/013 , G02B27/0093 , G02C11/10 , G06F3/1454
Abstract: Eyewear providing an interactive augmented reality experience between two users of eyewear devices to perform a shared group object manipulation task. During the shared group task, each user of the eyewear controls movement of a respective virtual object in a virtual scene based on a portion of the virtual scene the user is gazing at. Each user can also generate a verbal command to generate a virtual object that interacts with one or more of the other virtual objects.
-
Publication No.: US11869156B2
Publication Date: 2024-01-09
Application No.: US17362377
Filing Date: 2021-06-29
Applicant: Snap Inc.
Inventor: Ilteris Canberk , Shin Hwun Kang , Dmytro Kucher
IPC: G06T19/00 , G06T7/73 , H04N13/207 , H04N13/332 , H04N13/111 , G06V40/10 , G02B27/01 , G10L15/26
CPC classification number: G06T19/006 , G02B27/0172 , G06T7/73 , G06V40/10 , G10L15/26 , H04N13/111 , H04N13/207 , H04N13/332 , G02B2027/0138 , G02B2027/0178 , G06T2207/30196 , H04N2213/008
Abstract: Eyewear presenting text corresponding to spoken words (e.g., in speech bubbles) and optionally translating from one language to another. In one example, an interactive augmented reality experience is provided between two users of eyewear devices to allow one user of an eyewear device to share a personal attribute with a second user. The personal attribute can be speech spoken by a remote second user of eyewear converted to text. The converted text can be displayed on a display of the first user's eyewear proximate the viewed second user. The personal attribute may be displayed in a speech bubble proximate the second user, such as near the second user's head or mouth. The language of the spoken speech can be recognized by the second user's eyewear and translated into a language understood by the first user. In another example, the spoken words of a remote person are captured by a user's eyewear, the position of the remote person is identified, the spoken words are converted to text, and the text is displayed (e.g., in a speech bubble) on an AR display of the eyewear adjacent to the remote person.
-
Publication No.: US11863596B2
Publication Date: 2024-01-02
Application No.: US17544496
Filing Date: 2021-12-07
Applicant: Snap Inc.
Inventor: Kristian Bauer , Tiago Rafael Duarte , Terek Judi , Shin Hwun Kang , Karen Stolzenberg
IPC: G06F13/00 , H04L65/1069 , G06F3/14 , G02B27/01
CPC classification number: H04L65/1069 , G02B27/0172 , G06F3/1454 , G02B2027/0138 , G02B2027/0178
Abstract: A head-worn device system includes one or more cameras, one or more display devices and one or more processors. The system also includes a memory storing instructions that, when executed by the one or more processors, configure the system to perform operations to initiate or join a joint visual computing session. The method may comprise receiving user input to initiate a joint session of a visual computing experience, monitoring for short-range data transmissions including data indicating the existence of a current session of the visual computing experience, and based on determining that a current session is in process, providing a user input option to join the current session of the visual computing experience.
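The join-or-initiate decision in this abstract (scan short-range transmissions for an existing session; offer to join if one is found, otherwise start a new one) reduces to a small branching function. The broadcast payload shape and the `"shared-session"` experience tag are hypothetical stand-ins for the actual short-range data format.

```python
def session_action(user_wants_to_start, nearby_broadcasts):
    """Sketch of the claimed flow: monitor short-range data transmissions
    for a current session of the visual computing experience, and decide
    whether to offer joining it or to initiate a new session."""
    current = next(
        (b for b in nearby_broadcasts if b.get("experience") == "shared-session"),
        None,
    )
    if current is not None:
        # A current session is in process: present a join option.
        return ("offer-join", current["session_id"])
    if user_wants_to_start:
        # No session found: initiate a new joint session.
        return ("initiate", None)
    return ("idle", None)
```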
-