-
Publication number: US11719931B2
Publication date: 2023-08-08
Application number: US17876818
Filing date: 2022-07-29
Applicant: Snap Inc.
Inventor: Ilteris Canberk , Jacob Knipfing
CPC classification number: G02B27/0093 , G02B27/0172 , G06F3/013 , G02B2027/0138 , G02B2027/0178 , G02B2027/0187
Abstract: Interactive augmented reality experiences are provided with an eyewear device that includes a virtual eyewear beam. The user can direct the virtual beam by orienting the eyewear device, the user's eye gaze, or both. The eyewear device may detect the direction of an opponent's eyewear device or eye gaze, or both. The eyewear device may calculate a score based on hits by the user's and the opponent's virtual beams on respective target areas, such as the other player's head or face.
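The scoring idea in this abstract can be pictured as a ray-versus-target test. The sketch below is a minimal illustration, assuming the virtual beam is modeled as a ray from the device and a target area as a sphere around the opponent's head; the names (Player, beam_hits_target) and the 0.15 m head radius are hypothetical, not taken from the patent.

```python
# Minimal sketch of beam-hit scoring: a ray-sphere test per player, per frame.
from dataclasses import dataclass
import numpy as np

@dataclass
class Player:
    position: np.ndarray        # head position in world coordinates (metres)
    beam_direction: np.ndarray  # unit vector derived from device orientation and/or gaze
    score: int = 0

def beam_hits_target(origin: np.ndarray, direction: np.ndarray,
                     target_center: np.ndarray, target_radius: float) -> bool:
    """Does the beam ray pass within target_radius of the target center?"""
    to_target = target_center - origin
    t = max(np.dot(to_target, direction), 0.0)   # closest approach along the beam
    closest = origin + t * direction
    return np.linalg.norm(target_center - closest) <= target_radius

def update_scores(a: Player, b: Player, head_radius: float = 0.15) -> None:
    """Award a point to each player whose beam currently hits the opponent's head area."""
    if beam_hits_target(a.position, a.beam_direction, b.position, head_radius):
        a.score += 1
    if beam_hits_target(b.position, b.beam_direction, a.position, head_radius):
        b.score += 1

if __name__ == "__main__":
    p1 = Player(np.array([0.0, 1.7, 0.0]), np.array([0.0, 0.0, 1.0]))
    p2 = Player(np.array([0.0, 1.7, 2.0]), np.array([0.0, 0.0, -1.0]))
    update_scores(p1, p2)
    print(p1.score, p2.score)   # both beams aimed at the other player's head -> 1 1
```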
-
Publication number: US11670267B2
Publication date: 2023-06-06
Application number: US17397145
Filing date: 2021-08-09
Applicant: Snap Inc.
Inventor: Ilteris Canberk , Donald Giovannini , Sana Park
CPC classification number: G10H1/0025 , G06T7/90 , G10H1/0041 , G06T2207/10024 , G06T2207/30204 , G10H2220/101 , G10H2220/445 , G10H2220/455
Abstract: Systems, devices, media, and methods are presented for playing audio sounds, such as music, on a portable electronic device using a digital color image of a note matrix on a map. A computer vision engine, in an example implementation, includes a mapping module, a color detection module, and a music playback module. The camera captures a color image of the map, including a marker and a note matrix. Based on the color image, the computer vision engine detects a token color value associated with each field. Each token color value is associated with a sound sample from a specific musical instrument. A global state map is stored in memory, including the token color value and location of each field in the note matrix. The music playback module, for each column, in order, plays the notes associated with one or more of the rows, using the corresponding sound sample, according to the global state map.
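The column-by-column playback loop described here is easy to sketch. The color names, sample file names, and grid layout below are assumptions for illustration, and play_sample is a stand-in for whatever audio backend a real device would use.

```python
# Illustrative sketch of the global state map and the column-ordered playback loop.
from typing import Optional

# token color value -> instrument sound sample (hypothetical mapping)
COLOR_TO_SAMPLE = {
    "red": "drum_kick.wav",
    "blue": "piano_c4.wav",
    "green": "guitar_e3.wav",
}

# global state map: rows x columns of detected token colors (None = empty field)
state_map: list[list[Optional[str]]] = [
    ["red",   None,   "red",  None],
    [None,    "blue", None,   "blue"],
    ["green", None,   None,   "green"],
]

def play_sample(sample: str) -> None:
    print(f"playing {sample}")   # stand-in for a real audio playback call

def playback(state: list[list[Optional[str]]]) -> None:
    """For each column, in order, play the samples of every occupied row."""
    n_cols = len(state[0])
    for col in range(n_cols):
        for row in state:
            color = row[col]
            if color is not None:
                play_sample(COLOR_TO_SAMPLE[color])

playback(state_map)
```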
-
Publication number: US11450296B2
Publication date: 2022-09-20
Application number: US17326968
Filing date: 2021-05-21
Applicant: Snap Inc.
Inventor: Ilteris Canberk , Jonathan M. Rodriguez, II , Yu Jiang Tham
Abstract: An eyewear device includes an image display and an image display driver coupled to the image display to control a presented image and adjust a brightness level setting of the presented image. The eyewear device includes a user input device including an input surface on a frame, a temple, a lateral side, or a combination thereof to receive a user input selection from the wearer. The eyewear device includes a proximity sensor to track a finger distance of a finger of the wearer to the input surface. The eyewear device controls, via the image display driver, the image display to present the image to the wearer. The eyewear device tracks, via the proximity sensor, the finger distance of the finger of the wearer to the input surface. The eyewear device adjusts, via the image display driver, the brightness level setting of the presented image on the image display based on the tracked finger distance.
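The core of this abstract is a mapping from tracked finger distance to a brightness level. A minimal sketch of one such mapping is shown below; the 5-50 mm tracking range and the 10-100 brightness scale are assumptions, not values from the patent.

```python
# Sketch: map a proximity-sensor finger distance to a clamped brightness level.
def brightness_from_distance(finger_distance_mm: float,
                             near_mm: float = 5.0,
                             far_mm: float = 50.0,
                             min_level: int = 10,
                             max_level: int = 100) -> int:
    """Closer finger -> brighter image; clamp outside the tracked range."""
    d = min(max(finger_distance_mm, near_mm), far_mm)
    fraction = 1.0 - (d - near_mm) / (far_mm - near_mm)   # 1.0 at near, 0.0 at far
    return round(min_level + fraction * (max_level - min_level))

for distance in (5, 20, 50):
    print(distance, "mm ->", brightness_from_distance(distance))
```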
-
Publication number: US20220206588A1
Publication date: 2022-06-30
Application number: US17550679
Filing date: 2021-12-14
Applicant: Snap Inc.
Inventor: Ilteris Canberk , Viktoria Hwang , Shin Hwun Kang , David Meisenholder , Daniel Moreno
IPC: G06F3/01 , G06F3/04815 , G06F3/04847 , G06V40/20 , G06V40/10 , G06V20/40 , G02B27/01 , G06T19/00
Abstract: Example systems, devices, media, and methods are described for controlling virtual elements or graphical elements on a display in response to hand gestures detected by an eyewear device that is capturing frames of video data with its camera system. An image processing system detects a series of hand shapes in the video data and determines whether the series matches a predefined series of hand gestures. Each predefined series of hand gestures is associated with an action. The system controls movement of the virtual element, relative to the display, in accordance with the associated action. In an example involving a hand shape in which a thumb slides along an extended finger, the system establishes a finger scale along the extended finger, calibrates a graphical scale with the finger scale, and controls movement of an interactive graphical element, such as a slider, according to the current thumb position relative to the calibrated graphical scale.
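The finger-scale calibration in the final sentence can be sketched as a projection of the thumb tip onto the extended-finger segment, normalized and mapped onto the slider's range. The landmark coordinates and the 0-100 slider range below are illustrative assumptions.

```python
# Sketch: calibrate a graphical slider against a "finger scale" and read the thumb position.
import numpy as np

def slider_value(thumb_tip: np.ndarray,
                 finger_base: np.ndarray,
                 finger_tip: np.ndarray,
                 slider_min: float = 0.0,
                 slider_max: float = 100.0) -> float:
    """Project the thumb tip onto the extended-finger segment and map the normalized
    position (0 at the base, 1 at the tip) onto the slider range."""
    finger_vec = finger_tip - finger_base
    t = np.dot(thumb_tip - finger_base, finger_vec) / np.dot(finger_vec, finger_vec)
    t = float(np.clip(t, 0.0, 1.0))
    return slider_min + t * (slider_max - slider_min)

base = np.array([0.0, 0.0])   # finger landmarks in hand-tracking coordinates (cm)
tip = np.array([8.0, 0.0])
print(slider_value(np.array([4.0, 0.5]), base, tip))   # thumb halfway along -> 50.0
```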
-
Publication number: US20210407205A1
Publication date: 2021-12-30
Application number: US17362377
Filing date: 2021-06-29
Applicant: Snap Inc.
Inventor: Ilteris Canberk , Shin Hwun Kang , Dmytro Kucher
IPC: G06T19/00 , H04N13/332 , G06K9/00 , G06T7/73 , H04N13/207 , H04N13/111 , G10L15/26 , G02B27/01
Abstract: Eyewear presents text corresponding to spoken words (e.g., in speech bubbles) and optionally translates from one language to another. In one example, an interactive augmented reality experience is provided between two users of eyewear devices to allow one user of an eyewear device to share a personal attribute of the user with a second user. The personal attribute can be speech spoken by a remote second user of eyewear converted to text. The converted text can be displayed on a display of the eyewear of the first user proximate the viewed second user. The personal attribute may be displayed in a speech bubble proximate the second user, such as proximate the head or mouth of the second user. The language of the spoken speech can be recognized by the second user's eyewear and translated to a language that is understood by the first user. In another example, the spoken words of a remote person are captured by the eyewear of a user, the position of the remote person is identified, the spoken words are converted to text, and the text is displayed (e.g., in a speech bubble) on an AR display of the eyewear adjacent the remote person.
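One concrete step here is anchoring the converted text near the detected speaker on the AR display. The sketch below is a minimal illustration of that placement; the normalized display coordinates, the offsets, and the Bubble/place_bubble names are assumptions for the example.

```python
# Sketch: anchor a text "speech bubble" next to the detected position of the speaker.
from dataclasses import dataclass

@dataclass
class Bubble:
    text: str
    x: float   # bubble anchor in normalized display coordinates (0..1)
    y: float

def place_bubble(recognized_text: str,
                 head_x: float, head_y: float,
                 offset_x: float = 0.05, offset_y: float = -0.12) -> Bubble:
    """Place the converted text slightly above and beside the speaker's head,
    as projected onto the AR display."""
    return Bubble(recognized_text, head_x + offset_x, head_y + offset_y)

bubble = place_bubble("Nice to meet you", head_x=0.62, head_y=0.40)
print(bubble)
```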
-
Publication number: US20200265626A1
Publication date: 2020-08-20
Application number: US16868315
Filing date: 2020-05-06
Applicant: Snap Inc.
Inventor: Ilteris Canberk , Andrés Monroy-Hernández , Rajan Vaish
Abstract: A server machine modifies an augmented reality (AR) object in response to fulfillment of a condition. The machine provides, to a user device, object data that defines the AR object. The object data specifies a physical geolocation of the AR object, a presentation attribute of the AR object, a conditional modification program, and a trigger condition for execution of the conditional modification program. The object data causes the user device to present the AR object with a first appearance, located at the physical geolocation. The machine detects fulfillment of the trigger condition, and in response, the machine executes the conditional modification program. This modifies the object data by modifying the presentation attribute. The machine provides, to the user device, the modified object data, which causes the user device to present the AR object with a second appearance based on the modified presentation attribute.
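The object data described here (geolocation, presentation attribute, conditional modification program, trigger condition) maps naturally onto a small data structure plus a callable. The sketch below is a toy illustration under that assumption; the field names, the view-count trigger, and the color change are hypothetical examples, not details from the patent.

```python
# Toy sketch of AR object data with a trigger condition and a conditional modification.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ARObject:
    geolocation: tuple[float, float]                    # (latitude, longitude)
    presentation: dict = field(default_factory=dict)    # presentation attributes

def make_trigger(threshold: int) -> Callable[[int], bool]:
    """Example trigger condition: fulfilled once enough users have viewed the object."""
    return lambda view_count: view_count >= threshold

def conditional_modification(obj: ARObject) -> None:
    """Example modification program: change a presentation attribute."""
    obj.presentation["color"] = "gold"

obj = ARObject(geolocation=(40.7411, -74.0018), presentation={"color": "blue"})
trigger = make_trigger(threshold=100)

views = 120
if trigger(views):                   # server detects fulfillment of the trigger condition
    conditional_modification(obj)    # executes the conditional modification program
print(obj.presentation)              # device re-renders with the second appearance
```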
-
Publication number: US20240272432A1
Publication date: 2024-08-15
Application number: US18644985
Filing date: 2024-04-24
Applicant: Snap Inc.
Inventor: David Meisenholder , Dhritiman Sagar , Ilteris Canberk , Justin Wilder , Sumant Milind Hanumante , James Powderly
CPC classification number: G02B27/017 , G06F3/011 , G06T19/006 , G02B2027/0138 , G02B2027/0178
Abstract: Augmented reality experiences of a user wearing an electronic eyewear device are captured by at least one camera on a frame of the electronic eyewear device, the at least one camera having a field of view that is larger than a field of view of a display of the electronic eyewear device. An augmented reality feature or object is applied to the captured scene. A photo or video of the augmented reality scene is captured and a first portion of the captured photo or video is displayed in the display. The display is adjusted to display a second portion of the captured photo or video with the augmented reality features as the user moves the user's head to view the second portion of the captured photo or video. The captured photo or video may be transferred to another device for viewing the larger field of view augmented reality image.
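The display adjustment described here amounts to panning a display-sized window across the wider captured image as the head turns. The sketch below illustrates that under assumed numbers: the 110° capture and 45° display fields of view, the linear yaw-to-pixel mapping, and the image width are all illustrative, not taken from the patent.

```python
# Sketch: pan a display-sized crop window across a wider captured image by head yaw.
def crop_window(head_yaw_deg: float,
                capture_fov_deg: float = 110.0,
                display_fov_deg: float = 45.0,
                image_width_px: int = 4000) -> tuple[int, int]:
    """Return (left, right) pixel bounds of the portion of the capture to display."""
    px_per_deg = image_width_px / capture_fov_deg
    center = image_width_px / 2 + head_yaw_deg * px_per_deg
    half = display_fov_deg / 2 * px_per_deg
    left = int(max(0, min(center - half, image_width_px - 2 * half)))
    return left, int(left + 2 * half)

print(crop_window(0.0))    # first portion: centred window
print(crop_window(20.0))   # second portion after the head turns right
```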
-
Publication number: US11982808B2
Publication date: 2024-05-14
Application number: US17744880
Filing date: 2022-05-16
Applicant: Snap Inc.
Inventor: David Meisenholder , Dhritiman Sagar , Ilteris Canberk , Justin Wilder , Sumant Milind Hanumante , James Powderly
CPC classification number: G02B27/017 , G06F3/011 , G06T19/006 , G02B2027/0138 , G02B2027/0178
Abstract: Augmented reality experiences of a user wearing an electronic eyewear device are captured by at least one camera on a frame of the electronic eyewear device, the at least one camera having a field of view that is larger than a field of view of a display of the electronic eyewear device. An augmented reality feature or object is applied to the captured scene. A photo or video of the augmented reality scene is captured and a first portion of the captured photo or video is displayed in the display. The display is adjusted to display a second portion of the captured photo or video with the augmented reality features as the user moves the user's head to view the second portion of the captured photo or video. The captured photo or video may be transferred to another device for viewing the larger field of view augmented reality image.
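This listing shares its abstract with US20240272432A1 above, so rather than repeating the panning sketch, the snippet below illustrates a complementary detail: checking whether an AR feature placed in the wide capture currently falls inside the displayed portion. The window bounds and feature positions are assumed values for illustration only.

```python
# Sketch: is an AR feature inside the currently displayed portion of the wide capture?
def feature_visible(feature_x_px: int, window: tuple[int, int]) -> bool:
    """True if the feature's horizontal pixel position lies within the displayed window."""
    left, right = window
    return left <= feature_x_px <= right

displayed = (1909, 3545)                   # e.g. the window after the user turns their head
print(feature_visible(2500, displayed))    # feature now inside the displayed portion
print(feature_visible(500, displayed))     # feature still outside; keep panning toward it
```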
-
Publication number: US20240144611A1
Publication date: 2024-05-02
Application number: US18391879
Filing date: 2023-12-21
Applicant: Snap Inc.
Inventor: Ilteris Canberk , Shin Hwun Kang , Dmytro Kucher
IPC: G06T19/00 , G02B27/01 , G06T7/73 , G06V40/10 , G10L15/26 , H04N13/111 , H04N13/207 , H04N13/332
CPC classification number: G06T19/006 , G02B27/0172 , G06T7/73 , G06V40/10 , G10L15/26 , H04N13/111 , H04N13/207 , H04N13/332 , G02B2027/0138 , G02B2027/0178 , G06T2207/30196 , H04N2213/008
Abstract: Eyewear presents text corresponding to spoken words (e.g., in speech bubbles) and optionally translates from one language to another. In one example, an interactive augmented reality experience is provided between two users of eyewear devices to allow one user of an eyewear device to share a personal attribute of the user with a second user. The personal attribute can be speech spoken by a remote second user of eyewear converted to text. The converted text can be displayed on a display of the eyewear of the first user proximate the viewed second user. The personal attribute may be displayed in a speech bubble proximate the second user, such as proximate the head or mouth of the second user. The language of the spoken speech can be recognized by the second user's eyewear and translated to a language that is understood by the first user. In another example, the spoken words of a remote person are captured by the eyewear of a user, the position of the remote person is identified, the spoken words are converted to text, and the text is displayed (e.g., in a speech bubble) on an AR display of the eyewear adjacent the remote person.
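This abstract is identical to that of US20210407205A1 above, so instead of repeating the bubble-placement sketch, the snippet below focuses on the recognize-then-translate step. The language codes and both stub functions are assumptions; a real system would call actual speech-recognition and translation services.

```python
# Sketch: detect the speaker's language and translate the converted text before display.
def detect_language(text: str) -> str:
    """Toy language detector; a real implementation would use a proper model or service."""
    return "es" if any(w in text.lower() for w in ("hola", "gracias")) else "en"

def translate(text: str, source: str, target: str) -> str:
    """Stub word-for-word translation; swap in a real translation API in practice."""
    samples = {("es", "en"): {"hola": "hello", "gracias": "thank you"}}
    table = samples.get((source, target), {})
    return " ".join(table.get(w, w) for w in text.lower().split())

spoken = "hola"                       # text converted from the second user's speech
src = detect_language(spoken)
display_text = translate(spoken, src, "en") if src != "en" else spoken
print(display_text)                   # shown in the first user's speech bubble
```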
-
Publication number: US20240107256A1
Publication date: 2024-03-28
Application number: US18532679
Filing date: 2023-12-07
Applicant: Snap Inc.
Inventor: Ilteris Canberk , Shin Hwun Kang
CPC classification number: H04S7/303 , G02B27/017 , G02B2027/0138 , G02B2027/0178 , H04S2400/11
Abstract: Devices, media, and methods are presented for an immersive augmented reality (AR) experience using an eyewear device with spatial audio. The eyewear device has a processor, a memory, an image sensor, and a speaker system. The eyewear device captures image information for an environment surrounding the device and identifies an object location within the same environment. The eyewear device then associates a virtual object with the identified object location. The eyewear device monitors the position of the device with respect to the virtual object and presents audio signals to alert the user that the identified object is in the environment.
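The spatial-audio alert can be pictured as steering a stereo signal toward the virtual object's bearing relative to the wearer. The sketch below is a minimal illustration using a constant-power pan; the 2D coordinate layout and stereo_gains name are assumptions, and a real device would render full spatial audio rather than a simple left/right pan.

```python
# Sketch: compute left/right speaker gains so an alert seems to come from the object.
import math

def stereo_gains(device_pos: tuple[float, float],
                 device_heading_rad: float,
                 object_pos: tuple[float, float]) -> tuple[float, float]:
    """Return (left, right) gains based on the object's bearing from the wearer's gaze."""
    dx = object_pos[0] - device_pos[0]
    dy = object_pos[1] - device_pos[1]
    bearing = math.atan2(dx, dy) - device_heading_rad   # object angle vs. gaze direction
    pan = max(-1.0, min(1.0, math.sin(bearing)))        # -1 = fully left, +1 = fully right
    left = math.cos((pan + 1) * math.pi / 4)            # constant-power panning law
    right = math.sin((pan + 1) * math.pi / 4)
    return left, right

print(stereo_gains((0.0, 0.0), 0.0, (1.0, 1.0)))   # object ahead-right -> louder right channel
```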