-
Publication Number: US20230217205A1
Publication Date: 2023-07-06
Application Number: US18181920
Application Date: 2023-03-10
Applicant: Magic Leap, Inc.
Inventor: Colby Nelson LEIDER , Justin Dan MATHEW , Michael Z. LAND , Blaine Ivin WOOD , Jung-Suk LEE , Anastasia Andreyevna TAJIK , Jean-Marc JOT
IPC: A63F13/577 , A63F13/285
CPC classification number: A63F13/577 , A63F13/285 , A63F2300/8082
Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device. In accordance with a determination that the one or more audio models does not comprise an audio model corresponding to the first object, an acoustic property of the first object can be determined, a custom audio model based on the acoustic property of the first object can be generated, an audio signal can be synthesized, wherein the audio signal is based on the collision and the custom audio model, and the audio signal can be presented, via a speaker of a head-wearable device, to a user.
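A short sketch can summarize the branching logic the abstract describes: look up a stored audio model for the colliding virtual object, and if none exists, derive a custom model from the object's acoustic property before synthesizing. The sketch below is an assumption-laden illustration, not the patent's implementation; `AudioModel`, `synthesize`, and `acoustic_property_of` are hypothetical names.

```python
# Hedged sketch of the lookup-or-generate flow in the abstract.
# All names and parameter choices here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AudioModel:
    material: str
    resonant_hz: float
    decay_s: float

def synthesize(collision_speed: float, model: AudioModel) -> dict:
    """Stand-in for the synthesis step: scale the model by impact strength."""
    return {
        "frequency_hz": model.resonant_hz,
        "amplitude": min(1.0, collision_speed / 10.0),
        "decay_s": model.decay_s,
    }

def audio_for_collision(obj_id: str,
                        collision_speed: float,
                        stored_models: dict,
                        acoustic_property_of) -> dict:
    model = stored_models.get(obj_id)
    if model is None:
        # No stored model: derive a custom model from the object's
        # acoustic property (here, an assumed material label), then cache it.
        material = acoustic_property_of(obj_id)
        model = AudioModel(material=material,
                           resonant_hz=800.0 if material == "wood" else 2500.0,
                           decay_s=0.3 if material == "wood" else 1.2)
        stored_models[obj_id] = model
    return synthesize(collision_speed, model)
```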
-
Publication Number: US20240284138A1
Publication Date: 2024-08-22
Application Number: US18649778
Application Date: 2024-04-29
Applicant: Magic Leap, Inc.
Inventor: Colby Nelson LEIDER , Justin Dan MATHEW , Michael Z. LAND , Blaine Ivin WOOD , Jung-Suk LEE , Anastasia Andreyevna TAJIK , Jean-Marc JOT
IPC: H04S7/00 , A63F13/285 , A63F13/577 , G06F3/16 , G06T19/00
CPC classification number: H04S7/303 , A63F13/285 , A63F13/577 , G06F3/165 , G06T19/006 , A63F2300/8082
Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device. In accordance with a determination that the one or more audio models does not comprise an audio model corresponding to the first object, an acoustic property of the first object can be determined, a custom audio model based on the acoustic property of the first object can be generated, an audio signal can be synthesized, wherein the audio signal is based on the collision and the custom audio model, and the audio signal can be presented, via a speaker of a head-wearable device, to a user.
-
Publication Number: US20220038840A1
Publication Date: 2022-02-03
Application Number: US17401090
Application Date: 2021-08-12
Applicant: Magic Leap, Inc.
Inventor: Remi Samuel AUDFRAY , Jean-Marc JOT , Samuel Charles DICKER , Mark Brandon HERTENSTEINER , Justin Dan MATHEW , Anastasia Andreyevna TAJIK , Nicholas John LaMARTINA
Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. An acoustic axis corresponding to the audio signal is determined. For each of a respective left and right ear of the user, an angle between the acoustic axis and the respective ear is determined. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. The virtual speaker array includes a plurality of virtual speaker positions, each virtual speaker position of the plurality located on the surface of a sphere concentric with the user's head, the sphere having a first radius. For each of the respective left and right ear of the user, a head-related transfer function (HRTF) corresponding to the virtual speaker position and to the respective ear is determined; a source radiation filter is determined based on the determined angle; the audio signal is processed to generate an output audio signal for the respective ear; and the output audio signal is presented to the respective ear of the user via one or more speakers associated with the wearable head device. Processing the audio signal includes applying the HRTF and the source radiation filter to the audio signal.
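As a rough illustration of the per-ear processing described above (angle to the acoustic axis, virtual speaker position collinear with the source and the ear, HRTF plus source-radiation filter), here is a minimal sketch. `hrtf_for` and `radiation_gain` are hypothetical stand-ins for the HRTF lookup and the radiation filter, and the geometry assumes the head center is at the origin; none of this is taken from the patent itself.

```python
# Hedged sketch of per-ear rendering: all callables passed in are hypothetical.
import numpy as np

def render_for_ear(signal, source, acoustic_axis, ear, radius,
                   hrtf_for, radiation_gain):
    # 1. Angle between the source's acoustic axis and the direction to this ear.
    to_ear = ear - source
    cos_a = np.dot(acoustic_axis, to_ear) / (
        np.linalg.norm(acoustic_axis) * np.linalg.norm(to_ear))
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))

    # 2. Virtual speaker position: where the ray from the ear through the
    #    source meets the head-centered sphere of the given radius.
    d = (source - ear) / np.linalg.norm(source - ear)
    b, c = 2.0 * np.dot(ear, d), np.dot(ear, ear) - radius ** 2
    t = (-b + np.sqrt(b * b - 4.0 * c)) / 2.0  # positive root: ear is inside the sphere
    speaker_pos = ear + t * d

    # 3. Apply the HRTF for that position/ear and the angle-dependent
    #    source-radiation filter (modelled here as a simple gain).
    return np.convolve(signal, hrtf_for(speaker_pos, ear)) * radiation_gain(angle)
```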
-
Publication Number: US20240414492A1
Publication Date: 2024-12-12
Application Number: US18703302
Application Date: 2022-10-18
Applicant: Magic Leap, Inc.
Inventor: Mark Brandon HERTENSTEINER , Justin Dan MATHEW , Remi Samuel AUDFRAY , Jean-Marc JOT , Benjamin Thomas VONDERSAAR , Michael Z. LAND
IPC: H04S7/00 , G06F3/16 , G06T19/00 , G10L21/0208
Abstract: This disclosure relates in general to augmented reality (AR), mixed reality (MR), or extended reality (XR) environmental mapping. Specifically, this disclosure relates to AR, MR, or XR audio mapping in an AR, MR, or XR environment. In some embodiments, the disclosed systems and methods allow the environment to be mapped based on a recording. In some embodiments, the audio mapping information is associated to voxels located in the environment.
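Below is a minimal sketch of how audio-mapping information might be keyed to voxels, as the abstract describes. The voxel size and the choice of a reverberation-time (RT60) estimate as the mapped quantity are illustrative assumptions, not details from the patent.

```python
# Hedged sketch: associate per-voxel acoustic estimates (derived from a
# recording) with quantized positions in the environment. Voxel size and the
# RT60 quantity are assumptions for illustration only.
from collections import defaultdict

VOXEL_SIZE_M = 0.5  # assumed voxel edge length

def voxel_key(position_m):
    """Quantize a world-space (x, y, z) position in meters to a voxel index."""
    return tuple(int(c // VOXEL_SIZE_M) for c in position_m)

class AudioMap:
    """Accumulates per-voxel acoustic estimates derived from a recording."""
    def __init__(self):
        self._sums = defaultdict(lambda: [0.0, 0])  # voxel -> [sum, count]

    def add_observation(self, listener_position_m, rt60_estimate_s):
        entry = self._sums[voxel_key(listener_position_m)]
        entry[0] += rt60_estimate_s
        entry[1] += 1

    def reverb_time(self, position_m):
        total, count = self._sums.get(voxel_key(position_m), (0.0, 0))
        return total / count if count else None

# Usage: feed estimates extracted from a recording as the device moves.
audio_map = AudioMap()
audio_map.add_observation((1.2, 0.0, 3.4), rt60_estimate_s=0.45)
print(audio_map.reverb_time((1.3, 0.1, 3.2)))  # -> 0.45 (same voxel)
```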
-
Publication Number: US20240357311A1
Publication Date: 2024-10-24
Application Number: US18761089
Application Date: 2024-07-01
Applicant: Magic Leap, Inc.
Inventor: Remi Samuel AUDFRAY , Jean-Marc JOT , Samuel Charles DICKER , Mark Brandon HERTENSTEINER , Justin Dan MATHEW , Anastasia Andreyevna TAJIK , Nicholas John LAMARTINA
CPC classification number: H04S7/304 , H04R5/033 , H04R5/04 , H04S3/008 , H04S2400/01 , H04S2400/11 , H04S2420/01
Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. For each of the respective left and right ear of the user, a head-related transfer function (HRTF) corresponding to the virtual speaker position and to the respective ear is determined; and the output audio signal is presented to the respective ear of the user via one or more speakers associated with the wearable head device. Processing the audio signal includes applying the HRTF to the audio signal.
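The abstract does not say how the HRTF "corresponding to the virtual speaker position" is selected from the array; one plausible reading, shown below purely as an assumption, is nearest-neighbor selection over the array's sampled positions.

```python
# Assumed selection rule (not stated in the abstract): pick the HRTF of the
# virtual speaker array position closest to the computed speaker position.
import numpy as np

def nearest_hrtf(speaker_pos, array_positions, hrtfs):
    """array_positions: (N, 3) points on the sphere; hrtfs: N impulse responses."""
    dists = np.linalg.norm(array_positions - np.asarray(speaker_pos), axis=1)
    return hrtfs[int(np.argmin(dists))]

# Usage with a placeholder array of 64 positions on a unit sphere.
rng = np.random.default_rng(0)
positions = rng.normal(size=(64, 3))
positions /= np.linalg.norm(positions, axis=1, keepdims=True)
hrtf_bank = [rng.normal(size=128) * 0.01 for _ in range(64)]  # placeholder IRs
ir = nearest_hrtf([0.0, 0.0, 1.0], positions, hrtf_bank)
```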
-
Publication Number: US20230396947A1
Publication Date: 2023-12-07
Application Number: US18451794
Application Date: 2023-08-17
Applicant: Magic Leap, Inc.
Inventor: Remi Samuel AUDFRAY , Jean-Marc JOT , Samuel Charles DICKER , Mark Brandon HERTENSTEINER , Justin Dan MATHEW , Anastasia Andreyevna TAJIK , Nicholas John LaMARTINA
CPC classification number: H04S7/304 , H04R5/033 , H04R5/04 , H04S3/008 , H04S2400/11 , H04S2400/01 , H04S2420/01
Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. For each of the respective left and right ear of the user, a head-related transfer function (HRTF) corresponding to the virtual speaker position and to the respective ear is determined; and the output audio signal is presented to the respective ear of the user via one or more speakers associated with the wearable head device. Processing the audio signal includes applying the HRTF to the audio signal.
-
Publication Number: US20230094733A1
Publication Date: 2023-03-30
Application Number: US18061367
Application Date: 2022-12-02
Applicant: Magic Leap, Inc.
Inventor: Remi Samuel AUDFRAY , Jean-Marc JOT , Samuel Charles DICKER , Mark Brandon HERTENSTEINER , Justin Dan MATHEW , Anastasia Andreyevna TAJIK , Nicholas John LaMARTINA
Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. For each of the respective left and right ear of the user, a head-related transfer function (HRTF) corresponding to the virtual speaker position and to the respective ear is determined; and the output audio signal is presented to the respective ear of the user via one or more speakers associated with the wearable head device. Processing the audio signal includes applying the HRTF to the audio signal.
-
Publication Number: US20210195360A1
Publication Date: 2021-06-24
Application Number: US17127204
Application Date: 2020-12-18
Applicant: Magic Leap, Inc.
Inventor: Colby Nelson LEIDER , Justin Dan MATHEW , Michael Z. LAND , Blaine Ivin WOOD , Jung-Suk LEE , Anastasia Andreyevna TAJIK , Jean-Marc JOT
Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device. In accordance with a determination that the one or more audio models does not comprise an audio model corresponding to the first object, an acoustic property of the first object can be determined, a custom audio model based on the acoustic property of the first object can be generated, an audio signal can be synthesized, wherein the audio signal is based on the collision and the custom audio model, and the audio signal can be presented, via a speaker of a head-wearable device, to a user.
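The abstracts leave the synthesis step itself open. One common technique for collision and impact sounds, shown here only as an illustrative assumption, is modal synthesis: a sum of exponentially decaying sinusoids whose frequencies, decay rates, and gains would come from the stored or custom audio model.

```python
# Illustrative only: modal synthesis of an impact sound. The mode table and
# sample rate are made up; the patent does not specify this technique.
import numpy as np

SAMPLE_RATE = 48_000

def modal_impact(modes, amplitude=1.0, duration_s=1.0):
    """modes: list of (frequency_hz, decay_rate_per_s, relative_gain) tuples."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    signal = np.zeros_like(t)
    for freq, decay, gain in modes:
        signal += gain * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)
    return amplitude * signal / np.max(np.abs(signal))

# Hypothetical "wood-like" model scaled by collision strength.
wood_modes = [(220.0, 8.0, 1.0), (560.0, 12.0, 0.5), (1180.0, 20.0, 0.25)]
impact = modal_impact(wood_modes, amplitude=0.8, duration_s=0.6)
```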
-