-
Publication Number: US12131500B2
Publication Date: 2024-10-29
Application Number: US18455459
Filing Date: 2023-08-24
Applicant: Magic Leap, Inc.
Inventor: Michael Janusz Woods , Andrew Rabinovich
IPC: G06T7/70 , G06F3/01 , G06F3/0346 , G06F3/0481 , G06F3/04815 , G06F3/0482 , G06F3/0483 , G06F3/0484 , G06F3/04842 , G06T7/277 , G06T7/73 , G06T19/00 , G06T3/18
CPC classification number: G06T7/74 , G06F3/011 , G06F3/012 , G06F3/013 , G06F3/014 , G06F3/0346 , G06F3/0481 , G06F3/04815 , G06F3/0482 , G06F3/0483 , G06F3/0484 , G06F3/04842 , G06T7/277 , G06T7/73 , G06T19/006 , G06T3/18 , G06T2207/10016 , G06T2207/10024
Abstract: Systems and methods for reducing error in noisy data received from a high-frequency sensor by fusing it with data received from a low-frequency sensor. A first set of dynamic inputs is collected from the high-frequency sensor and a correction input point is collected from the low-frequency sensor; a propagation path of a second set of dynamic inputs from the high-frequency sensor is then adjusted based on the correction input point, either by full translation to the correction input point or by a dampened approach toward it.
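The correction described above can be pictured as shifting a subsequent run of high-frequency samples either all the way onto the low-frequency fix or only part of the way toward it. A minimal Python sketch of that idea follows; the function name, the `dampening` parameter, and the example values are illustrative, not taken from the patent.

```python
import numpy as np

def adjust_propagation_path(second_set, correction_point, dampening=1.0):
    """Shift a subsequent set of high-frequency samples based on a
    low-frequency correction point.

    dampening=1.0 applies a full translation so the path starts at the
    correction point; values between 0 and 1 give a dampened approach
    toward it.
    """
    second_set = np.asarray(second_set, dtype=float)
    correction_point = np.asarray(correction_point, dtype=float)

    # Offset between the start of the new high-frequency path and the fix.
    offset = correction_point - second_set[0]

    # Translate the whole path by all (or a fraction) of that offset.
    return second_set + dampening * offset

# Example: the next three high-frequency position estimates and one
# low-frequency fix (made-up 2D values).
next_inputs = [[0.21, 0.05], [0.33, 0.07], [0.44, 0.10]]
fix = [0.18, 0.00]
print(adjust_propagation_path(next_inputs, fix, dampening=1.0))  # full translation
print(adjust_propagation_path(next_inputs, fix, dampening=0.3))  # dampened approach
```

A dampening factor of 1.0 snaps the path onto the correction point, while smaller values trade responsiveness for smoothness when the low-frequency sensor is itself noisy.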
-
Publication Number: US11775058B2
Publication Date: 2023-10-03
Application Number: US17129669
Filing Date: 2020-12-21
Applicant: Magic Leap, Inc.
Inventor: Vijay Badrinarayanan , Zhengyang Wu , Srivignesh Rajendran , Andrew Rabinovich
CPC classification number: G06F3/013 , G06N3/08 , G06T7/0012 , G06T7/11 , G06V10/764 , G06V10/82 , G06V40/18 , G06V40/19 , G06T2207/20081 , G06T2207/20084 , G06T2207/30041
Abstract: Systems and methods for estimating a gaze vector of an eye using a trained neural network. An input image of the eye may be received from a camera. The input image may be provided to the neural network. Network output data may be generated using the neural network. The network output data may include two-dimensional (2D) pupil data, eye segmentation data, and/or cornea center data. The gaze vector may be computed based on the network output data. The neural network may be previously trained by providing a training input image to the neural network, generating training network output data, receiving ground-truth (GT) data, computing error data based on a difference between the training network output data and the GT data, and modifying the neural network based on the error data.
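The abstract does not spell out how the gaze vector is derived from the network outputs; one common convention is to take the optical axis running from the cornea center of curvature through the pupil center. The sketch below assumes that convention and made-up 3D coordinates; it is an illustration, not the claimed method.

```python
import numpy as np

def gaze_vector_from_centers(cornea_center, pupil_center):
    """Unit gaze direction along the optical axis, pointing from the
    cornea center of curvature toward the pupil center (both expressed
    in the same 3D coordinate frame)."""
    cornea_center = np.asarray(cornea_center, dtype=float)
    pupil_center = np.asarray(pupil_center, dtype=float)
    axis = pupil_center - cornea_center
    return axis / np.linalg.norm(axis)

# Example with hypothetical eye-frame coordinates (millimetres).
print(gaze_vector_from_centers([0.0, 0.0, 0.0], [0.5, -0.2, 4.6]))
```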
-
Publication Number: US11410392B2
Publication Date: 2022-08-09
Application Number: US16986935
Filing Date: 2020-08-06
Applicant: Magic Leap, Inc.
Inventor: Eric C. Browy , Michael Janusz Woods , Andrew Rabinovich
IPC: G06T19/00 , G02B27/01 , G06F3/01 , G06F1/16 , G06F3/03 , G06F40/58 , G06V10/10 , G06V20/20 , G06V40/20 , G06F3/16 , G06T7/70 , G06V30/10
Abstract: A sensory eyewear system for a mixed reality device can facilitate a user's interactions with other people or with the environment. As one example, the sensory eyewear system can recognize and interpret a sign language, and present the translated information to a user of the mixed reality device. The wearable system can also recognize text in the user's environment, modify the text (e.g., by changing the content or display characteristics of the text), and render the modified text to occlude the original text.
-
Publication Number: US11315325B2
Publication Date: 2022-04-26
Application Number: US16596610
Filing Date: 2019-10-08
Applicant: Magic Leap, Inc.
Inventor: Andrew Rabinovich , John Monos
Abstract: Examples of the disclosure describe systems and methods for generating and displaying a virtual companion. In an example method, a first input from an environment of a user is received at a first time via a first sensor on a head-wearable device. An occurrence of an event in the environment is determined based on the first input. A second input from the user is received via a second sensor on the head-wearable device, and an emotional reaction of the user is identified based on the second input. An association is determined between the emotional reaction and the event. A view of the environment is presented at a second time later than the first time via a see-through display of the head-wearable device. A stimulus is presented at the second time via a virtual companion displayed via the see-through display, wherein the stimulus is determined based on the determined association between the emotional reaction and the event.
-
Publication Number: US11210808B2
Publication Date: 2021-12-28
Application Number: US16833093
Filing Date: 2020-03-27
Applicant: Magic Leap, Inc.
Inventor: Michael Janusz Woods , Andrew Rabinovich
IPC: G06T7/70 , G06T7/73 , G06T19/00 , G06T7/277 , G06F3/01 , G06F3/0483 , G06F3/0482 , G06F3/0484 , G06F3/0481 , G06F3/0346 , G06T3/00
Abstract: Systems and methods for reducing error in noisy data received from a high-frequency sensor by fusing it with data received from a low-frequency sensor. A first set of dynamic inputs is collected from the high-frequency sensor and a correction input point is collected from the low-frequency sensor; a propagation path of a second set of dynamic inputs from the high-frequency sensor is then adjusted based on the correction input point, either by full translation to the correction input point or by a dampened approach toward it.
-
Publication Number: US11128854B2
Publication Date: 2021-09-21
Application Number: US16352522
Filing Date: 2019-03-13
Applicant: Magic Leap, Inc.
Inventor: Vijay Badrinarayanan , Zhao Chen , Andrew Rabinovich , Elad Joseph
IPC: G06K9/00 , H04N13/271 , H04N13/128 , G06T7/593 , H04N13/00 , G06T19/00
Abstract: Systems and methods are disclosed for computing depth maps. One method includes capturing, using a camera, a camera image of a runtime scene. The method may also include analyzing the camera image of the runtime scene to determine a plurality of target sampling points at which to capture depth of the runtime scene. The method may further include adjusting a setting associated with a low-density depth sensor based on the plurality of target sampling points. The method may further include capturing, using the low-density depth sensor, a low-density depth map of the runtime scene at the plurality of target sampling points. The method may further include generating a computed depth map of the runtime scene based on the camera image of the runtime scene and the low-density depth map of the runtime scene.
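The claimed method generates the computed depth map from both the camera image and the sparse samples, presumably with a learned model; as a stand-in, the sketch below shows only the sparse-to-dense step, filling a full-resolution map from a handful of sampled depths by interpolation. Function and variable names are illustrative, not from the patent.

```python
import numpy as np
from scipy.interpolate import griddata

def densify_sparse_depth(sample_points, sample_depths, height, width):
    """Fill a dense depth map from sparse samples by interpolation.

    sample_points: (N, 2) array of (row, col) sampling locations.
    sample_depths: (N,) depths measured at those locations.
    """
    rows, cols = np.mgrid[0:height, 0:width]
    dense = griddata(sample_points, sample_depths, (rows, cols),
                     method="linear")
    # Fall back to nearest-neighbour values outside the convex hull
    # of the sampling points, where linear interpolation returns NaN.
    nearest = griddata(sample_points, sample_depths, (rows, cols),
                       method="nearest")
    return np.where(np.isnan(dense), nearest, dense)

# Example: five sparse samples densified to a 4x6 depth map.
pts = np.array([[0, 0], [0, 5], [3, 0], [3, 5], [2, 2]])
depths = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
print(densify_sparse_depth(pts, depths, 4, 6))
```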
-
Publication Number: US11062209B2
Publication Date: 2021-07-13
Application Number: US16588505
Filing Date: 2019-09-30
Applicant: Magic Leap, Inc.
Inventor: Daniel DeTone , Tomasz Malisiewicz , Andrew Rabinovich
IPC: G06K9/00 , G06N3/08 , G06T7/30 , G06T3/00 , G06T7/12 , G06T7/174 , G06F17/16 , G06K9/46 , G06T3/40
Abstract: A method for training a neural network includes receiving a plurality of images and, for each individual image of the plurality of images, generating a training triplet including a subset of the individual image, a subset of a transformed image, and a homography based on the subset of the individual image and the subset of the transformed image. The method also includes, for each individual image, generating, by the neural network, an estimated homography based on the subset of the individual image and the subset of the transformed image, comparing the estimated homography to the homography, and modifying the neural network based on the comparison.
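A sketch of the training-triplet generation described in the abstract: crop a patch from an image, perturb the patch corners to define a homography, warp the image, and crop the corresponding patch from the warped copy. This follows the widely published deep-homography data-generation recipe; the parameter names and values below are illustrative.

```python
import numpy as np
import cv2

def make_training_triplet(image, patch_size=128, max_shift=32, rng=None):
    """Build (patch_a, patch_b, H): a patch of the image, the matching
    patch from a warped copy of the image, and the homography relating
    the two."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    x = rng.integers(max_shift, w - patch_size - max_shift)
    y = rng.integers(max_shift, h - patch_size - max_shift)

    corners = np.float32([[x, y], [x + patch_size, y],
                          [x + patch_size, y + patch_size],
                          [x, y + patch_size]])
    perturbed = corners + rng.uniform(-max_shift, max_shift,
                                      size=(4, 2)).astype(np.float32)

    # Homography mapping the original corners onto the perturbed corners.
    H = cv2.getPerspectiveTransform(corners, perturbed)

    # Warp the full image with the inverse so the content under the
    # perturbed quadrilateral lands on the original patch location.
    warped = cv2.warpPerspective(image, np.linalg.inv(H), (w, h))

    patch_a = image[y:y + patch_size, x:x + patch_size]
    patch_b = warped[y:y + patch_size, x:x + patch_size]
    return patch_a, patch_b, H

# Example on a random grayscale image.
img = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
a, b, H = make_training_triplet(img)
print(a.shape, b.shape, H.shape)   # (128, 128) (128, 128) (3, 3)
```

During training, the network would receive the two patches and regress an estimate of H (or the corner offsets), which is then compared against the ground-truth homography produced here.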
-
Publication Number: US20210182554A1
Publication Date: 2021-06-17
Application Number: US17129669
Filing Date: 2020-12-21
Applicant: Magic Leap, Inc.
Inventor: Vijay Badrinarayanan , Zhengyang Wu , Srivignesh Rajendran , Andrew Rabinovich
Abstract: Systems and methods for estimating a gaze vector of an eye using a trained neural network. An input image of the eye may be received from a camera. The input image may be provided to the neural network. Network output data may be generated using the neural network. The network output data may include two-dimensional (2D) pupil data, eye segmentation data, and/or cornea center data. The gaze vector may be computed based on the network output data. The neural network may be previously trained by providing a training input image to the neural network, generating training network output data, receiving ground-truth (GT) data, computing error data based on a difference between the training network output data and the GT data, and modifying the neural network based on the error data.
-
Publication Number: US20200234051A1
Publication Date: 2020-07-23
Application Number: US16844812
Filing Date: 2020-04-09
Applicant: Magic Leap, Inc.
Inventor: Chen-Yu Lee , Vijay Badrinarayanan , Tomasz Jan Malisiewicz , Andrew Rabinovich
Abstract: Systems and methods for estimating a layout of a room are disclosed. The room layout can comprise the location of a floor, one or more walls, and a ceiling. In one aspect, a neural network can analyze an image of a portion of a room to determine the room layout. The neural network can comprise a convolutional neural network having an encoder sub-network, a decoder sub-network, and a side sub-network. The neural network can determine a three-dimensional room layout using two-dimensional ordered keypoints associated with a room type. The room layout can be used in applications such as augmented or mixed reality, robotics, autonomous indoor navigation, etc.
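A toy sketch of the architecture outlined above: a shared encoder feeds a decoder that emits per-keypoint heatmaps and a side sub-network that classifies the room type, from which the ordered 2D keypoints for that type can be read off. Channel counts, the number of keypoints, and the number of room types below are illustrative placeholders, not the patented network.

```python
import torch
import torch.nn as nn

class RoomLayoutNet(nn.Module):
    """Minimal encoder-decoder with a side branch: the decoder predicts
    one heatmap per ordered keypoint and the side sub-network predicts a
    room-type class."""

    def __init__(self, num_keypoints=8, num_room_types=11):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_keypoints, 4, stride=2, padding=1),
        )
        self.side = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_room_types),
        )

    def forward(self, x):
        features = self.encoder(x)
        keypoint_heatmaps = self.decoder(features)   # one heatmap per keypoint
        room_type_logits = self.side(features)       # which layout template
        return keypoint_heatmaps, room_type_logits

# Example forward pass on a dummy 3x240x320 image batch.
heatmaps, room_type = RoomLayoutNet()(torch.zeros(1, 3, 240, 320))
print(heatmaps.shape, room_type.shape)
```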
-
Publication Number: US20200226785A1
Publication Date: 2020-07-16
Application Number: US16833093
Filing Date: 2020-03-27
Applicant: Magic Leap, Inc.
Inventor: Michael Janusz Woods , Andrew Rabinovich
IPC: G06T7/73 , G06T19/00 , G06T7/277 , G06F3/01 , G06F3/0483 , G06F3/0482 , G06F3/0484 , G06F3/0481 , G06F3/0346
Abstract: Systems and methods for reducing error in noisy data received from a high-frequency sensor by fusing it with data received from a low-frequency sensor. A first set of dynamic inputs is collected from the high-frequency sensor and a correction input point is collected from the low-frequency sensor; a propagation path of a second set of dynamic inputs from the high-frequency sensor is then adjusted based on the correction input point, either by full translation to the correction input point or by a dampened approach toward it.