-
Publication No.: US11775058B2
Publication Date: 2023-10-03
Application No.: US17129669
Filing Date: 2020-12-21
Applicant: Magic Leap, Inc.
Inventor: Vijay Badrinarayanan, Zhengyang Wu, Srivignesh Rajendran, Andrew Rabinovich
CPC classification number: G06F3/013, G06N3/08, G06T7/0012, G06T7/11, G06V10/764, G06V10/82, G06V40/18, G06V40/19, G06T2207/20081, G06T2207/20084, G06T2207/30041
Abstract: Systems and methods for estimating a gaze vector of an eye using a trained neural network. An input image of the eye may be received from a camera. The input image may be provided to the neural network. Network output data may be generated using the neural network. The network output data may include two-dimensional (2D) pupil data, eye segmentation data, and/or cornea center data. The gaze vector may be computed based on the network output data. The neural network may be previously trained by providing a training input image to the neural network, generating training network output data, receiving ground-truth (GT) data, computing error data based on a difference between the training network output data and the GT data, and modifying the neural network based on the error data.
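The training loop described in the abstract (forward pass, comparison against ground-truth data, error-driven modification of the network) can be sketched with a toy model. This is a minimal illustration only, assuming a single linear layer and synthetic data as stand-ins for the patented network and eye images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the network: a single linear layer mapping
# 16 flattened eye-image features to a 3D gaze vector.
W = rng.normal(scale=0.01, size=(3, 16))

def network(x, W):
    """Forward pass: produce an (unnormalized) gaze vector."""
    return W @ x

def train_step(x, gt_gaze, W, lr=0.01):
    """One iteration of the loop in the abstract: generate output,
    compute error against ground-truth (GT) data, modify the network."""
    pred = network(x, W)
    error = pred - gt_gaze               # error data
    grad = np.outer(error, x)            # dL/dW for squared error
    return W - lr * grad, float(error @ error)

# Synthetic "training image" features and a GT gaze vector.
x = rng.normal(size=16)
gt = np.array([0.0, 0.1, -1.0])

losses = []
for _ in range(200):
    W, loss = train_step(x, gt, W)
    losses.append(loss)

# At inference the estimate is normalized to a unit gaze vector.
gaze = network(x, W)
gaze = gaze / np.linalg.norm(gaze)
```

The loop drives the squared error down over iterations; the final normalization step reflects that a gaze direction is conventionally reported as a unit vector.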
-
Publication No.: US20210182554A1
Publication Date: 2021-06-17
Application No.: US17129669
Filing Date: 2020-12-21
Applicant: Magic Leap, Inc.
Inventor: Vijay Badrinarayanan, Zhengyang Wu, Srivignesh Rajendran, Andrew Rabinovich
Abstract: Systems and methods for estimating a gaze vector of an eye using a trained neural network. An input image of the eye may be received from a camera. The input image may be provided to the neural network. Network output data may be generated using the neural network. The network output data may include two-dimensional (2D) pupil data, eye segmentation data, and/or cornea center data. The gaze vector may be computed based on the network output data. The neural network may be previously trained by providing a training input image to the neural network, generating training network output data, receiving ground-truth (GT) data, computing error data based on a difference between the training network output data and the GT data, and modifying the neural network based on the error data.
-
Publication No.: US20170262737A1
Publication Date: 2017-09-14
Application No.: US15457990
Filing Date: 2017-03-13
Applicant: Magic Leap, Inc.
Inventor: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
CPC classification number: G06K9/66, G06K9/4628, G06K9/6267, G06K9/6272
Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance are added only where required.
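The greedy step the abstract describes — adding capacity only at the information-gain bottleneck rather than uniformly deepening the stack — can be illustrated with a toy sketch. The layer widths, gain scores, and function names below are purely illustrative assumptions, not the patented method:

```python
# Toy sketch: given an information-gain score per layer, grow only the
# bottleneck (lowest-gain) layer instead of deepening the whole network.

def find_bottleneck(gains):
    """Return the index of the layer with the smallest information gain."""
    return min(range(len(gains)), key=lambda i: gains[i])

def grow_at_bottleneck(widths, gains, extra_units=16):
    """One greedy step: widen only the bottleneck layer."""
    i = find_bottleneck(gains)
    new_widths = list(widths)
    new_widths[i] += extra_units
    return new_widths, i

# Hypothetical conv/fc stack widths and measured per-layer gains.
widths = [32, 64, 64, 128]
gains = [0.9, 0.4, 0.1, 0.7]     # layer index 2 is the bottleneck

widths, grown = grow_at_bottleneck(widths, gains)
```

In a full system the gain scores would be estimated from training data; the point of the sketch is only that capacity is added selectively where the scores indicate a bottleneck.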
-
Publication No.: US11853894B2
Publication Date: 2023-12-26
Application No.: US17344758
Filing Date: 2021-06-10
Applicant: Magic Leap, Inc.
Inventor: Andrew Rabinovich, Vijay Badrinarayanan, Srivignesh Rajendran, Chen-Yu Lee
CPC classification number: G06N3/084, G06F18/217, G06N3/04, G06N3/044, G06N3/047
Abstract: Methods and systems for meta-learning are described for automating learning of child tasks with a single neural network. The order in which tasks are learned by the neural network can affect performance of the network, and the meta-learning approach can use a task-level curriculum for multi-task training. The task-level curriculum can be learned by monitoring a trajectory of loss functions during training. The meta-learning approach can learn to adapt task loss balancing weights in the course of training to get improved performance on multiple tasks on real world datasets. Advantageously, learning to dynamically balance weights among different task losses can lead to superior performance over the use of static weights determined by expensive random searches or heuristics. Embodiments of the meta-learning approach can be used for computer vision tasks or natural language processing tasks, and the trained neural networks can be used by augmented or virtual reality devices.
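The dynamic loss-weight balancing the abstract contrasts with static weights can be sketched with one simple scheme: adjust each task's weight according to how fast its loss is falling. This rule and all names are illustrative assumptions, not the learned meta-balancing of the patent:

```python
import math

def update_task_weights(weights, losses, prev_losses, lr=0.5):
    """Nudge each task weight by its relative training rate: a task whose
    loss ratio (current / previous) is high is learning slowly and gets
    more weight. A hand-written stand-in for learned loss balancing."""
    ratios = [l / p for l, p in zip(losses, prev_losses)]
    mean = sum(ratios) / len(ratios)
    new = [w * math.exp(lr * (r - mean)) for w, r in zip(weights, ratios)]
    total = sum(new)
    return [w * len(new) / total for w in new]   # renormalize: sum == n

weights = [1.0, 1.0, 1.0]
prev = [1.0, 1.0, 1.0]
cur = [0.5, 0.9, 0.7]            # task 0 is learning fastest

weights = update_task_weights(weights, cur, prev)
```

After the update, the slowest-learning task (index 1) carries the largest weight while the total weight budget stays fixed, which is the qualitative behavior the abstract attributes to dynamic balancing.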
-
Publication No.: US11803231B2
Publication Date: 2023-10-31
Application No.: US17234787
Filing Date: 2021-04-19
Applicant: Magic Leap, Inc.
Inventor: Daniel Jürg Donatsch, Srivignesh Rajendran
CPC classification number: G06F3/011, G06F3/012, G06F18/214, G06F18/217, G06F18/2163, G06N3/04, G06N3/08, G06V40/176, G06V40/193, G06V40/20, G06F3/013
Abstract: Techniques are disclosed for training a machine learning model to predict user expression. A plurality of images are received, each of the plurality of images containing at least a portion of a user's face. A plurality of values for a movement metric are calculated based on the plurality of images, each of the plurality of values for the movement metric being indicative of movement of the user's face. A plurality of values for an expression unit are calculated based on the plurality of values for the movement metric, each of the plurality of values for the expression unit corresponding to an extent to which the user's face is producing the expression unit. The machine learning model is trained using the plurality of images and the plurality of values for the expression unit.
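The label-generation pipeline in the abstract (images → movement metric → expression-unit values) can be sketched as follows. The particular metric (mean absolute pixel change) and the min-max mapping are illustrative assumptions; the patent does not specify these choices here:

```python
import numpy as np

rng = np.random.default_rng(1)

def movement_metric(prev_frame, frame):
    """One movement value per frame pair: mean absolute pixel change
    in a (hypothetical) face crop."""
    return float(np.mean(np.abs(frame - prev_frame)))

def expression_values(movements):
    """Map movement values to [0, 1] expression-unit activations by
    min-max normalization over the clip -- a simple stand-in for the
    'extent to which the face is producing the expression unit'."""
    lo, hi = min(movements), max(movements)
    if hi == lo:
        return [0.0] * len(movements)
    return [(m - lo) / (hi - lo) for m in movements]

# Synthetic 8x8 "face crops": a still frame, then increasing motion.
frames = [np.zeros((8, 8))]
for amp in (0.0, 0.2, 0.5, 1.0):
    frames.append(frames[-1] + amp * rng.normal(size=(8, 8)))

moves = [movement_metric(a, b) for a, b in zip(frames, frames[1:])]
labels = expression_values(moves)
```

The resulting per-frame labels would then be paired with the images to train the expression model, as the abstract's final step describes.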
-
Publication No.: US20210182636A1
Publication Date: 2021-06-17
Application No.: US17183021
Filing Date: 2021-02-23
Applicant: MAGIC LEAP, INC.
Inventor: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance are added only where required.
-
Publication No.: US11657286B2
Publication Date: 2023-05-23
Application No.: US17183021
Filing Date: 2021-02-23
Applicant: MAGIC LEAP, INC.
Inventor: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
IPC: G06K9/00, G06V30/194, G06N3/082, G06V10/44, G06F18/24, G06F18/2413, G06N3/045
CPC classification number: G06V30/194, G06F18/24, G06F18/24137, G06N3/045, G06N3/082, G06V10/454
Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance are added only where required.
-
Publication No.: US20220244781A1
Publication Date: 2022-08-04
Application No.: US17674724
Filing Date: 2022-02-17
Applicant: Magic Leap, Inc.
Inventor: Zhengyang Wu, Srivignesh Rajendran, Tarrence van As, Joelle Zimmermann, Vijay Badrinarayanan, Andrew Rabinovich
Abstract: Techniques related to the computation of gaze vectors of users of wearable devices are disclosed. A neural network may be trained through first and second training steps. The neural network may include a set of feature encoding layers and a plurality of sets of task-specific layers that each operate on an output of the set of feature encoding layers. During the first training step, a first image of a first eye may be provided to the neural network, eye segmentation data may be generated using the neural network, and the set of feature encoding layers may be trained. During the second training step, a second image of a second eye may be provided to the neural network, network output data may be generated using the neural network, and the plurality of sets of task-specific layers may be trained.
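The two-step schedule in the abstract — first train the shared feature-encoding layers on segmentation, then train the task-specific heads — can be sketched with a toy parameter table. The dict layout and head names are illustrative assumptions:

```python
# Toy sketch of the two-step training schedule: step 1 trains the shared
# feature encoder (via the segmentation task); step 2 freezes it and
# trains only the task-specific heads. "updates" counts training steps
# actually applied to each component.

params = {
    "encoder": {"trainable": True, "updates": 0},
    "seg_head": {"trainable": True, "updates": 0},
    "pupil_head": {"trainable": False, "updates": 0},
    "cornea_head": {"trainable": False, "updates": 0},
}

def apply_updates(params, n_steps):
    """Apply n_steps of (mock) gradient updates to trainable components."""
    for p in params.values():
        if p["trainable"]:
            p["updates"] += n_steps

# Step 1: the segmentation task trains the encoder (plus its head).
apply_updates(params, 100)

# Step 2: freeze the encoder, enable every task-specific head.
params["encoder"]["trainable"] = False
for head in ("seg_head", "pupil_head", "cornea_head"):
    params[head]["trainable"] = True
apply_updates(params, 50)
```

The bookkeeping shows the intended split: the encoder only accumulates step-1 updates, while the pupil and cornea heads are touched only in step 2.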
-
Publication No.: US10255529B2
Publication Date: 2019-04-09
Application No.: US15457990
Filing Date: 2017-03-13
Applicant: Magic Leap, Inc.
Inventor: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacitance are added only where required.
-
Publication No.: US20210406609A1
Publication Date: 2021-12-30
Application No.: US17344758
Filing Date: 2021-06-10
Applicant: Magic Leap, Inc.
Inventor: Andrew Rabinovich, Vijay Badrinarayanan, Srivignesh Rajendran, Chen-Yu Lee
Abstract: Methods and systems for meta-learning are described for automating learning of child tasks with a single neural network. The order in which tasks are learned by the neural network can affect performance of the network, and the meta-learning approach can use a task-level curriculum for multi-task training. The task-level curriculum can be learned by monitoring a trajectory of loss functions during training. The meta-learning approach can learn to adapt task loss balancing weights in the course of training to get improved performance on multiple tasks on real world datasets. Advantageously, learning to dynamically balance weights among different task losses can lead to superior performance over the use of static weights determined by expensive random searches or heuristics. Embodiments of the meta-learning approach can be used for computer vision tasks or natural language processing tasks, and the trained neural networks can be used by augmented or virtual reality devices.