-
Publication No.: WO2020252371A1
Publication Date: 2020-12-17
Application No.: PCT/US2020/037573
Filing Date: 2020-06-12
Applicant: MAGIC LEAP, INC. , CHOUDHARY, Siddharth , RAMNATH, Divya , DONG, Shiyu , MAHENDRAN, Siddarth , KANNAN, Arumugam Kalai , SINGHAL, Prateek , GUPTA, Khushi , SEKHAR, Nitesh , GANGWAR, Manushree
Inventor: CHOUDHARY, Siddharth , RAMNATH, Divya , DONG, Shiyu , MAHENDRAN, Siddarth , KANNAN, Arumugam Kalai , SINGHAL, Prateek , GUPTA, Khushi , SEKHAR, Nitesh , GANGWAR, Manushree
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for scalable three-dimensional (3-D) object recognition in a cross reality system. One of the methods includes maintaining object data specifying objects that have been recognized in a scene. A stream of input images of the scene is received, including a stream of color images and a stream of depth images. A color image is provided as input to an object recognition system. A recognition output that identifies a respective object mask for each object in the color image is received. A synchronization system determines a corresponding depth image for the color image. A 3-D bounding box generation system determines a respective 3-D bounding box for each object that has been recognized in the color image. Data specifying one or more 3-D bounding boxes is received as output from the 3-D bounding box generation system.
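The pipeline in this abstract (2-D object masks on a color image, a synchronized depth image, then a 3-D bounding box per object) can be illustrated with a small geometric sketch: back-project the masked depth pixels through a pinhole camera model and take the axis-aligned extremes. This is only an illustration under assumed intrinsics (fx, fy, cx, cy) and an axis-aligned box, not the implementation described in the application.

import numpy as np

def bbox_3d_from_mask(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels and return an axis-aligned 3-D box.

    depth: (H, W) depth image in metres, synchronized with the color image.
    mask:  (H, W) boolean object mask from the recognition system.
    """
    v, u = np.nonzero(mask & (depth > 0))    # pixel rows/cols inside the mask
    z = depth[v, u]
    x = (u - cx) * z / fx                    # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)        # (N, 3) points in the camera frame
    return pts.min(axis=0), pts.max(axis=0)  # (min corner, max corner)

# Toy usage with a synthetic 4x4 depth map and a 2x2 object mask.
depth = np.full((4, 4), 2.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
print(bbox_3d_from_mask(depth, mask, fx=500.0, fy=500.0, cx=2.0, cy=2.0))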
-
Publication No.: WO2022026603A1
Publication Date: 2022-02-03
Application No.: PCT/US2021/043543
Filing Date: 2021-07-28
Applicant: MAGIC LEAP, INC.
Inventor: MAHENDRAN, Siddharth , BANSAL, Nitin , SEKHAR, Nitesh , GANGWAR, Manushree , GUPTA, Khushi , SINGHAL, Prateek , VAN AS, Tarrence , RAO, Adithya Shricharan Srinivasa
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an object recognition neural network using multiple data sources. One of the methods includes receiving training data that includes a plurality of training images from a first source and images from a second source. A set of training images is obtained from the training data. For each training image in the set, contrast equalization is applied to the training image to generate a modified image. The modified image is processed using the neural network to generate an object recognition output for the modified image. A loss is determined based on errors between, for each training image in the set, the object recognition output for the modified image generated from that training image and the ground-truth annotation for the training image. Parameters of the neural network are updated based on the determined loss.
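A minimal sketch of the training step described above, assuming plain global histogram equalization as the contrast-equalization step, a tiny placeholder classifier as the object recognition network, and cross-entropy as the loss; none of these specific choices come from the application itself.

import numpy as np
import torch
import torch.nn as nn

def equalize_contrast(img):
    """Global histogram equalization on a uint8 grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return (cdf[img] * 255).astype(np.uint8)

# Placeholder recognition network: 32x32 grayscale -> 10 object classes.
net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10))
optimizer = torch.optim.SGD(net.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    """images: list of (32, 32) uint8 arrays drawn from both sources."""
    modified = np.stack([equalize_contrast(im) for im in images]) / 255.0
    inputs = torch.tensor(modified, dtype=torch.float32)
    outputs = net(inputs)                          # recognition outputs
    loss = loss_fn(outputs, torch.tensor(labels))  # error vs. ground-truth labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                               # update network parameters
    return loss.item()

batch = [np.random.randint(0, 256, (32, 32), dtype=np.uint8) for _ in range(4)]
print(training_step(batch, labels=[0, 1, 2, 3]))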
-
Publication No.: WO2020072972A1
Publication Date: 2020-04-09
Application No.: PCT/US2019/054819
Filing Date: 2019-10-04
Applicant: MAGIC LEAP, INC. , MOHAN, Anush , TAYLOR, Robert, Blake , MIRANDA, Jeremy, Dwayne , TORRES, Rafael, Domingos , OLSHANSKY, Daniel , SHAHROKNI, Ali , GUENDELMAN, Eran , KRAMER, Nick , TOSSELL, Ken , MILLER, Samuel A. , TAJIK, Jehangir , SWAMINATHAN, Ashwin , AGARWAL, Lomesh , SINGHAL, Prateek , HOLDER, Joel, David , ZHAO, Xuan , CHOUDHARY, Siddharth , SUZUKI, Helder, Toshiro , BAROT, Hiral, Honar
Inventor: MOHAN, Anush , TAYLOR, Robert, Blake , MIRANDA, Jeremy, Dwayne , TORRES, Rafael, Domingos , OLSHANSKY, Daniel , SHAHROKNI, Ali , GUENDELMAN, Eran , KRAMER, Nick , TOSSELL, Ken , MILLER, Samuel A. , TAJIK, Jehangir , SWAMINATHAN, Ashwin , AGARWAL, Lomesh , SINGHAL, Prateek , HOLDER, Joel, David , ZHAO, Xuan , CHOUDHARY, Siddharth , SUZUKI, Helder, Toshiro , BAROT, Hiral, Honar , MOORE, Christian Ivan Robert
IPC: G06T19/00 , G06F3/0482 , G06F3/0486 , G06K9/46 , G06K9/52 , G06F3/04815 , G06K9/00671 , G06K9/00765 , G06K9/4671 , G06K9/629 , G06T19/006
Abstract: A cross reality system provides an immersive user experience by storing persistent spatial information about the physical world that one or more user devices can access to determine position within the physical world, and that applications can access to specify the position of virtual objects within the physical world. Persistent spatial information enables users to have a shared virtual, as well as physical, experience when interacting with the cross reality system. Further, persistent spatial information may be used in maps of the physical world, enabling one or more devices to access and localize into previously stored maps and reducing the need to map a physical space before using the cross reality system in it. Persistent spatial information may be stored as persistent coordinate frames, which may include a transformation relative to a reference orientation and information derived from images in a location corresponding to the persistent coordinate frame.
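A rough illustration of what a "persistent coordinate frame" might hold: a stored rigid transform relative to a reference orientation plus image-derived information, with a helper that maps points from the frame into shared world coordinates. The field names and the 4x4 matrix representation are assumptions for illustration, not the application's data format.

from dataclasses import dataclass
import numpy as np

@dataclass
class PersistentCoordinateFrame:
    frame_id: str
    world_from_pcf: np.ndarray   # 4x4 rigid transform relative to a reference orientation
    descriptor: np.ndarray       # information derived from images at this location

def pcf_point_to_world(pcf, point_in_pcf):
    """Express a 3-D point given in PCF coordinates in the shared world frame."""
    p = np.append(point_in_pcf, 1.0)          # homogeneous coordinates
    return (pcf.world_from_pcf @ p)[:3]

pcf = PersistentCoordinateFrame("kitchen_corner", np.eye(4), np.zeros(128))
print(pcf_point_to_world(pcf, np.array([0.1, 0.0, 0.5])))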
-
Publication No.: WO2020036898A1
Publication Date: 2020-02-20
Application No.: PCT/US2019/046240
Filing Date: 2019-08-12
Applicant: MAGIC LEAP, INC.
Inventor: TAYLOR, Robert, Blake , MOHAN, Anush , MIRANDA, Jeremy, Dwayne , TORRES, Rafael, Domingos , OLSHANSKY, Daniel , SHAHROKNI, Ali , GUENDELMAN, Eran , MILLER, Samuel, A. , TAJIK, Jehangir , SWAMINATHAN, Ashwin , AGARWAL, Lomesh , SINGHAL, Prateek , HOLDER, Joel, David , ZHAO, Xuan , CHOUDHARY, Siddharth , SUZUKI, Helder, Toshiro , BAROT, Hiral, Honar , LIEBENOW, Michael, Harold
Abstract: An augmented reality viewing system is described. A local coordinate frame of local content is transformed to a world coordinate frame. Further transformations are made to a head coordinate frame and then to a camera coordinate frame that includes all pupil positions of an eye. One or more users may interact in separate sessions with a viewing system. If a canonical map is available, the earlier map is downloaded onto a viewing device of a user. The viewing device then generates another map and localizes that subsequent map to the canonical map.
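The transformations described above form a chain (local content frame to world frame to head frame to camera frame) that composes as ordinary rigid transforms; the sketch below shows the composition order using identity 4x4 placeholders in place of real tracking and calibration data.

import numpy as np

# Placeholder 4x4 rigid transforms; in a real system these would come from
# tracking and calibration rather than being identity matrices.
world_from_local = np.eye(4)    # local content frame -> world frame
head_from_world = np.eye(4)     # world frame -> head frame
camera_from_head = np.eye(4)    # head frame -> camera frame (per pupil position)

# Composed transform that places local content directly in the camera frame.
camera_from_local = camera_from_head @ head_from_world @ world_from_local

content_point = np.array([0.0, 0.0, 1.0, 1.0])   # homogeneous point in the local frame
print(camera_from_local @ content_point)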
-
Publication No.: WO2021263035A1
Publication Date: 2021-12-30
Application No.: PCT/US2021/038971
Filing Date: 2021-06-24
Applicant: MAGIC LEAP, INC.
Inventor: MAHENDRAN, Siddharth , BANSAL, Nitin , SEKHAR, Nitesh , GANGWAR, Manushree , GUPTA, Khushi , SINGHAL, Prateek
IPC: G06K9/62 , G06N3/08 , G06K9/46 , G06T7/00 , G06T19/00 , G06K9/6261 , G06K9/6267 , G06N3/04 , G06T19/006 , G06T2207/20084 , G06T7/60 , G06T7/73
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for an object recognition neural network that performs amodal center prediction. One of the methods includes receiving an image of an object captured by a camera. The image of the object is processed using an object recognition neural network that is configured to generate an object recognition output. The object recognition output includes data defining a predicted two-dimensional amodal center of the object, where the predicted two-dimensional amodal center of the object is a projection of a predicted three-dimensional center of the object under the camera pose of the camera that captured the image.
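The predicted two-dimensional amodal center is defined as the projection of a predicted three-dimensional center under the camera pose, which corresponds to the standard pinhole projection sketched below; the intrinsics matrix and the identity pose are assumed values for illustration only.

import numpy as np

def project_amodal_center(center_world, world_from_camera, K):
    """Project a predicted 3-D object center to its 2-D amodal center in pixels."""
    camera_from_world = np.linalg.inv(world_from_camera)    # invert the camera pose
    c = camera_from_world @ np.append(center_world, 1.0)    # center in the camera frame
    uvw = K @ c[:3]                                          # pinhole projection
    return uvw[:2] / uvw[2]                                  # pixel coordinates (u, v)

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])                # assumed camera intrinsics
print(project_amodal_center(np.array([0.2, -0.1, 2.0]), np.eye(4), K))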
-
Publication No.: WO2020023582A1
Publication Date: 2020-01-30
Application No.: PCT/US2019/043154
Filing Date: 2019-07-24
Applicant: MAGIC LEAP, INC.
Inventor: SHARMA, Divya , SHAHROKNI, Ali , MOHAN, Anush , SINGHAL, Prateek , ZHAO, Xuan , SIMA, Sergiu , LANGMANN, Benjamin
Abstract: An apparatus configured to be worn on a head of a user includes: a screen configured to present graphics to the user; a camera system configured to view an environment in which the user is located; and a processing unit configured to determine a map based at least in part on one or more outputs from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment. The processing unit is also configured to obtain a metric indicating a likelihood of successfully localizing the user using the map, either by computing the metric or by receiving it.
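One way to read the metric "indicating a likelihood of success to localize" is as a score computed from how well current camera observations agree with the stored map. The sketch below uses the fraction of map descriptors that find a close match in the current frame as a purely hypothetical proxy; the application does not specify that the metric is computed this way.

import numpy as np

def localization_metric(map_descriptors, frame_descriptors, max_dist=0.7):
    """Hypothetical proxy metric: fraction of map features matched in the frame.

    map_descriptors:   (M, D) descriptors stored in the map.
    frame_descriptors: (N, D) descriptors extracted from the current camera frame.
    """
    if len(frame_descriptors) == 0:
        return 0.0
    # Pairwise distances between every map descriptor and every frame descriptor.
    dists = np.linalg.norm(
        map_descriptors[:, None, :] - frame_descriptors[None, :, :], axis=2)
    matched = dists.min(axis=1) < max_dist
    return matched.mean()   # in [0, 1]; higher suggests localization is more likely to succeed

rng = np.random.default_rng(0)
map_desc = rng.normal(size=(50, 32))
frame_desc = map_desc[:30] + 0.01 * rng.normal(size=(30, 32))  # partial overlap with the map
print(localization_metric(map_desc, frame_desc))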
-
Publication No.: WO2019118886A1
Publication Date: 2019-06-20
Application No.: PCT/US2018/065771
Filing Date: 2018-12-14
Applicant: MAGIC LEAP, INC.
Inventor: ZAHNERT, Martin Georg , FARO, Joao Antonio Pereira , VELASQUEZ, Miguel Andres Granados , KASPER, Dominik Michael , SWAMINATHAN, Ashwin , MOHAN, Anush , SINGHAL, Prateek
Abstract: To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment containing points that correspond to salient points which will be used to determine the head pose. The salient points are patch-based and include a first salient point projected onto the current image from a previous image and a second salient point extracted from the current image. Each salient point is then matched against real-world points based on descriptor-based map information indicating locations of salient points in the real-world environment. The orientation of the imaging device is determined based on the matching and on the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system.
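Once the patch-based salient points in the current image have been matched to real-world map points, the device orientation can be recovered from the resulting 2-D to 3-D correspondences. The sketch below uses OpenCV's generic cv2.solvePnP on synthetic data as a stand-in estimator; it is not the estimator described in the application.

import numpy as np
import cv2

def estimate_orientation(points_3d, points_2d, K):
    """Recover device rotation/translation from matched 3-D map points and
    2-D salient points in the current image (stand-in for the patent's method)."""
    ok, rvec, tvec = cv2.solvePnP(
        points_3d.astype(np.float64), points_2d.astype(np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)        # rotation matrix of the imaging device
    return ok, R, tvec

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])       # assumed camera intrinsics
# Eight synthetic map points and their projections under an identity pose.
pts_3d = np.array([[0, 0, 4], [1, 0, 5], [0, 1, 6], [1, 1, 4],
                   [-1, 0, 5], [0, -1, 6], [2, 1, 7], [-1, 2, 5]], dtype=float)
proj = (K @ pts_3d.T).T
pts_2d = proj[:, :2] / proj[:, 2:3]
print(estimate_orientation(pts_3d, pts_2d, K))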
-