Perimeter estimation from posed monocular video

    Publication No.: US11600049B2

    Publication Date: 2023-03-07

    Application No.: US16856980

    Filing Date: 2020-04-23

    Applicant: Magic Leap, Inc.

    IPC Classes: G06T19/00 G06T7/50 G06T7/13

    Abstract: Techniques for estimating a perimeter of a room environment at least partially enclosed by a set of adjoining walls using posed images are disclosed. A set of images and a set of poses are obtained. A depth map is generated based on the set of images and the set of poses. A set of wall segmentation maps is generated based on the set of images, each wall segmentation map indicating a target region of a corresponding image that contains the set of adjoining walls. A point cloud is generated based on the depth map and the set of wall segmentation maps, the point cloud including a plurality of points that are sampled along portions of the depth map that align with the target region. The perimeter of the environment along the set of adjoining walls is estimated based on the point cloud.
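    The pipeline in this abstract can be sketched in simplified form: back-project depth pixels that a wall mask marks as wall into a 3-D point cloud, then estimate a perimeter from the cloud's top-down footprint. This is a minimal illustration, not the patent's method; the pinhole intrinsics, the toy depth map, and the axis-aligned rectangular-room assumption are all invented for the example.

```python
# Hedged sketch: wall point cloud from a depth map + wall segmentation
# mask, then a perimeter estimate from the cloud's top-down (x-z) extent.
# Intrinsics and data are illustrative assumptions, not from the patent.
import numpy as np

def backproject(depth, mask, fx=100.0, fy=100.0, cx=4.0, cy=4.0):
    """Back-project masked depth pixels into 3-D camera-frame points."""
    v, u = np.nonzero(mask)           # pixel rows/cols inside wall regions
    z = depth[v, u]
    x = (u - cx) * z / fx             # pinhole model, assumed intrinsics
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def perimeter_from_points(points):
    """Axis-aligned top-down perimeter of the wall points (x-z plane)."""
    xz = points[:, [0, 2]]
    width, depth_extent = xz.max(axis=0) - xz.min(axis=0)
    return 2.0 * (width + depth_extent)

depth_map = np.full((8, 8), 3.0)      # toy 8x8 depth map, walls at 3 m
wall_mask = np.zeros((8, 8), dtype=bool)
wall_mask[:, 0] = wall_mask[:, -1] = True  # pretend image borders are walls
cloud = backproject(depth_map, wall_mask)
print(round(perimeter_from_points(cloud), 3))  # 0.42
```

    A real system would fuse points from many posed frames and fit wall planes or a polyline rather than a bounding box; the bounding box keeps the sketch short.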

    PERIOCULAR TEST FOR MIXED REALITY CALIBRATION

    Publication No.: US20220020192A1

    Publication Date: 2022-01-20

    Application No.: US17385724

    Filing Date: 2021-07-26

    Applicant: Magic Leap, Inc.

    Abstract: A wearable device can include an inward-facing imaging system configured to acquire images of a user's periocular region. The wearable device can determine a relative position between the wearable device and the user's face based on the images acquired by the inward-facing imaging system. The relative position may be used to determine whether the user is wearing the wearable device, whether the wearable device fits the user, or whether an adjustment to a rendering location of a virtual object should be made to compensate for a deviation of the wearable device from its normal resting position.
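    The compensation idea can be illustrated with a toy calculation: compare the center of currently observed periocular landmarks to a calibrated resting center, and shift the render location by the estimated slip. The landmark coordinates and the simple 2-D centroid model are illustrative assumptions, not the patent's actual estimation method.

```python
# Hedged sketch: estimate device slip from periocular landmark centers
# and counter-shift the render location. Points and model are invented
# for illustration; the patent describes image-based estimation.

def eye_center(landmarks):
    """Centroid of a set of 2-D periocular landmark points."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def device_offset(current, resting):
    """Displacement of the device relative to its calibrated rest pose."""
    cx, cy = eye_center(current)
    rx, ry = eye_center(resting)
    return (cx - rx, cy - ry)

def compensate(render_xy, offset):
    """Shift virtual content opposite to the device's slip on the face."""
    return (render_xy[0] - offset[0], render_xy[1] - offset[1])

resting = [(10, 10), (20, 10), (15, 16)]   # calibrated resting landmarks
current = [(12, 13), (22, 13), (17, 19)]   # device slipped right and down
off = device_offset(current, resting)
print(off)                                  # (2.0, 3.0)
print(compensate((100, 100), off))          # (98.0, 97.0)
```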

    Periocular test for mixed reality calibration

    Publication No.: US11100692B2

    Publication Date: 2021-08-24

    Application No.: US16780698

    Filing Date: 2020-02-03

    Applicant: Magic Leap, Inc.

    Abstract: A wearable device can include an inward-facing imaging system configured to acquire images of a user's periocular region. The wearable device can determine a relative position between the wearable device and the user's face based on the images acquired by the inward-facing imaging system. The relative position may be used to determine whether the user is wearing the wearable device, whether the wearable device fits the user, or whether an adjustment to a rendering location of a virtual object should be made to compensate for a deviation of the wearable device from its normal resting position.

    PERIMETER ESTIMATION FROM POSED MONOCULAR VIDEO

    Publication No.: US20200342674A1

    Publication Date: 2020-10-29

    Application No.: US16856980

    Filing Date: 2020-04-23

    Applicant: Magic Leap, Inc.

    IPC Classes: G06T19/00 G06T7/50 G06T7/13

    Abstract: Techniques for estimating a perimeter of a room environment at least partially enclosed by a set of adjoining walls using posed images are disclosed. A set of images and a set of poses are obtained. A depth map is generated based on the set of images and the set of poses. A set of wall segmentation maps is generated based on the set of images, each wall segmentation map indicating a target region of a corresponding image that contains the set of adjoining walls. A point cloud is generated based on the depth map and the set of wall segmentation maps, the point cloud including a plurality of points that are sampled along portions of the depth map that align with the target region. The perimeter of the environment along the set of adjoining walls is estimated based on the point cloud.

    Room layout estimation methods and techniques

    Publication No.: US10657376B2

    Publication Date: 2020-05-19

    Application No.: US15923511

    Filing Date: 2018-03-16

    Applicant: Magic Leap, Inc.

    Abstract: Systems and methods for estimating a layout of a room are disclosed. The room layout can comprise the location of a floor, one or more walls, and a ceiling. In one aspect, a neural network can analyze an image of a portion of a room to determine the room layout. The neural network can comprise a convolutional neural network having an encoder sub-network, a decoder sub-network, and a side sub-network. The neural network can determine a three-dimensional room layout using two-dimensional ordered keypoints associated with a room type. The room layout can be used in applications such as augmented or mixed reality, robotics, autonomous indoor navigation, etc.
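    The "ordered keypoints associated with a room type" idea can be sketched as a lookup: each room type fixes how many ordered keypoints are predicted and which pairs connect into layout edges. The type table below is an invented stand-in for illustration only; it is not the patent's actual room-type taxonomy.

```python
# Hedged sketch: a room type determines the connectivity of the ordered
# 2-D keypoints the network predicts. The edge table here is illustrative,
# not the patent's room-type definitions.

# room_type -> list of (i, j) keypoint index pairs forming layout edges
ROOM_TYPE_EDGES = {
    0: [(0, 1), (1, 2), (2, 3), (3, 0)],   # e.g. a single frontal wall
    1: [(0, 1), (1, 2), (2, 3)],           # e.g. a wall-corner view
}

def layout_edges(room_type, keypoints):
    """Turn ordered 2-D keypoints into layout line segments."""
    return [(keypoints[i], keypoints[j]) for i, j in ROOM_TYPE_EDGES[room_type]]

kps = [(0, 0), (100, 0), (100, 80), (0, 80)]   # ordered 2-D keypoints
edges = layout_edges(0, kps)
print(len(edges))   # 4 layout segments for this room type
```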

    Neural network for eye image segmentation and image quality estimation

    Publication No.: US10445881B2

    Publication Date: 2019-10-15

    Application No.: US15605567

    Filing Date: 2017-05-25

    Applicant: Magic Leap, Inc.

    Abstract: Systems and methods for eye image segmentation and image quality estimation are disclosed. In one aspect, after receiving an eye image, a device such as an augmented reality device can process the eye image using a convolutional neural network with a merged architecture to generate both a segmented eye image and a quality estimation of the eye image. The segmented eye image can include a background region, a sclera region, an iris region, or a pupil region. In another aspect, a convolutional neural network with a merged architecture can be trained for eye image segmentation and image quality estimation. In yet another aspect, the device can use the segmented eye image to determine eye contours such as a pupil contour and an iris contour. The device can use the eye contours to create a polar image of the iris region for computing an iris code or biometric authentication.
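    The polar-image step at the end of this abstract can be illustrated with a toy unwrap: sample the annulus between a pupil contour and an iris contour into a rectangular grid over radius and angle. Modeling both contours as concentric circles is a simplifying assumption for the sketch; in the abstract the contours come from the segmented eye image.

```python
# Hedged sketch: unwrap the annular iris region between a pupil radius
# and an iris radius into a rectangular polar image. Circle contours and
# grid sizes are illustrative assumptions, not the patent's method.
import math

def to_polar(image, cx, cy, r_pupil, r_iris, n_theta=8, n_r=4):
    """Sample the annulus [r_pupil, r_iris] into an n_r x n_theta grid."""
    polar = []
    for ri in range(n_r):
        # radius at the center of each radial bin
        r = r_pupil + (r_iris - r_pupil) * (ri + 0.5) / n_r
        row = []
        for ti in range(n_theta):
            theta = 2 * math.pi * ti / n_theta
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            row.append(image[y][x])   # nearest-pixel sample
        polar.append(row)
    return polar

img = [[(x + y) % 7 for x in range(32)] for y in range(32)]  # toy "eye"
polar = to_polar(img, cx=16, cy=16, r_pupil=3, r_iris=8)
print(len(polar), len(polar[0]))   # 4 8
```

    An iris-code pipeline would then filter and binarize this polar image; that step is omitted here.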

    Image-enhanced depth sensing via depth sensor control

    Publication No.: US11128854B2

    Publication Date: 2021-09-21

    Application No.: US16352522

    Filing Date: 2019-03-13

    Applicant: Magic Leap, Inc.

    Abstract: Systems and methods are disclosed for computing depth maps. One method includes capturing, using a camera, a camera image of a runtime scene. The method may also include analyzing the camera image of the runtime scene to determine a plurality of target sampling points at which to capture depth of the runtime scene. The method may further include adjusting a setting associated with a low-density depth sensor based on the plurality of target sampling points. The method may further include capturing, using the low-density depth sensor, a low-density depth map of the runtime scene at the plurality of target sampling points. The method may further include generating a computed depth map of the runtime scene based on the camera image of the runtime scene and the low-density depth map of the runtime scene.
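    The "analyze the camera image to determine target sampling points" step can be sketched with a simple heuristic: steer the sparse depth sensor toward pixels where the image intensity changes sharply, since depth discontinuities often coincide with intensity edges. The gradient heuristic, the value of k, and the toy scene are illustrative assumptions, not the patent's analysis method.

```python
# Hedged sketch: pick the k pixels with the strongest horizontal image
# gradients as target sampling points for a low-density depth sensor.
# The edge heuristic is an illustrative stand-in for the patent's analysis.
import numpy as np

def target_sampling_points(image, k=4):
    """Return (row, col) pixels with the largest horizontal gradients."""
    grad = np.abs(np.diff(image.astype(float), axis=1))
    flat = np.argsort(grad.ravel())[::-1][:k]       # top-k gradient pixels
    rows, cols = np.unravel_index(flat, grad.shape)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy scene: a bright object on a dark background creates a depth edge.
scene = np.zeros((6, 6), dtype=np.uint8)
scene[:, 3:] = 200
pts = target_sampling_points(scene, k=4)
print(all(c == 2 for _, c in pts))   # True: samples hug the edge column
```

    The sparse depth captured at these points would then be densified against the full camera image to produce the computed depth map.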