Collaborative navigation and mapping

    Publication No.: US11313684B2

    Publication Date: 2022-04-26

    Application No.: US16089322

    Filing Date: 2017-03-28

    Abstract: During GPS-denied or GPS-restricted navigation, images proximate to a platform device are captured using a camera, and corresponding motion measurements of the platform device are captured using an IMU device. Features of a current frame of the captured images are extracted; the extracted features are matched, and feature information is tracked between consecutive frames. The extracted features are compared to previously stored, geo-referenced visual features from a plurality of platform devices. If none of the extracted features matches a geo-referenced visual feature, a pose is determined for the platform device using IMU measurements propagated from a previous pose and relative motion information between consecutive frames, which is determined using the tracked feature information. If at least one of the extracted features matches a geo-referenced visual feature, a pose is determined for the platform device using location information associated with the matched, geo-referenced visual feature and relative motion information between consecutive frames.
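
    The pose-update logic in this abstract reduces to a two-branch rule: anchor on a matched geo-referenced landmark when one exists, otherwise propagate the previous pose from IMU measurements and frame-to-frame motion. The sketch below illustrates that rule under simplified, assumed inputs; the names (update_pose, geo_landmarks) and the cosine-similarity matching threshold are illustrative, not taken from the patent.

```python
import numpy as np

def update_pose(prev_pose, imu_delta, rel_motion, extracted, geo_landmarks, thresh=0.7):
    """Hypothetical pose update mirroring the abstract's two branches.

    prev_pose     : (3,) previous position estimate
    imu_delta     : (3,) motion propagated from IMU measurements
    rel_motion    : (3,) relative motion between consecutive frames (tracked features)
    extracted     : (N, D) descriptors of features in the current frame
    geo_landmarks : list of (descriptor (D,), position (3,)) geo-referenced features
    """
    # Compare extracted features to previously stored, geo-referenced features.
    for desc in extracted:
        for lm_desc, lm_pos in geo_landmarks:
            sim = np.dot(desc, lm_desc) / (np.linalg.norm(desc) * np.linalg.norm(lm_desc))
            if sim > thresh:
                # Match found: anchor the pose to the landmark's location and
                # apply the frame-to-frame relative motion.
                return lm_pos + rel_motion
    # No match: propagate the previous pose with IMU measurements and
    # frame-to-frame relative motion.
    return prev_pose + imu_delta + rel_motion

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    landmarks = [(rng.standard_normal(8), np.array([10.0, 5.0, 0.0]))]
    feats = rng.standard_normal((4, 8))
    print(update_pose(np.zeros(3), np.array([0.1, 0.0, 0.0]),
                      np.array([0.05, 0.02, 0.0]), feats, landmarks))
```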

    MULTI-MODAL DATA FUSION FOR ENHANCED 3D PERCEPTION FOR PLATFORMS

    Publication No.: US20200184718A1

    Publication Date: 2020-06-11

    Application No.: US16523313

    Filing Date: 2019-07-26

    Abstract: A method for providing a real-time, three-dimensional (3D) navigational map for platforms includes integrating at least two sources of multi-modal and multi-dimensional platform sensor information to produce a more accurate 3D navigational map. The method receives both a 3D point cloud from a first sensor of a first modality on a platform and a 2D image from a second sensor of a second, different modality on the platform; generates a semantic label and a semantic label uncertainty for a first space point in the 3D point cloud; generates a semantic label and a semantic label uncertainty for a second space point in the 2D image; and fuses the first space semantic label and its uncertainty with the second space semantic label and its uncertainty to create fused 3D spatial information that enhances the 3D navigational map.
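
    One way to read the fusion step is as an uncertainty-weighted combination of the two per-source class estimates for the same 3D location. The sketch below assumes each source provides a per-class probability vector plus a scalar uncertainty and weights each source by inverse uncertainty; the weighting scheme and all names are assumptions for illustration, not the patented method.

```python
import numpy as np

def fuse_semantic_labels(p_lidar, u_lidar, p_image, u_image):
    """Fuse two per-class probability vectors for one 3D location.

    p_lidar, p_image : (C,) class probabilities from the point-cloud and image branches
    u_lidar, u_image : scalar label uncertainties for each source
    """
    # Weight each source by the inverse of its uncertainty so the more
    # confident modality dominates the fused estimate.
    w_lidar = 1.0 / (u_lidar + 1e-6)
    w_image = 1.0 / (u_image + 1e-6)
    fused = w_lidar * np.asarray(p_lidar) + w_image * np.asarray(p_image)
    fused /= fused.sum()
    fused_label = int(np.argmax(fused))
    fused_uncertainty = 1.0 - fused[fused_label]
    return fused_label, fused_uncertainty, fused

if __name__ == "__main__":
    # Point-cloud branch is unsure (class 0 vs. 1); image branch is confident.
    label, unc, dist = fuse_semantic_labels([0.5, 0.4, 0.1], 0.5,
                                            [0.1, 0.85, 0.05], 0.1)
    print(label, round(unc, 3), np.round(dist, 3))
```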

    ARTIFICIAL INTELLIGENCE-BASED HIERARCHICAL PLANNING FOR MANNED/UNMANNED PLATFORMS

    Publication No.: US20230394294A1

    Publication Date: 2023-12-07

    Application No.: US17151506

    Filing Date: 2021-01-18

    CPC classification number: G06N3/092 G06N3/04

    Abstract: A method, apparatus and system for artificial intelligence-based hierarchical deep reinforcement learning (HDRL) planning and control for coordinating a team of platforms include: implementing a global planning layer that determines a collective goal and determines, by applying at least one machine learning process, at least one respective platform goal to be achieved by at least one of the platforms; implementing a platform planning layer that determines, by applying at least one machine learning process, at least one respective action to be performed by the at least one platform to achieve the respective platform goal; and implementing a platform control layer that determines at least one respective function to be performed by the at least one platform. Although information is shared between at least two of the layers, the global planning layer, the platform planning layer, and the platform control layer are trained separately.
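
    The three layers can be pictured as separately trained modules that exchange goals and states at run time. The stub below sketches that structure with placeholder (non-learned) policies; the class names and the 2D waypoint/velocity representation are illustrative assumptions, and in the patented approach each layer would apply its own machine learning process and be trained independently.

```python
import numpy as np

class GlobalPlanningLayer:
    """Decomposes a collective goal into per-platform goals (learned policy stub)."""
    def platform_goals(self, collective_goal, n_platforms):
        # A placeholder policy that simply spreads platform goals around the
        # collective goal; the patent would apply a machine learning process here.
        return [collective_goal + np.array([2.0 * i, 0.0]) for i in range(n_platforms)]

class PlatformPlanningLayer:
    """Chooses an action (here, a waypoint) that moves a platform toward its goal."""
    def action(self, state, platform_goal):
        direction = platform_goal - state
        return state + direction / (np.linalg.norm(direction) + 1e-9)  # next waypoint

class PlatformControlLayer:
    """Turns an action into low-level functions (velocity commands) for the platform."""
    def functions(self, state, waypoint):
        return {"velocity_cmd": waypoint - state}

if __name__ == "__main__":
    # The layers share information (goals, states) at run time but, per the
    # abstract, would be trained separately.
    glob, plan, ctrl = GlobalPlanningLayer(), PlatformPlanningLayer(), PlatformControlLayer()
    states = [np.zeros(2), np.array([5.0, 5.0])]
    goals = glob.platform_goals(np.array([10.0, 10.0]), n_platforms=2)
    for s, g in zip(states, goals):
        wp = plan.action(s, g)
        print(ctrl.functions(s, wp))
```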

    RANGING-AIDED ROBOT NAVIGATION USING RANGING NODES AT UNKNOWN LOCATIONS

    Publication No.: US20220299592A1

    Publication Date: 2022-09-22

    Application No.: US17695784

    Filing Date: 2022-03-15

    Abstract: A method, apparatus and system for determining a change in pose of a mobile device include: determining, from first ranging information received from a stationary node at a first receiver and a second receiver on the mobile device during a first time instance, a distance from the stationary node to each of the first receiver and the second receiver; determining, from second ranging information received from the stationary node at the first receiver and the second receiver during a second time instance, a distance from the stationary node to each of the first receiver and the second receiver; and determining, from the distances determined during the first time instance and the second time instance, how far and in which direction the first receiver and the second receiver moved between the two time instances, and thereby a change in pose of the mobile device, where the position of the stationary node is unknown.
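
    With a single stationary node of unknown position, one common way to realize this kind of ranging-aided estimation is to solve jointly for the node location and the device poses in a small least-squares problem. The sketch below does that, but adds an odometry term (not mentioned in the abstract) to keep the toy problem well-posed; the receiver baseline, the residual formulation, and all names are assumptions for illustration, not the patented method.

```python
import numpy as np
from scipy.optimize import least_squares

BASELINE = 0.5  # assumed distance (m) between the two receivers along the heading axis

def receiver_positions(pose):
    """pose = (x, y, heading); receivers sit at +/- BASELINE/2 along the heading."""
    x, y, th = pose
    off = 0.5 * BASELINE * np.array([np.cos(th), np.sin(th)])
    centre = np.array([x, y])
    return centre + off, centre - off

def residuals(params, ranges, odom, pose0):
    """params = [node_x, node_y, x1, y1, th1, x2, y2, th2, ...]; pose 0 is held fixed."""
    node = params[:2]
    poses = [pose0] + [params[2 + 3 * k: 5 + 3 * k] for k in range(len(ranges) - 1)]
    res = []
    for pose, (r1, r2) in zip(poses, ranges):           # range residuals, both receivers
        p1, p2 = receiver_positions(pose)
        res += [np.linalg.norm(p1 - node) - r1, np.linalg.norm(p2 - node) - r2]
    for k, d in enumerate(odom):                        # odometry residuals (pose increments)
        res += list((np.asarray(poses[k + 1]) - np.asarray(poses[k])) - d)
    return res

if __name__ == "__main__":
    # Synthetic truth: node at (4, 3); the device drives a gently curving path.
    node_true = np.array([4.0, 3.0])
    poses_true = [np.array([1.0 * t, 0.2 * t * t, 0.1 * t]) for t in range(5)]
    ranges = []
    for p in poses_true:
        a, b = receiver_positions(p)
        ranges.append((np.linalg.norm(a - node_true), np.linalg.norm(b - node_true)))
    odom = [poses_true[k + 1] - poses_true[k] for k in range(4)]

    pose0 = poses_true[0]                               # anchor the first pose (gauge fix)
    x0 = np.concatenate([[3.0, 2.0]] + [p + 0.3 for p in poses_true[1:]])
    sol = least_squares(residuals, x0, args=(ranges, odom, pose0))
    print("node estimate:", np.round(sol.x[:2], 2))
    print("pose change t0->t4:", np.round(sol.x[11:14] - pose0, 2))
```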

    REGION METRICS FOR CLASS BALANCING IN MACHINE LEARNING SYSTEMS

    Publication No.: US20220092366A1

    Publication Date: 2022-03-24

    Application No.: US17478177

    Filing Date: 2021-09-17

    Abstract: Techniques are disclosed for an image understanding system comprising a machine learning system that applies a machine learning model to perform image understanding of each pixel of an image, the pixel labeled with a class, to determine an estimated class to which the pixel belongs. The machine learning system determines, based on the classes with which the pixels are labeled and the estimated classes, a cross entropy loss of each class. The machine learning system determines, based on one or more region metrics, a weight for each class and applies the weight to the cross entropy loss of each class to obtain a weighted cross entropy loss. The machine learning system updates the machine learning model with the weighted cross entropy loss to improve a performance metric of the machine learning model for each class.
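
    The weighting step amounts to scaling each class's cross entropy term by a class weight before averaging. The sketch below uses a simple inverse-frequency statistic as a stand-in for the patent's region metrics, which the abstract does not specify; the metric choice and all names are illustrative assumptions.

```python
import numpy as np

def class_weighted_cross_entropy(probs, labels, num_classes):
    """Per-class weighted cross entropy over an image's pixels.

    probs  : (N, C) predicted class probabilities per pixel
    labels : (N,)   ground-truth class index per pixel
    """
    probs = np.clip(probs, 1e-9, 1.0)
    # Cross entropy loss per pixel.
    per_pixel = -np.log(probs[np.arange(len(labels)), labels])
    # Stand-in region metric: rarer (smaller-region) classes get larger weights.
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    weights = counts.sum() / (num_classes * np.maximum(counts, 1.0))
    # Apply each class's weight to its cross entropy terms, then average.
    weighted = weights[labels] * per_pixel
    return weighted.mean(), weights

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    labels = np.concatenate([np.zeros(90, dtype=int), np.ones(10, dtype=int)])  # imbalanced
    probs = rng.dirichlet(alpha=[1.0, 1.0], size=100)
    loss, w = class_weighted_cross_entropy(probs, labels, num_classes=2)
    print(round(loss, 3), np.round(w, 3))
```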

    Multi-modal data fusion for enhanced 3D perception for platforms

    Publication No.: US10991156B2

    Publication Date: 2021-04-27

    Application No.: US16523313

    Filing Date: 2019-07-26

    Abstract: A method for providing a real-time, three-dimensional (3D) navigational map for platforms includes integrating at least two sources of multi-modal and multi-dimensional platform sensor information to produce a more accurate 3D navigational map. The method receives both a 3D point cloud from a first sensor of a first modality on a platform and a 2D image from a second sensor of a second, different modality on the platform; generates a semantic label and a semantic label uncertainty for a first space point in the 3D point cloud; generates a semantic label and a semantic label uncertainty for a second space point in the 2D image; and fuses the first space semantic label and its uncertainty with the second space semantic label and its uncertainty to create fused 3D spatial information that enhances the 3D navigational map.

    Semantic visual landmarks for navigation

    Publication No.: US10929713B2

    Publication Date: 2021-02-23

    Application No.: US16163273

    Filing Date: 2018-10-17

    Abstract: Techniques are disclosed for improving navigation accuracy for a mobile platform. In one example, a navigation system comprises an image sensor that generates a plurality of images, each image comprising one or more features. A computation engine executing on one or more processors of the navigation system processes each image of the plurality of images to determine a semantic class of each feature of the one or more features of the image. The computation engine determines, for each feature of the one or more features of each image and based on the semantic class of the feature, whether to include the feature as a constraint in a navigation inference engine. The computation engine generates, based at least on features of the one or more features included as constraints in the navigation inference engine, navigation information. The computation engine outputs the navigation information to improve navigation accuracy for the mobile platform.
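
    A simple instance of the per-feature decision is a semantic gating policy: features on classes that tend to move are excluded from the navigation inference engine, while features on static structure are kept as constraints. The sketch below assumes such a static/dynamic split; the class list and the Feature type are illustrative, not the patent's.

```python
from dataclasses import dataclass
from typing import List

# Assumed policy: only features on these static classes become navigation constraints;
# dynamic classes (car, person, ...) are dropped.
STATIC_CLASSES = {"building", "traffic_sign", "pole", "road_marking"}

@dataclass
class Feature:
    keypoint: tuple        # (u, v) pixel location of the feature
    descriptor: bytes      # appearance descriptor
    semantic_class: str    # semantic class predicted for the feature's pixels

def select_constraints(features: List[Feature]) -> List[Feature]:
    """Keep only features whose semantic class is static enough to constrain navigation."""
    return [f for f in features if f.semantic_class in STATIC_CLASSES]

if __name__ == "__main__":
    feats = [Feature((10, 20), b"...", "building"),
             Feature((30, 40), b"...", "car"),
             Feature((50, 60), b"...", "traffic_sign")]
    constraints = select_constraints(feats)
    print([f.semantic_class for f in constraints])  # the dynamic 'car' feature is dropped
```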
