Particle-based hazard detection for autonomous machine

    Publication Number: US12235353B2

    Publication Date: 2025-02-25

    Application Number: US17454389

    Filing Date: 2021-11-10

    Abstract: In various examples, a hazard detection system fuses outputs from multiple sensors over time to determine a probability that a stationary object or hazard exists at a location. The system may then use sensor data to calculate a detection bounding shape for detected objects and, using the bounding shape, may generate a set of particles, each including a confidence value that an object exists at a corresponding location. The system may then capture additional sensor data by one or more sensors of the ego-machine that are different from those used to capture the first sensor data. To improve the accuracy of the confidences of the particles, the system may determine a correspondence between the first sensor data and the additional sensor data (e.g., depth sensor data), which may be used to filter out a portion of the particles and improve the depth predictions corresponding to the object.
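The particle mechanism the abstract describes can be sketched as follows. This is a minimal illustrative toy, not the patented implementation: the one-dimensional geometry, the `Particle` fields, and every threshold (`tol`, `boost`, `decay`, the 0.3 survival cutoff) are assumptions made for the example.

```python
import random
from dataclasses import dataclass

@dataclass
class Particle:
    x: float           # lateral position hypothesis (m)
    z: float           # depth hypothesis (m)
    confidence: float  # belief that an object exists here

def spawn_particles(bbox, depth_estimate, n=100, seed=0):
    """Spawn particles inside a detection bounding shape.

    bbox is an (x_min, x_max) span in metres from the first sensor's
    detection; depth_estimate is that sensor's rough depth.
    """
    rng = random.Random(seed)
    return [
        Particle(
            x=rng.uniform(*bbox),
            z=depth_estimate + rng.gauss(0.0, 2.0),  # noisy depth hypothesis
            confidence=0.5,                          # uninformative prior
        )
        for _ in range(n)
    ]

def fuse_depth(particles, second_sensor_depth, tol=1.5, boost=1.5, decay=0.5):
    """Reweight and filter particles against an independent depth sensor.

    Particles whose depth hypothesis agrees with the second sensor's
    return are boosted; the rest decay and are dropped once their
    confidence falls below a cutoff.
    """
    survivors = []
    for p in particles:
        if abs(p.z - second_sensor_depth) <= tol:
            p.confidence = min(1.0, p.confidence * boost)
        else:
            p.confidence *= decay
        if p.confidence > 0.3:
            survivors.append(p)
    return survivors
```

Filtering against the second, independent sensor is what sharpens the depth estimate: only hypotheses both sensors can support survive.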

    VIEWPOINT-ADAPTIVE PERCEPTION FOR AUTONOMOUS MACHINES AND APPLICATIONS USING REAL AND SIMULATED SENSOR DATA

    Publication Number: US20240317263A1

    Publication Date: 2024-09-26

    Application Number: US18680378

    Filing Date: 2024-05-31

    CPC classification number: B60W60/0015 G06N3/04

    Abstract: Systems and methods are disclosed relating to viewpoint adapted perception for autonomous machines and applications. A 3D perception network may be adapted to handle unavailable target rig data by training one or more layers of the 3D perception network as part of a training network using real source rig data and simulated source and target rig data. Feature statistics extracted from the real source data may be used to transform the features extracted from the simulated data during training. The paths for real and simulated data through the resulting network may be alternately trained on real and simulated data to update shared weights for the different paths. As such, one or more of the paths through the training network(s) may be designated as the 3D perception network, and target rig data may be applied to the 3D perception network to perform one or more perception tasks.
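The statistics transfer step ("feature statistics extracted from the real source data may be used to transform the features extracted from the simulated data") can be sketched as a per-channel whiten-then-recolour, in the spirit of adaptive instance normalization. The function names and the plain-list feature representation are assumptions for this example; a real network would apply this to intermediate tensors with learned layers around it.

```python
import math

def feature_stats(features):
    """Per-channel mean and standard deviation of a batch of feature vectors."""
    n = len(features)
    dims = len(features[0])
    mean = [sum(f[d] for f in features) / n for d in range(dims)]
    std = [
        # `or 1.0` guards against a zero-variance channel
        math.sqrt(sum((f[d] - mean[d]) ** 2 for f in features) / n) or 1.0
        for d in range(dims)
    ]
    return mean, std

def align_stats(sim_features, real_features):
    """Re-normalise simulated features to match real-data statistics.

    Each simulated feature is whitened with the simulated batch's
    statistics, then re-coloured with the real batch's statistics.
    """
    sim_mean, sim_std = feature_stats(sim_features)
    real_mean, real_std = feature_stats(real_features)
    return [
        [
            (f[d] - sim_mean[d]) / sim_std[d] * real_std[d] + real_mean[d]
            for d in range(len(f))
        ]
        for f in sim_features
    ]
```

After alignment, the simulated batch's per-channel mean and std match the real batch's, which is what lets the simulated-data path train against real-data statistics.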

    SENSOR FUSION FOR AUTONOMOUS MACHINE APPLICATIONS USING MACHINE LEARNING

    Publication Number: US20210406560A1

    Publication Date: 2021-12-30

    Application Number: US17353231

    Filing Date: 2021-06-21

    Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
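The de-duplication behaviour the fusion network is trained to learn (merging multiple instances of the same object seen by overlapping sensors) can be illustrated with a hand-written association rule. This is only a stand-in for the learned DNN: the 1-D intervals in place of boxes, the IoU threshold, and the score averaging are all assumptions for the sketch.

```python
def iou_1d(a, b):
    """Overlap ratio of two 1-D intervals (a stand-in for 2-D box IoU)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def fuse_detections(per_sensor_outputs, iou_thresh=0.5):
    """Merge per-sensor detections into one de-duplicated list.

    per_sensor_outputs: one list per sensor of (interval, score) pairs.
    Detections from different sensors that overlap enough are clustered
    and averaged, so one object yields one fused detection.
    """
    clusters = []  # each cluster: (list of intervals, list of scores)
    for output in per_sensor_outputs:
        for box, score in output:
            for boxes, scores in clusters:
                if any(iou_1d(box, b) >= iou_thresh for b in boxes):
                    boxes.append(box)
                    scores.append(score)
                    break
            else:
                clusters.append(([box], [score]))
    return [
        (
            (sum(b[0] for b in boxes) / len(boxes),
             sum(b[1] for b in boxes) / len(boxes)),
            sum(scores) / len(scores),
        )
        for boxes, scores in clusters
    ]
```

Where this sketch uses a fixed IoU rule, the patented approach trains the fusion network to learn the associations in boundary and overlap regions from data.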
