AUTOMATIC PROPAGATION OF LABELS BETWEEN SENSOR REPRESENTATIONS FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication Number: US20240362935A1

    Publication Date: 2024-10-31

    Application Number: US18305185

    Application Date: 2023-04-21

    CPC classification number: G06V20/70 G06T7/50

    Abstract: In various examples, generating maps using first sensor data and then annotating second sensor data using those maps for autonomous systems and applications is described herein. Systems and methods are disclosed that automatically propagate annotations associated with the first sensor data, generated using a first type of sensor such as a LiDAR sensor, to the second sensor data, generated using a second type of sensor such as an image sensor. To propagate the annotations, the first sensor data may be used to generate a map, where the map represents the locations of static objects as well as the locations of dynamic objects at various instances in time. The map and the annotations associated with the first sensor data may then be used to annotate the second sensor data and/or determine additional information associated with the objects represented by the second sensor data.
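
    The abstract describes propagating a 3D annotation made on LiDAR data into 2D image data via shared geometry. The following Python is a minimal illustrative sketch only, not the patent's specified implementation: it assumes a pinhole camera model, and the function and parameter names are hypothetical. It projects the corners of an annotated 3D box into an image and takes the tight 2D box around them as the propagated label.

```python
# Illustrative sketch only; the patent does not specify an implementation.
# Assumes a pinhole camera model; all names here are hypothetical.
import numpy as np

def project_box_to_image(box_corners_world, T_cam_from_world, K):
    """Propagate a 3D box annotation (8x3 corners, map/world frame)
    into an image as a 2D axis-aligned box (x_min, y_min, x_max, y_max).

    T_cam_from_world: 4x4 extrinsic transform (world -> camera frame).
    K: 3x3 pinhole intrinsic matrix.
    Returns None if every corner lies behind the camera.
    """
    # Lift to homogeneous coordinates and move into the camera frame.
    ones = np.ones((box_corners_world.shape[0], 1))
    pts_cam = (T_cam_from_world @ np.hstack([box_corners_world, ones]).T).T[:, :3]

    # Keep only corners in front of the camera (positive depth).
    pts_cam = pts_cam[pts_cam[:, 2] > 1e-6]
    if pts_cam.size == 0:
        return None

    # Pinhole projection: pixel = K @ (X/Z, Y/Z, 1).
    uv = (K @ (pts_cam / pts_cam[:, 2:3]).T).T[:, :2]

    # The propagated 2D label is the tight box around the projected corners.
    (x_min, y_min), (x_max, y_max) = uv.min(axis=0), uv.max(axis=0)
    return float(x_min), float(y_min), float(x_max), float(y_max)
```

    The class label attached to the box in the LiDAR/map frame carries over to the resulting 2D box unchanged; handling of image-boundary truncation and occlusion is omitted for brevity.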

    GENERATING MAPS REPRESENTING DYNAMIC OBJECTS FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication Number: US20240353234A1

    Publication Date: 2024-10-24

    Application Number: US18305153

    Application Date: 2023-04-21

    CPC classification number: G01C21/3804 G06T17/00 G06V10/761 G06V2201/07

    Abstract: In various examples, generating maps using first sensor data and then annotating second sensor data using those maps for autonomous systems and applications is described herein. Systems and methods are disclosed that automatically propagate annotations associated with the first sensor data, generated using a first type of sensor such as a LiDAR sensor, to the second sensor data, generated using a second type of sensor such as an image sensor. To propagate the annotations, the first sensor data may be used to generate a map, where the map represents the locations of static objects as well as the locations of dynamic objects at various instances in time. The map and the annotations associated with the first sensor data may then be used to annotate the second sensor data and/or determine additional information associated with the objects represented by the second sensor data.
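
    Where the first patent emphasizes propagation, this one focuses on the map itself, which stores dynamic-object locations at various instances in time. Below is a hedged sketch of one plausible structure for such a map; every class and field name is hypothetical and none is taken from the patent.

```python
# Hypothetical data-structure sketch; not the patent's specified design.
from bisect import bisect_left
from dataclasses import dataclass, field

@dataclass
class DynamicTrack:
    """One dynamic object: a label annotated once (e.g. on LiDAR data)
    plus time-stamped world-frame poses recovered during mapping."""
    object_id: str
    label: str
    timestamps: list = field(default_factory=list)  # sorted, in seconds
    poses: list = field(default_factory=list)       # 4x4 pose matrices

    def pose_at(self, t, tol=0.05):
        """Return the pose nearest to time t, or None if none is within tol."""
        i = bisect_left(self.timestamps, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self.timestamps)]
        if not candidates:
            return None
        j = min(candidates, key=lambda j: abs(self.timestamps[j] - t))
        return self.poses[j] if abs(self.timestamps[j] - t) <= tol else None

@dataclass
class AnnotatedMap:
    """Static geometry stored once; dynamic objects stored as tracks."""
    static_points: object = None                  # e.g. aggregated LiDAR cloud
    tracks: dict = field(default_factory=dict)    # object_id -> DynamicTrack

    def objects_at(self, t):
        """All (label, pose) pairs for objects present near time t, so a
        camera frame captured at time t can be annotated from the map."""
        found = []
        for track in self.tracks.values():
            pose = track.pose_at(t)
            if pose is not None:
                found.append((track.label, pose))
        return found
```

    Splitting the map into a static layer and time-indexed tracks means each object is labeled once, and the label becomes queryable at the capture time of any other sensor's frame.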

    GROUND TRUTH DATA GENERATION FOR DEEP NEURAL NETWORK PERCEPTION IN AUTONOMOUS DRIVING APPLICATIONS

    Publication Number: US20220277193A1

    Publication Date: 2022-09-01

    Application Number: US17187350

    Application Date: 2021-02-26

    Abstract: An annotation pipeline may be used to produce 2D and/or 3D ground truth data for deep neural networks, such as autonomous or semi-autonomous vehicle perception networks. Initially, sensor data may be captured with different types of sensors and synchronized to align frames of sensor data that represent a similar world state. The aligned frames may be sampled and packaged into a sequence of annotation scenes to be annotated. An annotation project may be decomposed into modular tasks and encoded into a labeling tool, which assigns tasks to labelers and arranges the order of inputs using a wizard that steps through the tasks. During the tasks, each type of sensor data in an annotation scene may be simultaneously presented, and information may be projected across sensor modalities to provide useful contextual information. After all annotation tasks have been completed, the resulting ground truth data may be exported in any suitable format.
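
    The synchronization step this abstract describes pairs frames from different sensors that represent a similar world state. As an illustrative sketch only (the patent does not prescribe this method), nearest-timestamp matching with a maximum allowed skew is one common approach; the names and tolerance below are assumptions.

```python
# Illustrative sketch only; names and the tolerance value are assumptions.
def align_frames(lidar_times, camera_times, max_skew=0.05):
    """Pair each LiDAR frame with the nearest camera frame by timestamp.

    Both arguments are sorted lists of capture times in seconds. Pairs
    whose skew exceeds max_skew seconds are dropped. Returns a list of
    (lidar_index, camera_index) tuples, one per aligned frame pair.
    """
    pairs = []
    j = 0
    for i, t in enumerate(lidar_times):
        # Advance while the next camera frame is at least as close to t.
        while (j + 1 < len(camera_times)
               and abs(camera_times[j + 1] - t) <= abs(camera_times[j] - t)):
            j += 1
        if camera_times and abs(camera_times[j] - t) <= max_skew:
            pairs.append((i, j))
    return pairs
```

    The aligned pairs can then be sampled and packaged into the annotation scenes the abstract describes, with each scene presenting all sensor modalities for one world state.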
