SEGMENTATION OF LIDAR RANGE IMAGES
    Invention Application

    Publication Number: US20210342608A1

    Publication Date: 2021-11-04

    Application Number: US17377053

    Application Date: 2021-07-15

    Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., a perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., a top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
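The staged, multi-view flow the abstract describes (segment in the perspective range-image view, project to a top-down view, then regress instance geometry there) can be sketched as follows. This is a minimal toy illustration, not the patented method: the function names (`stage1_segment`, `to_top_down`, `stage2_boxes`), the distance-threshold "classifier", and the trivial projection are all assumptions standing in for the DNN stages.

```python
# Hypothetical sketch of a two-stage multi-view perception pipeline.
# All names and rules here are illustrative placeholders for learned DNN stages.

def stage1_segment(range_image, threshold=5.0):
    """Stage 1: per-pixel class segmentation in the perspective (range-image) view.
    Toy rule: pixels closer than `threshold` metres are labelled 'vehicle'."""
    return [["vehicle" if r < threshold else "background" for r in row]
            for row in range_image]

def to_top_down(range_image, labels):
    """Project labelled range pixels into a top-down (bird's-eye) point set.
    Toy projection: column index -> x, measured range -> y."""
    points = []
    for row, lab_row in zip(range_image, labels):
        for x, (r, lab) in enumerate(zip(row, lab_row)):
            if lab != "background":
                points.append((x, r, lab))
    return points

def stage2_boxes(points):
    """Stage 2: regress instance geometry in the top-down view.
    Toy version: one axis-aligned 2D box per class from point extents."""
    by_class = {}
    for x, y, lab in points:
        by_class.setdefault(lab, []).append((x, y))
    boxes = {}
    for lab, pts in by_class.items():
        xs, ys = [p[0] for p in pts], [p[1] for p in pts]
        boxes[lab] = (min(xs), min(ys), max(xs), max(ys))  # (x0, y0, x1, y1)
    return boxes

# Tiny 2x3 "range image" of LiDAR distances in metres.
range_image = [[2.0, 9.0, 3.0],
               [9.0, 4.0, 9.0]]
labels = stage1_segment(range_image)
boxes = stage2_boxes(to_top_down(range_image, labels))
print(boxes)  # one labelled top-down bounding box per detected class
```

In the real system each stage would be a trained network and the boxes would feed the drive stack; the sketch only shows how the views chain together.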

    GENERATING MAPS REPRESENTING DYNAMIC OBJECTS FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication Number: US20240353234A1

    Publication Date: 2024-10-24

    Application Number: US18305153

    Application Date: 2023-04-21

    CPC classification number: G01C21/3804 G06T17/00 G06V10/761 G06V2201/07

    Abstract: In various examples, systems and methods are described that generate maps using first sensor data and then annotate second sensor data using those maps for autonomous systems and applications. The disclosed systems and methods automatically propagate annotations associated with the first sensor data, generated using a first type of sensor such as a LiDAR sensor, to the second sensor data, generated using a second type of sensor such as an image sensor. To propagate the annotations, the first sensor data may be used to generate a map, where the map represents the locations of static objects as well as the locations of dynamic objects at various instances in time. The map and the annotations associated with the first sensor data may then be used to annotate the second sensor data and/or to determine additional information associated with the objects represented by the second sensor data.
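The propagation scheme in the abstract (build a map from LiDAR annotations that stores static objects once and dynamic objects per timestamp, then label camera frames from that map) can be sketched like this. It is a toy illustration under stated assumptions: the function names (`build_map`, `annotate_image`), the annotation fields, and the pinhole projection with made-up intrinsics are all hypothetical, not taken from the patent.

```python
# Hypothetical sketch of propagating LiDAR-derived annotations to image data
# via a map of static and dynamic objects. Names and fields are illustrative.

def build_map(lidar_annotations):
    """Index annotations: static objects once, dynamic objects keyed by time."""
    world_map = {"static": [], "dynamic": {}}
    for ann in lidar_annotations:
        if ann["dynamic"]:
            world_map["dynamic"].setdefault(ann["t"], []).append(ann)
        else:
            world_map["static"].append(ann)
    return world_map

def project(point, focal=1000.0, cx=640.0, cy=360.0):
    """Toy pinhole projection of a 3D map point (x, y, depth) into pixels.
    The intrinsics (focal, cx, cy) are assumed values for illustration."""
    x, y, z = point
    return (focal * x / z + cx, focal * y / z + cy)

def annotate_image(world_map, t):
    """Annotate the camera frame at time t with labels propagated from the map:
    all static objects plus the dynamic objects recorded at that instant."""
    objects = world_map["static"] + world_map["dynamic"].get(t, [])
    return [{"label": o["label"], "pixel": project(o["pos"])} for o in objects]

# LiDAR-frame annotations: one static sign, one car observed at time t=5.
lidar_annotations = [
    {"label": "sign", "pos": (1.0, 0.0, 10.0), "dynamic": False, "t": None},
    {"label": "car", "pos": (-2.0, 0.5, 20.0), "dynamic": True, "t": 5},
]
world_map = build_map(lidar_annotations)
print(annotate_image(world_map, 5))  # both objects, now labelled in image space
```

A production system would handle sensor extrinsics, occlusion, and full 3D boxes rather than single points; the sketch only shows the map-then-propagate structure.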
