OBJECT TRACKING AND TIME-TO-COLLISION ESTIMATION FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication No.: US20230360232A1

    Publication Date: 2023-11-09

    Application No.: US17955827

    Application Date: 2022-09-29

    CPC classification number: G06T7/248 G06T2207/30261

    Abstract: In various examples, systems and methods for tracking objects and determining time-to-collision values associated with the objects are described. For instance, the systems and methods may use feature points associated with an object depicted in a first image and feature points associated with a second image to determine a scalar change associated with the object. The systems and methods may then use the scalar change to determine a translation associated with the object. Using the scalar change and the translation, the systems and methods may determine that the object is also depicted in the second image. The systems and methods may further use the scalar change and a temporal baseline to determine a time-to-collision associated with the object. After performing the determinations, the systems and methods may output data representing at least an identifier for the object, a location of the object, and/or the time-to-collision.
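The scale-change-to-TTC relationship described above can be sketched as follows. This is an illustrative reconstruction, not the patent's claimed implementation: the function names are hypothetical, and the standard approximation TTC ≈ Δt / (s − 1), for a scale ratio s measured over a temporal baseline Δt, stands in for the patent's determination step.

```python
import itertools
import math

def estimate_scale_change(pts_prev, pts_curr):
    """Estimate the scalar size change of an object from matched feature points.

    Uses the ratio of mean pairwise distances between the two frames
    (illustrative; any robust scale estimator could be substituted).
    """
    def mean_pairwise(pts):
        dists = [math.dist(a, b) for a, b in itertools.combinations(pts, 2)]
        return sum(dists) / len(dists)
    return mean_pairwise(pts_curr) / mean_pairwise(pts_prev)

def time_to_collision(scale_change, temporal_baseline):
    """Scale-based TTC: if the object appears s times larger after dt seconds,
    TTC ~= dt / (s - 1). The value diverges as s -> 1 (no approach)."""
    if scale_change <= 1.0:
        return float("inf")  # object is not getting closer
    return temporal_baseline / (scale_change - 1.0)
```

For example, an object whose feature-point spread doubles over a 0.1 s baseline yields a TTC of 0.1 s, while an unchanged scale yields infinity.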

    RADAR-BASED LANE CHANGE SAFETY SYSTEM

    Publication No.: US20230145218A1

    Publication Date: 2023-05-11

    Application No.: US17454338

    Application Date: 2021-11-10

    Abstract: In various examples, systems are described herein that may evaluate one or more radar detections against a set of filter criteria, the one or more radar detections generated using at least one sensor of a vehicle. The system may then accumulate, based at least on the evaluating, the one or more radar detections to one or more energy levels that correspond to one or more locations of the one or more radar detections in a zone positioned relative to the vehicle. The system may then determine one or more safety statuses associated with the zone based at least on one or more magnitudes of the one or more energy levels. The system may transmit data, or take some other action, that causes control of the vehicle based at least on the one or more safety statuses.
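A minimal sketch of the accumulate-and-threshold logic described above, assuming a simple dictionary of zone cells, exponential decay of stale energy, and a fixed threshold; the function names, gain/decay values, and status labels are illustrative assumptions, not details from the patent.

```python
from collections import defaultdict

def accumulate_energy(detections, energy, gain=1.0, decay=0.8):
    """Accumulate filtered radar detections into per-cell energy levels.

    Each detection is (cell, magnitude), where cells partition a zone
    positioned relative to the vehicle. Existing energy decays each cycle
    so stale detections fade out (an assumed, illustrative behavior).
    """
    for cell in list(energy):
        energy[cell] *= decay
    for cell, magnitude in detections:
        energy[cell] += gain * magnitude
    return energy

def safety_status(energy, threshold=1.0):
    """Report the zone as 'unsafe' if any cell's energy exceeds the threshold."""
    return "unsafe" if any(e > threshold for e in energy.values()) else "safe"
```

In use, repeated detections at the same location push that cell's energy above the threshold, while cycles without detections let the energy decay back below it, so transient noise that passes the filter once does not hold the zone in an unsafe state.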

    OBJECT DETECTION AND CLASSIFICATION USING LIDAR RANGE IMAGES FOR AUTONOMOUS MACHINE APPLICATIONS

    Publication No.: US20210063578A1

    Publication Date: 2021-03-04

    Application No.: US17005788

    Application Date: 2020-08-28

    Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
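The camera-to-LiDAR annotation-propagation step can be sketched as a pinhole-projection membership test, assuming the LiDAR points have already been transformed into the camera frame and the image-domain annotations are 2D boxes with class labels; the function name, argument layout, and the simple point-in-box test are illustrative assumptions rather than the patent's method.

```python
import numpy as np

def propagate_labels(points, K, boxes, labels, range_shape):
    """Transfer 2D image-domain box annotations onto a LiDAR range image.

    points: (N, 5) array of (range_row, range_col, x, y, z), with (x, y, z)
            assumed to be in the camera frame already.
    K: 3x3 camera intrinsics matrix.
    boxes: list of (x1, y1, x2, y2) pixel boxes; labels: matching class ids.
    Returns an integer label mask over the range image (0 = background).
    """
    mask = np.zeros(range_shape, dtype=np.int32)
    for row, col, x, y, z in points:
        if z <= 0:  # point is behind the camera; cannot be annotated
            continue
        u = K[0, 0] * x / z + K[0, 2]  # pinhole projection to pixel coords
        v = K[1, 1] * y / z + K[1, 2]
        for (x1, y1, x2, y2), label in zip(boxes, labels):
            if x1 <= u <= x2 and y1 <= v <= y2:
                mask[int(row), int(col)] = label
                break
    return mask
```

The resulting mask can serve as ground truth for training a range-image network without manual annotation in the LiDAR domain, which is the motivation the abstract describes.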
