JOINT 2D AND 3D OBJECT TRACKING FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication No.: US20230360255A1

    Publication Date: 2023-11-09

    Application No.: US17955822

    Application Date: 2022-09-29

    Abstract: In various examples, techniques for multi-dimensional tracking of objects using two-dimensional (2D) sensor data are described. Systems and methods may use first image data to determine a first 2D detected location and a first three-dimensional (3D) detected location of an object. The systems and methods may then determine a 2D estimated location using the first 2D detected location and a 3D estimated location using the first 3D detected location. The systems and methods may use second image data to determine a second 2D detected location and a second 3D detected location of a detected object, and may then determine that the object corresponds to the detected object using the 2D estimated location, the 3D estimated location, the second 2D detected location, and the second 3D detected location. The systems and methods then generate, modify, delete, or otherwise update an object track that includes 2D state information and 3D state information.
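The joint 2D/3D association described in this abstract can be illustrated with a minimal sketch, assuming a constant-velocity prediction step and simple distance gating; the function names, thresholds, and state layout below are illustrative assumptions, not the patented implementation:

```python
import math

def predict(prev_location, velocity, dt):
    # Constant-velocity estimate of the next location (works for 2D or 3D tuples).
    return tuple(p + v * dt for p, v in zip(prev_location, velocity))

def associate(est_2d, est_3d, det_2d, det_3d, max_2d_px=50.0, max_3d_m=2.0):
    # A detection matches a track only if it lies near BOTH the 2D (pixel)
    # estimated location and the 3D (metric) estimated location.
    return (math.dist(est_2d, det_2d) <= max_2d_px
            and math.dist(est_3d, det_3d) <= max_3d_m)
```

A matched detection would then update the track's 2D and 3D state, while unmatched tracks could be aged out, consistent with the abstract's generate/modify/delete step.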

    OBJECT TRACKING AND TIME-TO-COLLISION ESTIMATION FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication No.: US20230360232A1

    Publication Date: 2023-11-09

    Application No.: US17955827

    Application Date: 2022-09-29

    CPC classification number: G06T7/248 G06T2207/30261

    Abstract: In various examples, systems and methods for tracking objects and determining time-to-collision values associated with the objects are described. For instance, the systems and methods may use feature points associated with an object depicted in a first image and feature points associated with a second image to determine a scalar change associated with the object. The systems and methods may then use the scalar change to determine a translation associated with the object. Using the scalar change and the translation, the systems and methods may determine that the object is also depicted in the second image. The systems and methods may further use the scalar change and a temporal baseline to determine a time-to-collision associated with the object. After performing the determinations, the systems and methods may output data representing at least an identifier for the object, a location of the object, and/or the time-to-collision.
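The scale-change and time-to-collision relationship described above can be sketched under a pinhole-camera, constant-velocity assumption: if matched feature points grow by a factor s over a temporal baseline Δt, then TTC ≈ Δt / (s − 1). This is a standard geometric identity, not the claimed implementation, and the function names are hypothetical:

```python
import math
from itertools import combinations
from statistics import median

def scale_from_points(pts_prev, pts_curr):
    # Scale change s estimated from matched feature points as the median
    # ratio of pairwise distances (current frame over previous frame).
    ratios = [
        math.dist(pts_curr[i], pts_curr[j]) / math.dist(pts_prev[i], pts_prev[j])
        for i, j in combinations(range(len(pts_prev)), 2)
    ]
    return median(ratios)

def time_to_collision(scale_change, temporal_baseline):
    # Under a pinhole model with constant closing speed, image size is
    # inversely proportional to range, giving TTC = dt / (s - 1).
    if scale_change <= 1.0:
        return float("inf")  # object not growing in the image: no closing motion
    return temporal_baseline / (scale_change - 1.0)
```

The median over pairwise distance ratios makes the scale estimate robust to a few bad feature matches; the same idea is often implemented with a least-squares similarity fit instead.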

    SENSOR FUSION FOR AUTONOMOUS MACHINE APPLICATIONS USING MACHINE LEARNING

    Publication No.: US20210406560A1

    Publication Date: 2021-12-30

    Application No.: US17353231

    Application Date: 2021-06-21

    Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
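The deduplication goal of the fusion network can be illustrated with a toy, non-learned stand-in: detections of the same object seen by two sensors with overlapping fields of view are merged rather than reported twice. The actual patent describes a trained DNN; this greedy geometric merge is only an assumption-laden analogy, and `merge_radius` is a made-up parameter:

```python
import math

def fuse_detections(dets_a, dets_b, merge_radius=1.0):
    # Greedy merge: detections from two sensors whose positions fall within
    # merge_radius are treated as one object and averaged; others pass through.
    fused, used_b = [], set()
    for pa in dets_a:
        match = next((j for j, pb in enumerate(dets_b)
                      if j not in used_b and math.dist(pa, pb) <= merge_radius),
                     None)
        if match is None:
            fused.append(pa)
        else:
            used_b.add(match)
            pb = dets_b[match]
            fused.append(tuple((x + y) / 2 for x, y in zip(pa, pb)))
    fused.extend(pb for j, pb in enumerate(dets_b) if j not in used_b)
    return fused
```

A learned fusion model replaces the fixed radius and averaging with associations learned from data, which is what lets it handle the boundary and overlap regions the abstract emphasizes.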

    OBJECT TRACK MANAGEMENT FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication No.: US20240428596A1

    Publication Date: 2024-12-26

    Application No.: US18074708

    Application Date: 2022-12-05

    Abstract: In various examples, object track management for autonomous or semi-autonomous systems and applications is described herein. Systems and methods are disclosed that may limit the number of objects that are tracked based on one or more criteria. For instance, the number of objects that are tracked may be limited to a threshold number of objects when a number of detected objects exceeds a threshold. The systems and methods may use parameters associated with the detected objects to determine priority scores associated with the detected objects, and may then determine to only track the detected objects with the highest scores (e.g., high priority objects). As a result, latency and compute of the system may be reduced while still maintaining tracking with respect to safety-critical objects.
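The thresholded, score-based track selection described above reduces to keeping the top-K detections by priority score when the detection count exceeds the limit. A minimal sketch, assuming scores are already computed from the (unspecified) object parameters; the names and tuple layout are illustrative:

```python
def select_tracked(scored_detections, max_tracks):
    # scored_detections: list of (object_id, priority_score) pairs.
    # Track everything when under the limit; otherwise keep only the
    # max_tracks highest-priority (e.g., safety-critical) objects.
    if len(scored_detections) <= max_tracks:
        return list(scored_detections)
    return sorted(scored_detections, key=lambda d: d[1], reverse=True)[:max_tracks]
```

Capping the tracked set this way bounds per-frame latency and compute, at the cost of dropping the lowest-priority objects, which matches the trade-off the abstract states.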

    JOINT 2D AND 3D OBJECT TRACKING FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication No.: US20230360231A1

    Publication Date: 2023-11-09

    Application No.: US17955814

    Application Date: 2022-09-29

    CPC classification number: G06T7/246 G06T2207/30252

    Abstract: In various examples, techniques for multi-dimensional tracking of objects using two-dimensional (2D) sensor data are described. Systems and methods may use first image data to determine a first 2D detected location and a first three-dimensional (3D) detected location of an object. The systems and methods may then determine a 2D estimated location using the first 2D detected location and a 3D estimated location using the first 3D detected location. The systems and methods may use second image data to determine a second 2D detected location and a second 3D detected location of a detected object, and may then determine that the object corresponds to the detected object using the 2D estimated location, the 3D estimated location, the second 2D detected location, and the second 3D detected location. The systems and methods then generate, modify, delete, or otherwise update an object track that includes 2D state information and 3D state information.
