JOINT 2D AND 3D OBJECT TRACKING FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication No.: US20230360255A1

    Publication Date: 2023-11-09

    Application No.: US17955822

    Filing Date: 2022-09-29

    IPC Classification: G06T7/73 G06T7/20

    Abstract: In various examples, techniques for multi-dimensional tracking of objects using two-dimensional (2D) sensor data are described. Systems and methods may use first image data to determine a first 2D detected location and a first three-dimensional (3D) detected location of an object. The systems and methods may then determine a 2D estimated location using the first 2D detected location and a 3D estimated location using the first 3D detected location. The systems and methods may use second image data to determine a second 2D detected location and a second 3D detected location of a detected object, and may then determine that the object corresponds to the detected object using the 2D estimated location, the 3D estimated location, the second 2D detected location, and the second 3D detected location. The systems and methods then generate, modify, delete, or otherwise update an object track that includes 2D state information and 3D state information.
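    The joint 2D/3D association the abstract describes can be sketched as follows. This is a minimal illustration only, assuming a constant-velocity motion model and simple distance-gated matching; the class names, thresholds, and update rule are illustrative assumptions, not the patented implementation.

    ```python
    import math

    class Track:
        """Object track holding both 2D (pixel) and 3D (world) state."""
        def __init__(self, loc_2d, loc_3d):
            self.loc_2d = loc_2d            # (u, v) pixel coordinates
            self.loc_3d = loc_3d            # (x, y, z) world coordinates
            self.vel_2d = (0.0, 0.0)
            self.vel_3d = (0.0, 0.0, 0.0)

        def predict(self, dt):
            """2D and 3D estimated locations after time step dt."""
            est_2d = tuple(p + v * dt for p, v in zip(self.loc_2d, self.vel_2d))
            est_3d = tuple(p + v * dt for p, v in zip(self.loc_3d, self.vel_3d))
            return est_2d, est_3d

    def matches(track, det_2d, det_3d, dt=1.0, max_px=50.0, max_m=2.0):
        """Associate a detection with the track only if it is close to the
        track's 2D estimated location AND its 3D estimated location."""
        est_2d, est_3d = track.predict(dt)
        return (math.dist(est_2d, det_2d) < max_px
                and math.dist(est_3d, det_3d) < max_m)

    def update(track, det_2d, det_3d, dt=1.0):
        """Update both 2D and 3D state (location + velocity) from a match."""
        track.vel_2d = tuple((d - p) / dt for d, p in zip(det_2d, track.loc_2d))
        track.vel_3d = tuple((d - p) / dt for d, p in zip(det_3d, track.loc_3d))
        track.loc_2d, track.loc_3d = det_2d, det_3d
    ```

    Gating on both the 2D and the 3D estimate is what distinguishes this joint scheme from tracking in either space alone: a detection must agree with the track in image space and in world space to be associated.
    
    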

    OBJECT TRACKING AND TIME-TO-COLLISION ESTIMATION FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication No.: US20230360232A1

    Publication Date: 2023-11-09

    Application No.: US17955827

    Filing Date: 2022-09-29

    IPC Classification: G06T7/246

    CPC Classification: G06T7/248 G06T2207/30261

    Abstract: In various examples, systems and methods for tracking objects and determining time-to-collision values associated with the objects are described. For instance, the systems and methods may use feature points associated with an object depicted in a first image and feature points associated with a second image to determine a scalar change associated with the object. The systems and methods may then use the scalar change to determine a translation associated with the object. Using the scalar change and the translation, the systems and methods may determine that the object is also depicted in the second image. The systems and methods may further use the scalar change and a temporal baseline to determine a time-to-collision associated with the object. After performing the determinations, the systems and methods may output data representing at least an identifier for the object, a location of the object, and/or the time-to-collision.
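    The scale-change-plus-baseline idea above can be illustrated with the standard scale-based TTC approximation: if an object's image size grows by a factor s over a temporal baseline dt, then TTC ≈ dt / (s − 1). The median-of-pairwise-distance-ratios estimator below is a common robust choice, used here as an assumption, not necessarily the patented formulation.

    ```python
    import itertools
    import math
    import statistics

    def scale_change(pts_prev, pts_curr):
        """Estimate the object's scale change between two frames as the
        median ratio of pairwise feature-point distances.
        pts_prev / pts_curr: lists of (x, y) points matched by index."""
        ratios = []
        for i, j in itertools.combinations(range(len(pts_prev)), 2):
            d_prev = math.dist(pts_prev[i], pts_prev[j])
            d_curr = math.dist(pts_curr[i], pts_curr[j])
            if d_prev > 0:
                ratios.append(d_curr / d_prev)
        return statistics.median(ratios)

    def time_to_collision(scale, dt):
        """TTC in seconds from scale change over temporal baseline dt.
        scale > 1 means the object grows in the image (i.e., approaching)."""
        if scale <= 1.0:
            return float("inf")   # receding or holding distance
        return dt / (scale - 1.0)
    ```

    For example, features that spread apart by 10% between frames 0.1 s apart give a TTC of roughly one second; a shrinking object yields an infinite TTC under this model.
    
    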

    SENSOR FUSION FOR AUTONOMOUS MACHINE APPLICATIONS USING MACHINE LEARNING

    Publication No.: US20210406560A1

    Publication Date: 2021-12-30

    Application No.: US17353231

    Filing Date: 2021-06-21

    IPC Classification: G06K9/00 B60W60/00 G06T7/292

    Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
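    The abstract describes a learned fusion network; the sketch below illustrates only the interface of such a component, merging the outputs of several per-sensor models and collapsing duplicate detections of the same object in overlapping fields of view with a simple distance rule. In the described approach these associations are learned by a DNN rather than hand-coded; all names and thresholds here are illustrative assumptions.

    ```python
    import math

    def fuse_detections(per_sensor_outputs, merge_radius=1.0):
        """Fuse detections from multiple per-sensor models into one output.
        per_sensor_outputs: list (one entry per sensor model) of detections,
        each detection a dict {'xy': (x, y), 'score': float}.
        Near-duplicates across overlapping fields of view are collapsed,
        keeping the highest-confidence instance of each object."""
        # Flatten all model outputs and sort by confidence, highest first.
        all_dets = sorted(
            (d for dets in per_sensor_outputs for d in dets),
            key=lambda d: d["score"], reverse=True)
        fused = []
        for det in all_dets:
            # Keep only if no stronger detection of (likely) the same
            # object has already been accepted.
            if all(math.dist(det["xy"], kept["xy"]) > merge_radius
                   for kept in fused):
                fused.append(det)
        return fused
    ```

    A learned fusion model replaces the fixed `merge_radius` rule with associations trained from data, which is what lets it handle boundary regions where a hard distance threshold would merge distinct objects or keep duplicates.
    
    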

    JOINT 2D AND 3D OBJECT TRACKING FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication No.: US20230360231A1

    Publication Date: 2023-11-09

    Application No.: US17955814

    Filing Date: 2022-09-29

    IPC Classification: G06T7/246

    CPC Classification: G06T7/246 G06T2207/30252

    Abstract: In various examples, techniques for multi-dimensional tracking of objects using two-dimensional (2D) sensor data are described. Systems and methods may use first image data to determine a first 2D detected location and a first three-dimensional (3D) detected location of an object. The systems and methods may then determine a 2D estimated location using the first 2D detected location and a 3D estimated location using the first 3D detected location. The systems and methods may use second image data to determine a second 2D detected location and a second 3D detected location of a detected object, and may then determine that the object corresponds to the detected object using the 2D estimated location, the 3D estimated location, the second 2D detected location, and the second 3D detected location. The systems and methods then generate, modify, delete, or otherwise update an object track that includes 2D state information and 3D state information.