DETECTION OF OBJECTS IN LIDAR POINT CLOUDS

    Publication No.: US20250086802A1

    Publication Date: 2025-03-13

    Application No.: US18434501

    Filing Date: 2024-02-06

    Applicant: TuSimple, Inc.

    Abstract: A method of processing point cloud information includes converting points in a point cloud obtained from a lidar sensor into a voxel grid; generating, from the voxel grid, sparse voxel features by applying a multi-layer perceptron and one or more max pooling layers that reduce the dimension of the input data; applying a cascade of an encoder that performs an N-stage sparse-to-dense feature operation, a global context pooling (GCP) module, and an M-stage decoder that performs a dense-to-sparse feature generation operation, where the GCP module bridges an output of the last of the N stages with an input of the first of the M stages, N and M are positive integers, and the GCP module comprises a multi-scale feature extractor; and performing one or more perception operations on an output of the M-stage decoder and/or an output of the GCP module.
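
    The abstract above describes an architecture rather than code, but the voxelization and sparse voxel feature step can be pictured with a short sketch. The Python snippet below is a minimal illustration under stated assumptions (a made-up voxel size, a single random-weight linear layer standing in for the multi-layer perceptron, and max pooling over the points in each occupied voxel); it is not the patented implementation, and the encoder/GCP/decoder cascade is omitted.

        # Sketch: voxelize a lidar point cloud and build per-voxel features with a
        # point-wise layer followed by max pooling (assumed details, for illustration).
        import numpy as np

        def voxelize(points, voxel_size=0.5):
            """Map each (x, y, z) point to an integer voxel coordinate."""
            return np.floor(points / voxel_size).astype(np.int64)

        def sparse_voxel_features(points, voxel_size=0.5, hidden=16, seed=0):
            rng = np.random.default_rng(seed)
            w = rng.normal(size=(points.shape[1], hidden))  # stand-in MLP weights (assumed)
            per_point = np.maximum(points @ w, 0.0)         # one linear layer + ReLU

            voxels = voxelize(points, voxel_size)
            keys, inverse = np.unique(voxels, axis=0, return_inverse=True)
            inverse = inverse.reshape(-1)                   # row index of each point's voxel

            # Max-pool the point features inside each occupied voxel (dimension reduction).
            feats = np.full((keys.shape[0], hidden), -np.inf)
            np.maximum.at(feats, inverse, per_point)
            return keys, feats                              # sparse voxel coords and features

        if __name__ == "__main__":
            cloud = np.random.default_rng(1).uniform(-10, 10, size=(1000, 3))
            coords, feats = sparse_voxel_features(cloud)
            print(coords.shape, feats.shape)                # (occupied voxels, 3), (occupied voxels, 16)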

    RADAR AND LIDAR BASED DRIVING TECHNOLOGY
    Invention Publication

    Publication No.: US20230184931A1

    Publication Date: 2023-06-15

    Application No.: US17987200

    Filing Date: 2022-11-15

    Applicant: TuSimple, Inc.

    CPC classification number: G01S13/931 G01S13/89 G01S17/931 G01S17/89 G01S13/865

    Abstract: Vehicles can include systems and apparatus for performing signal processing on sensor data from radar(s) and LiDAR(s) located on the vehicles. A method includes obtaining and filtering radar point cloud data of an area in an environment in which a vehicle is operating on a road to obtain filtered radar point cloud data; obtaining light detection and ranging (LiDAR) point cloud data of at least some of the area, where the light detection and ranging point cloud data include information about a bounding box that surrounds an object on the road; determining a set of radar point cloud data that are associated with the bounding box that surrounds the object; and causing the vehicle to operate based on one or more characteristics of the object determined from the set of radar point cloud data.
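
    As a rough illustration of the radar-to-lidar association step, the Python sketch below filters a radar point cloud and keeps the returns that fall inside an axis-aligned bounding box reported for an object; the point layout, box format, and the mean-velocity summary are assumptions, not the patented method.

        # Sketch: select radar returns inside a lidar-derived bounding box and
        # summarize the object's speed from them (assumed data layout).
        import numpy as np

        def filter_radar(points, max_range=150.0):
            """Drop returns beyond a maximum range (a simple stand-in for filtering)."""
            dist = np.linalg.norm(points[:, :3], axis=1)
            return points[dist <= max_range]

        def points_in_box(points, box_min, box_max):
            """Keep the radar points whose (x, y, z) lie inside the bounding box."""
            inside = np.all((points[:, :3] >= box_min) & (points[:, :3] <= box_max), axis=1)
            return points[inside]

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            # Columns: x, y, z, radial velocity (m/s) -- an assumed radar point format.
            radar = np.column_stack([rng.uniform(-50, 50, (500, 3)),
                                     rng.uniform(-30, 30, 500)])
            radar = filter_radar(radar)
            obj = points_in_box(radar, box_min=np.array([5, -2, -1]),
                                box_max=np.array([10, 2, 3]))
            if obj.size:
                print(len(obj), "radar points; mean radial velocity:", obj[:, 3].mean())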

    SYSTEM AND METHOD FOR ONLINE REAL-TIME MULTI-OBJECT TRACKING

    Publication No.: US20190266420A1

    Publication Date: 2019-08-29

    Application No.: US15906561

    Filing Date: 2018-02-27

    Applicant: TuSimple

    Abstract: A system and method for online real-time multi-object tracking is disclosed. A particular embodiment can be configured to: receive image frame data from at least one camera associated with an autonomous vehicle; generate similarity data corresponding to a similarity between object data in a previous image frame and object detection results from a current image frame; use the similarity data to generate data association results corresponding to a best matching between the object data in the previous image frame and the object detection results from the current image frame; cause state transitions in finite state machines for each object according to the data association results; and provide, as an output, object tracking data corresponding to the states of the finite state machines for each object.
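
    To make the similarity/association/state-machine pipeline concrete, here is a small self-contained Python sketch using IoU as the similarity measure, a greedy best-match association, and a toy finite state machine with hypothetical NEW/TRACKED/LOST states; the disclosed system's actual similarity features, matching algorithm, and state set are not specified here.

        # Sketch: IoU similarity, greedy data association, and per-object state machines.
        def iou(a, b):
            """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_b = (b[2] - b[0]) * (b[3] - b[1])
            return inter / (area_a + area_b - inter + 1e-9)

        def associate(tracks, detections, threshold=0.3):
            """Greedily pick the highest-similarity (track, detection) pairs."""
            scored = sorted(((iou(t, d), ti, di) for ti, t in enumerate(tracks)
                             for di, d in enumerate(detections)), reverse=True)
            pairs, used_tracks, used_dets = [], set(), set()
            for score, ti, di in scored:
                if score < threshold or ti in used_tracks or di in used_dets:
                    continue
                pairs.append((ti, di))
                used_tracks.add(ti)
                used_dets.add(di)
            return pairs

        class TrackFSM:
            """Toy finite state machine driven by the association results."""
            def __init__(self):
                self.state = "NEW"
            def update(self, matched):
                self.state = "TRACKED" if matched else "LOST"

        if __name__ == "__main__":
            prev = [(0, 0, 10, 10), (20, 20, 30, 30)]
            curr = [(1, 1, 11, 11), (100, 100, 110, 110)]
            matches = associate(prev, curr)
            fsms = [TrackFSM() for _ in prev]
            for ti, fsm in enumerate(fsms):
                fsm.update(any(ti == m[0] for m in matches))
            print(matches, [f.state for f in fsms])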

    CAMERA ORIENTATION ESTIMATION
    Invention Application

    Publication No.: US20210125370A1

    Publication Date: 2021-04-29

    Application No.: US16663242

    Filing Date: 2019-10-24

    Applicant: TUSIMPLE, INC.

    Abstract: Techniques are described to estimate the orientation of one or more cameras located on a vehicle. The orientation estimation technique can include obtaining an image from a camera located on a vehicle while the vehicle is being driven on a road; determining, from a terrain map, a location of a landmark located at a distance from a location of the vehicle on the road; determining, in the image, pixel locations of the landmark; selecting one pixel location from the determined pixel locations; and calculating values that describe an orientation of the camera using at least an intrinsic matrix and a previously known extrinsic matrix of the camera, where the intrinsic matrix is characterized based on at least the one pixel location and the location of the landmark.
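
    A rough Python sketch of the geometry involved is shown below: the landmark's pixel location is turned into a bearing with the camera intrinsic matrix, the landmark's terrain-map position is turned into a bearing in the vehicle frame, and yaw/pitch offsets between the two are read off. The intrinsic values, axis conventions, and two-angle output are assumptions for illustration, not the patented calculation.

        # Sketch: compare the pixel bearing of a landmark with its map bearing to
        # estimate camera yaw/pitch (assumed intrinsics and axis conventions).
        import numpy as np

        def pixel_bearing(pixel, K):
            """Unit ray in the camera frame for a pixel (u, v), using intrinsics K."""
            ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
            return ray / np.linalg.norm(ray)

        def world_bearing(landmark_xyz, vehicle_xyz):
            """Unit direction from the vehicle to the landmark (terrain-map frame)."""
            d = np.asarray(landmark_xyz, float) - np.asarray(vehicle_xyz, float)
            return d / np.linalg.norm(d)

        def yaw_pitch(v):
            """Yaw (about z) and pitch of a unit vector, in radians."""
            return np.arctan2(v[1], v[0]), np.arcsin(np.clip(v[2], -1.0, 1.0))

        if __name__ == "__main__":
            K = np.array([[1000.0, 0.0, 640.0],
                          [0.0, 1000.0, 360.0],
                          [0.0, 0.0, 1.0]])               # assumed intrinsic matrix
            cam = pixel_bearing((700.0, 340.0), K)         # landmark pixel (assumed)
            # Remap camera axes (x right, y down, z forward) to vehicle axes (x fwd, y left, z up).
            cam_veh = np.array([cam[2], -cam[0], -cam[1]])
            veh = world_bearing((50.0, 3.0, 2.0), (0.0, 0.0, 0.0))
            dyaw = yaw_pitch(veh)[0] - yaw_pitch(cam_veh)[0]
            dpitch = yaw_pitch(veh)[1] - yaw_pitch(cam_veh)[1]
            print("camera yaw/pitch offset (rad):", dyaw, dpitch)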

    SYSTEM AND METHOD FOR LATERAL VEHICLE DETECTION

    Publication No.: US20190286916A1

    Publication Date: 2019-09-19

    Application No.: US15924249

    Filing Date: 2018-03-18

    Applicant: TuSimple

    Abstract: A system and method for lateral vehicle detection is disclosed. A particular embodiment can be configured to: receive lateral image data from at least one laterally-facing camera associated with an autonomous vehicle; warp the lateral image data based on a line parallel to a side of the autonomous vehicle; perform object extraction on the warped lateral image data to identify extracted objects in the warped lateral image data; and apply bounding boxes around the extracted objects.
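
    The warp step can be pictured with the short OpenCV-based Python sketch below, which maps a quadrilateral aligned with a line parallel to the vehicle's side onto a rectangular output frame before any object extraction; the corner correspondences and output size are made-up placeholders, not the disclosed parameters.

        # Sketch: perspective-warp a lateral camera image so the chosen reference
        # line becomes horizontal (assumed correspondences, illustrative only).
        import cv2
        import numpy as np

        def warp_lateral_image(image, src_quad, out_size=(640, 360)):
            """Warp the quadrilateral src_quad onto the full output frame."""
            dst_quad = np.float32([[0, 0], [out_size[0], 0],
                                   [out_size[0], out_size[1]], [0, out_size[1]]])
            H = cv2.getPerspectiveTransform(np.float32(src_quad), dst_quad)
            return cv2.warpPerspective(image, H, out_size)

        if __name__ == "__main__":
            frame = np.zeros((720, 1280, 3), dtype=np.uint8)   # stand-in camera frame
            # Quadrilateral following the road edge parallel to the vehicle's side (assumed).
            quad = [(200, 300), (1100, 250), (1200, 700), (100, 720)]
            warped = warp_lateral_image(frame, quad)
            print(warped.shape)                                # (360, 640, 3)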

    IMAGE FUSION FOR AUTONOMOUS VEHICLE OPERATION

    Publication No.: US20240046654A1

    Publication Date: 2024-02-08

    Application No.: US18489306

    Filing Date: 2023-10-18

    Applicant: TUSIMPLE, INC.

    Abstract: Devices, systems, and methods are described for fusing scenes from real-time image feeds from on-vehicle cameras in autonomous vehicles, reducing redundancy in the processed information to enable real-time autonomous operation. One example of a method for improving perception in an autonomous vehicle includes receiving a plurality of cropped images, wherein each of the plurality of cropped images comprises one or more bounding boxes that correspond to one or more objects in a corresponding cropped image; identifying, based on metadata in the plurality of cropped images, a first bounding box in a first cropped image and a second bounding box in a second cropped image, wherein the first and second bounding boxes correspond to a common object; and fusing the metadata corresponding to the common object from the first cropped image and the second cropped image to generate an output result for the common object.
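
    The matching-and-fusion idea can be illustrated with the small Python sketch below: boxes from two cropped images are shifted back into full-frame coordinates using each crop's offset, overlapping boxes are treated as the same object, and their metadata is merged; the crop offsets, overlap threshold, and metadata fields are hypothetical.

        # Sketch: match bounding boxes from two crops of the same frame and fuse
        # the common object's box and metadata (assumed data layout).
        def to_frame(box, crop_offset):
            """Shift a crop-local box (x1, y1, x2, y2) into full-frame coordinates."""
            ox, oy = crop_offset
            return (box[0] + ox, box[1] + oy, box[2] + ox, box[3] + oy)

        def iou(a, b):
            """Intersection-over-union of two full-frame boxes."""
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, x2 - x1) * max(0, y2 - y1)
            union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
            return inter / union if union else 0.0

        def fuse(box_a, box_b, meta_a, meta_b):
            """Union of the two boxes plus merged metadata for the common object."""
            merged = (min(box_a[0], box_b[0]), min(box_a[1], box_b[1]),
                      max(box_a[2], box_b[2]), max(box_a[3], box_b[3]))
            return merged, {**meta_a, **meta_b}

        if __name__ == "__main__":
            a = to_frame((10, 10, 60, 80), crop_offset=(100, 50))
            b = to_frame((5, 15, 55, 85), crop_offset=(105, 45))
            if iou(a, b) > 0.5:
                print(fuse(a, b, {"camera": "front_left"}, {"camera": "front_right"}))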
