SYSTEM AND METHOD FOR VEHICLE TAILLIGHT STATE RECOGNITION

    Publication No.: US20200334476A1

    Publication Date: 2020-10-22

    Application No.: US16916488

    Application Date: 2020-06-30

    Applicant: TUSIMPLE, INC.

    Inventor: Panqu WANG; Tian LI

    Abstract: A system and method for taillight signal recognition using a convolutional neural network is disclosed. An example embodiment includes: receiving a plurality of image frames from one or more image-generating devices of an autonomous vehicle; using a single-frame taillight illumination status annotation dataset and a single-frame taillight mask dataset to recognize a taillight illumination status of a proximate vehicle identified in an image frame of the plurality of image frames, the single-frame taillight illumination status annotation dataset including one or more taillight illumination status conditions of a right or left vehicle taillight signal, the single-frame taillight mask dataset including annotations to isolate a taillight region of a vehicle; and using a multi-frame taillight illumination status dataset to recognize a taillight illumination status of the proximate vehicle in multiple image frames of the plurality of image frames, the multiple image frames being in temporal succession.
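The two-stage recognition described in the abstract (a single-frame decision inside a masked taillight region, followed by multi-frame temporal aggregation) can be sketched in simplified form. This is an illustrative stand-in, not the patent's convolutional neural network: the single-frame classifier is reduced to a brightness threshold, and the mask array stands in for the single-frame taillight mask annotations.

```python
import numpy as np

def taillight_status(frames, mask, threshold=0.5):
    """Classify per-frame taillight illumination, then smooth over time.

    frames: sequence of HxW grayscale arrays normalized to [0, 1].
    mask:   HxW boolean array isolating the taillight region (a stand-in
            for the patent's single-frame taillight mask dataset).
    """
    # Single-frame step: mean brightness inside the masked taillight region.
    per_frame = [bool(f[mask].mean() > threshold) for f in frames]
    # Multi-frame step: majority vote over the temporally successive frames.
    return per_frame, sum(per_frame) > len(per_frame) / 2
```

In a real system the per-frame decision would come from the trained CNN; the structure (per-frame label, then temporal smoothing) is what the sketch illustrates.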

    DETECTION OF OBJECTS IN LIDAR POINT CLOUDS

    Publication No.: US20250086802A1

    Publication Date: 2025-03-13

    Application No.: US18434501

    Application Date: 2024-02-06

    Applicant: TuSimple, Inc.

    Abstract: A method of processing point cloud information includes converting points in a point cloud obtained from a lidar sensor into a voxel grid; generating, from the voxel grid, sparse voxel features by applying a multi-layer perceptron and one or more max pooling layers that reduce the dimension of the input data; applying a cascade of an encoder that performs an N-stage sparse-to-dense feature operation, a global context pooling (GCP) module, and an M-stage decoder that performs a dense-to-sparse feature generation operation; and performing one or more perception operations on an output of the M-stage decoder and/or an output of the GCP module. The GCP module bridges an output of a last stage of the N stages with an input of a first stage of the M stages, where N and M are positive integers, and comprises a multi-scale feature extractor.
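The first two steps of the abstract (voxelizing the point cloud, then pooling per-point features into sparse voxel features) can be sketched as follows. This is a simplified illustration: the per-point multi-layer perceptron is replaced by the identity so that the voxelization and max-pooling steps stay visible, and the encoder/GCP/decoder cascade is not modeled.

```python
import numpy as np

def sparse_voxel_features(points, feats, voxel_size=1.0):
    """Max-pool per-point features within each occupied voxel.

    points: (P, 3) array of lidar points; feats: (P, C) per-point features.
    Returns the integer indices of occupied voxels and one pooled feature
    row per voxel (the "sparse voxel features" of the abstract).
    """
    idx = np.floor(points / voxel_size).astype(int)          # voxel index per point
    keys, inverse = np.unique(idx, axis=0, return_inverse=True)
    pooled = np.full((len(keys), feats.shape[1]), -np.inf)
    for row, k in enumerate(inverse.ravel()):                # max pool into each voxel
        pooled[k] = np.maximum(pooled[k], feats[row])
    return keys, pooled
```

Only occupied voxels produce rows, which is what keeps the representation sparse before the dense encoder stages.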

    RADAR AND LIDAR BASED DRIVING TECHNOLOGY
    Invention Publication

    Publication No.: US20230184931A1

    Publication Date: 2023-06-15

    Application No.: US17987200

    Application Date: 2022-11-15

    Applicant: TuSimple, Inc.

    CPC classification number: G01S13/931 G01S13/89 G01S17/931 G01S17/89 G01S13/865

    Abstract: Vehicles can include systems and apparatus for performing signal processing on sensor data from radar(s) and LiDAR(s) located on the vehicles. A method includes obtaining and filtering radar point cloud data of an area in an environment in which a vehicle is operating on a road to obtain filtered radar point cloud data; obtaining light detection and ranging point cloud data of at least some of the area, where the light detection and ranging point cloud data include information about a bounding box that surrounds an object on the road; determining a set of radar point cloud data that are associated with the bounding box that surrounds the object; and causing the vehicle to operate based on one or more characteristics of the object determined from the set of radar point cloud data.
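The filter-then-associate structure of the abstract can be sketched as below. Both concrete choices are assumptions for illustration only: the abstract does not specify the radar filtering criteria (a minimum-range cut stands in here) nor the bounding-box representation (an axis-aligned box stands in).

```python
import numpy as np

def radar_points_in_box(radar_points, box_min, box_max, min_range=1.0):
    """Filter radar returns, then keep those inside a lidar-derived box.

    radar_points: (P, 3) array of radar returns in the vehicle frame.
    box_min, box_max: opposite corners of the lidar bounding box.
    """
    # Filtering step: drop very-near returns (e.g. clutter off the ego vehicle).
    ranges = np.linalg.norm(radar_points, axis=1)
    filtered = radar_points[ranges >= min_range]
    # Association step: radar points falling inside the object's bounding box.
    inside = np.all((filtered >= box_min) & (filtered <= box_max), axis=1)
    return filtered[inside]
```

The returned subset is the "set of radar point cloud data associated with the bounding box" from which object characteristics (e.g. radial velocity) would then be derived.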

    VEHICLE ULTRASONIC SENSORS
    Invention Application

    Publication No.: US20250050913A1

    Publication Date: 2025-02-13

    Application No.: US18486809

    Application Date: 2023-10-13

    Applicant: TuSimple, Inc.

    Abstract: Techniques are described for operating a vehicle using sensor data provided by one or more ultrasonic sensors located on or in the vehicle. An example method includes receiving, by a computer located in a vehicle, data from an ultrasonic sensor located on the vehicle, where the data includes a first set of coordinates of two points associated with a location where an object is detected by the ultrasonic sensor; determining a second set of coordinates associated with a point in between the two points; performing a first determination that the second set of coordinates is associated with a lane or a road on which the vehicle is operating; performing a second determination that the object is movable; and sending, in response to the first determination and the second determination, a message that causes the vehicle to perform a driving related operation while the vehicle is operating on the road.
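The geometric core of the abstract (computing a second set of coordinates midway between the two reported echo points, then testing whether that point lies in the lane) can be sketched as follows. The straight-lane membership test is an assumption for illustration; the patent does not specify how the lane association is performed.

```python
def midpoint(p1, p2):
    """Second set of coordinates: the point midway between the two
    echo points reported by the ultrasonic sensor."""
    return tuple((a + b) / 2 for a, b in zip(p1, p2))

def in_lane(point, lane_left, lane_right):
    """Check whether the point's lateral coordinate falls between the
    lane boundaries (a straight-lane simplification)."""
    return lane_left <= point[0] <= lane_right
```

If the midpoint lies in the lane and the object is determined to be movable, the method sends a message triggering a driving-related operation.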

    OBJECT POSE DETERMINATION SYSTEM AND METHOD

    Publication No.: US20250029274A1

    Publication Date: 2025-01-23

    Application No.: US18488657

    Application Date: 2023-10-17

    Applicant: TuSimple, Inc.

    Abstract: The present disclosure provides methods and systems of sampling-based object pose determination. An example method includes obtaining, for a time frame, sensor data of the object acquired by a plurality of sensors; generating a two-dimensional bounding box of the object in a projection plane based on the sensor data of the time frame; generating a three-dimensional pose model of the object based on the sensor data of the time frame and a model reconstruction algorithm; generating, based on the sensor data, the pose model, and multiple sampling techniques, a plurality of pose hypotheses of the object corresponding to the time frame; generating a hypothesis projection of the object for each of the pose hypotheses by projecting the pose hypothesis onto the projection plane; determining evaluation results by comparing the hypothesis projections with the bounding box; and determining, based on the evaluation results, an object pose for the time frame.
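The evaluation step of the abstract (comparing each hypothesis projection against the detected two-dimensional bounding box and picking the best) can be sketched with a simple scoring rule. Intersection-over-union is an assumed comparison metric for illustration; the abstract does not specify how the projections are scored.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x0, y0, x1, y1)."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def best_pose(projections, detected_box):
    """Index of the pose hypothesis whose projection best matches the
    detected 2-D bounding box."""
    return max(range(len(projections)),
               key=lambda i: iou(projections[i], detected_box))
```

The selected index identifies the pose hypothesis used as the object pose for that time frame.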

    SYSTEM AND METHOD FOR DETERMINING CAR TO LANE DISTANCE

    Publication No.: US20240379004A1

    Publication Date: 2024-11-14

    Application No.: US18784692

    Application Date: 2024-07-25

    Applicant: TUSIMPLE, INC.

    Inventor: Panqu WANG

    Abstract: A system and method for determining car to lane distance is provided. In one aspect, the system includes a camera configured to generate an image, a processor, and a computer-readable memory. The processor is configured to receive the image from the camera, generate a wheel segmentation map representative of one or more wheels detected in the image, and generate a lane segmentation map representative of one or more lanes detected in the image. For at least one of the wheels in the wheel segmentation map, the processor is also configured to determine a distance between the wheel and at least one nearby lane in the lane segmentation map. The processor is further configured to determine a distance between a vehicle in the image and the lane based on the distance between the wheel and the lane.
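The distance step of the abstract (measuring from a wheel in the wheel segmentation map to a nearby lane in the lane segmentation map) can be sketched in image space. This is an illustrative simplification: it reports a minimum pixel distance between the two binary masks, whereas the patented system would map that measurement to a physical car-to-lane distance.

```python
import numpy as np

def wheel_to_lane_distance(wheel_mask, lane_mask):
    """Minimum Euclidean distance, in pixels, between any wheel pixel
    and any lane pixel of the two segmentation maps."""
    wheels = np.argwhere(wheel_mask)     # (Nw, 2) pixel coordinates
    lanes = np.argwhere(lane_mask)       # (Nl, 2) pixel coordinates
    diffs = wheels[:, None, :] - lanes[None, :, :]
    return float(np.linalg.norm(diffs, axis=2).min())
```

In practice each connected component of the wheel map would be measured separately against its nearest lane, and the pixel distance converted to meters via the camera geometry.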
