RADAR AND LIDAR BASED DRIVING TECHNOLOGY
    Type: Invention Publication

    Publication Number: US20230184931A1

    Publication Date: 2023-06-15

    Application Number: US17987200

    Application Date: 2022-11-15

    Applicant: TuSimple, Inc.

    CPC classification numbers: G01S13/931; G01S13/89; G01S17/931; G01S17/89; G01S13/865

    Abstract: Vehicles can include systems and apparatus for performing signal processing on sensor data from radar(s) and LiDAR(s) located on the vehicles. A method includes obtaining and filtering radar point cloud data of an area in an environment in which a vehicle is operating on a road to obtain filtered radar point cloud data; obtaining light detection and ranging point cloud data of at least some of the area, where the light detection and ranging point cloud data include information about a bounding box that surrounds an object on the road; determining a set of radar point cloud data that are associated with the bounding box that surrounds the object; and causing the vehicle to operate based on one or more characteristics of the object determined from the set of radar point cloud data.
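
    The following is a minimal, illustrative sketch (Python/NumPy) of the association step described above: radar returns are filtered, the points falling inside a LiDAR-derived bounding box are selected, and a characteristic of the object is estimated from them. The intensity-threshold filter, the axis-aligned box format, and the mean-radial-velocity estimate are assumptions made for the example, not details taken from the patent.

        import numpy as np

        def filter_radar_points(points, min_intensity=0.1):
            """Keep radar returns whose intensity clears a threshold.
            points: (N, 5) array of [x, y, z, radial_velocity, intensity]."""
            return points[points[:, 4] >= min_intensity]

        def points_in_bounding_box(points, box_min, box_max):
            """Select radar points whose x, y, z fall inside an axis-aligned box
            reported by the LiDAR pipeline (box_min and box_max are 3-vectors)."""
            inside = np.all((points[:, :3] >= box_min) & (points[:, :3] <= box_max), axis=1)
            return points[inside]

        def object_speed_estimate(box_points):
            """One possible characteristic of the object: mean radial velocity."""
            return float(np.mean(box_points[:, 3])) if len(box_points) else 0.0

        # Example with synthetic radar returns.
        radar = np.array([
            [10.0, 2.0, 0.5, 4.2, 0.8],   # strong return inside the box
            [10.5, 2.2, 0.4, 4.0, 0.7],
            [50.0, 9.0, 0.2, 0.0, 0.05],  # weak clutter, filtered out
        ])
        filtered = filter_radar_points(radar)
        in_box = points_in_bounding_box(filtered,
                                        np.array([9.0, 1.0, 0.0]),
                                        np.array([12.0, 3.0, 2.0]))
        print("estimated object speed:", object_speed_estimate(in_box))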

    SYSTEM AND METHOD FOR VEHICLE OCCLUSION DETECTION

    Publication Number: US20190272433A1

    Publication Date: 2019-09-05

    Application Number: US16416248

    Application Date: 2019-05-19

    Applicant: TuSimple

    Abstract: A system and method for vehicle occlusion detection is disclosed. A particular embodiment includes: receiving training image data from a training image data collection system; obtaining ground truth data corresponding to the training image data; performing a training phase to train a plurality of classifiers, a first classifier being trained for processing static images of the training image data, a second classifier being trained for processing image sequences of the training image data; receiving image data from an image data collection system associated with an autonomous vehicle; and performing an operational phase including performing feature extraction on the image data, determining a presence of an extracted feature instance in multiple image frames of the image data by tracing the extracted feature instance back to a previous plurality of N frames relative to a current frame, applying the first trained classifier to the extracted feature instance if the extracted feature instance cannot be determined to be present in multiple image frames of the image data, and applying the second trained classifier to the extracted feature instance if the extracted feature instance can be determined to be present in multiple image frames of the image data.
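
    A minimal sketch of the branching logic in this abstract, assuming a feature is "present in multiple image frames" when it can be traced through each of the previous N frames: the sequence classifier is applied when that temporal context exists, and the static-image classifier otherwise. The tracker representation, the value of N, and the stub classifiers are illustrative assumptions.

        from typing import Callable, List, Sequence

        def feature_persists(feature_id: str, frame_history: Sequence[set], n_frames: int = 5) -> bool:
            """True if the feature instance appears in each of the previous N frames."""
            recent = frame_history[-n_frames:]
            return len(recent) == n_frames and all(feature_id in frame for frame in recent)

        def classify_occlusion(feature_id: str,
                               frame_history: List[set],
                               static_classifier: Callable[[str], float],
                               sequence_classifier: Callable[[str], float]) -> float:
            """Apply the sequence classifier when temporal context exists, else the static one."""
            if feature_persists(feature_id, frame_history):
                return sequence_classifier(feature_id)
            return static_classifier(feature_id)

        # Example usage with stub classifiers returning an occlusion score in [0, 1].
        history = [{"veh_1"}, {"veh_1"}, {"veh_1"}, {"veh_1"}, {"veh_1"}]
        score = classify_occlusion("veh_1", history,
                                   static_classifier=lambda f: 0.3,
                                   sequence_classifier=lambda f: 0.7)
        print("occlusion score:", score)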

    SYSTEM AND METHOD FOR ONLINE REAL-TIME MULTI-OBJECT TRACKING

    Publication Number: US20190266420A1

    Publication Date: 2019-08-29

    Application Number: US15906561

    Application Date: 2018-02-27

    Applicant: TuSimple

    Abstract: A system and method for online real-time multi-object tracking is disclosed. A particular embodiment can be configured to: receive image frame data from at least one camera associated with an autonomous vehicle; generate similarity data corresponding to a similarity between object data in a previous image frame and object detection results from a current image frame; use the similarity data to generate data association results corresponding to a best matching between the object data in the previous image frame and the object detection results from the current image frame; cause state transitions in finite state machines for each object according to the data association results; and provide, as an output, object tracking data corresponding to the states of the finite state machines for each object.
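
    A minimal sketch of the tracking loop: a similarity matrix between previous-frame objects and current detections, a best matching, and a per-object finite state machine stepped by the association result. IoU similarity and the Hungarian algorithm (via SciPy) are stand-ins chosen for the example; the abstract does not specify either.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def iou(a, b):
            """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_b = (b[2] - b[0]) * (b[3] - b[1])
            return inter / (area_a + area_b - inter + 1e-9)

        def associate(prev_boxes, det_boxes, min_sim=0.3):
            """Return (prev_idx, det_idx) pairs whose similarity clears a threshold."""
            sim = np.array([[iou(p, d) for d in det_boxes] for p in prev_boxes])
            rows, cols = linear_sum_assignment(-sim)  # maximize total similarity
            return [(r, c) for r, c in zip(rows, cols) if sim[r, c] >= min_sim]

        def step_state(state: str, matched: bool) -> str:
            """Toy per-object FSM: tentative -> tracked on a match, tracked -> lost on a miss."""
            if matched:
                return "tracked"
            return "lost" if state == "tracked" else state

        # Example: one previous track, two detections.
        prev = [[0, 0, 10, 10]]
        dets = [[1, 1, 11, 11], [50, 50, 60, 60]]
        matches = associate(prev, dets)
        print(matches, step_state("tentative", matched=bool(matches)))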

    SYSTEM AND METHOD FOR INSTANCE-LEVEL LANE DETECTION FOR AUTONOMOUS VEHICLE CONTROL

    Publication Number: US20190102631A1

    Publication Date: 2019-04-04

    Application Number: US15959167

    Application Date: 2018-04-20

    Applicant: TuSimple

    Abstract: A system and method for instance-level lane detection for autonomous vehicle control are disclosed. A particular embodiment includes: receiving training image data from a training image data collection system; obtaining ground truth data corresponding to the training image data; performing a training phase to train a plurality of tasks associated with features of the training image data, the training phase including extracting roadway lane marking features from the training image data, causing the plurality of tasks to generate task-specific predictions based on the training image data, determining a bias between the task-specific prediction for each task and corresponding task-specific ground truth data, and adjusting parameters of each of the plurality of tasks to cause the bias to meet a pre-defined confidence level; receiving image data from an image data collection system associated with an autonomous vehicle; and performing an operational phase including extracting roadway lane marking features from the image data, causing the plurality of trained tasks to generate instance-level lane detection results, and providing the instance-level lane detection results to an autonomous vehicle subsystem of the autonomous vehicle.
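
    A minimal sketch, in PyTorch, of the training phase described above: shared features feed several task heads, each head's bias against its task-specific ground truth is measured, and parameters are adjusted until every bias meets a pre-defined confidence level. The network architecture, the MSE loss, and the threshold value are illustrative assumptions.

        import torch
        import torch.nn as nn

        class MultiTaskLaneNet(nn.Module):
            """Shared backbone with one head per task (e.g. marking existence, instance offset)."""
            def __init__(self, in_dim=16, feat_dim=32, num_tasks=2):
                super().__init__()
                self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
                self.heads = nn.ModuleList([nn.Linear(feat_dim, 1) for _ in range(num_tasks)])

            def forward(self, x):
                feats = self.backbone(x)
                return [head(feats) for head in self.heads]

        def train_until_confident(model, x, targets, bias_threshold=0.05, max_steps=500):
            """Adjust parameters until every task's bias drops below the confidence level."""
            opt = torch.optim.Adam(model.parameters(), lr=1e-2)
            loss_fn = nn.MSELoss()
            for _ in range(max_steps):
                preds = model(x)
                biases = [loss_fn(p, t) for p, t in zip(preds, targets)]
                if all(b.item() < bias_threshold for b in biases):  # confidence level met
                    break
                opt.zero_grad()
                sum(biases).backward()
                opt.step()
            return [b.item() for b in biases]

        # Example with synthetic training data for two tasks.
        x = torch.randn(64, 16)
        targets = [torch.randn(64, 1), torch.randn(64, 1)]
        print(train_until_confident(MultiTaskLaneNet(), x, targets))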

    SYSTEM AND METHOD FOR USING TRIPLET LOSS FOR PROPOSAL FREE INSTANCE-WISE SEMANTIC SEGMENTATION FOR LANE DETECTION

    Publication Number: US20190065867A1

    Publication Date: 2019-02-28

    Application Number: US15684791

    Application Date: 2017-08-23

    Applicant: TuSimple

    Abstract: A system and method for using triplet loss for proposal free instance-wise semantic segmentation for lane detection are disclosed. A particular embodiment includes: receiving image data from an image generating device mounted on an autonomous vehicle; performing a semantic segmentation operation or other object detection on the received image data to identify and label objects in the image data with object category labels on a per-pixel basis and produce corresponding semantic segmentation prediction data; performing a triplet loss calculation operation using the semantic segmentation prediction data to identify different instances of objects with similar object category labels found in the image data; and determining an appropriate vehicle control action for the autonomous vehicle based on the different instances of objects identified in the image data.
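
    A minimal sketch of the core idea, assuming per-pixel embeddings trained with PyTorch's TripletMarginLoss so that pixels of the same lane instance cluster together while different instances of the same category separate; instances can then be recovered by clustering the embeddings without region proposals. The embedding size, margin, and triplet sampling are illustrative assumptions.

        import torch
        import torch.nn as nn

        embed_dim = 8
        triplet_loss = nn.TripletMarginLoss(margin=1.0)

        # Toy per-pixel embeddings (in practice produced by a CNN over the image).
        anchor = torch.randn(32, embed_dim, requires_grad=True)        # pixels of lane instance A
        positive = anchor.detach() + 0.1 * torch.randn(32, embed_dim)  # other pixels of instance A
        negative = torch.randn(32, embed_dim)                          # pixels of instance B (same category)

        loss = triplet_loss(anchor, positive, negative)
        loss.backward()  # gradients pull same-instance embeddings together, push instances apart
        print("triplet loss:", loss.item())

        # At inference time, instances can be recovered by clustering the pixel embeddings
        # (e.g. with mean-shift) within each semantic class, with no region proposals needed.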

    SYSTEM AND METHOD FOR IMAGE LOCALIZATION BASED ON SEMANTIC SEGMENTATION

    Publication Number: US20180336421A1

    Publication Date: 2018-11-22

    Application Number: US15598727

    Application Date: 2017-05-18

    Applicant: TuSimple

    Abstract: A system and method for image localization based on semantic segmentation are disclosed. A particular embodiment includes: receiving image data from an image generating device mounted on an autonomous vehicle; performing semantic segmentation or other object detection on the received image data to identify and label objects in the image data and produce semantic label image data; identifying extraneous objects in the semantic label image data; removing the extraneous objects from the semantic label image data; comparing the semantic label image data to a baseline semantic label map; and determining a vehicle location of the autonomous vehicle based on information in a matching baseline semantic label map.
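
    A minimal sketch of the matching step: dynamic ("extraneous") classes are removed from the semantic label image, and the cleaned result is scored against baseline semantic label maps keyed by candidate location, with the best agreement taken as the vehicle location. The class IDs and the pixel-agreement score are illustrative assumptions.

        import numpy as np

        EXTRANEOUS = {10, 11}  # e.g. vehicle and pedestrian class IDs (assumed)

        def remove_extraneous(label_img, ignore_id=255):
            cleaned = label_img.copy()
            cleaned[np.isin(cleaned, list(EXTRANEOUS))] = ignore_id
            return cleaned

        def agreement(label_img, baseline, ignore_id=255):
            """Fraction of non-ignored pixels whose labels match the baseline map."""
            valid = label_img != ignore_id
            return float(np.mean(label_img[valid] == baseline[valid])) if valid.any() else 0.0

        def localize(label_img, baseline_maps):
            """baseline_maps: dict mapping a candidate location to its semantic label map."""
            cleaned = remove_extraneous(label_img)
            return max(baseline_maps, key=lambda loc: agreement(cleaned, baseline_maps[loc]))

        # Example with tiny synthetic label images (classes 0..11).
        rng = np.random.default_rng(0)
        base_a = rng.integers(0, 5, size=(8, 8))
        base_b = rng.integers(0, 5, size=(8, 8))
        observed = base_a.copy()
        observed[0, 0] = 10  # a parked car occludes one pixel
        print("matched location:", localize(observed, {"A": base_a, "B": base_b}))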

    SYSTEM AND METHOD FOR TRANSITIONING BETWEEN AN AUTONOMOUS AND MANUAL DRIVING MODE BASED ON DETECTION OF A DRIVER'S CAPACITY TO CONTROL A VEHICLE

    Publication Number: US20180290660A1

    Publication Date: 2018-10-11

    Application Number: US15482624

    Application Date: 2017-04-07

    Applicant: TuSimple

    Abstract: A system and method for transitioning between an autonomous and manual driving mode based on detection of a driver's capacity to control a vehicle are disclosed. A particular embodiment includes: receiving sensor data related to a vehicle driver's capacity to take manual control of an autonomous vehicle; determining, based on the sensor data, if the driver has the capacity to take manual control of the autonomous vehicle, the determining including prompting the driver to perform an action or provide an input; and outputting a vehicle control transition signal to a vehicle subsystem to cause the vehicle subsystem to take action based on the driver's capacity to take manual control of the autonomous vehicle.
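
    A minimal sketch of the decision flow: driver-related sensor data and the response to a prompted action are checked, and a control transition signal is emitted for a vehicle subsystem to act on. The field names, the response-time threshold, and the signal values are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class DriverSensorData:
            eyes_on_road: bool
            hands_on_wheel: bool
            prompt_response_time_s: float  # time taken to respond to a prompted action

        def driver_has_capacity(data: DriverSensorData, max_response_s: float = 2.0) -> bool:
            return (data.eyes_on_road and data.hands_on_wheel
                    and data.prompt_response_time_s <= max_response_s)

        def control_transition_signal(data: DriverSensorData) -> str:
            """Signal consumed by a vehicle subsystem to act on the capacity decision."""
            return "HANDOVER_TO_MANUAL" if driver_has_capacity(data) else "REMAIN_AUTONOMOUS"

        # Example: driver responded to the prompt quickly with hands on the wheel.
        print(control_transition_signal(DriverSensorData(True, True, 1.2)))
        # Example: driver is inattentive, so the vehicle stays in autonomous mode.
        print(control_transition_signal(DriverSensorData(False, False, 5.0)))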

    VEHICLE ULTRASONIC SENSORS
    Type: Invention Application

    Publication Number: US20250050913A1

    Publication Date: 2025-02-13

    Application Number: US18486809

    Application Date: 2023-10-13

    Applicant: TuSimple, Inc.

    Abstract: Techniques are described for operating a vehicle using sensor data provided by one or more ultrasonic sensors located on or in the vehicle. An example method includes receiving, by a computer located in a vehicle, data from an ultrasonic sensor located on the vehicle, where the data includes a first set of coordinates of two points associated with a location where an object is detected by the ultrasonic sensor; determining a second set of coordinates associated with a point in between the two points; performing a first determination that the second set of coordinates is associated with a lane or a road on which the vehicle is operating; performing a second determination that the object is movable; and sending, in response to the first determination and the second determination, a message that causes the vehicle to perform a driving related operation while the vehicle is operating on the road.
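
    A minimal sketch of the listed steps: compute a point in between the two reported coordinates (the midpoint here), test whether it lies in the vehicle's lane, check whether the object is movable, and emit a message that triggers a driving-related operation. The straight-lane geometry and the message names are illustrative assumptions.

        from typing import Tuple

        Point = Tuple[float, float]

        def midpoint(p1: Point, p2: Point) -> Point:
            return ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)

        def in_lane(point: Point, lane_left_y: float = -1.8, lane_right_y: float = 1.8) -> bool:
            """Assume a straight lane bounded by two lateral offsets in vehicle coordinates."""
            return lane_left_y <= point[1] <= lane_right_y

        def plan_reaction(p1: Point, p2: Point, object_is_movable: bool) -> str:
            mid = midpoint(p1, p2)
            if in_lane(mid) and object_is_movable:
                return "SLOW_DOWN"  # message that triggers a driving-related operation
            return "CONTINUE"

        # Example: a movable object detected between two points straddling the lane center.
        print(plan_reaction((5.0, -0.5), (5.2, 0.5), object_is_movable=True))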

    OBJECT POSE DETERMINATION SYSTEM AND METHOD

    Publication Number: US20250029274A1

    Publication Date: 2025-01-23

    Application Number: US18488657

    Application Date: 2023-10-17

    Applicant: TuSimple, Inc.

    Abstract: The present disclosure provides methods and systems of sampling-based object pose determination. An example method includes obtaining, for a time frame, sensor data of the object acquired by a plurality of sensors; generating a two-dimensional bounding box of the object in a projection plane based on the sensor data of the time frame; generating a three-dimensional pose model of the object based on the sensor data of the time frame and a model reconstruction algorithm; generating, based on the sensor data, the pose model, and multiple sampling techniques, a plurality of pose hypotheses of the object corresponding to the time frame; generating a hypothesis projection of the object for each of the pose hypotheses by projecting the pose hypothesis onto the projection plane; determining evaluation results by comparing the hypothesis projections with the bounding box; and determining, based on the evaluation results, an object pose for the time frame.
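
    A minimal sketch of the hypothesis-scoring loop: candidate poses are sampled, the object's 3D points are projected onto the image plane under each pose, a 2D box is formed from each projection and compared with the detected bounding box, and the best-scoring pose is kept. The pinhole camera model, the planar (x, y, yaw) pose, and the IoU score are illustrative assumptions.

        import numpy as np

        def project(points_3d, pose_xy_yaw, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
            """Apply a planar pose (x, y, yaw) to the model points, then project with a pinhole camera."""
            x, y, yaw = pose_xy_yaw
            c, s = np.cos(yaw), np.sin(yaw)
            R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
            world = points_3d @ R.T + np.array([x, y, 0.0])
            depth = np.clip(world[:, 0], 1e-3, None)  # forward axis used as depth
            u = fx * world[:, 1] / depth + cx
            v = fy * world[:, 2] / depth + cy
            return np.stack([u, v], axis=1)

        def bbox_of(points_2d):
            return np.array([*points_2d.min(axis=0), *points_2d.max(axis=0)])  # [x1, y1, x2, y2]

        def iou(a, b):
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
            return inter / (union + 1e-9)

        def best_pose(model_points, detected_box, pose_hypotheses):
            """Evaluate each hypothesis by comparing its projection's box with the detected box."""
            scores = [iou(bbox_of(project(model_points, h)), detected_box) for h in pose_hypotheses]
            return pose_hypotheses[int(np.argmax(scores))], max(scores)

        # Example: a unit-box "pose model" and a few sampled (x, y, yaw) hypotheses.
        cube = np.array([[x, y, z] for x in (9, 10) for y in (-1, 1) for z in (0, 2)], dtype=float)
        hypotheses = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.2), (1.0, -1.0, -0.1)]
        detected = bbox_of(project(cube, hypotheses[0]))  # pretend the detector produced this box
        print(best_pose(cube, detected, hypotheses))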
