System and method for determining car to lane distance

    Publication No.: US11727811B2

    Publication Date: 2023-08-15

    Application No.: US17647163

    Application Date: 2022-01-05

    Applicant: TuSimple, Inc.

    Inventor: Panqu Wang

    Abstract: A system and method for determining car to lane distance is provided. In one aspect, the system includes a camera configured to generate an image, a processor, and a computer-readable memory. The processor is configured to receive the image from the camera, generate a wheel segmentation map representative of one or more wheels detected in the image, and generate a lane segmentation map representative of one or more lanes detected in the image. For at least one of the wheels in the wheel segmentation map, the processor is also configured to determine a distance between the wheel and at least one nearby lane in the lane segmentation map. The processor is further configured to determine a distance between a vehicle in the image and the lane based on the distance between the wheel and the lane.
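
    The pipeline in this abstract lends itself to a short illustration: locate the road-contact pixel of each segmented wheel, then measure its distance to the nearest segmented lane pixel. The NumPy sketch below is a minimal, hypothetical rendering of that idea, not the patented implementation; it assumes binary wheel and lane segmentation maps of equal shape are already available, reports distances in pixels (conversion to metres via camera calibration is omitted), and every function name is invented for illustration.

```python
# Illustrative sketch only, not the patented implementation. It assumes the
# wheel and lane segmentation maps are already available as binary NumPy
# arrays of equal shape; all helper names below are hypothetical.
import numpy as np

def wheel_contact_points(wheel_map: np.ndarray) -> list[tuple[int, int]]:
    """Return the lowest (row, col) pixel of each connected wheel blob,
    approximating where the tyre touches the road."""
    points = []
    visited = np.zeros_like(wheel_map, dtype=bool)
    for r, c in zip(*np.nonzero(wheel_map)):
        if visited[r, c]:
            continue
        # simple flood fill to collect one connected blob
        stack, blob = [(r, c)], []
        while stack:
            y, x = stack.pop()
            if (0 <= y < wheel_map.shape[0] and 0 <= x < wheel_map.shape[1]
                    and wheel_map[y, x] and not visited[y, x]):
                visited[y, x] = True
                blob.append((y, x))
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
        points.append(max(blob))          # largest row index = contact point
    return points

def wheel_to_lane_distances(wheel_map: np.ndarray, lane_map: np.ndarray) -> list[float]:
    """Pixel distance from each wheel contact point to the nearest lane pixel
    (assumes the lane map contains at least one lane pixel)."""
    lane_pixels = np.argwhere(lane_map)
    distances = []
    for point in wheel_contact_points(wheel_map):
        d = np.linalg.norm(lane_pixels - np.array(point), axis=1).min()
        distances.append(float(d))
    return distances
```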

    SYSTEM AND METHOD FOR DETERMINING CAR TO LANE DISTANCE

    Publication No.: US20220130255A1

    Publication Date: 2022-04-28

    Application No.: US17647163

    Application Date: 2022-01-05

    Applicant: TuSimple, Inc.

    Inventor: Panqu Wang

    Abstract: A system and method for determining car to lane distance is provided. In one aspect, the system includes a camera configured to generate an image, a processor, and a computer-readable memory. The processor is configured to receive the image from the camera, generate a wheel segmentation map representative of one or more wheels detected in the image, and generate a lane segmentation map representative of one or more lanes detected in the image. For at least one of the wheels in the wheel segmentation map, the processor is also configured to determine a distance between the wheel and at least one nearby lane in the lane segmentation map. The processor is further configured to determine a distance between a vehicle in the image and the lane based on the distance between the wheel and the lane.

    System and method for vehicle occlusion detection

    Publication No.: US10783381B2

    Publication Date: 2020-09-22

    Application No.: US16416248

    Application Date: 2019-05-19

    Applicant: TuSimple, Inc.

    Abstract: A system and method for vehicle occlusion detection is disclosed. A particular embodiment includes: receiving training image data from a training image data collection system; obtaining ground truth data corresponding to the training image data; performing a training phase to train a plurality of classifiers, a first classifier being trained for processing static images of the training image data, a second classifier being trained for processing image sequences of the training image data; receiving image data from an image data collection system associated with an autonomous vehicle; and performing an operational phase including performing feature extraction on the image data, determining a presence of an extracted feature instance in multiple image frames of the image data by tracing the extracted feature instance back to a previous plurality of N frames relative to a current frame, applying the first trained classifier to the extracted feature instance if the extracted feature instance cannot be determined to be present in multiple image frames of the image data, and applying the second trained classifier to the extracted feature instance if the extracted feature instance can be determined to be present in multiple image frames of the image data.
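
    The core dispatch logic in this abstract, routing an extracted feature to the static-image classifier or the image-sequence classifier depending on whether it can be traced back through the previous N frames, can be sketched as follows. This is a hedged illustration only: the FeatureInstance structure, the tracking bookkeeping, and both classifiers are placeholders, not TuSimple's trained models or data structures.

```python
# Hedged sketch of the classifier-dispatch step described in the abstract:
# a feature tracked into the previous N frames goes to the sequence
# classifier, otherwise the single-image classifier is used. The classifiers
# and the feature record are placeholders for illustration.
from dataclasses import dataclass, field

@dataclass
class FeatureInstance:
    feature_id: int
    embedding: list[float]
    frames_seen: set[int] = field(default_factory=set)   # frame indices where tracked

def is_multi_frame(feature: FeatureInstance, current_frame: int, n_prev: int) -> bool:
    """True if the feature can be traced back into any of the previous N frames."""
    history = range(current_frame - n_prev, current_frame)
    return any(f in feature.frames_seen for f in history)

def classify_occlusion(feature, current_frame, n_prev, static_clf, sequence_clf):
    """Route the extracted feature instance to the appropriate trained classifier."""
    if is_multi_frame(feature, current_frame, n_prev):
        return sequence_clf(feature)      # temporal evidence available
    return static_clf(feature)            # fall back to the single-image classifier
```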

    SYSTEM AND METHOD FOR LATERAL VEHICLE DETECTION

    Publication No.: US20200265243A1

    Publication Date: 2020-08-20

    Application No.: US16865800

    Application Date: 2020-05-04

    Applicant: TUSIMPLE, INC.

    Abstract: A system and method for lateral vehicle detection is disclosed. A particular embodiment can be configured to: receive lateral image data from at least one laterally-facing camera associated with an autonomous vehicle; warp the lateral image data based on a line parallel to a side of the autonomous vehicle; perform object extraction on the warped lateral image data to identify extracted objects in the warped lateral image data; and apply bounding boxes around the extracted objects.
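
    A rough sketch of the described steps (warp, extract, box) using standard OpenCV calls. The four source points defining the line parallel to the ego vehicle's side are assumed to come from calibration, and a simple threshold-plus-contours pass stands in for the real object-extraction model; none of this is the claimed embodiment.

```python
# Simplified sketch, not the claimed embodiment: src_quad is assumed known
# (e.g. from extrinsic calibration) and defines the region aligned with the
# line parallel to the ego vehicle's side.
import cv2
import numpy as np

def warp_lateral_view(image, src_quad, out_size=(640, 480)):
    """Rectify the laterally-facing camera image so the reference line
    parallel to the ego vehicle's side becomes a straight image edge."""
    w, h = out_size
    dst_quad = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(np.float32(src_quad), dst_quad)
    return cv2.warpPerspective(image, H, out_size)

def extract_boxes(warped, thresh=127):
    """Placeholder object extraction: threshold + contours + bounding boxes."""
    gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]   # (x, y, w, h) per extracted object
```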

    System and method for vehicle taillight state recognition

    Publication No.: US10733465B2

    Publication Date: 2020-08-04

    Application No.: US16542770

    Application Date: 2019-08-16

    Applicant: TuSimple, Inc.

    Inventors: Panqu Wang; Tian Li

    Abstract: A system and method for taillight signal recognition using a convolutional neural network is disclosed. An example embodiment includes: receiving a plurality of image frames from one or more image-generating devices of an autonomous vehicle; using a single-frame taillight illumination status annotation dataset and a single-frame taillight mask dataset to recognize a taillight illumination status of a proximate vehicle identified in an image frame of the plurality of image frames, the single-frame taillight illumination status annotation dataset including one or more taillight illumination status conditions of a right or left vehicle taillight signal, the single-frame taillight mask dataset including annotations to isolate a taillight region of a vehicle; and using a multi-frame taillight illumination status dataset to recognize a taillight illumination status of the proximate vehicle in multiple image frames of the plurality of image frames, the multiple image frames being in temporal succession.
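
    The two-stage structure described here, a per-frame network over a masked taillight region plus a temporal stage over frames in succession, can be sketched in PyTorch. The layer sizes, the four-way state space, and the LSTM temporal head are illustrative assumptions, not the architecture disclosed in the patent.

```python
# Hedged sketch of the two-stage idea in the abstract: a per-frame CNN scores
# the cropped/masked taillight region, and a temporal head aggregates several
# frames in succession. All sizes and layer choices are illustrative.
import torch
import torch.nn as nn

class SingleFrameTaillightNet(nn.Module):
    """Per-frame classifier over a masked/cropped taillight region."""
    def __init__(self, num_states=4):          # e.g. off / left / right / both
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, num_states)

    def forward(self, x):                       # x: (B, 3, H, W)
        return self.head(self.backbone(x))

class MultiFrameTaillightNet(nn.Module):
    """Aggregates per-frame predictions over image frames in temporal succession."""
    def __init__(self, frame_net: SingleFrameTaillightNet, num_states=4):
        super().__init__()
        self.frame_net = frame_net
        self.temporal = nn.LSTM(input_size=num_states, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, num_states)

    def forward(self, clip):                    # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        per_frame = self.frame_net(clip.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.temporal(per_frame)
        return self.head(h_n[-1])               # one illumination state per clip
```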

    System and method for online real-time multi-object tracking

    Publication No.: US10685244B2

    Publication Date: 2020-06-16

    Application No.: US15906561

    Application Date: 2018-02-27

    Applicant: TuSimple, Inc.

    Abstract: A system and method for online real-time multi-object tracking is disclosed. A particular embodiment can be configured to: receive image frame data from at least one camera associated with an autonomous vehicle; generate similarity data corresponding to a similarity between object data in a previous image frame compared with object detection results from a current image frame; use the similarity data to generate data association results corresponding to a best matching between the object data in the previous image frame and the object detection results from the current image frame; cause state transitions in finite state machines for each object according to the data association results; and provide as an output object tracking output data corresponding to the states of the finite state machines for each object.
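
    The loop in this abstract, pairwise similarity, data association, and a per-object finite state machine, maps onto a compact sketch. IoU is used here as a stand-in similarity measure, SciPy's Hungarian solver performs the best matching, and the three-state machine is an illustrative simplification rather than the state logic actually claimed.

```python
# Compact sketch of the tracking loop described in the abstract. IoU
# similarity and the three FSM states are illustrative choices only.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2); stands in for similarity."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def associate(prev_boxes, curr_boxes, min_sim=0.3):
    """Best matching between previous-frame objects and current detections."""
    sim = np.array([[iou(p, c) for c in curr_boxes] for p in prev_boxes])
    rows, cols = linear_sum_assignment(-sim)          # negate to maximise similarity
    return [(r, c) for r, c in zip(rows, cols) if sim[r, c] >= min_sim]

class TrackStateMachine:
    """Per-object FSM: tentative -> confirmed on a match, confirmed -> lost on a miss."""
    def __init__(self):
        self.state = "tentative"
    def update(self, matched: bool):
        if matched:
            self.state = "confirmed"
        elif self.state == "confirmed":
            self.state = "lost"
        return self.state
```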

    System and method for determining car to lane distance

    Publication No.: US12073724B2

    Publication Date: 2024-08-27

    Application No.: US18335886

    Application Date: 2023-06-15

    Applicant: TuSimple, Inc.

    Inventor: Panqu Wang

    Abstract: A system and method for determining car to lane distance is provided. In one aspect, the system includes a camera configured to generate an image, a processor, and a computer-readable memory. The processor is configured to receive the image from the camera, generate a wheel segmentation map representative of one or more wheels detected in the image, and generate a lane segmentation map representative of one or more lanes detected in the image. For at least one of the wheels in the wheel segmentation map, the processor is also configured to determine a distance between the wheel and at least one nearby lane in the lane segmentation map. The processor is further configured to determine a distance between a vehicle in the image and the lane based on the distance between the wheel and the lane.

    System and method for fisheye image processing

    Publication No.: US11935210B2

    Publication Date: 2024-03-19

    Application No.: US17018627

    Application Date: 2020-09-11

    Applicant: TUSIMPLE, INC.

    Abstract: A system and method for fisheye image processing can be configured to: receive fisheye image data from at least one fisheye lens camera associated with an autonomous vehicle, the fisheye image data representing at least one fisheye image frame; partition the fisheye image frame into a plurality of image portions representing portions of the fisheye image frame; warp each of the plurality of image portions to map an arc of a camera projected view into a line corresponding to a mapped target view, the mapped target view being generally orthogonal to a line between a camera center and a center of the arc of the camera projected view; combine the plurality of warped image portions to form a combined resulting fisheye image data set representing recovered or distortion-reduced fisheye image data corresponding to the fisheye image frame; generate auto-calibration data representing a correspondence between pixels in the at least one fisheye image frame and corresponding pixels in the combined resulting fisheye image data set; and provide the combined resulting fisheye image data set as an output for other autonomous vehicle subsystems.
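
    The pipeline in this abstract, partition the frame, warp each portion so an arc becomes a line, combine the warped portions, and record pixel correspondences as auto-calibration data, is sketched below with a simple polar unrolling as a stand-in for the patented warp. The angular partitioning and all function names are assumptions made purely for illustration.

```python
# Hedged, NumPy-only sketch of the high-level pipeline in the abstract:
# partition the fisheye frame into angular portions, warp each so that an
# arc around the image centre becomes a straight row, stitch the warped
# portions together, and keep the output-to-input pixel correspondence as
# auto-calibration data. The polar unrolling here is an illustrative
# stand-in, not the disclosed warp.
import numpy as np

def unroll_portion(frame, center, radii, angles):
    """Sample arcs (fixed radius, varying angle) into straight rows; also
    return the (row, col) source pixel of every output pixel."""
    cy, cx = center
    out = np.zeros((len(radii), len(angles)) + frame.shape[2:], dtype=frame.dtype)
    correspondence = np.zeros((len(radii), len(angles), 2), dtype=np.int32)
    for i, r in enumerate(radii):
        ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, frame.shape[0] - 1)
        xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, frame.shape[1] - 1)
        out[i] = frame[ys, xs]
        correspondence[i, :, 0], correspondence[i, :, 1] = ys, xs
    return out, correspondence

def process_fisheye(frame, center, n_portions=4, n_radii=200, n_angles=256):
    """Partition the field of view into angular portions, warp each, combine."""
    radii = np.linspace(10, min(frame.shape[:2]) // 2, n_radii)
    portions, calib = [], []
    for k in range(n_portions):
        angles = np.linspace(k * 2 * np.pi / n_portions,
                             (k + 1) * 2 * np.pi / n_portions, n_angles)
        warped, corr = unroll_portion(frame, center, radii, angles)
        portions.append(warped)
        calib.append(corr)
    combined = np.concatenate(portions, axis=1)       # side-by-side target view
    calibration = np.concatenate(calib, axis=1)       # output -> input pixel map
    return combined, calibration
```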
