METHODS AND SYSTEMS FOR CAMERA TO GROUND ALIGNMENT

    Publication Number: US20230260157A1

    Publication Date: 2023-08-17

    Application Number: US17651407

    Application Date: 2022-02-16

    Abstract: Systems and methods for a vehicle are provided. In one embodiment, a method includes: receiving image data defining a plurality of images associated with an environment of the vehicle; determining, by a processor, feature points within at least one image of the plurality of images; selecting, by the processor, a subset of the feature points as ground points based on a fixed two dimensional image road mask and a three dimensional region; determining, by the processor, a ground plane based on the subset of feature points; determining, by the processor, a ground normal vector from the ground plane; determining, by the processor, a camera to ground alignment value based on the ground normal vector; and generating, by the processor, second image data based on the camera to ground alignment value.
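
The plane-fit step in this abstract can be illustrated with a short numpy sketch: assuming the ground points have already been selected by the road mask and the three dimensional region, it fits a least-squares plane by SVD and converts the resulting ground normal into pitch and roll angles. The function names, the axis convention, and the synthetic data are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares plane through Nx3 points; returns a unit normal."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

def normal_to_pitch_roll(normal, up=(0.0, 0.0, 1.0)):
    """Pitch/roll (radians) that would rotate the fitted ground normal onto the
    nominal 'up' axis; the axis convention here is an assumption."""
    n = np.asarray(normal, dtype=float)
    if n.dot(up) < 0:
        n = -n                        # make the normal point upward
    pitch = np.arctan2(n[0], n[2])    # tilt about the lateral axis
    roll = np.arctan2(n[1], n[2])     # tilt about the longitudinal axis
    return pitch, roll

# Synthetic example: noisy points on a plane tilted ~2.9 degrees in pitch.
rng = np.random.default_rng(0)
xy = rng.uniform(-5.0, 5.0, size=(200, 2))
z = -1.5 + 0.05 * xy[:, 0] + rng.normal(0.0, 0.01, 200)
ground_points = np.column_stack([xy, z])

pitch, roll = normal_to_pitch_roll(fit_ground_plane(ground_points))
print(f"pitch={np.degrees(pitch):.2f} deg  roll={np.degrees(roll):.2f} deg")
```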

    Online validation of LIDAR-to-LIDAR alignment and LIDAR-to-vehicle alignment

    Publication Number: US12174301B2

    Publication Date: 2024-12-24

    Application Number: US17350780

    Application Date: 2021-06-17

    Abstract: A LIDAR-to-LIDAR alignment system includes a memory and an autonomous driving module. The memory stores first and second points based on outputs of first and second LIDAR sensors. The autonomous driving module performs a validation process to determine whether alignment of the LIDAR sensors satisfies an alignment condition. The validation process includes: aggregating the first and second points in a vehicle coordinate system to provide aggregated LIDAR points; based on the aggregated LIDAR points, performing (i) a first method including determining pitch and roll differences between the first and second LIDAR sensors, (ii) a second method including determining a yaw difference between the first and second LIDAR sensors, or (iii) point cloud registration to determine rotation and translation differences between the first and second LIDAR sensors; and based on results of the first method, the second method or the point cloud registration, determining whether the alignment condition is satisfied.
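
A minimal sketch of the first validation method (pitch and roll differences) is shown below, assuming both LIDAR point sets have already been aggregated into the vehicle coordinate system and consist mostly of ground returns; the 0.5-degree tolerance and the helper names are invented placeholders. The point cloud registration path would instead estimate a full rotation and translation between the two clouds, for example with an off-the-shelf ICP implementation.

```python
import numpy as np

def ground_normal(points):
    """Unit normal of a least-squares plane fitted to Nx3 aggregated LIDAR points."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    return n if n[2] >= 0 else -n     # keep the normal pointing up (+z) in the vehicle frame

def pitch_roll_difference(points_a, points_b):
    """Pitch/roll misalignment (radians) between two LIDARs, estimated from the
    ground-plane normals of their aggregated points in the vehicle frame."""
    na, nb = ground_normal(points_a), ground_normal(points_b)
    d_pitch = np.arctan2(na[0], na[2]) - np.arctan2(nb[0], nb[2])
    d_roll = np.arctan2(na[1], na[2]) - np.arctan2(nb[1], nb[2])
    return d_pitch, d_roll

def alignment_condition_met(points_a, points_b, tolerance_deg=0.5):
    """True when the pitch and roll differences stay within the (assumed) tolerance."""
    d_pitch, d_roll = pitch_roll_difference(points_a, points_b)
    return max(abs(np.degrees(d_pitch)), abs(np.degrees(d_roll))) < tolerance_deg
```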

    SENSOR ANGULAR ALIGNMENT
    Invention Application

    Publication Number: US20240404107A1

    Publication Date: 2024-12-05

    Application Number: US18328862

    Application Date: 2023-06-05

    Abstract: Methods, apparatus, and systems for sensor alignment include acquiring a translational vector, a first calibration point location and a second calibration point location, determining an expected rotational orientation in response to the translational vector, the first calibration point location and the second calibration point location, capturing an image of the first calibration point and the second calibration point, determining a first position of the first calibration point and a second position of the second calibration point in response to the image, calculating a calculated rotational orientation in response to the first position of the first calibration point and the second position of the second calibration point, determining a calibration value in response to the calculated rotational orientation, storing the calibration value and controlling a vehicle in response to the calibration value and a subsequent image.
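
A heavily simplified, yaw-only sketch of the flow this abstract describes: an expected orientation from the translational vector and the two known calibration point locations, a calculated orientation from where the points appear in the image, and a calibration value from their difference. The pixel-to-angle model, the midpoint construction, and all numbers are assumptions for illustration only.

```python
import numpy as np

def expected_yaw(translation, cal_point_1, cal_point_2):
    """Yaw the sensor *should* report toward the midpoint of two known
    calibration targets, given its mounting translation in the vehicle frame
    (planar, yaw-only simplification)."""
    midpoint = 0.5 * (np.asarray(cal_point_1, float) + np.asarray(cal_point_2, float))
    offset = midpoint - np.asarray(translation, float)
    return np.arctan2(offset[1], offset[0])

def calculated_yaw(u1, u2, image_width_px, horizontal_fov_rad, nominal_yaw):
    """Yaw implied by where the two targets actually appear in the image,
    assuming a simple linear pixel-to-angle mapping."""
    center = image_width_px / 2.0
    ang1 = (u1 - center) / image_width_px * horizontal_fov_rad
    ang2 = (u2 - center) / image_width_px * horizontal_fov_rad
    return nominal_yaw + 0.5 * (ang1 + ang2)

def calibration_value(expected, calculated):
    """Angular correction to store and apply when interpreting later images."""
    return expected - calculated

# Made-up example: targets straight ahead of the sensor, but appearing slightly
# right of center in a 1280-pixel-wide, 90-degree-FOV image.
exp = expected_yaw((0.0, 0.0), (10.0, 1.0), (10.0, -1.0))
calc = calculated_yaw(652.0, 660.0, 1280.0, np.radians(90.0), nominal_yaw=0.0)
print(f"calibration value: {np.degrees(calibration_value(exp, calc)):.3f} deg")
```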

    System for calibrating extrinsic parameters for a camera in an autonomous vehicle

    Publication Number: US12112506B2

    Publication Date: 2024-10-08

    Application Number: US17688157

    Application Date: 2022-03-07

    Inventors: Farui Peng; Hao Yu

    CPC classification numbers: G06T7/80; G06T7/246; G06T7/292; G06T7/73; G06T2207/30248

    Abstract: A system for determining calibrated camera extrinsic parameters for an autonomous vehicle includes a camera mounted to the autonomous vehicle that collects image data including a plurality of image frames. The system also includes one or more automated driving controllers in electronic communication with the camera that execute instructions to determine a vehicle pose estimate, based on the position and movement of the autonomous vehicle, using a localization algorithm. The one or more automated driving controllers determine the calibrated camera extrinsic parameters based on three dimensional coordinates of specific feature points of interest corresponding to two sequential image frames, the specific feature points of interest corresponding to the two sequential image frames, and the camera pose corresponding to the two sequential image frames.
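
One generic way to realize the coordinate bookkeeping the abstract describes is sketched below: recover the camera pose from the tracked 3D feature points and their pixel observations (here via OpenCV's solvePnP, an assumed dependency), then express that pose in the vehicle frame using the localization pose. This is a hedged illustration, not the patent's actual estimation procedure.

```python
import numpy as np
import cv2  # assumed dependency for solvePnP and Rodrigues

def pose_matrix(rvec, tvec):
    """4x4 homogeneous transform from an OpenCV rotation vector and translation."""
    rotation, _ = cv2.Rodrigues(np.asarray(rvec, dtype=float))
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = np.asarray(tvec, dtype=float).ravel()
    return T

def camera_to_vehicle_extrinsic(feature_points_3d, observations_px, K, T_world_vehicle):
    """Estimate the camera pose from tracked feature points, then express it in
    the vehicle frame given the localization pose.

    feature_points_3d: Nx3 world coordinates (e.g. triangulated from two
    sequential frames); observations_px: Nx2 pixel positions of the same
    features in the current frame; K: 3x3 intrinsic matrix; T_world_vehicle:
    4x4 vehicle pose from the localization algorithm."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(feature_points_3d, dtype=np.float32),
        np.asarray(observations_px, dtype=np.float32),
        np.asarray(K, dtype=np.float32), None)
    if not ok:
        raise RuntimeError("PnP did not converge")
    T_camera_world = pose_matrix(rvec, tvec)        # maps world -> camera
    T_world_camera = np.linalg.inv(T_camera_world)  # camera pose in the world
    # Calibrated extrinsic: camera pose expressed in the vehicle frame.
    return np.linalg.inv(T_world_vehicle) @ T_world_camera
```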

    ROBUST LIDAR-TO-CAMERA SENSOR ALIGNMENT
    Invention Publication

    Publication Number: US20240312059A1

    Publication Date: 2024-09-19

    Application Number: US18183423

    Application Date: 2023-03-14

    Abstract: A method for sensor alignment includes detecting a depth point cloud including a first object and a second object, generating a first control point in response to a location of the first object within the depth point cloud and a second control point in response to a location of the second object within the depth point cloud, capturing an image of a second field of view including a third object, generating a third control point in response to a location of the third object detected in response to the image, calculating a first reprojection error in response to the first control point and the third control point and a second reprojection error in response to the second control point and the third control point, and generating an extrinsic parameter in response to the first reprojection error being less than the second reprojection error.
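
A small numpy sketch of the reprojection-error comparison at the heart of this abstract, assuming a pinhole camera model, a candidate LIDAR-to-camera extrinsic, and already-extracted control points; the helper names are hypothetical.

```python
import numpy as np

def project_to_pixels(point_lidar, K, T_camera_lidar):
    """Project a 3-D point from the LIDAR frame into pixel coordinates with a
    pinhole model (K: 3x3 intrinsics, T_camera_lidar: 4x4 extrinsic candidate)."""
    p_camera = T_camera_lidar @ np.append(np.asarray(point_lidar, float), 1.0)
    uvw = K @ p_camera[:3]
    return uvw[:2] / uvw[2]

def reprojection_error(control_point_lidar, control_point_px, K, T_camera_lidar):
    """Pixel distance between a projected LIDAR control point and the control
    point detected in the image."""
    projected = project_to_pixels(control_point_lidar, K, T_camera_lidar)
    return float(np.linalg.norm(projected - np.asarray(control_point_px, float)))

def pick_correspondence(first_cp, second_cp, image_cp, K, T_camera_lidar):
    """Keep whichever LIDAR control point reprojects closer to the image control
    point; that pairing would then drive the extrinsic-parameter update."""
    e1 = reprojection_error(first_cp, image_cp, K, T_camera_lidar)
    e2 = reprojection_error(second_cp, image_cp, K, T_camera_lidar)
    return (first_cp, e1) if e1 < e2 else (second_cp, e2)
```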

    VEHICLE-ONBOARD COMPUTING ARCHITECTURE FOR SENSOR ALIGNMENT

    Publication Number: US20240095061A1

    Publication Date: 2024-03-21

    Application Number: US17947244

    Application Date: 2022-09-19

    CPC classification numbers: G06F9/466; G06F9/544

    Abstract: A computer-implemented method for aligning a sensor to a reference coordinate system includes initiating a plurality of threads, each of which executes simultaneously and independently of the others. A first thread parses data received from the sensor and stores the parsed data in a data buffer. A second thread computes an alignment transformation using the parsed data to determine the alignment between the sensor and the reference coordinate system. The computing includes checking that the data buffer contains at least a predetermined amount of data. If at least the predetermined amount of data exists, an intermediate result is computed using the parsed data in the data buffer; otherwise, the second thread waits for the first thread to add more data to the data buffer. The second thread outputs the intermediate result into the data buffer. A third thread outputs the alignment transformation in response to completion of the alignment computations.
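
A compact Python threading sketch of the three-thread pattern described above, with a Condition-guarded shared list standing in for the data buffer; the batch size, the averaging placeholder, and the queue-based input and output are assumptions for illustration.

```python
import queue
import threading
import time

MIN_BATCH = 64                      # assumed "predetermined amount of data"

raw_frames = queue.Queue()          # sensor output awaiting parsing
data_buffer = []                    # parsed data shared between threads
buffer_cv = threading.Condition()   # guards data_buffer
results = queue.Queue()             # intermediate / final alignment results
stop = threading.Event()

def parser_thread():
    """First thread: parse raw sensor frames and append them to the data buffer."""
    while not stop.is_set():
        try:
            frame = raw_frames.get(timeout=0.5)
        except queue.Empty:
            continue
        parsed = frame                          # stand-in for real parsing
        with buffer_cv:
            data_buffer.append(parsed)
            buffer_cv.notify_all()

def alignment_thread():
    """Second thread: wait until enough parsed data exists, then compute and
    publish an intermediate result."""
    while not stop.is_set():
        with buffer_cv:
            buffer_cv.wait_for(lambda: len(data_buffer) >= MIN_BATCH, timeout=0.5)
            if len(data_buffer) < MIN_BATCH:
                continue                        # not enough data yet; keep waiting
            batch = data_buffer[:MIN_BATCH]
            del data_buffer[:MIN_BATCH]
        intermediate = sum(batch) / len(batch)  # stand-in for the transform estimate
        results.put(intermediate)

def output_thread():
    """Third thread: publish alignment results as they complete."""
    while not stop.is_set():
        try:
            print("alignment update:", results.get(timeout=0.5))
        except queue.Empty:
            continue

threads = [threading.Thread(target=t, daemon=True)
           for t in (parser_thread, alignment_thread, output_thread)]
for t in threads:
    t.start()

# Feed some synthetic data, let the pipeline run briefly, then shut down.
for value in range(256):
    raw_frames.put(float(value))
time.sleep(1.0)
stop.set()
```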
