Abstract:
A method and system for estimating the state of health of an object sensing fusion system. Target data from a vision system and a radar system, which are used by an object sensing fusion system, are also stored in a context queue. The context queue maintains the vision and radar target data for a sequence of many frames covering a sliding window of time. The target data from the context queue are used to compute matching scores, which are indicative of how well vision targets correlate with radar targets, and vice versa. The matching scores are computed within individual frames of vision and radar data, and across a sequence of multiple frames. The matching scores are used to assess the state of health of the object sensing fusion system. If the fusion system state of health is below a certain threshold, one or more faulty sensors are identified.
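A minimal sketch of the matching-score idea described above, assuming targets are 2-D positions and using a simple distance-based score in place of the patent's (unspecified) correlation measure; the window length, match radius, health threshold, and all names are illustrative assumptions, not taken from the abstract.

```python
from collections import deque
import math

WINDOW = 10          # frames kept in the context queue (assumed)
MATCH_RADIUS = 2.0   # metres within which targets count as matched (assumed)

context_queue = deque(maxlen=WINDOW)  # sliding window of (vision, radar) frames

def frame_matching_score(vision_targets, radar_targets):
    """Fraction of vision targets with a nearby radar target (illustrative).
    A symmetric radar-to-vision score would be computed the same way."""
    if not vision_targets:
        return 1.0
    matched = 0
    for vx, vy in vision_targets:
        if any(math.hypot(vx - rx, vy - ry) < MATCH_RADIUS
               for rx, ry in radar_targets):
            matched += 1
    return matched / len(vision_targets)

def fusion_health(queue):
    """Average per-frame score across the sliding window (illustrative)."""
    scores = [frame_matching_score(v, r) for v, r in queue]
    return sum(scores) / len(scores) if scores else 1.0

# Example: push one frame of vision/radar target lists and check health.
context_queue.append(([(10.0, 1.2)], [(10.3, 1.0)]))
if fusion_health(context_queue) < 0.7:  # threshold assumed
    print("fusion degraded; isolate faulty sensor")
```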
Abstract:
A system and method for correcting bias and angle misalignment errors in the angle rate and acceleration outputs from a 6-DOF IMU mounted to a vehicle. The method includes providing estimated velocity and attitude data in an inertial frame from, for example, a GNSS/INS, and determining an ideal acceleration estimation and an ideal rate estimation in a vehicle frame using the velocity and attitude data. The method then determines the IMU bias error and misalignment error using the ideal acceleration and rate estimations and the angle rate and acceleration outputs in an IMU body frame from the IMU.
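A hedged sketch of the bias-estimation step only: the "ideal" acceleration is obtained by differentiating GNSS/INS velocity, rotated into the body frame via the attitude estimate, and compared with the IMU output. The misalignment estimation is omitted, and the function names, simple mean-residual estimator, and synthetic data are assumptions.

```python
import numpy as np

def ideal_acceleration(velocities, dt):
    """Differentiate inertial-frame velocity samples (N x 3) to get acceleration."""
    return np.gradient(velocities, dt, axis=0)

def estimate_bias(imu_accel_body, ideal_accel_inertial, R_body_from_inertial):
    """Mean residual between IMU output and the ideal acceleration
    rotated into the IMU body frame (illustrative estimator)."""
    ideal_body = ideal_accel_inertial @ R_body_from_inertial.T
    return np.mean(imu_accel_body - ideal_body, axis=0)

dt = 0.01
vel = np.cumsum(np.full((100, 3), [0.01, 0.0, 0.0]), axis=0)  # synthetic GNSS/INS velocity
R = np.eye(3)                                                  # frames assumed aligned for the demo
imu = ideal_acceleration(vel, dt) @ R.T + np.array([0.05, -0.02, 0.0])  # biased IMU output
print("estimated bias:", estimate_bias(imu, ideal_acceleration(vel, dt), R))
```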
Abstract:
An ultra-short range radar (USRR) system of a vehicle includes an object detection module configured to, based on radar signals from USRR sensors of the vehicle: identify the presence of an object that is external to the vehicle; determine a location of the object; and determine at least one of a height, a length, and a width of the object. A remedial action module is configured to, based on the location of the object and the at least one dimension of the object, at least one of: selectively actuate an actuator of the vehicle; selectively generate an audible alert via at least one speaker of the vehicle; and selectively generate a visual alert via at least one light emitting device of the vehicle.
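An illustrative sketch of the remedial-action logic: given a detected object's location and dimensions, choose between actuation, an audible alert, and a visual alert. The corridor width, drivable-height limit, distance thresholds, and data layout are all assumptions for the example, not values from the abstract.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    x_m: float       # longitudinal distance from bumper
    y_m: float       # lateral offset from vehicle centerline
    height_m: float
    length_m: float
    width_m: float

def remedial_action(obj: DetectedObject) -> str:
    in_path = abs(obj.y_m) < 1.0    # assumed half-width of vehicle corridor
    drivable = obj.height_m < 0.15  # e.g. a low curb (assumed)
    if in_path and not drivable and obj.x_m < 0.5:
        return "actuate_brakes"     # selectively actuate a vehicle actuator
    if in_path and not drivable and obj.x_m < 2.0:
        return "audible_alert"      # via a speaker
    if in_path:
        return "visual_alert"       # via a light emitting device
    return "no_action"

print(remedial_action(DetectedObject(x_m=1.2, y_m=0.3, height_m=0.4,
                                     length_m=0.5, width_m=0.5)))
```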
Abstract:
A system and method for fusing the outputs from multiple LiDAR sensors on a vehicle, including cueing the fusion process in response to an object being detected by a radar sensor and/or a vision system. The method includes providing object files for objects detected by the LiDAR sensors at a previous sample time, where the object files identify the position, orientation and velocity of the detected objects. The method projects object models in the object files from the previous sample time to provide predicted object models. The method also includes receiving a plurality of scan returns from objects detected in the field-of-view of the sensors at a current sample time and constructing a point cloud from the scan returns. The method then segments the scan points in the point cloud into predicted scan clusters, where each cluster identifies an object detected by the sensors.
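A minimal sketch of two of the steps above: projecting previous object models forward (here with a constant-velocity assumption) and segmenting current scan points into clusters around the predictions. The dictionary layout, the nearest-prediction gating, and the gate radius are assumptions for illustration.

```python
import math

def predict_objects(object_files, dt):
    """object_files: list of dicts with position (x, y) and velocity (vx, vy)."""
    return [{"x": o["x"] + o["vx"] * dt,
             "y": o["y"] + o["vy"] * dt,
             "vx": o["vx"], "vy": o["vy"]} for o in object_files]

def segment_points(points, predictions, gate=3.0):
    """Assign each scan point to the nearest predicted object within a gate."""
    clusters = [[] for _ in predictions]
    for px, py in points:
        dists = [math.hypot(px - p["x"], py - p["y"]) for p in predictions]
        i = min(range(len(dists)), key=dists.__getitem__) if dists else None
        if i is not None and dists[i] < gate:
            clusters[i].append((px, py))
    return clusters

# One object file from the previous sample time, projected 0.1 s forward.
prev = [{"x": 20.0, "y": 0.0, "vx": -5.0, "vy": 0.0}]
pred = predict_objects(prev, dt=0.1)
print(segment_points([(19.4, 0.1), (19.6, -0.2), (40.0, 3.0)], pred))
```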
Abstract:
A system includes first and second sensors and a controller. The first sensor is of a first type and is configured to sense objects around a vehicle and to capture first data about the objects in a frame. The second sensor is of a second type and is configured to sense the objects around the vehicle and to capture second data about the objects in the frame. The controller is configured to down-sample the first and second data to generate down-sampled first and second data having a lower resolution than the first and second data. The controller is configured to identify a first set of the objects by processing the down-sampled first and second data having the lower resolution. The controller is configured to identify a second set of the objects by selectively processing the first and second data from the frame.
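A sketch of the two-pass idea under stated assumptions: detection runs first on down-sampled frames from both sensors, and the full-resolution frame is processed only selectively. The decimation factor, threshold detector stub, and synthetic frames are placeholders, not the patent's actual processing.

```python
import numpy as np

def down_sample(frame, factor=4):
    """Simple decimation; a real system might average or filter first."""
    return frame[::factor, ::factor]

def detect(frame, threshold=0.8):
    """Stub detector: returns coordinates of strong returns."""
    ys, xs = np.where(frame > threshold)
    return list(zip(ys.tolist(), xs.tolist()))

frame_a = np.random.rand(512, 512)   # e.g. camera-derived intensity (synthetic)
frame_b = np.random.rand(512, 512)   # e.g. radar return map (synthetic)

# First pass: cheap detection on lower-resolution data.
coarse = detect(down_sample(frame_a)) + detect(down_sample(frame_b))

# Second pass: only when the coarse pass finds candidates, pay for full resolution.
fine = detect(frame_a) + detect(frame_b) if coarse else []
print(len(coarse), "coarse hits;", len(fine), "full-resolution hits")
```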
Abstract:
A system and method for selectively reducing or filtering data provided by one or more vehicle-mounted sensors before using that data to detect, track and/or estimate a stationary object located along the side of a road, such as a guardrail or barrier. According to one example, the method reduces the amount of data by consolidating, classifying and pre-sorting data points from several forward-looking radar sensors before using those data points to determine if a stationary roadside object is present. If the method determines that a stationary roadside object is present, then the reduced or filtered data points can be applied to a data fitting algorithm in order to estimate the size, shape and/or other parameters of the object. In one example, the output of the present method is provided to automated or autonomous driving systems.
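An illustrative pipeline for the filtering and fitting steps: keep only near-stationary radar returns (whose range rate roughly cancels the ego speed, a simplified test), then fit a curve to estimate the barrier's shape. The stationarity tolerance, quadratic fit, and synthetic guardrail points are assumptions.

```python
import numpy as np

def filter_stationary(points, ego_speed, tol=0.5):
    """points: (x, y, range_rate). A stationary target roughly ahead of the
    vehicle has range rate near -ego_speed (simplified geometry)."""
    return [(x, y) for x, y, rr in points if abs(rr + ego_speed) < tol]

def fit_roadside(points, min_points=5):
    """Fit y = f(x) with a quadratic once enough points accumulate;
    the coefficients describe the estimated barrier shape."""
    if len(points) < min_points:
        return None
    xs, ys = zip(*points)
    return np.polyfit(xs, ys, deg=2)

ego = 20.0  # m/s
raw = [(float(x), 3.5 + 0.01 * x, -20.0) for x in range(5, 30, 3)]  # synthetic guardrail returns
coeffs = fit_roadside(filter_stationary(raw, ego))
print("barrier fit coefficients:", coeffs)
```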