Abstract:
A vehicle system and method that can determine object sensor misalignment while a host vehicle is being driven, and can do so within a single sensor cycle using stationary and moving target objects, without requiring multiple sensors with overlapping fields of view. In an exemplary embodiment where the host vehicle is traveling in a generally straight line, one or more object misalignment angles αo between an object axis and a sensor axis are calculated and used to determine the actual sensor misalignment angle α.
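For illustration only, the following is a minimal sketch (not the patented method) of how such an estimate could be formed in software, assuming the sensor reports the same stationary objects at two consecutive cycles while the host drives straight: the direction of each object's apparent motion in the sensor frame defines an object axis, its angle αo is compared with the angle expected from the nominal mounting, and the deviations are averaged to estimate α. The function and parameter names (estimate_misalignment, prev_pts, curr_pts, mount_angle_deg) and the sign convention are assumptions.

```python
import numpy as np

def estimate_misalignment(prev_pts, curr_pts, mount_angle_deg):
    """Sketch only: estimate misalignment alpha (degrees) from the apparent
    motion of stationary objects between two sensor cycles, assuming the host
    is driving in a straight line and the same objects appear in both arrays.

    prev_pts, curr_pts : (N, 2) object positions in the sensor frame
    mount_angle_deg    : nominal sensor mounting angle vs. the vehicle axis
    """
    # Apparent displacement of each (assumed stationary) object.
    flow = curr_pts - prev_pts

    # Object misalignment angle alpha_o: direction of the object axis
    # (apparent motion) measured in the sensor frame.
    alpha_o = np.degrees(np.arctan2(flow[:, 1], flow[:, 0]))

    # With straight-line travel, stationary objects should stream
    # anti-parallel to the travel direction, i.e. at 180 deg minus the
    # nominal mounting angle in a perfectly aligned sensor frame.
    expected = 180.0 - mount_angle_deg

    # Average deviation, wrapped to [-180, 180); sign convention is assumed.
    return np.mean(((expected - alpha_o) + 180.0) % 360.0 - 180.0)
```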
Abstract:
A flexible, printable antenna for automotive radar. The antenna can be printed onto a thin, flexible substrate, and thus can be bent to conform to a vehicle body surface with compound curvature. The antenna can be mounted to the interior of a body surface such as a bumper fascia, where it is hidden from view but can still transmit radar signals outward. The antenna can also be mounted to and blended into the exterior of an inconspicuous body surface, or can be made transparent and mounted to the interior or exterior of a glass surface. The antenna includes an artificial impedance surface which is tailored based on the three-dimensional shape of the surface to which the antenna is mounted and the desired radar wave pattern. The antenna can be used for automotive collision avoidance applications using 22-29 GHz or 76-81 GHz radar, and has a large aperture to support high angular resolution of radar data.
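The abstract does not specify the tailoring procedure, but one published way to design an artificial impedance surface is holographic modulation of the sheet reactance. The sketch below is an assumed illustration on a flat one-dimensional strip only; a real design would account for the curved body surface and be realized as a printed patch pattern, and every numeric value here (effective surface-wave index, mean reactance, modulation depth, aperture length, radiation angle) is an assumption.

```python
import numpy as np

# Sketch only: sinusoidally modulated sheet reactance for a flat 1-D strip,
# a common artificial-impedance-surface recipe (not necessarily this patent's).
c = 3e8                       # speed of light, m/s
f = 76.5e9                    # within the 76-81 GHz automotive radar band
k0 = 2 * np.pi * f / c        # free-space wavenumber
n_surf = 1.3                  # assumed effective index of the surface wave
theta = np.radians(30)        # assumed desired radiation angle from broadside

x = np.linspace(0.0, 0.05, 1000)                 # assumed 5 cm aperture
psi_surf = np.exp(1j * n_surf * k0 * x)          # surface-wave phase along x
psi_rad = np.exp(1j * k0 * np.sin(theta) * x)    # desired radiated plane wave

X0, M = 300.0, 100.0          # assumed mean reactance and modulation depth (ohms)
Z = X0 + M * np.real(psi_surf * np.conj(psi_rad))  # tailored reactance profile Z(x)
```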
Abstract:
A method for determining an actual trajectory of a vehicle using object detection data and vehicle dynamics data. An object detection system identifies point objects and extended objects in proximity to the vehicle, where the point objects are less than a meter in length and width. An updated vehicle pose is calculated which optimally transposes the point objects in the scan data to a target list of previously-identified point objects. The updated vehicle pose is further refined by iteratively calculating a pose which optimally transposes the extended objects in the scan data to a target model of previously-identified extended objects, where the iteration is used to simultaneously determine a probability coefficient relating the scan data to the target model. The updated vehicle pose is used to identify the actual trajectory of the vehicle, which is compared to a planned path in a collision avoidance system.
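As a rough illustration of the point-object step only, the sketch below computes a rigid two-dimensional pose correction that best maps scan points onto a previously identified target list using the standard Kabsch/Procrustes solution. It assumes correspondences between scan points and targets are already known, and it omits the extended-object refinement and the probability coefficient described above; all names are hypothetical.

```python
import numpy as np

def update_pose_from_points(scan_pts, target_pts):
    """Sketch only: rigid 2-D transform (R, t) that best maps point objects
    in the current scan onto the target list, with row i of scan_pts assumed
    to correspond to row i of target_pts."""
    scan_c = scan_pts - scan_pts.mean(axis=0)
    targ_c = target_pts - target_pts.mean(axis=0)

    # Kabsch/Procrustes: rotation from the SVD of the cross-covariance,
    # with a reflection guard so det(R) = +1.
    U, _, Vt = np.linalg.svd(scan_c.T @ targ_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = target_pts.mean(axis=0) - R @ scan_pts.mean(axis=0)
    return R, t   # updated pose correction: target ≈ R @ scan + t
```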
Abstract:
A system and method for fusing the outputs from multiple LiDAR sensors on a vehicle. The method includes providing object files for objects detected by the sensors at a previous sample time, where the object files identify the position, orientation and velocity of the detected objects. The method also includes receiving a plurality of scan returns from objects detected in the fields of view of the sensors at a current sample time and constructing a point cloud from the scan returns. The method then segments the scan points in the point cloud into predicted clusters, where each cluster initially identifies an object detected by the sensors. The method matches the predicted clusters with predicted object models generated from objects being tracked during the previous sample time. The method creates new object models, deletes dying object models and updates the object files based on the object models for the current sample time.
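A highly simplified sketch of the bookkeeping portion of such a method appears below: current clusters are associated with predicted object models by nearest centroid within a gate, unmatched clusters spawn new object models, and models that go unmatched for several cycles are deleted as dying. The data layout, gating distance and aging threshold are assumptions, and the segmentation, prediction and multi-sensor fusion steps are not shown.

```python
import numpy as np

def match_clusters_to_tracks(clusters, tracks, gate=2.0, max_missed=3):
    """Sketch only: associate point-cloud clusters with predicted object
    models (tracks), create new models for unmatched clusters, and age out
    models that go unmatched.

    clusters : list of (N_i, 3) arrays of scan points, one per cluster
    tracks   : list of dicts with keys 'centroid' (3,) and 'missed' (int)
    """
    centroids = [c.mean(axis=0) for c in clusters]
    unmatched = set(range(len(centroids)))

    for trk in tracks:
        if not unmatched:
            trk['missed'] += 1
            continue
        # Nearest unclaimed cluster to this track's predicted centroid.
        j = min(unmatched, key=lambda k: np.linalg.norm(centroids[k] - trk['centroid']))
        if np.linalg.norm(centroids[j] - trk['centroid']) < gate:
            trk['centroid'] = centroids[j]   # update the object model
            trk['missed'] = 0
            unmatched.discard(j)
        else:
            trk['missed'] += 1

    # New object models for clusters no existing model claimed.
    tracks = tracks + [{'centroid': centroids[j], 'missed': 0} for j in unmatched]
    # Delete dying object models that have gone unmatched too long.
    return [t for t in tracks if t['missed'] <= max_missed]
```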