RADAR-CAMERA DEPTH FUSION FOR VEHICLE APPLICATIONS
Abstract:
This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method includes receiving a first image frame from a first camera and receiving a second image frame from a second camera. The first camera has a first field-of-view and the second camera has a second field-of-view that partially overlaps the first field-of-view. A set of coordinates associated with pixel values of the first image frame and pixel values of the second image frame is determined. The set of coordinates corresponds to an overlap of the first field-of-view and the second field-of-view. A first uncertainty window metric is determined based on the set of coordinates and first uncertainty values. A second uncertainty window metric is determined based on the first uncertainty values and second uncertainty values associated with RADAR data. Fused depth data is determined based on the set of coordinates, the RADAR data, and the first and second uncertainty window metrics. Other aspects and features are also claimed and described.
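The sketch below is illustrative only and is not the claimed method: it shows one common way that camera-derived depth and RADAR depth could be combined inside the overlapping field-of-view, weighting each source by its uncertainty. All function and variable names (fuse_depth, stereo_sigma, radar_sigma, overlap_mask) and the inverse-variance weighting scheme are assumptions introduced for exposition.

```python
# Illustrative sketch only -- not the claimed method. Names and the
# inverse-variance weighting are assumptions made for exposition.
import numpy as np

def fuse_depth(stereo_depth, stereo_sigma, radar_depth, radar_sigma, overlap_mask):
    """Fuse camera-pair (stereo) depth with RADAR depth inside the
    overlapping field-of-view, weighting each source by its uncertainty.

    stereo_depth, radar_depth : (H, W) depth estimates in meters
    stereo_sigma, radar_sigma : (H, W) per-pixel uncertainty (std. dev.)
    overlap_mask              : (H, W) bool, True where the two camera
                                fields-of-view overlap
    """
    # Inverse-variance weights: the less certain a source is, the less
    # it contributes to the fused estimate.
    w_stereo = 1.0 / np.square(stereo_sigma)
    w_radar = 1.0 / np.square(radar_sigma)

    fused = (w_stereo * stereo_depth + w_radar * radar_depth) / (w_stereo + w_radar)

    # Outside the overlap region there is no camera-pair estimate,
    # so fall back to the RADAR depth alone.
    return np.where(overlap_mask, fused, radar_depth)

if __name__ == "__main__":
    # Example with synthetic data: the right part of the image is the overlap.
    h, w = 4, 6
    stereo = np.full((h, w), 10.0)
    radar = np.full((h, w), 12.0)
    stereo_sig = np.full((h, w), 0.5)
    radar_sig = np.full((h, w), 1.0)
    mask = np.zeros((h, w), dtype=bool)
    mask[:, 2:] = True
    print(fuse_depth(stereo, stereo_sig, radar, radar_sig, mask))
```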