Abstract:
A method of determining a road surface condition for a vehicle driving on a road. Probabilities associated with a plurality of road surface conditions based on an image of a captured scene are determined by a first probability module. Probabilities associated with the plurality of road surface conditions based on vehicle operating data are determined by a second probability module. The probabilities determined by the first and second probability modules are input to a data fusion unit for fusing the probabilities and determining a road surface condition. A refined probability that is a function of the fused first and second probabilities is output from the data fusion unit. The refined probability from the data fusion unit is provided to an adaptive learning unit. The adaptive learning unit generates output commands that refine tunable parameters of at least the first and second probability modules for determining the respective probabilities.
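As a rough illustration of the fusion step, the sketch below combines the two modules' per-condition probability vectors. The abstract does not specify the fusion function, so a weighted log-linear pool is assumed; the condition labels, weights, and function names are all illustrative.

    import numpy as np

    # Hypothetical road surface classes; the labels are illustrative only.
    CONDITIONS = ["dry", "wet", "snow", "ice"]

    def fuse_probabilities(p_vision, p_vehicle, w_vision=0.6, w_vehicle=0.4):
        """Fuse two per-condition probability vectors into a refined one
        (weighted geometric pool; one plausible choice, not the patented one)."""
        p_vision = np.asarray(p_vision, dtype=float)
        p_vehicle = np.asarray(p_vehicle, dtype=float)
        fused = (p_vision ** w_vision) * (p_vehicle ** w_vehicle)
        return fused / fused.sum()  # renormalize to a valid distribution

    # Example: the camera favors "wet" and wheel-slip data agrees.
    refined = fuse_probabilities([0.2, 0.6, 0.15, 0.05], [0.1, 0.7, 0.1, 0.1])
    print(dict(zip(CONDITIONS, refined.round(3))))

In such a sketch, an adaptive learning unit could treat w_vision and w_vehicle as the tunable parameters it refines over time.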
Abstract:
A method of displaying a captured image on a display device of a driven vehicle. A scene exterior of the driven vehicle is captured by at least one vision-based imaging device and at least one sensing device. A time-to-collision is determined for each object detected. A comprehensive time-to-collision is determined for each object as a function of each of the determined time-to-collisions for that object. An image of the captured scene is generated by a processor. The image is dynamically expanded to include sensed objects in the image. Sensed objects are highlighted in the dynamically expanded image. The highlighted objects identify objects proximate to the driven vehicle that are potential collisions to the driven vehicle. The dynamically expanded image, with the highlighted objects and the associated comprehensive time-to-collision for each highlighted object determined to be a potential collision, is displayed on the display device.
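To make the time-to-collision computation concrete, here is a minimal sketch under the usual constant-closing-speed assumption. Taking the minimum of the per-sensor values as the comprehensive time-to-collision is one plausible reading of "a function of" and is not stated in the abstract.

    def time_to_collision(range_m, closing_speed_mps):
        """Basic TTC for one sensor track: range divided by closing speed."""
        if closing_speed_mps <= 0.0:          # object not closing on the vehicle
            return float("inf")
        return range_m / closing_speed_mps

    def comprehensive_ttc(per_sensor_ttcs):
        """Combine per-sensor TTCs for one object; the minimum is the most
        conservative (earliest) estimate."""
        finite = [t for t in per_sensor_ttcs if t != float("inf")]
        return min(finite) if finite else float("inf")

    # Example: camera and radar tracks of the same object disagree slightly.
    ttc_cam = time_to_collision(range_m=18.0, closing_speed_mps=6.0)     # 3.0 s
    ttc_radar = time_to_collision(range_m=17.0, closing_speed_mps=6.3)   # ~2.7 s
    print(comprehensive_ttc([ttc_cam, ttc_radar]))                       # ~2.698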
Abstract:
A method of displaying a captured image on a display device of a driven vehicle. A scene exterior of the driven vehicle is captured by at least one vision-based imaging device mounted on the driven vehicle. Objects in a vicinity of the driven vehicle are sensed. An image of the captured scene is generated by a processor. The image is dynamically expanded to include sensed objects in the image. The sensed objects are highlighted in the dynamically expanded image. The highlighted objects identify vehicles proximate to the driven vehicle that are potential collisions to the driven vehicle. The dynamically expanded image with the highlighted objects is displayed on the display device.
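The dynamic expansion step can be pictured as growing the displayed crop until every sensed object fits. The sketch below assumes pixel-space bounding boxes and a fixed margin; all names are illustrative.

    def expand_view(base_crop, object_boxes, margin=20):
        """Grow a display crop (x0, y0, x1, y1) so every sensed object's
        bounding box falls inside it, plus a small margin."""
        x0, y0, x1, y1 = base_crop
        for bx0, by0, bx1, by1 in object_boxes:
            x0, y0 = min(x0, bx0 - margin), min(y0, by0 - margin)
            x1, y1 = max(x1, bx1 + margin), max(y1, by1 + margin)
        return x0, y0, x1, y1

    # Example: a vehicle detected left of the default crop widens the view.
    print(expand_view((100, 0, 740, 480), [(40, 200, 120, 260)]))  # (20, 0, 740, 480)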
Abstract:
A system and method for providing target selection and threat assessment for vehicle collision avoidance purposes that employ probability analysis of radar scan returns. The system determines a travel path of a host vehicle and provides a radar signal transmitted from a sensor on the host vehicle. The system receives multiple scan return points from detected objects, processes the scan return points to generate a distribution signal defining a contour of each detected object, and processes the scan return points to provide a position, a translation velocity and an angular velocity of each detected object. The system selects the objects that may enter the travel path of the host vehicle, and makes a threat assessment of each such object by comparing the number of scan return points indicating that the object may enter the travel path to the total number of scan return points received for that object.
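The threat-assessment comparison reads naturally as a ratio, sketched below; the 0.5 threshold and the function name are assumptions, not taken from the abstract.

    def threat_probability(points_in_path, points_total):
        """Fraction of an object's radar returns whose projections enter
        the host vehicle's travel path."""
        return points_in_path / points_total if points_total else 0.0

    # Example: 14 of 20 scan return points project into the travel path.
    p = threat_probability(14, 20)
    print(p, "high threat" if p > 0.5 else "low threat")   # 0.7 high threat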
Abstract:
Methods and systems for a vehicle are provided. In one embodiment, the method includes: receiving image data defining a plurality of images associated with an environment of the vehicle; determining, by a processor, feature points within at least one image of the plurality of images; selecting, by the processor, a subset of the feature points as ground points; determining, by the processor, a ground plane based on the subset of feature points; determining, by the processor, a ground normal vector from the ground plane; refining, by the processor, the ground normal vector based on a sliding window method; determining, by the processor, a camera to ground alignment value based on the ground normal vector; and generating, by the processor, second image data based on the camera to ground alignment value.
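A common way to realize the ground-plane and normal-vector steps is a least-squares plane fit over the selected 3D feature points, with per-frame normals smoothed over recent frames. The sketch below assumes the "sliding window method" averages normals across a window of frames, which the abstract does not spell out.

    import numpy as np

    def fit_ground_plane(points):
        """Least-squares plane through (N, 3) candidate ground points; the
        normal is the singular vector with the smallest singular value."""
        pts = np.asarray(points, dtype=float)
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid)
        normal = vt[-1]
        if normal[2] < 0:                 # assumed sign convention: z up
            normal = -normal
        return centroid, normal / np.linalg.norm(normal)

    def sliding_window_normal(normal_history, window=10):
        """Average the most recent per-frame normals (assumed window method)."""
        recent = np.asarray(normal_history[-window:], dtype=float)
        mean = recent.mean(axis=0)
        return mean / np.linalg.norm(mean)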
Abstract:
Systems and methods for a vehicle are provided. In one embodiment, a method includes: receiving image data defining a plurality of images associated with an environment of the vehicle; determining, by a processor, feature points within at least one image of the plurality of images; selecting, by the processor, a subset of the feature points as ground points based on a fixed two-dimensional image road mask and a three-dimensional region; determining, by the processor, a ground plane based on the subset of feature points; determining, by the processor, a ground normal vector from the ground plane; determining, by the processor, a camera to ground alignment value based on the ground normal vector; and generating, by the processor, second image data based on the camera to ground alignment value.
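The ground-point selection in this variant gates each feature point twice: once against the fixed 2D road mask in image space and once against the 3D region. A minimal sketch, with the argument names and box format assumed:

    import numpy as np

    def select_ground_points(features_2d, features_3d, road_mask, region):
        """Keep points passing both the 2D road mask (boolean image, True
        where road is expected) and an axis-aligned 3D box
        (xmin, xmax, ymin, ymax, zmin, zmax); inputs are index-aligned."""
        xmin, xmax, ymin, ymax, zmin, zmax = region
        keep = []
        for (u, v), (x, y, z) in zip(features_2d, features_3d):
            in_mask = bool(road_mask[int(v), int(u)])
            in_box = (xmin <= x <= xmax and ymin <= y <= ymax
                      and zmin <= z <= zmax)
            if in_mask and in_box:
                keep.append((x, y, z))
        return np.asarray(keep)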
Abstract:
A system in a vehicle includes a lidar system to obtain lidar data in a lidar coordinate system, a camera to obtain camera data in a camera coordinate system, and processing circuitry to automatically determine an alignment state resulting in a lidar-to-vehicle transformation matrix that projects the lidar data from the lidar coordinate system to a vehicle coordinate system to provide lidar-to-vehicle data. The alignment state is determined using the camera data.
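Once the alignment state is found, applying the lidar-to-vehicle transformation is a standard homogeneous-coordinates projection; the sketch below assumes a 4x4 matrix and (N, 3) point arrays, with illustrative names.

    import numpy as np

    def lidar_to_vehicle(points_lidar, T_lidar_to_vehicle):
        """Project lidar points into the vehicle coordinate system using
        the estimated 4x4 lidar-to-vehicle transformation matrix."""
        pts = np.asarray(points_lidar, dtype=float)
        homo = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coords
        return (T_lidar_to_vehicle @ homo.T).T[:, :3]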
Abstract:
A vehicle guidance system assists a driver in maneuvering a vehicle with respect to an object in a scene. The system includes a steering angle sensor, a camera device, a video processing module (VPM), and a human-machine interface (HMI). The sensor is configured to monitor the angular position of a vehicle wheel. The camera device is configured to capture an original image of a scene containing the object. The VPM is configured to receive and process the original image from the camera device, detect the object in the original image, receive and process the angular position from the sensor, generate a vehicle trajectory based on the angular position, and orient the trajectory with respect to the object. The HMI is configured to display a processed image associated with the original image and a trajectory overlay associated with the trajectory from the VPM, so that the object is displayed in relation to the trajectory overlay.
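Generating a vehicle trajectory from the monitored wheel angle is typically done with a kinematic bicycle model: a constant steering angle implies a circular arc of radius wheelbase / tan(angle). A minimal sketch with illustrative parameter values:

    import math

    def predicted_trajectory(steer_rad, wheelbase=2.8, step=0.5, n=20):
        """Sample n points along the arc implied by a constant road-wheel
        angle (bicycle model), in vehicle coordinates (x forward, y left)."""
        if abs(steer_rad) < 1e-6:
            return [(i * step, 0.0) for i in range(n)]    # straight ahead
        radius = wheelbase / math.tan(steer_rad)
        return [(radius * math.sin(i * step / radius),
                 radius * (1.0 - math.cos(i * step / radius)))
                for i in range(n)]

    # Example: 5 degrees of road-wheel angle; points feed the HMI overlay.
    print(predicted_trajectory(math.radians(5.0))[:3])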
Abstract:
Methods and systems are provided for processing attention data. In one embodiment, a method includes: receiving, by a processor, object data associated with at least one object of an exterior environment of a vehicle; receiving upcoming behavior data determined from a planned route of the vehicle; receiving gaze data sensed from an occupant of the vehicle; processing, by the processor, the object data, the upcoming behavior data, and the gaze data to determine an attention score associated with an attention of the occupant of the vehicle; and selectively generating, by the processor, signals to notify the occupant, to control the vehicle, or both, based on the attention score.
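The abstract does not give the scoring function, so the sketch below assumes a weighted sum of normalized sub-scores for the three inputs; every name, weight, and threshold is a hypothetical stand-in for the real processing.

    def attention_score(num_objects, upcoming_behavior, gaze_on_road,
                        w_obj=0.4, w_behavior=0.3, w_gaze=0.3):
        """Blend object, route-behavior, and gaze inputs into [0, 1];
        lower scores would trigger notification or vehicle control."""
        obj_term = 1.0 - min(1.0, num_objects / 5.0)   # more hazards, more demand
        behavior_term = 0.0 if upcoming_behavior in ("turn", "merge") else 1.0
        gaze_term = 1.0 if gaze_on_road else 0.0
        return w_obj * obj_term + w_behavior * behavior_term + w_gaze * gaze_term

    # Example: busy scene, upcoming turn, driver looking away -> low score.
    print(attention_score(num_objects=4, upcoming_behavior="turn",
                          gaze_on_road=False))          # 0.08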
Abstract:
A camera cleaning system for a vehicle includes a camera including a lens cover and a motor. A cleaning assembly includes an arm including a cleaning material. The motor selectively adjusts the position of the arm and the cleaning material relative to the lens cover of the camera and relative to the vehicle, from a first position to a second position. The camera selectively generates video signals when the camera is in the first position. The cleaning assembly is in contact with the lens cover when the camera is in the second position.
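The two-position behavior can be captured as a small state model: video streams only in the first (imaging) position, and the cleaning material contacts the lens cover only in the second. A minimal sketch with assumed names:

    from enum import Enum

    class CameraPosition(Enum):
        IMAGING = 1    # first position: camera generates video signals
        CLEANING = 2   # second position: arm contacts the lens cover

    def should_stream(position: CameraPosition) -> bool:
        """Video signals are generated only in the imaging position."""
        return position is CameraPosition.IMAGING

    def cleaning_engaged(position: CameraPosition) -> bool:
        """The cleaning material contacts the lens cover while cleaning."""
        return position is CameraPosition.CLEANING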