Abstract:
A vehicle imaging system includes an image capture device capturing an image exterior of a vehicle. The captured image includes at least a portion of a sky scene. A processor generates a virtual image of a virtual sky scene from the portion of the sky scene captured by the image capture device. The processor determines a brightness of the virtual sky scene from the virtual image. The processor dynamically adjusts a brightness of the captured image based on the determined brightness of the virtual image. A rear view mirror display device displays the adjusted captured image.
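As a rough illustration of the dynamic adjustment step, here is a minimal Python sketch (NumPy and OpenCV assumed): it treats a crop of the frame as the virtual sky scene, measures its mean brightness, and rescales the full frame before display. The gain law, the mid-gray target, and the fixed sky region are illustrative assumptions, not the patent's method.

```python
import numpy as np
import cv2  # OpenCV, assumed available for image handling

def adjust_display_brightness(captured_bgr, sky_region):
    """Measure the virtual sky scene's brightness, then rescale the
    full camera frame so the mirror display stays legible.
    `sky_region` is a slice covering the sky portion of the frame
    (an assumption; the patent derives a virtual image from the
    captured sky scene)."""
    # Build the "virtual image" of the sky scene (here: a crop).
    virtual_sky = captured_bgr[sky_region]
    # Brightness of the virtual sky scene: mean luma of the crop.
    luma = cv2.cvtColor(virtual_sky, cv2.COLOR_BGR2GRAY)
    sky_brightness = float(luma.mean())  # 0 (dark) .. 255 (bright)
    # Dynamically pick a gain: dim the frame under a bright sky,
    # boost it under a dark one (target mid-gray of 128, assumed).
    gain = np.clip(128.0 / max(sky_brightness, 1.0), 0.5, 2.0)
    adjusted = cv2.convertScaleAbs(captured_bgr, alpha=gain, beta=0)
    return adjusted  # frame sent to the rear view mirror display

# Hypothetical usage: top third of the frame treated as sky.
frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
display_frame = adjust_display_brightness(frame, np.s_[:160, :])
```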
Abstract:
A method for detecting an eyes-off-the-road condition based on an estimated gaze direction of a driver of a vehicle includes monitoring facial feature points of the driver within image input data captured by an in-vehicle camera device. A location for each of a plurality of eye features for an eyeball of the driver is detected based on the monitored facial feature points. A head pose of the driver is estimated based on the monitored facial feature points. The gaze direction of the driver is estimated based on the detected location for each of the plurality of eye features and the estimated head pose.
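A minimal sketch of how the two estimates might combine, assuming a spherical-eyeball model and a 3x3 head-pose rotation matrix; the function names, the eyeball radius, and the 20-degree threshold are hypothetical:

```python
import numpy as np

def estimate_gaze_direction(head_rotation, eye_center_px, pupil_px,
                            eye_radius_px=12.0):
    """Combine an eye-in-head direction (from detected eye feature
    locations) with the estimated head pose to get a gaze vector in
    the camera frame. The simple spherical-eyeball model is an
    assumption, not the patent's exact math."""
    # Eye-in-head direction from the pupil offset on a spherical
    # eyeball: offsets are normalized by an assumed eyeball radius.
    dx = (pupil_px[0] - eye_center_px[0]) / eye_radius_px
    dy = (pupil_px[1] - eye_center_px[1]) / eye_radius_px
    dz = np.sqrt(max(1.0 - dx * dx - dy * dy, 0.0))
    gaze_in_head = np.array([dx, dy, dz])
    # Rotate into the camera frame using the 3x3 head-pose rotation.
    gaze = head_rotation @ gaze_in_head
    return gaze / np.linalg.norm(gaze)

def eyes_off_road(gaze, road_axis=np.array([0.0, 0.0, 1.0]),
                  max_angle_deg=20.0):
    """Flag an eyes-off-the-road condition when the gaze deviates
    from the assumed forward road axis by more than a threshold."""
    angle = np.degrees(np.arccos(np.clip(gaze @ road_axis, -1, 1)))
    return angle > max_angle_deg
```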
Abstract:
A system for integration and communication of aftermarket vehicle components with existing vehicle components through methods comprising an apparatus receiving an input signal comprising a raw data set, and engaging a processor configured to execute computer-executable instructions and communicate with an existing motion control system. The methods also include the apparatus communicating, or alternatively generating, a vehicle identification sequence derived from a vehicle identification data set. Finally, the methods include the apparatus communicating a calibration data set comprising computer-executable instructions including a vehicle calibration sequence, executing the vehicle identification sequence, and executing the vehicle calibration sequence.
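A minimal Python sketch of the apparatus flow under stated assumptions (the `motion_control.apply` interface, the data-set structures, and the VIN-prefix identification rule are all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class CalibrationDataSet:
    """Calibration data set: sequences keyed to a vehicle ID.
    Structure is an assumption for illustration."""
    sequences: dict  # vehicle_id -> list of calibration steps

class AftermarketApparatus:
    """Sketch of the flow described above: receive raw input, derive
    a vehicle identification sequence, then execute the matching
    calibration sequence against the existing motion control system."""
    def __init__(self, motion_control, calibration: CalibrationDataSet):
        self.motion_control = motion_control  # existing vehicle system
        self.calibration = calibration

    def handle_input(self, raw_data: bytes):
        vehicle_id = self.identify_vehicle(raw_data)
        for step in self.calibration.sequences.get(vehicle_id, []):
            # Communicate each calibration step to the existing
            # motion control system (interface assumed).
            self.motion_control.apply(step)

    def identify_vehicle(self, raw_data: bytes) -> str:
        # Derive the identification sequence from the raw data set;
        # here, trivially, the leading bytes act as a VIN prefix.
        return raw_data[:8].decode("ascii", errors="replace")
```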
Abstract:
A method is provided for determining a wet road surface condition for a vehicle driving on a road. An image exterior of the vehicle is captured by an image capture device at a first and a second instance of time. Potential objects and feature points are detected on a ground surface of the road of travel at the first instance of time and the second instance of time. A determination is made whether the ground surface includes a mirror effect reflective surface based on a triangulation technique utilizing the feature points in the captured images at the first instance of time and the second instance of time. A wet driving surface indicating signal is generated in response to the determination that the ground surface includes a mirror effect reflective surface.
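The triangulation test could look roughly like the following Python sketch (NumPy and OpenCV assumed): points reflected by a mirror-like wet patch triangulate to virtual locations below the ground plane. The camera geometry, the 0.3 m margin, and the majority vote are assumptions:

```python
import numpy as np
import cv2  # OpenCV, assumed available

def wet_surface_signal(pts_t1, pts_t2, K, R, t, cam_height=1.2):
    """Triangulate feature points tracked between the two time
    instances using the known camera motion (R, t). Over a mirror-
    like wet patch the tracked features are reflections, so their
    triangulated 3D locations fall below the ground plane."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t.reshape(3, 1)])
    # cv2.triangulatePoints expects 2xN float arrays and returns
    # 4xN homogeneous coordinates.
    a = np.asarray(pts_t1, np.float64).T
    b = np.asarray(pts_t2, np.float64).T
    X_h = cv2.triangulatePoints(P1, P2, a, b)
    X = (X_h[:3] / X_h[3]).T  # Nx3 points, first-camera frame
    # Camera is cam_height above the road with Y pointing down, so
    # real ground points sit near Y = cam_height; virtual (reflected)
    # points appear deeper. The 0.3 m margin is an assumption.
    below_ground = X[:, 1] > cam_height + 0.3
    # Majority of road features below ground -> mirror effect -> wet.
    return bool(below_ground.mean() > 0.5)
```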
Abstract:
A system and method for providing visual assistance through a graphic overlay superimposed on a back-up camera image for assisting a vehicle operator when backing up a vehicle to align a tow ball with a trailer tongue. The method includes providing camera modeling to correlate the camera image in vehicle coordinates to world coordinates, where the camera modeling provides the graphic overlay to include a tow line having a height in the camera image that is determined by an estimated height of the trailer tongue. The method also includes providing vehicle dynamic modeling for identifying the motion of the vehicle as it moves around a center of rotation. The method then predicts the path of the vehicle as it is being steered, including calculating the center of rotation.
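A minimal sketch of the vehicle dynamic modeling step, assuming a kinematic bicycle model whose center of rotation sits on the rear-axle line at radius R = L / tan(delta); the sign conventions and the rear-axle approximation for the tow-ball path are assumptions, and the camera-model projection into image coordinates is omitted:

```python
import numpy as np

def center_of_rotation(wheelbase_m, steer_angle_rad):
    """Bicycle model: with a fixed steering angle the vehicle turns
    about a point on the rear-axle line at radius R = L / tan(delta)."""
    if abs(steer_angle_rad) < 1e-6:
        return None  # straight-line motion, no finite center
    radius = wheelbase_m / np.tan(steer_angle_rad)
    return np.array([radius, 0.0])  # lateral offset from rear axle

def predict_path(wheelbase_m, steer_angle_rad, arc_len_m, n=50):
    """Predicted rear-axle path (the tow ball roughly follows it) as
    the vehicle moves around the center of rotation; the camera model
    would then project these world points into the overlay image."""
    c = center_of_rotation(wheelbase_m, steer_angle_rad)
    if c is None:
        s = np.linspace(0, arc_len_m, n)
        return np.stack([np.zeros(n), -s], axis=1)  # straight back
    R = c[0]
    theta = np.linspace(0, arc_len_m / R, n)  # swept angle on the arc
    x = c[0] - R * np.cos(theta)
    y = -R * np.sin(theta)  # negative: the vehicle is reversing
    return np.stack([x, y], axis=1)
```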
Abstract:
A system in a vehicle includes a lidar system to transmit incident light and receive reflections from one or more objects as a point cloud of points. The system also includes processing circuitry to identify planar points and to identify edge points of the point cloud. Each set of planar points forms a linear pattern, and each edge point lies between two sets of planar points. The processing circuitry identifies each point of the point cloud as being within a virtual beam among a set of virtual beams, each virtual beam representing a horizontal band of the point cloud.
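One plausible reading of the virtual-beam step, sketched in Python: bin points into horizontal bands by elevation angle, then score smoothness within each band (a LOAM-style measure, used here as a stand-in for the patent's classifier). The beam count and thresholds are assumptions:

```python
import numpy as np

def assign_virtual_beams(points, n_beams=32):
    """Bin every point of the cloud into a virtual beam, one
    horizontal band per elevation-angle interval."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    elevation = np.arctan2(z, np.hypot(x, y))
    edges = np.linspace(elevation.min(), elevation.max(), n_beams + 1)
    return np.clip(np.digitize(elevation, edges) - 1, 0, n_beams - 1)

def classify_planar_edge(beam_points, k=5, smooth_thresh=0.02):
    """Within one beam, score each point by its distance from the
    centroid of its k neighbors along the scan line: low scores form
    linear/planar runs, high scores mark edges between them."""
    n = len(beam_points)
    labels = np.empty(n, dtype=object)
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        centroid = beam_points[lo:hi].mean(axis=0)
        score = np.linalg.norm(beam_points[i] - centroid)
        labels[i] = "planar" if score < smooth_thresh else "edge"
    return labels
```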
Abstract:
A LIDAR-to-LIDAR alignment system includes a memory and an autonomous driving module. The memory stores first and second points based on outputs of first and second LIDAR sensors. The autonomous driving module performs a validation process to determine whether the alignment of the LIDAR sensors satisfies an alignment condition. The validation process includes: aggregating the first and second points in a vehicle coordinate system to provide aggregated LIDAR points; based on the aggregated LIDAR points, performing (i) a first method including determining pitch and roll differences between the first and second LIDAR sensors, (ii) a second method including determining a yaw difference between the first and second LIDAR sensors, or (iii) point cloud registration to determine rotation and translation differences between the first and second LIDAR sensors; and based on results of the first method, the second method, or the point cloud registration, determining whether the alignment condition is satisfied.
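A sketch of the first method (pitch and roll differences), under the assumption that ground points from each sensor, already aggregated in the vehicle coordinate system, are available; the plane-fit approach, the ground segmentation, and the 0.5-degree tolerance are illustrative:

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares plane through (assumed) ground points; returns
    the unit normal, used to compare pitch/roll between sensors."""
    centered = points - points.mean(axis=0)
    # Smallest singular vector of the centered cloud is the normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    return n if n[2] > 0 else -n  # orient the normal upward

def pitch_roll_difference(ground_pts_1, ground_pts_2):
    """Fit a ground plane in each sensor's aggregated points (already
    in the vehicle frame) and read pitch/roll misalignment off the
    two normals."""
    n1 = fit_ground_plane(ground_pts_1)
    n2 = fit_ground_plane(ground_pts_2)
    pitch_diff = np.arctan2(n1[0], n1[2]) - np.arctan2(n2[0], n2[2])
    roll_diff = np.arctan2(n1[1], n1[2]) - np.arctan2(n2[1], n2[2])
    return np.degrees(pitch_diff), np.degrees(roll_diff)

def alignment_ok(pitch_deg, roll_deg, tol_deg=0.5):
    """Alignment condition: both angular differences within tolerance."""
    return abs(pitch_deg) < tol_deg and abs(roll_deg) < tol_deg
```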
Abstract:
Methods, apparatus, and systems for sensor alignment include acquiring a translational vector, a first calibration point location and a second calibration point location, determining an expected rotational orientation in response to the translational vector, the first calibration point location and the second calibration point location, capturing an image of the first calibration point and the second calibration point, determining a first position of the first calibration point and a second position of the second calibration point in response to the image, calculating a calculated rotational orientation in response to the first position of the first calibration point and the second position of the second calibration point, determining a calibration value in response to the calculated rotational orientation, storing the calibration value and controlling a vehicle in response to the calibration value and a subsequent image.
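Reduced to a single yaw angle in the plane, the flow might look like the following sketch; the mean-bearing formulation, the omission of angle-wraparound handling, and all names are assumptions:

```python
import numpy as np

def bearing_deg(p_from, p_to):
    """Planar bearing from one 2D location to another, in degrees."""
    d = np.asarray(p_to, float) - np.asarray(p_from, float)
    return np.degrees(np.arctan2(d[1], d[0]))

def sensor_yaw_calibration(translation, cal_pt_1, cal_pt_2,
                           meas_pt_1, meas_pt_2):
    """The expected rotational orientation follows from where the two
    calibration points should appear given the sensor's translational
    vector; the calculated orientation comes from where the captured
    image actually places them. The calibration value is the residual."""
    # Expected orientation: mean bearing from the sensor position
    # (the translational vector) to the two surveyed point locations.
    expected = 0.5 * (bearing_deg(translation, cal_pt_1)
                      + bearing_deg(translation, cal_pt_2))
    # Calculated orientation: mean bearing to the positions measured
    # from the image (sensor frame, origin at the sensor).
    origin = np.zeros(2)
    calculated = 0.5 * (bearing_deg(origin, meas_pt_1)
                        + bearing_deg(origin, meas_pt_2))
    return expected - calculated  # calibration value to store
```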
Abstract:
A method for sensor alignment includes detecting a depth point cloud including a first object and a second object, generating a first control point in response to a location of the first object within the depth point cloud and a second control point in response to a location of the second object within the depth point cloud, capturing an image of a second field of view including a third object, generating a third control point in response to a location of the third object detected in response to the image, calculating a first reprojection error in response to the first control point and the third control point and a second reprojection error in response to the second control point and the third control point, and generating an extrinsic parameter in response to the first reprojection error being less than the second reprojection error.
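A minimal sketch of the final selection step, assuming a pinhole camera with intrinsics K and a candidate extrinsic (R, t); how the chosen correspondence turns into the stored extrinsic parameter is simplified here:

```python
import numpy as np

def project(K, R, t, point_3d):
    """Pinhole projection of a 3D control point into the image."""
    p = K @ (R @ point_3d + t)
    return p[:2] / p[2]

def select_extrinsic(K, R, t, ctrl_pt_1, ctrl_pt_2, ctrl_pt_3_px):
    """Project each depth-derived control point with the candidate
    extrinsic (R, t), measure its reprojection error against the
    image-derived control point, and keep the pairing with the
    smaller error. Treating (R, t) as the generated extrinsic
    parameter set is an assumption."""
    err_1 = np.linalg.norm(project(K, R, t, ctrl_pt_1) - ctrl_pt_3_px)
    err_2 = np.linalg.norm(project(K, R, t, ctrl_pt_2) - ctrl_pt_3_px)
    if err_1 < err_2:
        # First reprojection error is smaller: generate the extrinsic
        # parameter from this correspondence.
        return {"R": R, "t": t, "matched": "control point 1",
                "err": err_1}
    return {"R": R, "t": t, "matched": "control point 2",
            "err": err_2}
```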