Abstract:
A method of displaying a captured image on a display device of a driven vehicle. A scene exterior of the driven vehicle is captured by at least one vision-based imaging device mounted on the driven vehicle. Objects in a vicinity of the driven vehicle are sensed. An image of the captured scene is generated by a processor. The image is dynamically expanded to include sensed objects in the image. The sensed objects are highlighted in the dynamically expanded image. The highlighted objects identify vehicles proximate to the driven vehicle that pose potential collision threats to the driven vehicle. The dynamically expanded image with the highlighted objects is displayed on the display device.
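The dynamic-expansion step can be illustrated as widening the displayed field of view just enough to bring every sensed object's bearing into frame. This is only a minimal sketch: the function name, the angular representation, and the margin are illustrative assumptions, not details taken from the abstract.

```python
def expand_view(fov_deg, object_bearings_deg, margin_deg=5.0):
    """Widen a symmetric display field of view (degrees) so that every
    sensed object's bearing fits, plus an illustrative safety margin.
    All parameter choices here are assumptions for the sketch."""
    if not object_bearings_deg:
        return fov_deg  # nothing sensed: keep the current view
    needed = 2.0 * (max(abs(b) for b in object_bearings_deg) + margin_deg)
    return max(fov_deg, needed)
```

For example, an object at a 40-degree bearing forces a 60-degree view to expand to 90 degrees, while an object already in frame leaves the view unchanged.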
Abstract:
Systems, methods and devices to inhibit sensing reduction in imperfect sensing conditions are described. A multifunctional coating superposing a lens includes a self-cleaning layer and a heating layer. The self-cleaning layer defines an external surface configured to be exposed to an exterior environment. The external surface defines three-dimensional surface features thereon. The three-dimensional surface features are adjacently disposed arcuate features that inhibit adhering of solid particles to the external surface and wetting of the external surface. The heating layer is in thermal communication with the external surface. The heating layer is selectively actuated to provide thermal energy to the external surface through resistive heating. Each of the self-cleaning layer and the heating layer is transparent to a predetermined wavelength of light.
Abstract:
A vehicle guidance system assists a driver in maneuvering a vehicle with respect to an object in a scene. The system includes a steering angle sensor, a camera device, a video processing module (VPM), and a human-machine interface (HMI). The sensor is configured to monitor the angular position of a vehicle wheel. The device is configured to capture an original image of a scene having the object. The VPM is configured to receive and process the original image from the device, detect the object in the original image, receive and process the angular position from the sensor, generate a vehicle trajectory based on the angular position, and orientate the trajectory with regard to the object. The HMI is configured to display a processed image associated with the original image and a trajectory overlay associated with the trajectory from the VPM. As a result, the object is displayed in relation to the overlay.
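The trajectory-generation step can be approximated with a kinematic bicycle model: the steering angle fixes a turn radius, and the overlay is an arc sampled along that radius. The wheelbase, arc length, and function name below are illustrative assumptions, not values from the abstract.

```python
import math

def trajectory_points(steer_angle_rad, wheelbase_m=2.8, arc_len_m=5.0, n=6):
    """Sample a predicted-path arc from the front-wheel steering angle
    using a simple kinematic bicycle model (hypothetical parameters).
    Returns (x, y) points in vehicle coordinates, x forward, y left."""
    if abs(steer_angle_rad) < 1e-6:
        # Straight ahead: evenly spaced points along the x axis.
        return [(arc_len_m * i / (n - 1), 0.0) for i in range(n)]
    radius = wheelbase_m / math.tan(steer_angle_rad)  # signed turn radius
    pts = []
    for i in range(n):
        s = arc_len_m * i / (n - 1)   # distance traveled along the arc
        theta = s / radius            # heading change at that distance
        pts.append((radius * math.sin(theta),
                    radius * (1.0 - math.cos(theta))))
    return pts
```

A real VPM would then project these ground-plane points into the processed image before drawing the overlay; that projection step is omitted here.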
Abstract:
A vehicle includes at least one imaging device configured to generate image data indicative of a vicinity of the vehicle. The vehicle also includes a user interface display configured to display image data from the at least one imaging device. A vehicle controller is programmed to monitor image data for the presence of moving external objects within the vicinity, and to activate the user interface display to display image data in response to detecting a moving external object in the vicinity while the vehicle is at a rest condition. The controller is also programmed to assign a threat assessment value based on conditions in the vicinity of the vehicle, and upload image data to an off-board server in response to the threat assessment value being greater than a first threshold.
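The threshold logic in the last sentence can be sketched as a weighted score over vicinity conditions. The condition names, weights, and threshold below are hypothetical, since the abstract does not specify how the threat assessment value is computed.

```python
def assess_threat(conditions, first_threshold=0.5):
    """Combine boolean vicinity conditions into a threat assessment
    value and decide whether to upload image data to the off-board
    server. Weights and threshold are illustrative assumptions."""
    weights = {"moving_object": 0.4, "person_lingering": 0.4, "night": 0.2}
    value = sum(w for name, w in weights.items() if conditions.get(name))
    return value, value > first_threshold
```

Under these assumed weights, a single moving object stays below the first threshold, while a moving object plus a lingering person exceeds it and triggers the upload.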
Abstract:
A smart sensor-cover apparatus for covering a sensor, such as a vehicle sensor, includes controllable layers responsive to inputs, such as a wavelength-filtering controllable layer to selectively filter out select wavelengths of light; a polarizing controllable layer to selectively polarize or pass light; a concealing controllable layer to change between a visible state and a concealed state; and an outermost, cleaning, layer configured to melt incident ice. The outermost layer in various embodiments has an outer surface positioned generally flush with an outer vehicle surface for operation of the apparatus, to promote the concealing effect when the concealing layer is not activated. The outermost layer may be configured to self-mend when scratched, and in some cases has a hydrophobic, hydrophilic, or superhydrophilic outer surface. An insulating component, such as a glass or polycarbonate layer, is positioned between each pair of adjacent controllable layers.
Abstract:
Methods and apparatus are provided for cleaning a sensor lens cover for an optical vehicle sensor. The method includes monitoring the sensor lens cover for a contaminant obstructing at least a portion of the sensor lens cover and determining the presence of the contaminant and a contaminant type using information provided by one or more vehicle sensors. A cleaning modality selected based on the contaminant type is activated, and it is determined whether the cleaning modality has removed the contaminant from the sensor lens cover.
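The modality-selection step can be sketched as a lookup from classified contaminant type to cleaning action. The contaminant categories and modality names here are hypothetical, as the abstract does not enumerate them.

```python
def select_cleaning_modality(contaminant_type):
    """Map a classified contaminant type to a cleaning modality.
    The categories and modality names are illustrative assumptions."""
    modalities = {
        "water": "air_blast",
        "mud": "washer_fluid_spray",
        "ice": "lens_heater",
        "dust": "air_blast",
    }
    # Fall back to a fluid spray when the type is unrecognized.
    return modalities.get(contaminant_type, "washer_fluid_spray")
```

A full implementation would re-run the detection step after activating the selected modality to verify that the contaminant was removed, as the abstract describes.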
Abstract:
A method of detecting an intrusion includes sending an activation command to an intrusion detection system. In response to the activation command, at least one camera is activated. At least one image is obtained from the at least one camera representative of a surrounding area of the at least one camera. The at least one image is analyzed to determine if the intrusion is detected. An operator is then notified of the presence or absence of the intrusion.
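The activation-to-notification sequence can be sketched with each step injected as a callable; all names below are hypothetical stand-ins, since the abstract does not specify the camera interface or the analysis method.

```python
def run_intrusion_check(activate_camera, capture_image, analyze, notify):
    """Run one pass of the flow in the abstract: activate the camera,
    obtain an image of its surrounding area, analyze the image, and
    notify the operator of the presence or absence of an intrusion."""
    activate_camera()
    image = capture_image()
    intrusion_detected = analyze(image)
    notify("intrusion detected" if intrusion_detected else "no intrusion")
    return intrusion_detected
```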
Abstract:
A system and method for creating an enhanced virtual top-down view of an area in front of a vehicle, using images from left-front and right-front cameras. The enhanced virtual top-down view not only provides the driver with a top-down view perspective which is not directly available from raw camera images, but also removes the distortion and exaggerated perspective effects which are inherent in wide-angle lens images. The enhanced virtual top-down view also includes corrections for three types of problems which are typically present in de-warped images: artificial protrusion of vehicle body parts into the image, low resolution and noise around the edges of the image, and a "double vision" effect for objects above ground level.
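Per pixel, synthesizing a virtual top-down view from a calibrated camera is a projective (homography) mapping between the image plane and the ground plane. The sketch below applies an arbitrary illustrative 3x3 matrix; a real system would derive the matrix from camera calibration and would additionally perform the wide-angle de-warping and corrections the abstract describes.

```python
def warp_point(H, u, v):
    """Map pixel (u, v) through a 3x3 homography H (nested lists) to
    ground-plane coordinates, applying the usual perspective divide."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w
```

Applying this mapping to every pixel of a source image (or, in practice, the inverse mapping from each top-down pixel back into the source) yields the virtual top-down view.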
Abstract:
Examples of techniques for road feature detection using a vehicle camera system are disclosed. In one example implementation, a computer-implemented method includes receiving, by a processing device, an image from a camera associated with a vehicle on a road. The computer-implemented method further includes generating, by the processing device, a top view of the road based at least in part on the image. The computer-implemented method further includes detecting, by the processing device, lane boundaries of a lane of the road based at least in part on the top view of the road. The computer-implemented method further includes detecting, by the processing device, a road feature within the lane boundaries of the lane of the road using machine learning.
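The staged method can be sketched as a pipeline where each stage is a callable. All stage implementations below are trivial stand-ins: the abstract does not specify the top-view transform, the lane boundary detector, or the machine-learning model.

```python
def detect_road_feature(image, to_top_view, find_lane, classify):
    """Run the stages in order: generate the top view, detect lane
    boundaries, then apply the learned detector only within the lane."""
    top = to_top_view(image)        # bird's-eye-view transform (stub)
    left, right = find_lane(top)    # lateral extent of the lane (stub)
    in_lane = [p for p in top if left <= p[0] <= right]
    return classify(in_lane)        # ML feature detection (stub)
```

With toy stand-ins, for instance an identity top-view transform over (x, label) points, a fixed lane extent, and a classifier that scans labels, the stages compose exactly as the abstract orders them.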