Abstract:
A vehicular vision system includes a camera and an image processor operable to process captured image data. When the vehicle is towing a trailer, and based at least in part on image processing of image data captured by the camera during maneuvering of the vehicle and the trailer, the vision system estimates a length of the trailer. Image data captured during maneuvering of the vehicle and the trailer includes image data captured when the vehicle is maneuvered with the trailer at an angle relative to the vehicle. The vision system, responsive at least in part to image processing of captured image data, determines a trailer angle of the towed trailer and is operable to determine a path of the trailer responsive to a steering angle of the vehicle, the determined trailer angle, and the estimated length of the trailer.
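As an illustration of how such a path determination can work, the following is a minimal sketch of a standard kinematic vehicle-and-trailer model that propagates the hitch angle and trailer-axle position from the vehicle speed, steering angle, measured trailer angle, and estimated trailer length. It is not the patented method; the wheelbase value, the assumption that the hitch sits at the vehicle's rear axle, and all function names and step sizes are assumptions made for the example.

import math

def predict_trailer_path(v, steer_angle, trailer_angle, trailer_len,
                         wheelbase=3.0, dt=0.05, steps=100):
    """Integrate a simple kinematic vehicle-plus-trailer model.

    v             -- vehicle speed (m/s, negative when reversing)
    steer_angle   -- front-wheel steering angle (rad)
    trailer_angle -- current hitch angle of trailer relative to vehicle (rad)
    trailer_len   -- estimated hitch-to-axle length of the trailer (m)
    Returns a list of (x, y) trailer-axle positions in the vehicle's starting frame.
    """
    x_v = y_v = yaw_v = 0.0   # vehicle rear-axle pose
    psi = trailer_angle       # hitch angle (trailer heading minus vehicle heading)
    path = []
    for _ in range(steps):
        yaw_rate = v * math.tan(steer_angle) / wheelbase
        # hitch-angle rate for a hitch located at the rear axle
        psi += (-(v / trailer_len) * math.sin(psi) - yaw_rate) * dt
        x_v += v * math.cos(yaw_v) * dt
        y_v += v * math.sin(yaw_v) * dt
        yaw_v += yaw_rate * dt
        yaw_t = yaw_v + psi   # trailer heading
        path.append((x_v - trailer_len * math.cos(yaw_t),
                     y_v - trailer_len * math.sin(yaw_t)))
    return path

In this convention the hitch angle decays toward zero during straight forward travel, matching the intuition that a trailer self-aligns behind a forward-moving vehicle.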
Abstract:
A vision system for a vehicle includes a plurality of cameras having respective fields of view exterior of the vehicle. One of the cameras functions as a master camera and the others function as slave cameras. Responsive to processing of captured image data, the vision system is operable to synthesize a composite image derived from image data captured by at least two of the cameras, at least one of which is a slave camera. Operating parameters of the master camera are used by the slave cameras. An electronic control unit sends operating parameters via an Ethernet connection to a slave camera so that image sections of the composite image, when displayed to a driver of the vehicle by a display device of the vehicle, appear uniform in at least one of (i) brightness at the borders of the sections and (ii) color at the borders of the sections.
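A minimal sketch of how the master camera's settings might be pushed to the slave cameras is shown below, assuming a simple UDP/JSON message over the Ethernet connection; the field names, addresses, port and broadcast_master_params function are illustrative assumptions, not the actual ECU or camera protocol.

import json
import socket

def broadcast_master_params(master_params, slave_addrs, port=5000):
    """Send the master camera's exposure/gain/white-balance settings to each
    slave camera over Ethernet so that the stitched sections of the composite
    image render with matching brightness and color at their borders."""
    payload = json.dumps({
        "exposure_us": master_params["exposure_us"],
        "analog_gain": master_params["analog_gain"],
        "awb_gains": master_params["awb_gains"],   # (red, blue) white-balance gains
    }).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for addr in slave_addrs:
            sock.sendto(payload, (addr, port))

# Example (loopback address stands in for a slave camera's Ethernet address):
broadcast_master_params(
    {"exposure_us": 12000, "analog_gain": 2.0, "awb_gains": (1.6, 1.9)},
    ["127.0.0.1"],
)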
Abstract:
A vehicle vision system for a vehicle includes an image sensor having a field of view and capturing image data of a scene exterior of the vehicle. A monitor monitors electrical power consumption of the vehicle. At least one lighting system draws electrical power from the vehicle when operated. An image processor processes image data captured by the image sensor. The electrical power drawn by the at least one lighting system is varied at least in part responsive to processing of captured image data by the image processor in order to adjust fuel consumption by the vehicle.
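A minimal sketch of the control idea follows, assuming an 8-bit grayscale frame, a simple proportional dimming rule, and a safety floor; all of these are assumptions for illustration rather than the patented implementation.

import numpy as np

def lighting_power_fraction(gray_frame, min_fraction=0.2):
    """Map the mean brightness of a captured frame (8-bit grayscale) to a
    lighting drive level: the brighter the imaged scene, the less electrical
    power the lighting system draws, reducing alternator load and thereby
    fuel consumption."""
    mean_luma = float(np.mean(gray_frame)) / 255.0
    # Dim the lamps proportionally, but never below a safety floor.
    return max(min_fraction, 1.0 - mean_luma)

# Example with a synthetic mid-brightness frame.
frame = np.full((480, 640), 128, dtype=np.uint8)
print(lighting_power_fraction(frame))   # roughly 0.5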
Abstract:
A dynamic calibration method for calibrating a trailer angle detection system of a vehicle towing a trailer includes providing cameras configured to be disposed at the vehicle so as to have respective fields of view. Image data captured by at least some of the cameras is processed as the vehicle is driven forwardly and towing the trailer. A location of a portion of the trailer is determined via processing of captured image data. Responsive at least in part to processing of captured image data, a plurality of trailer parameters and vehicle-trailer interface parameters are determined. The plurality of trailer parameters and vehicle-trailer interface parameters are determined while the vehicle is driven forwardly and towing the trailer. The trailer angle detection system of the vehicle is calibrated responsive to determination that the trailer portion is not where the system expects it to be when the vehicle is traveling straight forward.
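One way to picture the calibration step is as a zero-angle offset estimated during straight forward travel: when the tracked trailer feature is not where the system expects it, the persistent residual becomes an offset applied to later measurements. The sketch below assumes angle measurements in degrees and a tolerance threshold; the names and values are illustrative, not the patented procedure.

def calibrate_zero_offset(measured_angles_deg, tolerance_deg=0.5):
    """While the vehicle tows the trailer straight forward, the detected
    trailer angle should read zero. The average residual over a straight
    driving segment is stored as a zero-angle offset if it exceeds the
    tolerance, and is subtracted from subsequent raw measurements."""
    bias = sum(measured_angles_deg) / len(measured_angles_deg)
    return bias if abs(bias) > tolerance_deg else 0.0

# Example: straight-ahead driving shows the trailer feature about 2 degrees off.
offset = calibrate_zero_offset([1.8, 2.1, 2.0, 1.9])
corrected = 12.0 - offset   # apply the offset to a later raw measurement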
Abstract:
A vehicular vision system includes a plurality of cameras mounted at a vehicle, with each camera including a respective image sensor and having a respective field of view exterior of the vehicle. The system includes a control and a video output for transmitting a stream of video captured by an image sensor of a camera of the plurality of cameras, and a serial data interface permitting a microcontroller of the control to communicate with at least one electronic device of the vehicle. A switch is openable by the microcontroller to deactivate the video output and closable by the microcontroller to activate the video output. The microcontroller complies with messages received via a serial data bus. The control sends instructions to a camera of the plurality of cameras via the serial data bus and the control receives messages from an electronic device of the vehicle via the serial data bus.
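A minimal sketch of the switch-control logic, assuming text opcodes on the serial data bus and a two-state switch; the message names and handler are illustrative assumptions, not the actual vehicle protocol.

from enum import Enum

class VideoSwitch(Enum):
    OPEN = 0    # video output deactivated
    CLOSED = 1  # video output activated

def handle_bus_message(message, switch_state):
    """Comply with a message received over the serial data bus by opening or
    closing the switch that gates the camera's video output; unrelated
    messages leave the switch unchanged."""
    if message == "ACTIVATE_VIDEO":
        return VideoSwitch.CLOSED
    if message == "DEACTIVATE_VIDEO":
        return VideoSwitch.OPEN
    return switch_state

state = VideoSwitch.OPEN
for msg in ["DEACTIVATE_VIDEO", "ACTIVATE_VIDEO"]:
    state = handle_bus_message(msg, state)
print(state)   # VideoSwitch.CLOSED -> the video stream is being transmitted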
Abstract:
A method of assembling a vehicular camera includes providing a front camera housing and a lens assembly and dispensing an adhesive bead in an uncured state at at least one of (i) an attaching portion of the lens assembly and (ii) the front camera housing. The attaching portion and the front camera housing are mated together with the adhesive bead therebetween. With the attaching portion and front camera housing mated together, lens optics of the lens assembly are aligned with respect to an imaging array of the front camera housing. After such alignment, the adhesive bead is cured to a first cure level via ultraviolet light exposure to join the lens assembly and the front camera housing. The lens assembly and front camera housing so joined are moved to a further curing station, where the adhesive bead is further cured to a second cure level.
Abstract:
A method of assembling a vehicular camera includes providing a lens assembly having a base portion, a lens barrel and a plurality of optical elements in the lens barrel, and providing a circuit element having a circuit board and an imaging array. An adhesive bead is dispensed at the base portion and/or circuit element. The circuit element is placed at the base portion with the adhesive bead therebetween and the optical elements are aligned with the imaging array via a six axis robotic device when the circuit element is at the base portion and in contact with the adhesive bead. The adhesive bead is cured to a first cure level via exposure of the adhesive bead to ultraviolet light. The assembly is moved to a second curing stage and the adhesive bead is cured to a second cure level via heating the adhesive bead.
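The alignment step can be pictured as an active-alignment search that maximizes a sharpness score read from live imager frames while the adhesive is still uncured. The coordinate-descent sketch below is illustrative only; the robot and capture_sharpness interfaces, the axis names and the step size are assumptions, not the patented process.

def active_align(robot, capture_sharpness,
                 axes=("x", "y", "z", "rx", "ry", "rz"), step=0.01):
    """Nudge the lens/imager pose along each of the six axes and keep any
    move that improves the sharpness score computed from live imager frames;
    otherwise undo the move and try the opposite direction."""
    best = capture_sharpness()
    for axis in axes:
        for direction in (+step, -step):
            robot.move(axis, direction)
            score = capture_sharpness()
            if score > best:
                best = score                      # keep the improving move
            else:
                robot.move(axis, -direction)      # undo and try the other way
    return best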
Abstract:
A vehicular imaging system includes an imaging device having a single imaging sensor capturing image data within a field of view. A control within the vehicle includes an image processor and receives image data captured by the single imaging sensor and receives vehicle data via a communication bus of the vehicle. Responsive at least in part to image processing of captured image data, the control detects converging road features along the road on which the vehicle is traveling and determines a point of intersection where the converging road features would converge. Responsive at least in part to image processing of captured image data, the control automatically corrects for misalignment of the imaging device mounted at the vehicle.
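A minimal sketch of the vanishing-point idea: intersect two detected converging road features (for example, the left and right lane edges) and convert the offset of that intersection from the expected principal point into approximate yaw and pitch errors. The focal length, principal point and example coordinates are assumptions for illustration, not the patented algorithm.

import math

def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1-p2 with the line through p3-p4,
    all in image (pixel) coordinates."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return px, py

def misalignment_deg(vanishing_pt, principal_pt, focal_px):
    """Convert the offset between the observed vanishing point and the
    expected principal point into approximate yaw and pitch errors (degrees)."""
    dx = vanishing_pt[0] - principal_pt[0]
    dy = vanishing_pt[1] - principal_pt[1]
    return (math.degrees(math.atan2(dx, focal_px)),
            math.degrees(math.atan2(dy, focal_px)))

# Example: two lane edges converging slightly below the expected horizon.
vp = line_intersection((100, 700), (580, 400), (1180, 700), (700, 400))
print(misalignment_deg(vp, (640, 360), focal_px=800))   # (0.0, ~0.18) degrees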
Abstract:
An imaging system for a vehicle includes an imaging sensor and a video display device. The imaging system generates an overlay that is electronically superimposed on the displayed images to assist a driver of the vehicle when executing a backup maneuver. The overlay has first, second and third overlay zones, with the overlay zones indicative of respective distance ranges extending from the rear of the vehicle to respective first, second and third distances. As indicated to the driver viewing the video display screen when executing a backup maneuver, the first distance is closer to the rear of the vehicle than the second distance and the second distance is closer to the rear of the vehicle than the third distance. The first overlay zone may be a first color, the second overlay zone may be a second color and the third overlay zone may be a third color.
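A minimal sketch of how such distance-banded overlay zones might be represented, with the zone boundaries and colors as illustrative assumptions rather than values from the system described:

OVERLAY_ZONES = [
    {"name": "first",  "near_m": 0.0, "far_m": 1.0, "color": "red"},
    {"name": "second", "near_m": 1.0, "far_m": 2.5, "color": "yellow"},
    {"name": "third",  "near_m": 2.5, "far_m": 4.0, "color": "green"},
]

def zone_for_distance(distance_m):
    """Return the overlay zone whose distance range from the rear of the
    vehicle contains the given distance, or None beyond the third zone."""
    for zone in OVERLAY_ZONES:
        if zone["near_m"] <= distance_m < zone["far_m"]:
            return zone
    return None

print(zone_for_distance(1.7)["color"])   # 'yellow'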