Abstract:
A vehicular vision system includes a plurality of cameras and a processor operable to process image data captured by the cameras to generate images derived from image data captured by at least some of the cameras. A display screen, viewable by a driver of the vehicle, displays the generated images and a three dimensional vehicle representation as would be viewed from a virtual camera viewpoint exterior to and higher than the vehicle itself. A portion of the displayed vehicle representation may be at least partially transparent to enable viewing at the display screen of an object present exterior of the vehicle that would otherwise be partially hidden by non-transparent display of that portion of the vehicle representation. The three dimensional representation may include a vector model without solid surfaces, or may include a shape, body type, body style and/or color corresponding to that of the actual vehicle.
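A minimal sketch of the kind of alpha compositing such partial transparency could use, assuming a per-pixel mask of the rendered vehicle and an opacity value chosen by the system (the function name, array layout and `alpha` parameter are illustrative, not taken from the abstract):

```python
import numpy as np

def composite_vehicle_overlay(scene_rgb, vehicle_rgb, vehicle_mask, alpha=0.4):
    """Blend the rendered 3D vehicle representation over the surround-view scene.

    scene_rgb:    HxWx3 float array, scene as seen from the virtual viewpoint
    vehicle_rgb:  HxWx3 float array, rendered vehicle model from the same viewpoint
    vehicle_mask: HxW boolean array, True where the vehicle model covers a pixel
    alpha:        opacity of the vehicle body (0 = fully transparent, 1 = opaque)
    """
    out = scene_rgb.copy()
    # Mix the two images where the vehicle model covers the scene, so that an
    # object behind the partially transparent body remains visible on the display.
    out[vehicle_mask] = (alpha * vehicle_rgb[vehicle_mask]
                         + (1.0 - alpha) * scene_rgb[vehicle_mask])
    return out
```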
Abstract:
A vehicular lighting control system includes a controller configured to be disposed at a vehicle and operable to control interior lighting of the vehicle, with the interior lighting including (i) an interior light of the vehicle and/or (ii) a backlighting light of a display of the vehicle. The controller adjusts color of the interior lighting for different driving conditions. During daytime, the controller adjusts color of the interior lighting of the vehicle to follow a daytime color scheme. Responsive to a navigation input, the controller determines whether the estimated arrival time of the vehicle at an input destination is after daytime. Responsive to determination that the estimated arrival time of the vehicle is after daytime, the controller maintains the daytime color scheme for the interior lighting of the vehicle until the vehicle arrives at the input destination after daytime.
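One way such scheme-holding logic could be organized, sketched under the assumption that "daytime" is bounded by a known sunset time and that the hold decision is made once when the destination is entered (all names and values here are illustrative):

```python
from datetime import datetime, time

DAYTIME_SCHEME = "daytime"      # e.g. cooler, brighter interior colors (assumed)
NIGHTTIME_SCHEME = "nighttime"  # e.g. warmer, dimmer interior colors (assumed)

def plan_scheme_hold(route_entry_time: datetime, estimated_arrival: datetime,
                     sunset: time) -> bool:
    """Decide, when a destination is entered during daytime, whether the daytime
    scheme should be held until arrival because arrival falls after daytime."""
    entered_during_daytime = route_entry_time.time() < sunset
    arrival_after_daytime = estimated_arrival.time() >= sunset
    return entered_during_daytime and arrival_after_daytime

def select_interior_scheme(now: datetime, sunset: time,
                           hold_daytime_until_arrival: bool,
                           arrived: bool) -> str:
    """Return the interior-lighting scheme for the current moment; the hold flag
    keeps the daytime scheme even after sunset, until the vehicle arrives."""
    if hold_daytime_until_arrival and not arrived:
        return DAYTIME_SCHEME
    return DAYTIME_SCHEME if now.time() < sunset else NIGHTTIME_SCHEME
```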
Abstract:
A method for generating surround view images derived from image data captured by cameras of a vehicular surround view vision system includes equipping a vehicle with a plurality of cameras disposed at the vehicle. Image data is captured by the cameras and provided to a control of the vehicle. The provided captured image data is processed to generate a first three-dimensional representation in accordance with a first curved surface model as if seen by a virtual observer from a first virtual viewing point exterior of the vehicle and having a first viewing direction. The first representation is output to a display screen of the vehicle for display to the driver of the vehicle. Responsive to actuation of a user input, the processing is adjusted to generate a second three-dimensional representation in accordance with a second curved surface model, and the second representation is output to the display screen.
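A minimal sketch of what switching between two curved ("bowl") surface models might look like, assuming a simple radially symmetric surface and ground points expressed in vehicle coordinates (the model parameters and names are assumptions, not taken from the abstract):

```python
import numpy as np

def bowl_surface(radius_flat, curvature, r):
    """Height of a simple radially symmetric projection surface at ground
    distance r from the vehicle: flat out to radius_flat, then rising."""
    return np.where(r <= radius_flat, 0.0, curvature * (r - radius_flat) ** 2)

# Two assumed curved-surface models the driver could toggle between.
MODEL_A = dict(radius_flat=3.0, curvature=0.15)   # tight bowl, emphasizes the near field
MODEL_B = dict(radius_flat=6.0, curvature=0.05)   # wide bowl, flatter far field

def project_ground_points(points_xy, model):
    """Lift ground-plane points (Nx2, vehicle coordinates) onto the chosen bowl
    surface, giving the 3D mesh onto which the camera textures would be mapped
    for rendering from the virtual viewing point."""
    r = np.hypot(points_xy[:, 0], points_xy[:, 1])
    z = bowl_surface(model["radius_flat"], model["curvature"], r)
    return np.column_stack([points_xy, z])
```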
Abstract:
A method for stitching images captured by multiple vehicular cameras includes disposing a plurality of cameras at a vehicle so as to have respective fields of view exterior of the vehicle. Image data captured by first and second cameras of the plurality of cameras is processed to detect an object present in an overlapping portion of the fields of view of the first and second cameras. Image data captured by the first and second cameras is stitched, via processing of the provided captured image data, to form stitched images. Stitching of captured image data is adjusted responsive to determination of a difference between a characteristic of a feature of a detected object as captured by the first camera and the characteristic of the feature of the detected object as captured by the second camera in order to mitigate misalignment of stitched images.
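A minimal sketch of how a measured difference in a feature's position between the two cameras' overlapping views could be fed back into the stitch, assuming a 3x3 transform places the second camera's image in the stitched frame (the names, gain value and translational-only correction are illustrative assumptions):

```python
import numpy as np

def stitch_offset(feature_pos_cam1, feature_pos_cam2):
    """Difference between where the same feature of the detected object lands
    in the stitched frame when taken from camera 1 versus camera 2."""
    return np.asarray(feature_pos_cam2, float) - np.asarray(feature_pos_cam1, float)

def adjust_stitch(cam2_to_stitch, offset, gain=0.25):
    """Nudge the 3x3 transform that places camera 2's image in the stitched
    frame by a fraction of the measured offset, so the correction converges
    over several frames instead of jumping."""
    corrected = cam2_to_stitch.copy()
    corrected[:2, 2] -= gain * offset   # translational part of the transform
    return corrected
```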
Abstract:
A vehicular control system includes a camera and a receiver disposed at a vehicle equipped with the vehicular control system. A control includes an image processor that processes image data captured by the camera to detect vehicles present within the field of view of the camera. The vehicular control system determines presence of another vehicle that constitutes a potential hazard existing exterior of the equipped vehicle responsive at least in part to a wireless communication originating from the other vehicle and received by the receiver. When the other vehicle enters the field of view of the camera, and responsive at least in part to the image processor processing image data captured by the camera, the control detects that the other vehicle is present within the field of view of the camera and controls at least one vehicle function of the equipped vehicle to mitigate collision with the other vehicle.
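A minimal sketch of how the wireless message and the camera detection could be combined, assuming the wireless sender has already been associated with a camera track ID (the message fields, thresholds and returned action labels are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class V2VMessage:
    sender_id: str
    position: tuple   # (x, y) relative to the equipped vehicle, meters
    speed: float      # m/s

def assess_hazard(msg: V2VMessage, camera_track_ids: set,
                  hazard_range: float = 50.0) -> str:
    """Combine a received wireless message with camera detections.

    Returns 'actuate' once a vehicle flagged as a potential hazard from its
    wireless broadcast is also confirmed by the image processor (its ID appears
    among the camera tracks); 'warn' while it is known only from the wireless
    message; 'none' otherwise.
    """
    distance = (msg.position[0] ** 2 + msg.position[1] ** 2) ** 0.5
    potential_hazard = distance < hazard_range
    if potential_hazard and msg.sender_id in camera_track_ids:
        return "actuate"   # in the camera's field of view -> control a vehicle function
    return "warn" if potential_hazard else "none"
```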
Abstract:
A vehicle vision system includes a plurality of cameras having respective fields of view exterior of the vehicle. A processor is operable to process image data captured by the cameras and to generate images of the environment surrounding the vehicle. The processor is operable to generate a three dimensional vehicle representation of the vehicle. A display screen is operable to display the generated images of the environment surrounding the vehicle and to display the generated vehicle representation of the equipped vehicle as would be viewed from a virtual camera viewpoint. At least one of (a) a degree of transparency of at least a portion of the displayed vehicle representation is adjustable by the system, (b) the vehicle representation comprises a vector model and (c) the vehicle representation comprises a shape, body type, body style and/or color corresponding to that of the actual equipped vehicle.
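One illustrative way the degree of transparency could be made adjustable by the system, sketched under the assumption that the renderer knows how many detected objects the vehicle body would hide from the current virtual viewpoint (the values are placeholders):

```python
def vehicle_opacity(objects_hidden: int, base_alpha: float = 0.9,
                    min_alpha: float = 0.3) -> float:
    """Choose the opacity of the displayed vehicle representation: the more
    detected objects the rendered body would hide from the current virtual
    viewpoint, the more transparent it is drawn."""
    return max(min_alpha, base_alpha - 0.2 * objects_hidden)
```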
Abstract:
A driver assistance system for a vehicle includes a plurality of sensors disposed at a vehicle and operable to detect objects at least one of ahead of the vehicle and sideward of the vehicle. The driver assistance system includes a data processor operable to process data captured by the sensors to determine the presence of objects ahead and/or sideward of the vehicle. Responsive to the data processing, the driver assistance system is operable to determine at least one of respective speeds of the determined objects and respective directions of travel of the determined objects. The driver assistance system is operable to determine respective influence values for the determined objects. Responsive to the respective determined speeds and/or directions of travel of the determined objects and responsive to the determined respective influence values, at least one path of travel for the vehicle is determined that limits conflict with the determined objects.
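A minimal sketch of influence-value-based path selection, assuming each detected object carries a position, speed and a closing/receding flag and that candidate paths are given as point lists (the influence heuristic and weighting are illustrative assumptions):

```python
import math

def influence_value(obj_speed: float, closing: bool) -> float:
    """Assumed influence heuristic: faster objects, and objects moving toward
    the candidate path, are weighted more heavily."""
    return (1.0 + obj_speed) * (2.0 if closing else 1.0)

def path_conflict_cost(path_points, objects):
    """Sum each object's influence, weighted by its proximity to the candidate
    path (objects: dicts with 'pos' (x, y), 'speed' and 'closing' entries)."""
    cost = 0.0
    for obj in objects:
        ox, oy = obj["pos"]
        d_min = min(math.hypot(px - ox, py - oy) for px, py in path_points)
        cost += influence_value(obj["speed"], obj["closing"]) / (d_min + 1.0)
    return cost

def choose_path(candidate_paths, objects):
    """Pick the candidate path that limits conflict with the detected objects."""
    return min(candidate_paths, key=lambda path: path_conflict_cost(path, objects))
```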
Abstract:
A vision system for a vehicle includes a camera disposed at the vehicle and having a field of view exterior of the vehicle. The camera includes an RGB photosensor array having multiple rows of photosensing elements and multiple columns of photosensing elements. An in-line dithering algorithm is applied to individual lines of photosensing elements of the photosensor array in order to reduce at least one of color data transmission and color data processing. The in-line dithering algorithm includes at least one of an in-row dithering algorithm that is applied to individual rows of photosensing elements of the photosensor array and an in-column dithering algorithm that is applied to individual columns of photosensing elements of the photosensor array. The in-line dithering algorithm may be operable to determine most significant bits and least significant bits of color data of photosensing elements of the photosensor array.
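A minimal sketch of a one-dimensional (in-row) error-diffusion form of such in-line dithering, keeping the most significant bits of each value and carrying the dropped least-significant-bit error along the row (the bit counts and truncation scheme are illustrative assumptions):

```python
import numpy as np

def in_row_dither(row, keep_bits=5, total_bits=8):
    """Quantize one row of a color channel to its most significant bits,
    diffusing the dropped least-significant-bit error to the next pixel in the
    same row (a simple 1-D error-diffusion form of in-line dithering)."""
    shift = total_bits - keep_bits
    max_code = (1 << total_bits) - (1 << shift)
    out = np.empty_like(row)
    error = 0.0
    for i, value in enumerate(row.astype(float)):
        value += error                                      # carry the error along the row
        quantized = (int(round(value)) >> shift) << shift   # keep the MSBs only
        quantized = max(0, min(max_code, quantized))
        out[i] = quantized
        error = value - quantized                           # LSB remainder goes to the next pixel
    return out
```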
Abstract:
A vision system of a vehicle includes a camera disposed at the vehicle and having a field of view exterior of the vehicle. The camera includes an imaging array having a plurality of photosensing elements arranged in a two dimensional array of rows and columns. The imaging array includes a plurality of sub-arrays comprising respective groupings of neighboring photosensing elements. An image processor is operable to perform a discrete cosine transformation of captured image data, and a Markov model compares at least one sub-array with a neighboring sub-array. The image processor is operable to adjust a classification of a sub-array responsive at least in part to the discrete cosine transformation and the Markov model.
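A minimal sketch of the two ingredients, assuming 8x8 sub-arrays, a numpy-only 2-D DCT-II, and a simple neighbor-transition score standing in for the Markov-model comparison (the labels, thresholds and adjustment rule are illustrative assumptions):

```python
import numpy as np

def dct2(block):
    """2-D DCT-II of a square sub-array (e.g. an 8x8 grouping of photosensing
    elements), using the orthonormal DCT basis matrix."""
    n = block.shape[0]
    k = np.arange(n)
    basis = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0, :] = np.sqrt(1.0 / n)
    return basis @ block @ basis.T

def neighbor_transition_score(coeffs_prev, coeffs_curr):
    """Stand-in for the Markov-style comparison of neighboring sub-arrays:
    how strongly low-frequency content carries over from one block to the next."""
    low_prev = np.abs(coeffs_prev[:2, :2]).sum()
    low_curr = np.abs(coeffs_curr[:2, :2]).sum()
    return low_curr / (low_prev + 1e-6)

def adjust_classification(label, coeffs, neighbor_coeffs, energy_threshold=50.0):
    """Flip a sub-array's 'flat'/'textured' label when its AC energy and the
    transition from its neighbor disagree with the current label."""
    ac_energy = np.abs(coeffs).sum() - np.abs(coeffs[0, 0])   # ignore the DC term
    score = neighbor_transition_score(neighbor_coeffs, coeffs)
    if label == "flat" and ac_energy > energy_threshold and score > 1.0:
        return "textured"
    if label == "textured" and ac_energy <= energy_threshold and score <= 1.0:
        return "flat"
    return label
```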
Abstract:
A vehicle vision system includes a plurality of cameras disposed at a vehicle and having respective exterior fields of view, and a display screen for displaying images derived from captured image data in a surround view format in which captured image data is merged to provide a single composite display image from a virtual viewing position. A gesture sensing device is operable to sense a gesture made by the driver of the vehicle. A control provides a selected displayed image for viewing by the driver to assist the driver during a particular driving maneuver. The control is responsive to sensing by the gesture sensing device, whereby the driver can adjust the displayed image by at least one of (a) touch and (b) gesture to adjust at least one of (i) a virtual viewing location, (ii) a virtual viewing angle, (iii) a degree of zoom and (iv) a degree of panning.
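A minimal sketch of mapping recognized gestures onto the four adjustable view parameters named above (the gesture names, scaling factors and limits are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class VirtualView:
    location: tuple = (0.0, -5.0, 3.0)   # virtual viewing location (x, y, z), meters
    yaw_deg: float = 0.0                 # virtual viewing angle
    zoom: float = 1.0                    # degree of zoom
    pan_deg: float = 0.0                 # degree of panning

def apply_gesture(view: VirtualView, gesture: str, amount: float) -> VirtualView:
    """Map a sensed touch or gesture onto a change of the displayed virtual view."""
    if gesture == "pinch":                 # pinch in/out -> degree of zoom
        view.zoom = max(0.5, min(4.0, view.zoom * (1.0 + amount)))
    elif gesture == "swipe_horizontal":    # swipe -> degree of panning
        view.pan_deg += 30.0 * amount
    elif gesture == "two_finger_rotate":   # rotate -> virtual viewing angle
        view.yaw_deg += 45.0 * amount
    elif gesture == "drag_vertical":       # drag -> raise/lower the virtual viewing location
        x, y, z = view.location
        view.location = (x, y, max(0.5, z + 2.0 * amount))
    return view
```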