Abstract:
A vehicular vision system includes a plurality of imaging sensors disposed at the vehicle and a display screen disposed in the vehicle. A processing system is operable to process captured image data and to combine and/or manipulate captured image data to provide a three-dimensional representation of the exterior scene for display at the display screen. The processing system is operable to process the captured image data in accordance with a curved surface model, and is operable to process the image data to provide the three-dimensional representation as if seen by a virtual observer from a first virtual viewing point exterior of the vehicle having a first viewing direction. The processing system is operable to adjust the curved surface model when displaying the three-dimensional representation from a second virtual viewing point exterior of the vehicle having a second viewing direction to provide enhanced display of the images.
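As an illustration of the general idea only (not the patented implementation), the following Python sketch projects points of a hypothetical "bowl"-shaped curved surface model into a virtual pinhole camera placed at a chosen virtual viewing point and viewing direction. The surface parameters, camera intrinsics and viewpoints are assumptions made for the example.

```python
import numpy as np

def bowl_surface(radius_flat=3.0, curvature=0.4, grid=64, extent=10.0):
    """Sample a curved ("bowl") surface model around the vehicle:
    flat near the vehicle, rising quadratically beyond radius_flat."""
    xs = np.linspace(-extent, extent, grid)
    ys = np.linspace(-extent, extent, grid)
    X, Y = np.meshgrid(xs, ys)
    R = np.sqrt(X**2 + Y**2)
    Z = np.where(R < radius_flat, 0.0, curvature * (R - radius_flat) ** 2)
    return np.stack([X, Y, Z], axis=-1).reshape(-1, 3)

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """Build a world-to-camera rotation for a virtual observer at `eye`
    looking toward `target` (the virtual viewing direction)."""
    fwd = np.asarray(target, float) - np.asarray(eye, float)
    fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right /= np.linalg.norm(right)
    down = np.cross(fwd, right)
    return np.stack([right, down, fwd])  # rows: camera x, y, z axes

def render_points(points, eye, target, f_px=400.0, cx=320.0, cy=240.0):
    """Project surface points into the virtual camera (pinhole model)."""
    R = look_at(eye, target)
    cam = (points - np.asarray(eye, float)) @ R.T
    cam = cam[cam[:, 2] > 0.1]            # keep points in front of the camera
    u = f_px * cam[:, 0] / cam[:, 2] + cx
    v = f_px * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=-1)

# First virtual viewing point (behind and above the vehicle), then a second
# viewpoint; the surface model parameters may be adjusted per viewpoint.
surface = bowl_surface()
uv_view1 = render_points(surface, eye=(0, -8, 5), target=(0, 0, 0))
surface2 = bowl_surface(radius_flat=2.0, curvature=0.6)   # adjusted model
uv_view2 = render_points(surface2, eye=(6, -6, 4), target=(0, 0, 0))
print(uv_view1.shape, uv_view2.shape)
```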
Abstract:
In one aspect, the invention is directed to a parking assist system for a vehicle, wherein the parking assist system has two modes of operation. In a first mode a first overlay is added to an image of a rearward field of view displayed to the vehicle driver. The first overlay includes a representation of a target parking position. In a second mode a second overlay is added to the image of the rearward field of view displayed to the vehicle driver. The second overlay includes a representation of a projected path for the vehicle based on a current vehicle steering angle, and a representation of a target path segment for the vehicle.
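A projected path for a given steering angle is commonly approximated with a kinematic bicycle model; the hedged sketch below uses that approximation (the abstract does not specify the model), with an assumed wheelbase, step size and path length.

```python
import math

def projected_path(steer_angle_deg, wheelbase_m=2.7, step_m=0.25, length_m=6.0):
    """Approximate the projected path of the rear-axle midpoint for a fixed
    steering angle using a simple kinematic bicycle model (an illustrative
    approximation, not necessarily the patented method)."""
    delta = math.radians(steer_angle_deg)
    x, y, heading = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for _ in range(int(length_m / step_m)):
        x += step_m * math.cos(heading)
        y += step_m * math.sin(heading)
        heading += step_m * math.tan(delta) / wheelbase_m
        pts.append((x, y))
    return pts

# These ground-plane points would then be projected into the camera image
# before being drawn as an overlay on the displayed rearward view.
print(projected_path(15.0)[:5])
```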
Abstract:
In a first aspect, the invention is directed to a wheel assembly for a vehicle, including a non-rotating support member, a wheel and an electric motor. Loads incurred during vehicle use can cause dynamic flexing of portions of the wheel. The wheel assembly in accordance with the first aspect of the invention has a load path for loads incurred by the wheel that passes from the wheel to the non-rotating support member without passing through the motor, thereby reducing a potential source of distortion of the gap between the motor's rotor and stator during the aforementioned flexing.
Abstract:
In a mobile control node system and method for a vehicle (630), the mobile control node (624) can interact, via a bi-directional radio link (642), with a transceiver processor unit (628) in the vehicle. The transceiver processor unit (628) is connected to a vehicle control system (120) and allows the mobile control node (624) to function as an input and output node on a vehicle control network (632), allowing remote control of the vehicle and providing functions such as remote or passive keyless entry. Additionally, the system provides a vehicle location function wherein the range and bearing between the mobile control node (624) and the vehicle (630) can be determined and displayed on the mobile control node (624). The range and bearing are calculated by determining the range between the mobile control node (624) and vehicle (630), preferably using a time of flight methodology, and by processing the travel distance of the mobile control node and compass data in order to triangulate the position of the vehicle (630) relative to the mobile control node (624).
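The abstract does not detail the triangulation; the sketch below shows one plausible two-fix approach consistent with its description: intersect two time-of-flight range circles measured before and after the node travels a known distance along a compass heading, then report range and bearing relative to the node. The function names and numbers are illustrative.

```python
import math

def locate_vehicle(r1, r2, travel_m, heading_deg):
    """Estimate candidate vehicle positions relative to the node's first
    position by intersecting two time-of-flight range circles measured
    before and after the node moves `travel_m` along compass `heading_deg`."""
    hx = math.cos(math.radians(heading_deg))
    hy = math.sin(math.radians(heading_deg))
    d = travel_m
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance along the baseline
    h2 = r1**2 - a**2
    if h2 < 0:
        return None                        # ranges inconsistent with travel
    h = math.sqrt(h2)
    base = (a * hx, a * hy)
    # Two mirror-image solutions; further motion would disambiguate them.
    return [(base[0] - h * hy, base[1] + h * hx),
            (base[0] + h * hy, base[1] - h * hx)]

def range_and_bearing(point, node_heading_deg):
    """Range and bearing of the vehicle relative to the node's heading."""
    rng = math.hypot(*point)
    brg = math.degrees(math.atan2(point[1], point[0])) - node_heading_deg
    return rng, (brg + 360.0) % 360.0

candidates = locate_vehicle(r1=40.0, r2=35.0, travel_m=10.0, heading_deg=90.0)
if candidates:
    print(range_and_bearing(candidates[0], node_heading_deg=90.0))
```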
Abstract:
A radar sensing system for a vehicle includes a radar sensor having a plurality of transmitting antennas and a plurality of receiving antennas. The transmitting antennas and the receiving antennas are arranged in multiple rows and columns of transmitting antennas and multiple rows and columns of receiving antennas. A control controls radar transmission by the transmitting antennas and receives outputs from the receiving antennas. The control applies two dimensional multiple input multiple output processing to outputs of the receiving antennas. With two dimensional multiple input multiple output processing applied to outputs of the receiving antennas, the transmitting antennas and the receiving antennas achieve an enhanced two dimensional virtual aperture.
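The virtual-aperture idea can be illustrated with a short sketch: each transmit/receive pair behaves like a single virtual element located at the vector sum of the two element positions, so a grid of transmitting antennas and a grid of receiving antennas yield a larger two-dimensional virtual array. The element layout below is hypothetical.

```python
import numpy as np

def virtual_array(tx_positions, rx_positions):
    """2-D MIMO virtual aperture: an M-element Tx grid and an N-element Rx
    grid yield up to M*N virtual elements at the pairwise position sums."""
    tx = np.asarray(tx_positions, float)
    rx = np.asarray(rx_positions, float)
    virt = (tx[:, None, :] + rx[None, :, :]).reshape(-1, 2)
    return np.unique(virt, axis=0)

# Hypothetical element layout in half-wavelength units: Tx on a 2x2 grid,
# Rx on a 2x4 grid (rows and columns of transmitting and receiving antennas).
tx = [(x, y) for x in (0, 4) for y in (0, 4)]
rx = [(x, y) for x in (0, 1) for y in (0, 1, 2, 3)]
virt = virtual_array(tx, rx)
print(len(tx) * len(rx), "Tx/Rx pairs ->", len(virt), "virtual elements")
```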
Abstract:
A camera module for use in a vision system for a vehicle includes a housing portion, a lens holding portion and a securing element. The housing portion at least partially houses circuitry of the camera module and includes a first mating surface and a first perimeter flange around the first mating surface. The lens holding portion at least partially houses a lens assembly of the camera module and includes a second mating surface and a second perimeter flange around the second mating surface. The securing element is disposed along the first and second perimeter flanges and overlaps opposite surfaces of the first and second perimeter flanges to secure the housing portion relative to the lens holding portion.
Abstract:
A vision system for a vehicle includes a color camera disposed at a vehicle and having an exterior rearward field of view. The color camera includes an imaging array of photosensing pixels and at least one color filter disposed at or in front of at least some of the photosensing pixels. An image processor is operable to process image data captured by the color camera. Processing of captured image data by the image processor includes utilization of a color correction algorithm that includes a color correction matrix algorithm. The color correction matrix algorithm may utilize a 3x3 color correction matrix. The color correction algorithm may include the color correction matrix algorithm and a histogram algorithm that function together to determine a color correction for the captured image data.
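For illustration only, the sketch below applies a hypothetical 3x3 color correction matrix to an RGB frame and uses simple channel-mean (gray-world) gains as a stand-in for the unspecified histogram algorithm; the matrix coefficients are assumed, not taken from the patent.

```python
import numpy as np

# Hypothetical 3x3 color correction matrix (rows map sensor RGB to output RGB);
# in practice the coefficients would come from sensor/lens characterization.
CCM = np.array([[ 1.60, -0.45, -0.15],
                [-0.30,  1.45, -0.15],
                [-0.05, -0.55,  1.60]])

def apply_ccm(rgb_image, ccm=CCM):
    """Apply a 3x3 color correction matrix to an HxWx3 uint8 image."""
    flat = rgb_image.reshape(-1, 3).astype(np.float32)
    corrected = flat @ ccm.T
    return np.clip(corrected, 0, 255).reshape(rgb_image.shape).astype(np.uint8)

def gray_world_gains(rgb_image):
    """Channel-mean white-balance gains (gray-world assumption), standing in
    for the histogram-based statistic mentioned in the abstract."""
    means = rgb_image.reshape(-1, 3).mean(axis=0)
    return means.mean() / np.maximum(means, 1e-6)

frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)   # stand-in frame
balanced = np.clip(frame * gray_world_gains(frame), 0, 255).astype(np.uint8)
out = apply_ccm(balanced)
print(out.shape, out.dtype)
```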
Abstract:
A vision system for a vehicle includes a rear camera disposed at a rear portion of the vehicle and having an exterior field of view rearward of the vehicle. The rear camera includes an imaging array and a lens system for imaging external sources of light at the imaging array. The lens system includes at least one asymmetric anamorphic lens optic. The asymmetric anamorphic lens optic may include a longitudinally truncated conical-shaped lens optic. The longitudinal axis of the longitudinally truncated conical-shaped lens optic may be generally vertical when the rear camera is disposed at the rear portion of the vehicle, with a smaller diameter portion at an upper region of the optic and a larger diameter portion at a lower region of the optic.
Abstract:
In a method of providing test data for a vehicle, the test data may be used to verify the performance of a system in the vehicle under different environmental conditions (such as at night, while it is raining, during high glare conditions, during fog and/or the like). The method entails driving a test vehicle through a selected set of environmental conditions. Environment data, such as images, are captured while driving the test vehicle. The environment data relates to the environment outside the test vehicle. The environment data may be recorded to a memory. Additionally, environmental condition data relating to the environmental conditions outside the vehicle while it is being driven may be recorded to the memory.
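One plausible way to organize such a recording (the abstract does not prescribe a format) is to store each captured sample together with labels for the environmental conditions under which it was captured, as in the sketch below; the field names, values and file paths are illustrative.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class EnvironmentRecord:
    """One captured sample of environment data plus the environmental
    condition data recorded alongside it (field names are illustrative)."""
    timestamp: float
    image_file: str          # path of the captured image in the memory/log
    lighting: str            # e.g. "day", "night", "high_glare"
    precipitation: str       # e.g. "none", "rain", "fog"

def record_sample(log, image_file, lighting, precipitation):
    log.append(EnvironmentRecord(time.time(), image_file, lighting, precipitation))

log = []
record_sample(log, "frame_000001.png", "night", "rain")
record_sample(log, "frame_000002.png", "day", "none")
with open("test_drive_log.json", "w") as f:
    json.dump([asdict(r) for r in log], f, indent=2)
```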
Abstract:
A camera or vision system for establishing a composite image for displaying in a vehicle includes a first camera and a second camera and a controller. Each camera has a respective field of view that overlaps partially with the respective field of view of the other camera. Each camera has a respective imager for generating a respective preliminary digital image. The cameras together have a combined field of view. The controller is programmed to generate a final composite digital image that corresponds to a selected digital representation of the combined field of view of the cameras by using a remapping table to remap selected pixels from each of the preliminary digital images into selected positions of the final composite digital image. A plurality of methods for establishing a composite image are also provided.
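A remapping table can be represented, for illustration, as a per-output-pixel list of (camera index, source row, source column) entries; the sketch below assembles a composite from two stand-in preliminary images using a toy table. A real table would come from camera calibration and the selected digital representation of the combined field of view.

```python
import numpy as np

def build_composite(preliminary_images, remap_table, out_shape):
    """Assemble the final composite image by copying, for each output pixel,
    the source pixel named in the remapping table. `remap_table` holds one
    (camera_index, src_row, src_col) entry per output pixel, in row-major
    order (a hypothetical layout for the table described in the abstract)."""
    out = np.zeros(out_shape + (3,), dtype=np.uint8)
    flat = out.reshape(-1, 3)
    for i, (cam, r, c) in enumerate(remap_table):
        flat[i] = preliminary_images[cam][r, c]
    return out

# Two stand-in preliminary images with partially overlapping fields of view.
left = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
right = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)

# Toy remapping table: the left camera fills the left half of the composite,
# the right camera fills the right half.
H, W = 240, 320
table = []
for r in range(H):
    for c in range(W):
        table.append((0, r, c) if c < W // 2 else (1, r, c - W // 2))

composite = build_composite([left, right], table, (H, W))
print(composite.shape)
```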