Abstract:
An integrated camera, ambient light detection, and rain sensor assembly suitable for installation behind a windshield of a driver-operated vehicle or an automated vehicle includes an imager-device. The imager-device is formed of an array of pixels configured to define a central-portion and a periphery-portion of the imager-device. Each pixel of the array of pixels includes a plurality of sub-pixels. Each pixel in the central-portion is equipped with a red/visible/visible/visible filter (RVVV filter) arranged such that each pixel in the central-portion includes a red sub-pixel and three visible-light sub-pixels. Each pixel in the periphery-portion is equipped with a red/green/blue/near-infrared filter (RGBN filter) arranged such that each pixel in the periphery-portion includes a red sub-pixel, a green sub-pixel, a blue sub-pixel, and a near-infrared sub-pixel.
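The 2x2 sub-pixel filter arrangement described above can be pictured with a short sketch. The following Python snippet is illustrative only: the pattern constants, the build_filter_map helper, and the centered placement of the central-portion are assumptions introduced here, not details from the abstract.

# Illustrative sketch only: hypothetical 2x2 sub-pixel filter patterns and a
# helper that lays them out over the pixel array, with an RVVV central-portion
# and an RGBN periphery-portion.
RVVV_PATTERN = [["R", "V"],
                ["V", "V"]]  # central-portion pixel: one red, three visible-light sub-pixels
RGBN_PATTERN = [["R", "G"],
                ["B", "N"]]  # periphery-portion pixel: red, green, blue, near-infrared sub-pixels

def build_filter_map(rows, cols, central_rows, central_cols):
    """Return a per-pixel filter pattern: RVVV inside the central-portion, RGBN elsewhere."""
    r0 = (rows - central_rows) // 2
    c0 = (cols - central_cols) // 2
    filter_map = []
    for r in range(rows):
        row = []
        for c in range(cols):
            in_center = r0 <= r < r0 + central_rows and c0 <= c < c0 + central_cols
            row.append(RVVV_PATTERN if in_center else RGBN_PATTERN)
        filter_map.append(row)
    return filter_map

# Example: a small 6x8 pixel array with a 2x4 central-portion.
filter_map = build_filter_map(6, 8, 2, 4)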
Abstract:
An optical sensor system adapted to operate through a window of a vehicle includes a lens, a plurality of optoelectronic devices, and an optical device. The lens is configured to direct light from a field-of-view toward a focal plane. The plurality of optoelectronic devices is arranged proximate to the focal plane. The plurality of optoelectronic devices includes a first optoelectronic device operable to detect an image from a first portion of the field-of-view, and a second optoelectronic device operable to detect light from a second portion of the field-of-view distinct from the first portion. The optical device is configured to direct light from outside the field-of-view toward the second portion.
Abstract:
An image system is configured to record a scanned image of an area. The system includes a single two-dimensional (2D) imager and a rotatable mirror. The 2D imager is formed of a 2D array of light detectors and is operable in a line-scan mode in which one line of light detectors is activated at a time in sequence. The rotatable mirror is configured to rotate about an axis parallel to a plane defined by the rotatable mirror. The rotation varies the angle of the rotatable mirror to pan a projected image of the area across the 2D imager. The angle of the rotatable mirror and the activated line of the 2D imager are synchronized such that the scanned image recorded by the 2D imager is inverted with respect to the projected image.
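A minimal sketch of the described synchronization follows, assuming each mirror step pans the projected image by exactly one imager line; the NUM_LINES value and the activated_line helper are hypothetical names introduced only for illustration.

NUM_LINES = 480  # assumed number of light-detector lines on the 2D imager

def activated_line(mirror_step):
    """Return the line of light detectors to activate for a given mirror step.

    Reading lines in the order opposite to the pan direction yields a scanned
    image that is inverted with respect to the projected image.
    """
    return NUM_LINES - 1 - mirror_step

scanned_line_order = [activated_line(step) for step in range(NUM_LINES)]
# scanned_line_order == [479, 478, ..., 1, 0]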
Abstract:
A ground-classifier system that classifies ground-cover proximate to an automated vehicle includes a lidar, a camera, and a controller. The lidar detects a point-cloud of a field-of-view. The camera renders an image of the field-of-view. The controller is configured to define a lidar-grid that segregates the point-cloud into an array of patches, and define a camera-grid that segregates the image into an array of cells. The point-cloud and the image are aligned such that a patch is aligned with a cell. A patch is determined to be ground when the height of the cloud-points within the patch is less than a height-threshold. The controller is configured to determine a lidar-characteristic of cloud-points within the patch, determine a camera-characteristic of pixels within the cell, and determine a classification of the patch when the patch is determined to be ground, wherein the classification of the patch is determined based on the lidar-characteristic and the camera-characteristic.
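A minimal sketch of the patch/cell classification idea follows, assuming an intensity-variance lidar-characteristic, a mean-green camera-characteristic, and arbitrary thresholds and class labels; these specific choices and the classify_patch helper are assumptions introduced here, not details from the abstract.

from statistics import mean, pvariance

HEIGHT_THRESHOLD_M = 0.2  # assumed height-threshold

def classify_patch(patch_points, cell_pixels):
    """Classify one lidar patch using the camera cell aligned with it.

    patch_points: list of (x, y, z, intensity) cloud-points in the patch
    cell_pixels:  list of (r, g, b) pixel values in the aligned cell
    """
    heights = [point[2] for point in patch_points]
    if mean(heights) >= HEIGHT_THRESHOLD_M:
        return "not-ground"
    lidar_characteristic = pvariance([point[3] for point in patch_points])  # intensity variance
    camera_characteristic = mean(pixel[1] for pixel in cell_pixels)         # mean green value
    if camera_characteristic > 128 and lidar_characteristic > 10.0:
        return "grass"
    return "pavement"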
Abstract:
A system, controller, and method are provided for aligning a stereo camera of a vehicle-mounted object detection system that includes a first camera and a second camera mounted spaced apart on the vehicle. An image from each camera at two different times is used to determine an observed displacement of an object relative to the vehicle. A predicted displacement of the object relative to the vehicle is also determined using either a difference of vehicle position measured based on other vehicle measurements or GPS, or a difference of size of the object in images taken at the two different times. Alignment is provided by determining a triangulation correction based on a difference between the observed displacement and the predicted displacement to correct for misalignment of the cameras.
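A minimal sketch of the triangulation correction follows, assuming the correction takes the form of a scale factor applied to triangulated range changes; the triangulation_correction helper and the example numbers are hypothetical and introduced only for illustration.

def triangulation_correction(observed_displacement_m, predicted_displacement_m):
    """Return a scale factor that brings the observed (triangulated) displacement
    into agreement with the predicted displacement."""
    if observed_displacement_m == 0.0:
        raise ValueError("observed displacement must be non-zero")
    return predicted_displacement_m / observed_displacement_m

# Example: stereo triangulation reports the object moved 9.5 m closer while vehicle
# odometry/GPS predicts 10.0 m, so triangulated range changes are scaled by ~1.05.
correction = triangulation_correction(9.5, 10.0)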
Abstract:
An illustrative example method of making a camera includes assembling a plurality of lens elements, a sensor, and a housing to establish an assembly with each of the lens elements and the sensor at least partially in the housing. The assembly is then situated adjacent to a circuit board substrate. At least the sensor is secured to the circuit board substrate using surface mount technology (SMT), and the assembly becomes fixed relative to the circuit board substrate.
Abstract:
A driver assistance system includes an imaging device mounted to a vehicle that provides an image of a vicinity of the vehicle. A mobile device carried by a driver provides range rate information regarding a change in position of the mobile device. A processor determines that there is at least one object in the vicinity of the vehicle based on the image, determines the speed of vehicle movement based on the range rate information, determines relative movement between the vehicle and the at least one object based on at least the image, and determines a risk of collision between the vehicle and the at least one object based on the determined speed and the determined relative movement. A driver assist output provides an indication of the determined risk of collision to the driver.
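A minimal sketch of the risk determination follows, assuming a time-to-collision formulation with arbitrary thresholds; the collision_risk helper, its parameters, and the risk levels are hypothetical and introduced only for illustration.

def collision_risk(vehicle_speed_mps, object_range_m, closing_speed_mps):
    """Return a coarse risk level from vehicle speed and relative movement."""
    if vehicle_speed_mps < 0.5 or closing_speed_mps <= 0.0:
        return "low"  # vehicle essentially stationary, or object not closing
    time_to_collision_s = object_range_m / closing_speed_mps
    if time_to_collision_s < 2.0:
        return "high"
    if time_to_collision_s < 5.0:
        return "medium"
    return "low"

# Example: an object 20 m ahead closing at 8 m/s gives a 2.5 s time-to-collision.
risk = collision_risk(vehicle_speed_mps=15.0, object_range_m=20.0, closing_speed_mps=8.0)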
Abstract:
An illustrative example camera device includes a sensor that is configured to detect radiation. A first portion of the sensor has a first field of vision and is used for a first imaging function. A distortion correcting prism directs radiation outside the first field of vision toward the sensor. A lens element between the distortion correcting prism and the sensor includes a surface at an oblique angle relative to a sensor axis. The lens element directs radiation from the distortion correcting prism toward a second portion of the sensor that has a second field of vision and is used for a second imaging function. The sensor provides a first output for the first imaging function based on radiation detected at the first portion of the sensor. The sensor provides a second output for the second imaging function based on radiation detected at the second portion.