Abstract:
A method for processing high dynamic range data from a nonlinear camera includes: generating an input image comprising a plurality of pixels, each pixel having an initial pixel value, wherein the initial pixel values are generated using a camera transition curve; generating a first lookup table representing a combination of an inverse function and a re-compression function, the first lookup table having input values and output values, wherein each input value is linked to one output value, the inverse function is the inverse of the camera transition curve, the re-compression function is a smooth and continuous function whose slope at each input value is greater than or equal to the corresponding slope of the camera transition curve, and the first lookup table is generated such that the inverse function is applied before the re-compression function; and generating a first image by converting the initial pixel values using the first lookup table.
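A minimal sketch of how such a lookup table could be built and applied, assuming a hypothetical cube-root camera transition curve and using the camera curve itself as a placeholder re-compression function (which satisfies the slope condition with equality); the real curves are camera-specific:

```python
import numpy as np

# Hypothetical camera transition curve: compresses linear radiance in [0, 1]
# into the [0, 1] code range with a cube-root law (an assumption; the real
# curve is camera-specific and typically supplied by the sensor vendor).
def camera_transition(x):
    return np.cbrt(x)

# Its analytic inverse: recovers linear radiance from a compressed code.
def inverse_transition(code):
    return code ** 3

# Placeholder re-compression function. Reusing the camera curve makes the
# slope condition (slope >= camera curve slope at every input) hold with
# equality; a real system would substitute a steeper smooth curve.
def re_compression(x):
    return camera_transition(x)

def build_lut(n_codes=4096):
    """LUT entry i maps input code i to re_compression(inverse(code i))."""
    codes = np.linspace(0.0, 1.0, n_codes)    # normalized input values
    linear = inverse_transition(codes)        # inverse function first...
    out = re_compression(linear)              # ...then re-compression
    return np.round(out * (n_codes - 1)).astype(np.uint16)

def apply_lut(image_codes, lut):
    """Convert initial pixel values (integer codes) via a single table lookup."""
    return lut[image_codes]

# Example: convert a random 12-bit input image into the first image.
lut = build_lut()
img = np.random.randint(0, 4096, size=(480, 640), dtype=np.uint16)
first_image = apply_lut(img, lut)
```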
Abstract:
A method of determining the distance of an object from an automated vehicle based on images taken by a monocular image acquiring device. The object is recognized and assigned an object class by means of an image processing system. Respective position data are determined from the images using a pinhole camera model on the basis of the object class; the position data indicate, in world coordinates, the position of a reference point of the object with respect to the plane of the road. A scaling factor of the pinhole camera model is estimated by means of a Bayes estimator, using the position data as observations and under the assumption that the reference point of the object is located on the plane of the road with a predefined probability. The distance of the object from the automated vehicle is calculated from the estimated scaling factor using the pinhole camera model.
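A minimal grid-based sketch of the scaling-factor estimation, assuming illustrative intrinsics, camera height, and noise model; the mixture likelihood encodes the assumption that the reference point lies on the road plane with a predefined probability, and for simplicity a single scaling factor is estimated from all observations:

```python
import numpy as np

# Assumed camera geometry (all names and numbers are illustrative):
F, CX, CY = 1000.0, 640.0, 360.0    # hypothetical intrinsics [px]
H_CAM = 1.3                          # camera height above the road [m]
P_ROAD = 0.9                         # predefined prob. that the reference point is on the road
SIGMA = 0.1                          # assumed noise on the height observation [m]

def backproject(u, v, s):
    """Pinhole model: world position of pixel (u, v) at scaling factor s.
    Camera looks along +Z, road plane is y = 0, camera at height H_CAM."""
    x_cam = np.array([(u - CX) / F, (v - CY) / F, 1.0]) * s
    height = H_CAM - x_cam[1]        # height of the point above the road
    forward = x_cam[2]               # forward (longitudinal) distance
    return height, forward

def estimate_scale(observations, s_grid=np.linspace(1.0, 100.0, 1000)):
    """Grid-based Bayes estimator of the scaling factor.
    observations: (u, v) image positions of the object's reference point
    (e.g., the bottom of its bounding box) over consecutive frames."""
    log_post = np.zeros_like(s_grid)                  # flat prior over s
    for (u, v) in observations:
        heights = np.array([backproject(u, v, s)[0] for s in s_grid])
        # Mixture likelihood: on-road (Gaussian around height 0) with
        # probability P_ROAD, otherwise a broad uniform component.
        on_road = P_ROAD * np.exp(-0.5 * (heights / SIGMA) ** 2) / (SIGMA * np.sqrt(2 * np.pi))
        off_road = (1.0 - P_ROAD) * (1.0 / 10.0)      # uniform over ~10 m of height
        log_post += np.log(on_road + off_road)
    return s_grid[np.argmax(log_post)]                # MAP estimate

# Usage: pixel positions of a detected object's base point across a few frames.
obs = [(700.0, 500.0), (702.0, 503.0), (705.0, 506.0)]
s_hat = estimate_scale(obs)
_, distance = backproject(*obs[-1], s_hat)
print(f"estimated distance: {distance:.1f} m")
```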
Abstract:
A method of estimating an orientation of a camera relative to a surface includes providing a first image and a subsequent second image captured by the camera; selecting a first point from the first image and a second point from the second image, where the first and second points represent the same object; defining a first optical flow vector connecting the first point and the second point; carrying out a first estimation step comprising estimating two components of the normal vector of the surface in the camera coordinate system by using the first optical flow vector and restricting the parameter space to only the two components of the normal vector, wherein a linear equation system derived from a homography matrix that represents a projective transformation between the first image and the second image is provided and the two components of the normal vector in the camera coordinate system are estimated by solving the linear equation system; and determining the orientation of the camera relative to the surface using the results of the first estimation step.
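A minimal sketch of the first estimation step, assuming known intrinsics, a compensated (identity) rotation, and a translation-over-distance vector taken from odometry; the third normal component is fixed to an assumed nominal value so that only two components are estimated by linear least squares:

```python
import numpy as np

# Illustrative camera intrinsics and motion (all values are assumptions).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
K_inv = np.linalg.inv(K)
T_OVER_D = np.array([0.0, 0.0, 0.05])    # assumed translation / plane distance
NZ_FIXED = -1.0                           # assumed third normal component

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_normal_xy(flow_pairs):
    """Solve the linear system from x2 x (H x1) = 0 with H = I - (t/d) n^T
    for (nx, ny), stacking two equations per optical flow vector."""
    A_rows, b_rows = [], []
    for p1, p2 in flow_pairs:
        x1 = K_inv @ np.array([p1[0], p1[1], 1.0])
        x2 = K_inv @ np.array([p2[0], p2[1], 1.0])
        S = skew(x2)
        C = -np.outer(T_OVER_D, x1)       # H x1 = x1 + C n  (with R = I)
        A = S @ C                         # constraint matrix, linear in n
        rhs = -S @ x1 - A[:, 2] * NZ_FIXED
        A_rows.append(A[:2, :2])          # two independent rows, nx/ny columns
        b_rows.append(rhs[:2])
    A_full = np.vstack(A_rows)
    b_full = np.hstack(b_rows)
    nx, ny = np.linalg.lstsq(A_full, b_full, rcond=None)[0]
    n = np.array([nx, ny, NZ_FIXED])
    return n / np.linalg.norm(n)          # camera orientation follows from n

# Usage: a few (point in first image, point in second image) correspondences.
pairs = [((600.0, 500.0), (598.0, 506.0)),
         ((700.0, 520.0), (703.0, 527.0)),
         ((500.0, 480.0), (496.0, 485.0))]
print(estimate_normal_xy(pairs))
```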
Abstract:
A method of generating a confidence measure for an estimation derived from images captured by a camera mounted on a vehicle includes: capturing consecutive training images by the camera while the vehicle is moving; determining ground-truth data for the training images; computing optical flow vectors from the training images and estimating a first output signal based on the optical flow vectors for each of the training images, the first output signal indicating an orientation of the camera; classifying the first output signal for each of the training images as a correct signal or a false signal depending on how well the first output signal fits the ground-truth data; determining, for each of the training images, optical flow field properties derived from that training image; and generating a separation function that separates the optical flow field properties into two classes based on the classification of the first output signal.
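A minimal sketch of learning and applying such a separation function, assuming each training image has already been reduced to a small vector of optical flow field properties (e.g., number of flow vectors, mean vector length, fit residual) and a correct/false label; logistic regression is used here as an illustrative choice of separation function:

```python
import numpy as np

def fit_separation_function(features, labels, lr=0.1, epochs=500):
    """Logistic regression as a linear separation function over flow-field
    properties; returns a weight vector (last entry is the bias)."""
    X = np.hstack([features, np.ones((len(features), 1))])   # append bias term
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted prob. of "correct"
        w -= lr * X.T @ (p - labels) / len(labels)
    return w

def confidence(flow_properties, w):
    """At run time: map the flow-field properties of a new image to a
    confidence score in [0, 1] for the orientation estimate."""
    x = np.append(flow_properties, 1.0)
    return 1.0 / (1.0 + np.exp(-x @ w))

# Usage with synthetic stand-in data: 2 properties per image, 200 training images.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 2))
labs = (feats[:, 0] + 0.5 * feats[:, 1] > 0).astype(float)   # stand-in labels
w = fit_separation_function(feats, labs)
print(confidence(np.array([0.8, -0.1]), w))
```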
Abstract:
In a method of generating a training image for training a camera-based object recognition system suitable for use on an automated vehicle, the training image shows an object to be recognized in a natural object environment. The training image is generated as a synthetic image by combining a base image taken by a camera with a template image: a structural feature obtained from the base image is replaced with a structural feature obtained from the template image by means of a shift-map algorithm.
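A heavily simplified sketch of the image synthesis step: the actual shift-map algorithm solves a graph-cut labeling that assigns each output pixel a shift into a source image, whereas the placeholder below uses a single constant shift and a feathered seam to copy a structural feature from the template image into the base image; all shapes, masks, and offsets are illustrative:

```python
import numpy as np

def synthesize_training_image(base, template, feature_box, shift, feather=5):
    """base, template: HxWx3 float arrays in [0, 1].
    feature_box: (y0, y1, x0, x1) region of the base image to replace.
    shift: (dy, dx) offset into the template image for the replacement."""
    y0, y1, x0, x1 = feature_box
    dy, dx = shift
    out = base.copy()
    patch = template[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
    # Feathered blending mask so the copied feature merges into the base scene.
    h, w = y1 - y0, x1 - x0
    ramp_y = np.minimum(np.arange(h), np.arange(h)[::-1]) / max(feather, 1)
    ramp_x = np.minimum(np.arange(w), np.arange(w)[::-1]) / max(feather, 1)
    mask = np.clip(np.minimum.outer(ramp_y, ramp_x), 0.0, 1.0)[..., None]
    out[y0:y1, x0:x1] = mask * patch + (1.0 - mask) * out[y0:y1, x0:x1]
    return out

# Usage with random stand-in images.
base = np.random.rand(480, 640, 3)
template = np.random.rand(480, 640, 3)
synthetic = synthesize_training_image(base, template, (100, 200, 300, 400), (10, -20))
```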
Abstract:
In a method for the detection and tracking of lane markings from a motor vehicle, an image of the space located in front of the vehicle is captured by means of an image capture device at regular intervals. Picture elements that meet a predetermined detection criterion are identified as detected lane markings in the captured image. At least one detected lane marking is selected as a lane marking to be tracked and is subjected to a tracking process. At least one test zone is defined for each detected lane marking, and at least one parameter is determined with the aid of the intensity values of the picture elements associated with the test zone. The detected lane marking is assigned to one of several lane marking categories depending on the parameter.
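A minimal sketch of the categorization step, assuming a grayscale frame and a rectangular test zone; the parameter (longitudinal occupancy of bright pixels in the zone) and the thresholds are illustrative choices for distinguishing, for example, solid from dashed markings:

```python
import numpy as np

BRIGHT = 150          # assumed intensity threshold for "marking" pixels
SOLID_MIN = 0.8       # assumed occupancy thresholds per category
DASHED_MIN = 0.3

def categorize_marking(image, test_zone):
    """image: HxW uint8 grayscale frame. test_zone: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = test_zone
    zone = image[y0:y1, x0:x1]
    # Parameter: fraction of rows in the test zone that contain at least one
    # bright (marking) pixel, i.e. longitudinal occupancy of the marking.
    rows_with_marking = (zone >= BRIGHT).any(axis=1).mean()
    if rows_with_marking >= SOLID_MIN:
        return "solid"
    if rows_with_marking >= DASHED_MIN:
        return "dashed"
    return "other"

# Usage with a synthetic frame containing a dashed vertical stripe.
frame = np.zeros((400, 600), dtype=np.uint8)
for y in range(0, 400, 80):
    frame[y:y + 30, 300:310] = 255     # bright dash segments
print(categorize_marking(frame, (0, 400, 295, 315)))
```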