Abstract:
A method of estimating an orientation of a camera relative to a surface includes providing a first image and a subsequent second image captured by the camera; selecting a first point from the first image and a second point from the second image, where the first and second points represent the same object; defining a first optical flow vector connecting the first point and the second point; carrying out a first estimation step comprising estimating two components of a normal vector of the surface in a camera coordinate system by using the first optical flow vector and restricting the parameter space to only the two components of the normal vector, wherein a linear equation system derived from a homography matrix that represents a projective transformation between the first image and the second image is provided and the two components of the normal vector in the camera coordinate system are estimated by solving the linear equation system; and determining the orientation of the camera relative to the surface from the results of the first estimation step.
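For illustration only, the sketch below shows one way such a restricted linear system could be set up and solved. It is not the claimed implementation: it assumes negligible rotation between the two frames, a camera translation known from odometry, a known camera height h above the surface, and a normal parameterised as (n_x, 1, n_z) so that only two components remain free; all function and variable names are hypothetical.

import numpy as np

def estimate_normal_xz(p1, p2, t, h):
    """Sketch: estimate two components (n_x, n_z) of the surface normal n = (n_x, 1, n_z).
    p1, p2: (N, 2) matched points in normalized image coordinates (first/second image).
    t: (3,) camera translation between the frames (assumed known, e.g. from odometry).
    h: camera height above the surface."""
    p1h = np.hstack([p1, np.ones((len(p1), 1))])   # homogeneous points, first image
    p2h = np.hstack([p2, np.ones((len(p2), 1))])   # homogeneous points, second image
    A, b = [], []
    for q1, q2 in zip(p1h, p2h):
        # Planar homography with negligible rotation: q2 ~ (I + t m^T) q1, with m = n / h.
        # The cross-product constraint removes the projective scale:
        #   s * (q2 x t) = -(q2 x q1)   with s = m^T q1, recovered per flow vector.
        c_t, c_q = np.cross(q2, t), np.cross(q2, q1)
        s = -np.dot(c_t, c_q) / np.dot(c_t, c_t)
        # m^T q1 = s with m = (n_x, 1, n_z) / h gives one linear equation in (n_x, n_z).
        A.append([q1[0], q1[2]])
        b.append(s * h - q1[1])
    n_xz, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return n_xz

With this parameterisation every optical flow vector contributes one linear equation, so a handful of vectors already over-determines the two unknown components, and the orientation of the camera relative to the surface follows from the full normal (n_x, 1, n_z).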
Abstract:
A method of determining the distance of an object from an automated vehicle based on images taken by a monocular image acquiring device. The object is recognized and assigned to an object class by means of an image processing system. Respective position data are determined from the images using a pinhole camera model based on the object class; the position data indicate, in world coordinates, the position of a reference point of the object with respect to the plane of the road. A scaling factor of the pinhole camera model is estimated by means of a Bayes estimator using the position data as observations and under the assumption that the reference point of the object is located on the plane of the road with a predefined probability. The distance of the object from the automated vehicle is calculated from the estimated scaling factor using the pinhole camera model.
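As an illustration of the Bayes step, the sketch below maintains a discretised posterior over a scaling factor k, here taken as the constant relating inverse pixel height to distance (Z = k / h_px), and mixes the flat-road observation with an uninformative component according to the predefined on-road probability. The grid, the class prior, the mixture likelihood and every name are assumptions made for this example, not the patented formulation.

import numpy as np

def update_scale_posterior(k_grid, prior, z_flat_obs, h_px_obs,
                           p_on_road=0.9, sigma=2.0, z_max=200.0):
    """One Bayes update of a discretised posterior over the scaling factor k.
    k_grid:     candidate scaling factors (m * px)
    prior:      current posterior over k_grid (sums to 1)
    z_flat_obs: distance implied by the flat-road assumption in this frame (m)
    h_px_obs:   observed pixel height of the object in this frame (px)"""
    z_pred = k_grid / h_px_obs              # distance predicted by each candidate k
    # Mixture likelihood: with probability p_on_road the reference point lies on the
    # road plane and the flat-road distance is a noisy measurement of the truth;
    # otherwise the observation is treated as uninformative (uniform up to z_max).
    gauss = np.exp(-0.5 * ((z_flat_obs - z_pred) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    likelihood = p_on_road * gauss + (1.0 - p_on_road) / z_max
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Usage sketch: prior from the object class (typical real-world size), sequential
# updates per frame, then distance = posterior-mean scaling factor / pixel height.
k_grid = np.linspace(200.0, 2000.0, 1000)
prior = np.exp(-0.5 * ((k_grid - 900.0) / 150.0) ** 2)
prior /= prior.sum()
post = update_scale_posterior(k_grid, prior, z_flat_obs=35.0, h_px_obs=28.0)
k_hat = float(np.sum(k_grid * post))        # Bayes (posterior-mean) estimate of k
distance = k_hat / 28.0                     # distance of the object in this frame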
Abstract:
A method of generating a confidence measure for an estimation derived from images captured by a camera mounted on a vehicle includes: capturing consecutive training images by the camera while the vehicle is moving; determining ground-truth data for the training images; computing optical flow vectors from the training images and estimating a first output signal based on the optical flow vectors for each of the training images, the first output signal indicating an orientation of the camera; classifying the first output signal for each of the training images as a correct signal or a false signal depending on how well the first output signal fits the ground-truth data; determining, for each of the training images, optical flow field properties derived from the training images; and generating a separation function that separates the optical flow field properties into two classes based on the classification of the first output signal.
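A minimal sketch of the training step follows, assuming hand-picked optical flow field properties (vector count, mean and spread of magnitudes, spread of directions), a fixed angular-error threshold for labelling an output signal as correct, and a linear SVM as the separation function; none of these choices is prescribed by the abstract.

import numpy as np
from sklearn.svm import LinearSVC

def flow_field_properties(flow):
    """flow: (N, 4) optical flow vectors (x1, y1, x2, y2) for one training image."""
    d = flow[:, 2:] - flow[:, :2]
    mag = np.linalg.norm(d, axis=1)
    return np.array([len(flow),                                    # number of vectors
                     mag.mean(),                                   # mean magnitude
                     mag.std(),                                    # magnitude spread
                     np.abs(np.arctan2(d[:, 1], d[:, 0])).std()])  # direction spread

def train_separation_function(flows, estimated_angles, gt_angles, max_err_deg=0.5):
    X = np.stack([flow_field_properties(f) for f in flows])
    # Classify each training image's output signal as correct (1) or false (0),
    # depending on how well it fits the ground-truth orientation.
    y = np.abs(np.asarray(estimated_angles) - np.asarray(gt_angles)) < max_err_deg
    clf = LinearSVC().fit(X, y.astype(int))
    return clf

At run time, the returned classifier's decision_function applied to the optical flow field properties of a new image gives a signed distance to the separating hyperplane, which can serve as the confidence measure for the estimated camera orientation.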