Abstract:
An automated three-dimensional mapping method that estimates three-dimensional models from a plurality of images. Positions and attitudes of at least one camera are recorded when the images are taken. The at least one camera is geometrically calibrated so that the direction of each pixel of an image is known. A stereo disparity is calculated for a plurality of image pairs covering the same scene position, yielding a disparity estimate and a certainty measure for each stereo disparity. The different stereo disparity estimates are weighted together to form a 3D model. The stereo disparity estimates are then reweighted automatically and adaptively based on the estimated 3D model.
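The sketch below illustrates one way the certainty-weighted fusion and adaptive reweighting described above could be realized. The abstract does not specify the reweighting rule; the Gaussian residual weight, the function name fuse_disparities, and the parameters n_iter and sigma are assumptions introduced here for illustration only.

import numpy as np

def fuse_disparities(disparities, certainties, n_iter=3, sigma=1.0):
    # disparities : (N, H, W) array, one disparity map per image pair
    # certainties : (N, H, W) array, certainty measure for each estimate
    # Returns the fused disparity map after adaptive reweighting.
    weights = certainties.copy()
    for _ in range(n_iter):
        # Weighted average of the per-pair estimates forms the current 3D model.
        fused = np.sum(weights * disparities, axis=0) / np.maximum(
            np.sum(weights, axis=0), 1e-9)
        # Reweight each estimate by its agreement with the fused model
        # (Gaussian residual weighting is an assumed choice, not from the abstract).
        residual = disparities - fused[None, :, :]
        weights = certainties * np.exp(-(residual ** 2) / (2.0 * sigma ** 2))
    return fused

# Example: three 2x2 disparity maps, the third containing an outlier
# that is down-weighted by the adaptive step.
d = np.array([[[10.0, 10.2], [9.8, 10.1]],
              [[10.1, 10.0], [9.9, 10.0]],
              [[12.0, 10.1], [9.7, 10.2]]])
c = np.ones_like(d)
print(fuse_disparities(d, c))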
Abstract:
A method and arrangement for estimating 3D models in a street environment using a stereo sensor technique. Sensors are arranged in at least one pair mounted on a bracket, with each pair of sensors positioned in a common plane. The sensors of each pair are positioned based on contrast information so that low levels of contrast in an image plane are avoided. The pairs of sensors are mutually positioned relative to an essentially horizontal plane of the bracket such that the sensors of a pair are horizontally spaced from each other, with one sensor above the horizontal plane of the bracket and the other below it.
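A minimal geometric sketch of the arrangement described above: because one sensor sits above and one below the bracket's horizontal plane, the stereo baseline is tilted, which lengthens it slightly and tilts the epipolar lines relative to the image rows. The function name and the interpretation that the offsets are symmetric about the bracket plane are assumptions made for illustration.

import math

def effective_baseline(horizontal_spacing, vertical_offset):
    # horizontal_spacing : horizontal distance between the two sensors (m)
    # vertical_offset    : distance of each sensor from the bracket's
    #                      horizontal plane (m), one above and one below
    # The sensors are separated vertically by 2 * vertical_offset,
    # so the baseline is the hypotenuse of the two separations.
    return math.hypot(horizontal_spacing, 2.0 * vertical_offset)

# Example: 0.5 m horizontal spacing, sensors 0.1 m above/below the plane.
print(effective_baseline(0.5, 0.1))  # ~0.539 m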
Abstract:
The present invention relates to a system (200) and method for determining a relation between a first scene and a second scene. The method comprises the steps of generating at least one sensor image of a first scene with at least one sensor; accessing information related to at least one second scene, said second scene encompassing said first scene; and matching the sensor image with the second scene to map the sensor image onto the second scene. The step of accessing information related to the at least one second scene comprises accessing a 3D map comprising geocoded 3D coordinate data. The mapping involves associating geocoding information with a plurality of positions in the sensor image based on the coordinate data of the second scene.
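The sketch below shows one way geocoding information from the 3D map could be associated with pixel positions in the sensor image once the matching step has produced a camera pose. The pinhole projection model, the function name geocode_sensor_image, and the parameters K, R, and t are assumptions for illustration; the abstract does not specify how the matching or the association is performed.

import numpy as np

def geocode_sensor_image(points_3d, geocodes, K, R, t, image_shape):
    # points_3d : (N, 3) geocoded 3D coordinates from the 3D map (second scene)
    # geocodes  : (N,) geocode identifiers tied to each 3D point
    # K         : (3, 3) camera intrinsic matrix
    # R, t      : camera rotation (3, 3) and translation (3,) obtained by
    #             matching the sensor image to the 3D map (assumed given)
    # Returns a dict mapping (row, col) pixel positions to geocodes.
    cam = (R @ points_3d.T).T + t            # map points into the camera frame
    in_front = cam[:, 2] > 0                 # keep points in front of the camera
    proj = (K @ cam[in_front].T).T
    px = proj[:, :2] / proj[:, 2:3]          # perspective division to pixels
    h, w = image_shape
    mapping = {}
    for (u, v), code in zip(px, np.asarray(geocodes)[in_front]):
        col, row = int(round(u)), int(round(v))
        if 0 <= row < h and 0 <= col < w:
            mapping[(row, col)] = code       # this pixel now carries geocoding info
    return mapping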