Abstract:
Apparatus, systems, and methods are disclosed for tracking movement over the ground or other surfaces using two or more spaced apart cameras and an associated processing element to detect ground features in images from the cameras and determine tracking parameters based on the position of the detected ground features in the images.
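As a hedged illustration of how spaced-apart cameras can yield a tracking parameter, the sketch below computes ground speed from the transit time of a ground feature between two cameras a known distance apart. The function name and interface are hypothetical, not taken from the abstract:

```python
def ground_speed(camera_spacing_m: float, t_first_s: float, t_second_s: float) -> float:
    """Speed over ground when a ground feature detected under the leading
    camera at t_first_s reappears under the trailing camera at t_second_s.

    speed = camera spacing / transit time
    """
    dt = t_second_s - t_first_s
    if dt <= 0:
        raise ValueError("second detection must occur after the first")
    return camera_spacing_m / dt
```

For example, a 0.1 m camera spacing and a 0.02 s transit time give `ground_speed(0.1, 0.0, 0.02)` → 5.0 m/s.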
Abstract:
A motion sensor device including: an image sensor; first and second light sources; and a controller configured to control the image sensor and the first and second light sources. The controller makes the image sensor capture a first frame with light emitted from the first light source at a first time, makes the image sensor capture a second frame with light emitted from the second light source at a second time, performs masking processing on a first image obtained by capturing the first frame and on a second image obtained by capturing the second frame based on a difference between the first and second images, and obtains information about the distance to an object captured in the first and second images based on the first and second images that have been subjected to the masking processing.
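A minimal sketch of the difference-based masking step, assuming 8-bit grayscale frames and a hypothetical fixed threshold (the abstract does not specify how the difference is thresholded):

```python
import numpy as np

def mask_by_difference(first, second, threshold=30):
    """Zero out pixels where the two differently-lit frames barely differ.

    A large difference suggests a nearby object that responds differently
    to the two light sources; a small difference suggests background,
    which is masked out of both images before distance estimation.
    """
    diff = np.abs(first.astype(np.int32) - second.astype(np.int32))
    keep = diff >= threshold
    return np.where(keep, first, 0), np.where(keep, second, 0)
```

Both masked images are returned so the downstream distance computation can compare the surviving pixels across the two illumination conditions.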
Abstract:
A laser projection system for projecting an image on a workpiece includes a photogrammetry assembly and a laser projector, each communicating with a computer. The photogrammetry assembly includes a first camera for scanning the workpiece, and the laser projector projects a laser image to arbitrary locations. Light is conveyed from the direction of the workpiece to the photogrammetry assembly. The photogrammetry assembly signals the coordinates of light conveyed toward the photogrammetry assembly to the computer, with the computer being programmable for determining a geometric location of the laser image. The computer establishes a geometric correlation between the photogrammetry assembly, the laser projector, and the workpiece for realigning the laser image to a corrected geometric location relative to the workpiece.
Abstract:
A method for generating scaled terrain information while operating a bulldozer. The bulldozer may include a driving unit comprising a set of drive wheels, a motor connected to at least one of the drive wheels, a blade for altering the surface of the terrain, at least one camera for capturing images of the environment, the camera being positioned and aligned in a known manner relative to the bulldozer, and a controlling and processing unit. A method may include moving the bulldozer while concurrently generating a set of image data by capturing an image series of terrain sections with the at least one camera so that at least two images of the image series cover a number of identical points in the terrain, and either applying a simultaneous localisation and mapping (SLAM) algorithm or a stereo photogrammetry algorithm to the set of image data and thereby deriving terrain data.
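For the stereo-photogrammetry branch, terrain depth recovery ultimately rests on the standard disparity relation. A minimal sketch; the focal length, baseline, and disparity values below are illustrative, not from the abstract:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a terrain point seen in two overlapping images:
    depth = focal length (pixels) * camera baseline (m) / disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, a 1000 px focal length, 0.5 m baseline, and 10 px disparity give `depth_from_disparity(1000.0, 0.5, 10.0)` → 50.0 m.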
Abstract:
Embodiments relate to selecting textures for a user-supplied photographic image in image-based three-dimensional modeling. In a first embodiment, a computer-implemented method includes positioning a geographic structure using user-supplied photographic images of the geographic structure. In the method, user-supplied photographic images inputted by a user are received. Embedded camera parameters, which specify a position of the camera when each user-supplied photographic image was taken and are embedded in each user-supplied photographic image, are read. An estimated location of the geographic structure is automatically determined based on the embedded camera parameters in each user-supplied photographic image. Each user-supplied photographic image is enabled to be texture mapped to the three-dimensional model.
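One simple way an estimated location could be derived from embedded camera parameters is to average the embedded camera positions. The abstract does not disclose the exact estimator, so the centroid below is an assumption for illustration only:

```python
def estimate_structure_location(camera_positions):
    """Centroid of the (lat, lon) camera positions embedded in the photos,
    used as a rough estimate of where the photographed structure is."""
    if not camera_positions:
        raise ValueError("at least one camera position is required")
    lat = sum(p[0] for p in camera_positions) / len(camera_positions)
    lon = sum(p[1] for p in camera_positions) / len(camera_positions)
    return lat, lon
```

A real estimator would also use the embedded camera orientations to intersect viewing rays rather than average positions.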
Abstract:
A method and device for displaying desired positions in a live image of a construction site. The method may include recording at least one position-referenced image of the construction site; linking at least one desired position to the position-referenced image; storing the position-referenced image together with the desired position linkage in an electronic memory; recording a live image of the construction site, in particular in the form of a video, wherein the live image and the position-referenced image at least partially represent an identical detail of the construction site; retrieving the stored position-referenced image from the memory; fitting the position-referenced image to the live image, so that the desired position linked to the position-referenced image can be overlaid in a position-faithful manner on the live image; and position-faithful display of the desired position as a graphic marking in the live image.
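In the planar case, fitting the stored image to the live image amounts to estimating a homography; once a 3x3 homography H is known, a desired position can be mapped into live-image coordinates as sketched below. This assumes H has already been estimated (e.g., from matched features) and is an illustration, not the disclosed method:

```python
import numpy as np

def map_desired_position(H, point_xy):
    """Map a pixel position from the position-referenced image into the
    live image using a 3x3 homography H (homogeneous coordinates)."""
    v = H @ np.array([point_xy[0], point_xy[1], 1.0])
    return v[0] / v[2], v[1] / v[2]
```

With a pure-translation homography shifting by (+5, -3) pixels, `map_desired_position(H, (10.0, 10.0))` returns (15.0, 7.0).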
Abstract:
An approach is provided for pole extraction from optical imagery. The approach involves, for instance, processing a plurality of images using a machine learning model to generate a plurality of redundant observations of a pole-like object and/or its semantic keypoints respectively depicted in the plurality of images. The approach also involves performing a photogrammetric triangulation of the plurality of redundant observations to determine three-dimensional coordinate data of the pole-like object and/or its semantic keypoints. The approach further involves providing the three-dimensional coordinate data of the pole-like object as an output.
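The photogrammetric triangulation step can be sketched as a linear (DLT) least-squares problem over the redundant 2-D observations. A minimal version, assuming known 3x4 projection matrices for each image (the abstract does not state which triangulation variant is used):

```python
import numpy as np

def triangulate(projections, observations):
    """Linear (DLT) triangulation of one 3-D point from >= 2 views.

    projections:  list of 3x4 camera projection matrices
    observations: matching list of (u, v) image observations
    Solves the homogeneous system A X = 0 in the least-squares sense via SVD.
    """
    rows = []
    for P, (u, v) in zip(projections, observations):
        rows.append(u * P[2] - P[0])  # u * (third row) - (first row)
        rows.append(v * P[2] - P[1])  # v * (third row) - (second row)
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                        # right singular vector of smallest sigma
    return X[:3] / X[3]               # dehomogenize
```

With more than two observations of the same keypoint, the extra rows simply over-determine the system, which is how the redundancy in the approach improves the 3-D estimate.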
Abstract:
A method for producing a depth map of a detection region of the Earth's surface in which an underground pipeline is arranged, wherein the method includes recording at least one image sequence via at least one camera, determining the position and orientation of the camera corresponding to each individual recording, determining a spatial position and orientation of the underground pipeline arranged in the detection region, and producing the depth map of the detection region via a plane sweep method based on the individual recordings and the associated camera positions, where the maximum depth region of the plane sweep method is subdivided into a total of N sections in an adaptive manner, i.e., in accordance with a predetermined minimum layer thickness for the ground covering the underground pipeline, via a predetermined number of planes spaced differently from one another and extending parallel with respect to one another.
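The non-uniform plane spacing can be illustrated with inverse-depth sampling, which packs sweep planes densely at shallow depth and spreads them out at large depth. The minimum-layer-thickness constraint from the claim is not modeled here, and the sampling rule itself is an assumption:

```python
import numpy as np

def sweep_plane_depths(d_min, d_max, n_planes):
    """n_planes depths between d_min and d_max, sampled uniformly in
    inverse depth so the spacing between consecutive parallel planes
    grows with distance from the camera."""
    inv = np.linspace(1.0 / d_min, 1.0 / d_max, n_planes)
    return 1.0 / inv
```

For example, `sweep_plane_depths(1.0, 10.0, 4)` yields four depths whose consecutive gaps increase monotonically, concentrating depth resolution near the surface where the pipeline cover must be resolved.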
Abstract:
A method for detecting a displacement of a mobile platform includes obtaining a first frame and a second frame using an imaging device associated with the mobile platform and determining the displacement of the mobile platform based upon the first frame and the second frame.
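A minimal, hedged sketch of determining displacement from two frames, using FFT-based phase correlation to recover an integer translation. The abstract covers the general idea; this particular estimator is an illustrative assumption:

```python
import numpy as np

def estimate_shift(frame1, frame2):
    """Integer (dy, dx) translation such that frame2 is approximately
    frame1 shifted by (dy, dx), estimated by phase correlation."""
    f1 = np.fft.fft2(frame1)
    f2 = np.fft.fft2(frame2)
    cross = np.conj(f1) * f2
    cross /= np.abs(cross) + 1e-12     # keep phase only
    corr = np.fft.ifft2(cross).real    # sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = frame1.shape
    if dy > h // 2:                    # unwrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

On a mobile platform, the recovered pixel shift would then be scaled by height above ground and camera intrinsics to obtain metric displacement.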