Abstract:
A distance measuring apparatus includes: an acquisition unit that acquires a first image at a first viewpoint where an object is irradiated with a first light including a pattern, a second image at a second viewpoint different from the first viewpoint where the object is irradiated with the first light, a third image at the first viewpoint where the object is irradiated with a second light not including a pattern, and a fourth image at the second viewpoint where the object is irradiated with the second light; and a control unit that acquires information corresponding to a distance by employing a fifth image obtained based on a ratio of the first image and the third image and a sixth image obtained based on a ratio of the second image and the fourth image.
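The ratio-image step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes the images are grayscale numpy arrays and that dividing the patterned-light image by the uniform-light image cancels surface reflectance, leaving mostly the projected pattern; the `ratio_image` helper and the sample pixel values are hypothetical.

```python
import numpy as np

def ratio_image(patterned, uniform, eps=1e-6):
    """Divide the patterned-light image by the uniform-light image.

    Pixel-wise, the ratio largely cancels surface reflectance and
    shading, isolating the projected pattern for stereo matching.
    eps avoids division by zero in dark regions."""
    return patterned.astype(np.float64) / (uniform.astype(np.float64) + eps)

# Hypothetical 2x2 inputs at the first viewpoint:
first = np.array([[80.0, 120.0], [60.0, 200.0]])    # first light (patterned)
third = np.array([[160.0, 240.0], [120.0, 100.0]])  # second light (uniform)

fifth = ratio_image(first, third)  # the "fifth image" used for distance
```

The same operation applied to the second and fourth images yields the sixth image, and the fifth/sixth pair is then matched across viewpoints.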
Abstract:
Embodiments relate to selecting textures for a user-supplied photographic image in image-based three-dimensional modeling. In a first embodiment, a computer-implemented method positions a geographic structure using user-supplied photographic images of the geographic structure. In the method, user-supplied photographic images inputted by a user are received. Embedded camera parameters, which specify the position of the camera when each user-supplied photographic image was taken and are embedded in each user-supplied photographic image, are read. An estimated location of the geographic structure is automatically determined based on the embedded camera parameters in each user-supplied photographic image. Each user-supplied photographic image is enabled to be texture mapped to the three-dimensional model.
Abstract:
In one embodiment, a method for optimizing a set of matched features is provided. The method includes matching features between an optical image and a geo-referenced orthoimage to produce an initial set of matched features. An initial position solution is then determined for the optical image using the initial set of N matched features. The initial set of N matched features are then optimized based on a set of N sub-solutions and the initial position solution, wherein each of the N sub-solutions is a position solution using a different set of (N−1) matched features. A refined position solution is then calculated for the optical image using the optimized set of matched features.
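The leave-one-out optimization described above can be sketched as follows. This is a hedged illustration, not the claimed method: `solve_position` here is a hypothetical stand-in (a simple centroid of the orthoimage points) for whatever resection or registration solver a real system would use, and the pruning threshold is an assumed parameter.

```python
import numpy as np

def solve_position(matches):
    """Stand-in position solver: centroid of the matched orthoimage
    points. (A real system would solve a geo-registration problem.)"""
    pts = np.array([m[1] for m in matches], dtype=float)
    return pts.mean(axis=0)

def prune_by_subsolutions(matches, threshold):
    """Optimize the N matches via N sub-solutions: for each match,
    solve with the remaining (N-1) matches and compare against the
    full-set solution. A large shift means the omitted match was
    pulling the solution away, i.e. it is a likely outlier."""
    full = solve_position(matches)
    kept = []
    for i in range(len(matches)):
        sub = matches[:i] + matches[i + 1:]              # (N-1) matches
        shift = np.linalg.norm(solve_position(sub) - full)
        if shift <= threshold:                           # consistent: keep
            kept.append(matches[i])
    return kept
```

A refined position solution would then be computed from the kept matches only.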
Abstract:
A motion sensor device according to an embodiment of the present disclosure includes: an image sensor (701); first and second light sources (702, 703); and a controller (710) configured to control the image sensor (701) and the first and second light sources (702, 703). The controller (710) makes the image sensor capture a first frame with light emitted from the first light source at a first time, makes the image sensor capture a second frame with light emitted from the second light source at a second time, performs masking processing on a first image obtained by capturing the first frame and on a second image obtained by capturing the second frame, based on a difference between the first and second images, and obtains information about the distance to an object shot in the first and second images based on the first and second images that have been subjected to the masking processing.
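The difference-based masking step can be sketched as below. This is an assumed interpretation, not the disclosed implementation: the sketch supposes that a nearby object lit alternately by two offset light sources shows a large frame-to-frame intensity difference, while background pixels change little, so small-difference pixels are zeroed out; `mask_by_difference` and the threshold are hypothetical.

```python
import numpy as np

def mask_by_difference(img1, img2, threshold):
    """Zero out pixels where |img1 - img2| is below threshold.

    Such pixels changed little between the two differently-lit frames
    and are treated as background; the surviving pixels are used for
    the subsequent distance estimation."""
    diff = np.abs(img1.astype(np.int32) - img2.astype(np.int32))
    mask = diff >= threshold
    return img1 * mask, img2 * mask
```

Distance information could then be derived from the intensity relationship between the two masked images, since each was lit from a different, known source position.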
Abstract:
One aspect of the invention relates to a fully automatic method for calculating the current, geo-referenced position and alignment of a terrestrial scan-surveying device in situ on the basis of a current panoramic image recorded by the surveying device and at least one stored, geo-referenced 3D scan panoramic image.
Abstract:
Apparatus, systems, and methods are disclosed for tracking movement over the ground or other surfaces using two or more spaced apart cameras and an associated processing element to detect ground features in images from the cameras and determine tracking parameters based on the position of the detected ground features in the images.
Abstract:
A laser projection system for projecting an image on a workpiece includes a photogrammetry assembly and a laser projector, each communicating with a computer. The photogrammetry assembly includes a first camera for scanning the workpiece, and the laser projector projects a laser image to arbitrary locations. Light is conveyed from the direction of the workpiece to the photogrammetry assembly. The photogrammetry assembly signals the coordinates of light conveyed toward the photogrammetry assembly to the computer, the computer being programmable for determining a geometric location of the laser image. The computer establishes a geometric correlation between the photogrammetry assembly, the laser projector, and the workpiece for realigning the laser image to a corrected geometric location relative to the workpiece.
Abstract:
A method of balancing colors of three-dimensional (3D) points measured by a scanner from a first location and a second location. The scanner measures 3D coordinates and colors of first object points from a first location and second object points from a second location. The scene is divided into local neighborhoods, each containing at least a first object point and a second object point. An adapted second color is determined for each second object point based at least in part on the colors of first object points in the local neighborhood.
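The per-neighborhood color adaptation can be sketched as follows. This is a simplified gain-based model, assumed for illustration only: it scales a second-scan color so that the mean colors of the two scans agree within the local neighborhood; the patented method may use a more elaborate adaptation, and `adapt_color` is a hypothetical helper.

```python
import numpy as np

def adapt_color(second_color, first_colors_nbhd, second_colors_nbhd):
    """Balance a second-scan RGB color against the first scan.

    Per channel, compute the ratio of the neighborhood mean colors of
    the two scans and apply it as a gain to the second-scan color, so
    overlapping regions render with consistent color."""
    gain = (np.mean(first_colors_nbhd, axis=0) /
            (np.mean(second_colors_nbhd, axis=0) + 1e-6))
    return np.clip(second_color * gain, 0, 255)
```

Repeating this for every second-scan point, neighborhood by neighborhood, yields the adapted second colors for the merged point cloud.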