Abstract:
A system and method for generating a virtual reality scene from scanned point cloud data having user defined content is provided. The system includes a coordinate measurement device operable to measure three-dimensional coordinates. A computing device having a processor is operably coupled to the coordinate measurement device, the processor being operable to generate point cloud data and insert user defined content into the point cloud data in response to an input from a user, the processor further being operable to generate a virtual reality data file based at least in part on the point cloud data with the user defined content. A virtual reality device is operably coupled to the computing device, the virtual reality device being operable to display the virtual reality data file to the user.
Abstract:
A mobile three-dimensional (3D) measuring system includes a 3D measuring device, a multi-legged stand coupled to the 3D measuring device, and a motorized dolly detachably coupled to the multi-legged stand.
Abstract:
A method for measuring and registering 3D coordinates has a 3D scanner measure a first collection of 3D coordinates of points from a first registration position. The 3D scanner collects 2D scan sets as the 3D measuring device moves from the first registration position to a second registration position. A processor determines first and second translation values and a first rotation value based on the collected 2D scan sets. The 3D scanner measures a second collection of 3D coordinates of points from the second registration position. The processor adjusts the second collection of points relative to the first collection of points based at least in part on the first and second translation values and the first rotation value. The processor identifies a correspondence among registration targets in the first and second collections of 3D coordinates, and uses this correspondence to further adjust the relative position and orientation of the first and second collections of 3D coordinates.
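The adjustment step described above amounts to applying a planar rigid transform, built from the two translation values and one rotation value, to the second collection of points. A minimal sketch (function and parameter names are illustrative, not from the patent):

```python
import math

def adjust_collection(points, tx, ty, theta):
    """Apply a planar rigid transform to 2D point positions.

    tx, ty -- first and second translation values (illustrative names)
    theta  -- first rotation value, in radians
    Rotation about the origin is applied first, then the translation.
    """
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for (x, y) in points]

# Example: rotate a point 90 degrees, then shift it by (1, 0).
moved = adjust_collection([(1.0, 0.0)], 1.0, 0.0, math.pi / 2)
```

The same transform, extended with a z-axis, carries over to adjusting the full 3D collections once the in-plane motion is known.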
Abstract:
A method for measuring and registering three-dimensional (3D) coordinates by measuring 3D coordinates with a 3D scanner in a first registration position, measuring two-dimensional (2D) coordinates with the 3D scanner by projecting a beam of light in a plane onto the object while the 3D scanner moves from the first registration position to a second registration position, measuring 3D coordinates with the 3D scanner at the second registration position, and determining a correspondence among targets in the first and second registration positions while the 3D scanner moves between the second and a third registration position.
Abstract:
A method for measuring and registering 3D coordinates has a 3D scanner measure a first collection of 3D coordinates of points from a first registration position and a second collection of 3D coordinates of points from a second registration position. In between these positions, the 3D measuring device collects depth-camera images. A processor determines first and second translation values and a first rotation value based on the depth-camera images. The processor identifies a correspondence among registration targets in the first and second collection of 3D coordinates based at least in part on the first and second translation values and the first rotation value. The processor uses this correspondence and the first and second collection of 3D coordinates to determine 3D coordinates of a registered 3D collection of points.
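Identifying a correspondence among registration targets, once the second collection has been roughly aligned using the translation and rotation values, can be done by nearest-neighbor matching. The abstract does not specify a matching strategy; a greedy variant is one common, simple choice (all names here are illustrative):

```python
def match_targets(first_targets, second_targets, max_dist=0.1):
    """Greedy nearest-neighbor correspondence between two target lists.

    Assumes the second collection was already roughly aligned to the
    first (e.g. using the translation and rotation values estimated
    from the depth-camera images). Each target is an (x, y, z) tuple.
    Returns index pairs (i, j) of matched targets within max_dist.
    """
    pairs, used = [], set()
    for i, a in enumerate(first_targets):
        best_j, best_d = None, max_dist
        for j, b in enumerate(second_targets):
            if j in used:
                continue
            d = sum((ac - bc) ** 2 for ac, bc in zip(a, b)) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            pairs.append((i, best_j))
            used.add(best_j)
    return pairs
```

The resulting pairs are then the input to a final rigid-body fit that refines the relative position and orientation of the two collections.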
Abstract:
A system and method for scanning an environment and generating an annotated 2D map is provided. The method includes acquiring, via a 2D scanner, a plurality of 2D coordinates on object surfaces in the environment, the 2D scanner having a light source and an image sensor, the image sensor being arranged to receive light reflected from points on the object surfaces. A first 360° image is acquired at a first position of the environment, via a 360° camera having a plurality of cameras and a controller, the controller being operable to merge the images acquired by the plurality of cameras to generate an image having a 360° view, the 360° camera being movable from the first to a second position. A 2D map is generated based at least in part on the plurality of 2D coordinates of points. The first 360° image is integrated with the 2D map.
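At its simplest, integrating a 360° image with the 2D map means anchoring each image at the map position where it was acquired, so the annotation can be looked up later. A hypothetical sketch (class and field names are not from the patent):

```python
class AnnotatedMap:
    """Minimal sketch of a 2D map annotated with 360-degree images.

    The map geometry itself is left abstract; the integration step is
    modeled as anchoring each image at the (x, y) map position where
    it was acquired.
    """
    def __init__(self):
        self.image_anchors = {}  # (x, y) map position -> image identifier

    def integrate_image(self, position, image_id):
        self.image_anchors[position] = image_id

    def image_at(self, position):
        return self.image_anchors.get(position)
```

A viewer of the annotated map could then click an anchor point to open the corresponding 360° view.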
Abstract:
Techniques are described for converting a 2D map into a 3D mesh. The 2D map of the environment is generated using data captured by a 2D scanner. Further, a set of features is identified from a subset of panoramic images of the environment that are captured by a camera. Further, the panoramic images from the subset are aligned with the 2D map using the features that are extracted. Further, 3D coordinates of the features are determined using 2D coordinates from the 2D map and a third coordinate based on a pose of the camera. The 3D mesh is generated using the 3D coordinates of the features.
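The third coordinate of each feature can be derived geometrically: the 2D map supplies the horizontal range from the camera to the feature, and the feature's elevation angle in the panoramic image, together with the camera pose, gives its height. This is one plausible reading of the abstract, not the patent's exact method; all names are illustrative:

```python
import math

def lift_feature_to_3d(x, y, cam_x, cam_y, cam_z, elevation):
    """Lift a 2D-map feature (x, y) to 3D using the camera pose.

    cam_x, cam_y, cam_z -- camera position from its pose
    elevation           -- vertical angle of the feature in the
                           panoramic image, in radians
    The horizontal range comes from the 2D map, and the third
    coordinate is z = cam_z + range * tan(elevation).
    """
    rng = math.hypot(x - cam_x, y - cam_y)
    return (x, y, cam_z + rng * math.tan(elevation))
```

The resulting 3D feature coordinates can then serve as vertices when triangulating the mesh.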
Abstract:
A method and system for generating a three-dimensional (3D) map of an environment is provided. An example method includes receiving a 3D scan and a portion of a 2D map of the environment, and receiving coordinates of a scan position in the 2D map. The method further includes associating the coordinates of the scan position with the portion of the 2D map and linking the coordinates with the portion of the 2D map. The method further includes storing submap data for each of a plurality of submaps into a data object associated with the respective submap. The method further includes performing a loop closure algorithm on each of the plurality of submaps. The method further includes, for each submap whose position anchor changed during performance of the loop closure algorithm, determining a new data object position for its data objects.
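When loop closure moves a submap's position anchor, one straightforward way to realize the "new data object position" step is to apply the anchor's displacement to every data object stored in that submap. The abstract does not spell out the exact computation, so the following is a sketch under that assumption:

```python
def reposition_data_objects(objects, old_anchor, new_anchor):
    """Shift submap data objects after loop closure moved the anchor.

    Each object position and each anchor is an (x, y) tuple. The
    objects are translated by the same displacement the loop closure
    algorithm applied to the submap's position anchor.
    """
    dx = new_anchor[0] - old_anchor[0]
    dy = new_anchor[1] - old_anchor[1]
    return [(x + dx, y + dy) for (x, y) in objects]
```

This keeps each data object at the same position relative to its submap, even as loop closure re-optimizes the submaps themselves.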
Abstract:
A method for performing simultaneous location and mapping of a scanner device includes detecting a set of lines in a point cloud, and identifying a semantic feature based on the set of lines. The method further includes assigning a first scan position of the scanner device in the surrounding environment at the present time t1 as a landmark, and linking the landmark with the portion of the map. The method further includes determining that the scanner device has moved, at time t2, to the scan position that was marked as the landmark based on identifying the semantic feature in subsequently acquired scan data. In response, a second scan position at time t2 is determined. A displacement vector is then determined for the map based on a difference between the first scan position and the second scan position. Subsequently, a revised second scan position is computed based on the displacement vector.
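The landmark-based correction at the end can be sketched as follows: the displacement vector is the difference between the stored landmark position (t1) and the drifted estimate (t2), and applying it corrects both the map and the second scan position. Names and the exact update rule are illustrative, not from the patent:

```python
def loop_correction(landmark_pos, second_scan_pos, map_points):
    """Correct accumulated drift when a landmark is revisited.

    landmark_pos    -- first scan position (t1), stored as the landmark
    second_scan_pos -- position estimated at t2 when the semantic
                       feature is re-identified at the same spot
    map_points      -- (x, y) map points to shift by the displacement
    Returns the displacement vector, the corrected map, and the
    revised second scan position.
    """
    dx = landmark_pos[0] - second_scan_pos[0]
    dy = landmark_pos[1] - second_scan_pos[1]
    corrected_map = [(x + dx, y + dy) for (x, y) in map_points]
    revised_pos = (second_scan_pos[0] + dx, second_scan_pos[1] + dy)
    return (dx, dy), corrected_map, revised_pos
```

In a full SLAM pipeline the displacement would typically be distributed over the trajectory rather than applied as a single rigid shift; the uniform shift here is the simplest reading of the abstract.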