Abstract:
A method, system, and apparatus provide the ability to globally register point cloud scans. A first and a second three-dimensional (3D) point cloud are acquired. The point clouds have a subset of points in common, and there is no prior knowledge of an alignment between them. Particular points that are likely to be identified in the other point cloud are detected. Information about the normal of each detected particular point is retrieved. A descriptor (that describes only 3D information) is built for each of the detected particular points. Matching pairs of descriptors are determined. Rigid transformation hypotheses are estimated (based on the matching pairs) and represent a transformation. The hypotheses are accumulated into a fitted space, selected based on density, and validated based on a scoring. One of the hypotheses is then selected as the registration.
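The hypothesis-estimation and density-selection steps above can be sketched as follows. This is a minimal illustration, not the patented method: it assumes descriptor matching has already produced corresponding point sets, estimates a rigid transform per match set with the standard Kabsch/SVD least-squares solution, and approximates the "accumulate into a fitted space, select based on density" step by binning translation vectors (the function names and bin size are hypothetical).

```python
import numpy as np
from collections import defaultdict

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def densest_hypothesis(hypotheses, bin_size=0.5):
    """Accumulate (R, t) hypotheses into translation bins; pick the densest."""
    bins = defaultdict(list)
    for R, t in hypotheses:
        bins[tuple(np.floor(t / bin_size).astype(int))].append((R, t))
    return max(bins.values(), key=len)[0]
```

A real pipeline would cluster in the full 6-DoF transform space and validate the selected hypothesis by scoring overlap between the aligned clouds, as the abstract describes.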
Abstract:
A method, system, apparatus, article of manufacture, and computer program product provide the ability to detect junctions. 3D pixel image data is obtained/acquired based on 2D image data and depth data. Within a given window over the 3D pixel image data, for each of the pixels within the window, an equation for a plane passing through the pixel is determined/computed. For all of the determined planes within the given window, the intersection of the planes is computed. The spectrum of the resulting intersection matrix is analyzed. Based on the spectrum, a determination is made whether the pixel at the intersection lies on three or more surfaces, two surfaces, or a single surface.
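The spectral classification step can be illustrated with a small sketch. This is an assumption about the mechanism, not the patented algorithm: given the per-pixel plane normals in a window, the eigenvalues of the 3x3 scatter matrix N^T N count the independent plane orientations, so three significant eigenvalues indicate a junction of three or more surfaces, two indicate an edge, and one indicates a flat surface (the function name and tolerance are hypothetical).

```python
import numpy as np

def surface_count(normals, tol=1e-6):
    """Count independent plane orientations in a window of unit normals.

    Builds the 3x3 scatter matrix N^T N from the stacked normals and
    counts its significant eigenvalues: 3 -> junction of 3+ surfaces,
    2 -> edge between 2 surfaces, 1 -> single flat surface.
    """
    N = np.asarray(normals, dtype=float)
    M = N.T @ N                               # 3x3 intersection/scatter matrix
    eig = np.sort(np.linalg.eigvalsh(M))[::-1]
    return int(np.sum(eig > tol * eig[0]))    # eigenvalues above threshold
```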
Abstract:
A system, apparatus, method, computer program product, and computer readable storage medium provide the ability to reconstruct a surface mesh. Photo image data is obtained from a set of overlapping photographic images. Scan data is obtained from a scanner. A point cloud is generated from a combination of the photo image data and the scan data. An initial rough mesh is estimated from the point cloud. The initial rough mesh is iteratively refined into a refined mesh.
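The iterative refinement step can be sketched in miniature. This is only one plausible scheme, not the patented refinement: each iteration moves every mesh vertex partly toward its nearest point-cloud point (data fidelity) and partly toward the average of its edge neighbors (smoothness); the function, weights, and brute-force nearest-neighbor search are all illustrative assumptions.

```python
import numpy as np

def refine_mesh(vertices, edges, cloud, data_w=0.6, smooth_w=0.4, iters=10):
    """Iteratively pull mesh vertices toward the point cloud while smoothing."""
    V = np.asarray(vertices, dtype=float).copy()
    nbrs = {i: [] for i in range(len(V))}     # adjacency from the edge list
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for _ in range(iters):
        newV = V.copy()
        for i, v in enumerate(V):
            # data term: step toward the nearest cloud point (brute force)
            nearest = cloud[np.argmin(((cloud - v) ** 2).sum(axis=1))]
            # smoothness term: step toward the average of edge neighbors
            avg = V[nbrs[i]].mean(axis=0) if nbrs[i] else v
            newV[i] = v + data_w * (nearest - v) + smooth_w * (avg - v)
        V = newV
    return V
```

Production refiners typically also resample connectivity and use spatial indexing for the nearest-neighbor queries.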
Abstract:
A method, system, and computer program product provide the ability to dynamically generate a three-dimensional (3D) scene. A red green blue (RGB) image (in RGB color space) of the 3D scene is acquired. The RGB image is converted from RGB color space to a luminance (Y) and chrominance (UV) image in YUV color space (that includes Y information and UV information). Reflectance information of the 3D scene is acquired from a laser scanner. Based on a blending function, the luminance information is blended with the reflectance information resulting in a blended YUV image. The blended YUV image is converted from YUV color space into RGB color space resulting in a blended RGB image that is output.
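The conversion-blend-convert pipeline can be sketched as follows. This is a minimal illustration under stated assumptions: the abstract does not fix a YUV standard, so BT.601 conversion coefficients are assumed, and a simple linear blend stands in for the unspecified blending function (the function name and `alpha` parameter are hypothetical).

```python
import numpy as np

# BT.601 RGB -> YUV matrix (an assumption; the abstract names no standard)
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def blend_luminance(rgb, reflectance, alpha=0.5):
    """Blend camera luminance with scanner reflectance, keeping chrominance."""
    yuv = rgb @ RGB2YUV.T                 # RGB -> YUV, per pixel
    # blend only the Y channel; U and V keep the camera's color
    yuv[..., 0] = (1 - alpha) * yuv[..., 0] + alpha * reflectance
    return yuv @ YUV2RGB.T                # YUV -> RGB for output
```

Blending only the Y channel preserves the camera's hue while letting the scanner's reflectance sharpen the perceived detail, which matches the abstract's luminance-only blend.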