-
Publication No.: US20230041314A1
Publication Date: 2023-02-09
Application No.: US17967236
Application Date: 2022-10-17
Applicant: FARO Technologies, Inc.
Inventor: Manuel CAPUTO , Louis BERGMANN
Abstract: A virtual reality (VR) system that includes a three-dimensional (3D) point cloud having a plurality of points, a VR viewer having a current position, a graphics processing unit (GPU), and a central processing unit (CPU). The CPU determines a field-of-view (FOV) based at least in part on the current position of the VR viewer, selects, using occlusion culling, a subset of the points based at least in part on the FOV, and provides the subset to the GPU. The GPU receives the subset of points from the CPU and renders an image for display on the VR viewer based at least in part on the received subset. The subset is selected at a first frames-per-second (FPS) rate, and the rendering occurs at a second FPS rate that is faster than the first FPS rate.
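The FOV-based point selection described in the abstract can be illustrated with a minimal sketch. This is not the patented method: real occlusion culling also discards points hidden behind other geometry, whereas this toy version only keeps points inside a viewing cone. The function name and parameters are hypothetical.

```python
import math

def select_points_in_fov(points, viewer_pos, view_dir, fov_deg):
    """Return the subset of 3D points inside a viewing cone.

    Simplified frustum test only: a point is kept if the angle
    between it and the view axis is within half the field of view.
    """
    cos_half = math.cos(math.radians(fov_deg) / 2.0)
    # Normalize the view direction.
    norm = math.sqrt(sum(c * c for c in view_dir))
    d = tuple(c / norm for c in view_dir)
    visible = []
    for p in points:
        v = tuple(pc - vc for pc, vc in zip(p, viewer_pos))
        dist = math.sqrt(sum(c * c for c in v))
        if dist == 0.0:
            continue  # point coincides with the viewer
        if sum(vc * dc for vc, dc in zip(v, d)) / dist >= cos_half:
            visible.append(p)
    return visible

points = [(0, 0, 5), (0, 5, 0), (0, 0, -5), (1, 0, 5)]
subset = select_points_in_fov(points, (0, 0, 0), (0, 0, 1), 90)
# Only the two points in front of the viewer and within the 90° cone remain.
```

In the claimed system this selection runs on the CPU at a lower FPS rate than the GPU's rendering loop, so the (cheaper) cone test above would be refreshed less often than the frames it feeds.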
-
Publication No.: US20240004076A1
Publication Date: 2024-01-04
Application No.: US18339620
Application Date: 2023-06-22
Applicant: FARO Technologies, Inc.
Inventor: Louis BERGMANN , Vadim DEMKIV , Daniel FLOHR
IPC: G01S17/89 , G06V20/64 , G06V10/30 , G06F18/214
CPC classification number: G01S17/89 , G06V20/647 , G06V10/30 , G06V20/64 , G06F18/2148 , G06V2201/121
Abstract: A system and a method for removing artifacts from 3D coordinate data are provided. The system includes one or more processors and a 3D measuring device. The one or more processors are operable to receive training data and train the 3D measuring device to identify artifacts by analyzing the training data. The one or more processors are further operable to identify artifacts in live data based on that training. The one or more processors are further operable to generate clear scan data by filtering the artifacts from the live data and to output the clear scan data.
-
Publication No.: US20210065431A1
Publication Date: 2021-03-04
Application No.: US17011282
Application Date: 2020-09-03
Applicant: FARO Technologies, Inc.
Inventor: Louis BERGMANN , Daniel FLOHR
Abstract: An example method for training a neural network includes generating a training data set of point clouds. The training data set includes pairs of closed-surface point clouds and non-closed-surface point clouds. The method further includes, for each of the closed-surface point clouds and the non-closed-surface point clouds, generating a two-dimensional (2D) image by rendering a three-dimensional scene. The 2D images for the non-closed-surface point clouds include a gap in a surface, and the 2D images for the closed-surface point clouds are free of gaps. The method further includes training the neural network to generate a trained neural network. The method further includes filling, using the trained neural network, gaps between scan points of the 2D image, and de-noising, using the trained neural network, scan point cloud data to generate a closed-surface image of the scan point cloud data.
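The training-pair idea above can be illustrated with a toy version: render a "depth image" of a surface with and without a hole, then close the hole from its neighbors. The neighbor-averaging fill below is a stand-in for the trained network's gap filling, and both helper names are hypothetical.

```python
def render_surface(width, height, gap=None):
    """Render a toy depth image of a flat surface as a 2D grid.
    `gap` is an optional (row, col) hole, mimicking the non-closed-surface
    renders used as training inputs; omit it for the closed-surface target."""
    img = [[1.0] * width for _ in range(height)]
    if gap is not None:
        r, c = gap
        img[r][c] = 0.0  # missing scan point
    return img

def fill_gaps(img):
    """Close holes (zero pixels) by averaging the four direct neighbors."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(h):
        for c in range(w):
            if img[r][c] == 0.0:
                nbrs = [img[rr][cc]
                        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                        if 0 <= rr < h and 0 <= cc < w and img[rr][cc] != 0.0]
                if nbrs:
                    out[r][c] = sum(nbrs) / len(nbrs)
    return out

gapped = render_surface(4, 4, gap=(1, 2))   # training input with a hole
closed = fill_gaps(gapped)                  # approximates the closed target
```

In the claimed method the network learns this input-to-target mapping from many such rendered pairs, instead of applying a fixed interpolation rule.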
-