DISPARITY IMAGE FUSION METHOD FOR MULTIBAND STEREO CAMERAS

    Publication Number: US20220207776A1

    Publication Date: 2022-06-30

    Application Number: US17604288

    Application Date: 2020-03-05

    Abstract: A disparity image fusion method for multiband stereo cameras belongs to the field of image processing and computer vision. The method obtains per-pixel disparity confidence information from the intermediate output of binocular disparity estimation. This confidence information is used to judge how reliable the disparity is at each position and to guide disparity fusion. Because the confidence acquisition step reuses intermediate results that are already computed, it can be conveniently embedded into the traditional disparity estimation pipeline with high computational efficiency and simple operation. In the proposed fusion method, the disparity maps participating in the fusion are obtained from the binocular images of the corresponding bands, which makes full use of the information in each band while avoiding the introduction of uncertainty and errors.
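
    As a rough illustration of how per-pixel confidence can drive the fusion, the sketch below blends per-band disparity maps with a confidence-weighted average; the confidence maps themselves (for example, from left-right consistency or cost-volume ratios) and the threshold value are assumptions, not the patented procedure.

```python
# Hypothetical sketch: confidence-weighted fusion of per-band disparity maps.
# Assumes each band already provides a disparity map and a per-pixel
# confidence map in [0, 1] (e.g. from left-right consistency checks or
# cost-volume ratios computed during stereo matching).
import numpy as np

def fuse_disparities(disparities, confidences, min_conf=0.2):
    """Fuse disparity maps from several bands by per-pixel confidence.

    disparities: list of HxW float arrays (one per band)
    confidences: list of HxW float arrays in [0, 1] (one per band)
    Returns the fused HxW disparity map; pixels where every band falls
    below `min_conf` are marked invalid (0).
    """
    disp = np.stack(disparities, axis=0)          # (B, H, W)
    conf = np.stack(confidences, axis=0)          # (B, H, W)
    conf = np.where(conf < min_conf, 0.0, conf)   # drop untrusted pixels
    weight_sum = conf.sum(axis=0)                 # (H, W)
    fused = (disp * conf).sum(axis=0) / np.maximum(weight_sum, 1e-6)
    fused[weight_sum == 0] = 0.0                  # no band was confident here
    return fused
```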

    METHOD FOR 3D SCENE DENSE RECONSTRUCTION BASED ON MONOCULAR VISUAL SLAM

    Publication Number: US20200273190A1

    Publication Date: 2020-08-27

    Application Number: US16650331

    Application Date: 2019-01-07

    Abstract: The present invention provides a method of dense 3D scene reconstruction based on a monocular camera and belongs to the technical field of image processing and computer vision. The method builds a reconstruction strategy that fuses traditional geometry-based depth computation with convolutional neural network (CNN) based depth prediction, and formulates a depth reconstruction model solved by an efficient algorithm to obtain a high-quality dense depth map. The system is easy to construct because of its low hardware requirements, and it achieves dense reconstruction using only ubiquitous monocular cameras. Camera tracking with feature-based SLAM provides accurate pose estimation, while the depth reconstruction model, which fuses sparse depth points with CNN-inferred depth, achieves dense depth estimation and 3D scene reconstruction. The use of a fast solver in depth reconstruction avoids inverting a large-scale sparse matrix, which improves running speed and enables real-time dense 3D scene reconstruction from a monocular camera.
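
    A minimal sketch of the fusion idea, under assumed weights and a simple smoothness term: sparse SLAM depths and a CNN depth prior are combined by solving a linear system with conjugate gradient, so no large sparse matrix is inverted directly. This is an illustration, not the patented depth reconstruction model.

```python
# Minimal sketch (not the patented formulation): fuse sparse SLAM depths
# with a CNN depth prior via a quadratic energy solved by conjugate gradient.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def fuse_depth(cnn_depth, sparse_depth, sparse_mask, lam=10.0, mu=1.0):
    """cnn_depth: HxW CNN prediction; sparse_depth/sparse_mask: HxW SLAM points."""
    h, w = cnn_depth.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    # First-order difference operators (horizontal and vertical) as smoothness.
    grad_ops = []
    for dy, dx in ((0, 1), (1, 0)):
        a = idx[: h - dy, : w - dx].ravel()
        b = idx[dy:, dx:].ravel()
        m = len(a)
        D = sp.coo_matrix(
            (np.r_[np.ones(m), -np.ones(m)],
             (np.r_[np.arange(m), np.arange(m)], np.r_[a, b])),
            shape=(m, n),
        )
        grad_ops.append(D)
    G = sp.vstack(grad_ops).tocsr()
    M = sp.diags(sparse_mask.ravel().astype(float))   # data term on sparse points
    I = sp.identity(n)
    A = lam * M + mu * I + G.T @ G                    # symmetric positive definite
    b = lam * M @ sparse_depth.ravel() + mu * cnn_depth.ravel()
    d, _ = cg(A, b, maxiter=200)                      # iterative solve, no inversion
    return d.reshape(h, w)
```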

    METHOD FOR FULLY AUTOMATICALLY DETECTING CHESSBOARD CORNER POINTS

    Publication Number: US20220148213A1

    Publication Date: 2022-05-12

    Application Number: US17442937

    Application Date: 2020-03-05

    Abstract: The present invention discloses a method for fully automatically detecting chessboard corner points, and belongs to the field of image processing and computer vision. Fully automatic detection of chessboard corner points is achieved by placing one or more marks with distinctive colors or shapes on the chessboard to indicate an initial position, capturing an image and processing it accordingly, using a homography matrix H computed from the initial pixel coordinates of a unit grid in the pixel coordinate system and manually specified world coordinates in the world coordinate system to expand outwards, and finally spreading over the whole chessboard region. The method has a simple procedure and is easy to implement; because expansion relies on the homography matrix, the algorithm runs fast; and the corner points obtained by a robustness enhancement algorithm are more accurate, which avoids inaccurate corner detection under complex illumination.
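
    The homography-expansion step can be illustrated as follows: from the pixel coordinates of one known unit grid cell and its world coordinates, estimate H with OpenCV and project the remaining board corners, then refine them to sub-pixel accuracy. The marker detection and robustness enhancement steps are omitted, and the board size and point ordering are assumptions.

```python
# Illustrative sketch of homography expansion over a chessboard; only the
# projection and sub-pixel refinement steps of the pipeline are shown.
import cv2
import numpy as np

def predict_corners(unit_pixel_pts, board_cols, board_rows, square=1.0):
    """unit_pixel_pts: 4x2 pixel coords of one grid cell, ordered to match
    the world points (0,0), (1,0), (1,1), (0,1) scaled by `square`."""
    world_unit = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], np.float32) * square
    H, _ = cv2.findHomography(world_unit, unit_pixel_pts.astype(np.float32))
    # Expand outwards: project every chessboard corner through H.
    xs, ys = np.meshgrid(np.arange(board_cols), np.arange(board_rows))
    world = np.stack([xs, ys], axis=-1).reshape(-1, 1, 2).astype(np.float32) * square
    return cv2.perspectiveTransform(world, H).reshape(-1, 2)

def refine_corners(gray, approx_corners, win=5):
    """Sub-pixel refinement of the predicted corner positions."""
    term = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    pts = approx_corners.reshape(-1, 1, 2).astype(np.float32)
    return cv2.cornerSubPix(gray, pts, (win, win), (-1, -1), term).reshape(-1, 2)
```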

    MONOCULAR UNSUPERVISED DEPTH ESTIMATION METHOD BASED ON CONTEXTUAL ATTENTION MECHANISM

    Publication Number: US20210390723A1

    Publication Date: 2021-12-16

    Application Number: US17109838

    Application Date: 2020-12-02

    Abstract: The present invention provides a monocular unsupervised depth estimation method based on a contextual attention mechanism, belonging to the technical field of image processing and computer vision. The invention adopts a depth estimation approach built on a hybrid geometric enhancement loss function and a contextual attention mechanism, and uses a depth estimation sub-network, an edge sub-network and a camera pose estimation sub-network, all based on convolutional neural networks, to obtain high-quality depth maps. The method produces the corresponding high-quality depth map from monocular image sequences in an end-to-end manner. The system is easy to construct, the program framework is easy to implement, and the algorithm runs fast. Because the depth information is solved in an unsupervised manner, the method avoids the difficulty of obtaining ground-truth data that supervised methods face.
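
    A simplified sketch of the kind of unsupervised photometric supervision such methods rely on: an SSIM-plus-L1 term between the target frame and a view synthesized from a source frame, with edge-aware depth smoothness. The patented hybrid geometric enhancement loss and contextual attention blocks are not reproduced here, and the weights are assumptions.

```python
# Generic unsupervised depth-training losses (sketch only).
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-scale SSIM over 3x3 windows, returned as a dissimilarity map."""
    mu_x = F.avg_pool2d(x, 3, 1, 1)
    mu_y = F.avg_pool2d(y, 3, 1, 1)
    var_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return torch.clamp((1 - num / den) / 2, 0, 1)

def photometric_loss(target, warped, alpha=0.85):
    """Blend of SSIM and L1 between the target frame and the view
    synthesized from the source frame using predicted depth and pose."""
    return (alpha * ssim(target, warped) + (1 - alpha) * (target - warped).abs()).mean()

def smoothness_loss(depth, image):
    """Edge-aware first-order smoothness on the predicted depth map."""
    dx_d = (depth[:, :, :, 1:] - depth[:, :, :, :-1]).abs()
    dy_d = (depth[:, :, 1:, :] - depth[:, :, :-1, :]).abs()
    dx_i = (image[:, :, :, 1:] - image[:, :, :, :-1]).abs().mean(1, keepdim=True)
    dy_i = (image[:, :, 1:, :] - image[:, :, :-1, :]).abs().mean(1, keepdim=True)
    return (dx_d * torch.exp(-dx_i)).mean() + (dy_d * torch.exp(-dy_i)).mean()
```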

    METHOD FOR ESTIMATING HIGH-QUALITY DEPTH MAPS BASED ON DEPTH PREDICTION AND ENHANCEMENT SUBNETWORKS

    Publication Number: US20200265597A1

    Publication Date: 2020-08-20

    Application Number: US16649322

    Application Date: 2019-01-07

    Abstract: The present invention provides a method for estimating high-quality depth maps based on depth prediction and enhancement sub-networks, belonging to the technical field of image processing and computer vision. The method constructs a depth prediction sub-network to predict depth information from a color image and uses a depth enhancement sub-network to obtain a high-quality depth map by recovering the low-resolution depth map. The system is easy to construct, and a well-trained end-to-end network produces the high-quality depth map directly from the corresponding color image. The algorithm is easy to implement. It uses the high-frequency component of the color image to help recover the depth boundary information lost through the down-sampling operators in the depth prediction sub-network, and finally obtains high-quality, high-resolution depth maps. It uses a spatial pyramid pooling structure to increase the accuracy of depth prediction for multi-scale objects in the scene.
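
    The spatial pyramid pooling structure mentioned above can be sketched in PyTorch roughly as follows; the pooling scales and channel counts are assumptions rather than the patented configuration.

```python
# Generic spatial pyramid pooling block for multi-scale context (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPyramidPooling(nn.Module):
    def __init__(self, in_ch, out_ch, scales=(1, 2, 4, 8)):
        # out_ch should be divisible by len(scales).
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch // len(scales), kernel_size=1) for _ in scales
        )
        self.fuse = nn.Conv2d(in_ch + out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x]
        for scale, conv in zip(self.scales, self.branches):
            pooled = F.adaptive_avg_pool2d(x, output_size=scale)   # coarse context
            feats.append(F.interpolate(conv(pooled), size=(h, w),
                                       mode="bilinear", align_corners=False))
        return self.fuse(torch.cat(feats, dim=1))
```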

    MULTISPECTRAL CAMERA DYNAMIC STEREO CALIBRATION ALGORITHM BASED ON SALIENCY FEATURES

    Publication Number: US20220028043A1

    Publication Date: 2022-01-27

    Application Number: US17284394

    Application Date: 2020-03-05

    Abstract: A multispectral camera dynamic stereo calibration algorithm based on saliency features. The joint self-calibration method comprises the following steps: step 1, conducting de-distortion and binocular rectification on the original images according to the intrinsic parameters and original extrinsic parameters of an infrared camera and a visible light camera; step 2, detecting the saliency of the infrared image and the visible light image respectively using a histogram contrast method; step 3, extracting feature points from the infrared image and the visible light image; step 4, matching the feature points extracted in the previous step; step 5, judging the feature point coverage area; and step 6, correcting the calibration result. The present invention addresses changes in the positional relationship between the infrared camera and the visible light camera caused by factors such as temperature, humidity and vibration.
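
    Steps 3 to 5 (feature extraction, matching and the coverage-area check) can be sketched with ORB features as below; the saliency detection and calibration-correction steps are omitted, and the ratio-test threshold, grid size and coverage threshold are assumptions.

```python
# Hypothetical sketch of cross-band feature matching with a coverage check.
import cv2

def match_cross_band(ir_gray, vis_gray, coverage_thresh=0.5, grid=(4, 4)):
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(ir_gray, None)
    kp2, des2 = orb.detectAndCompute(vis_gray, None)
    if des1 is None or des2 is None:
        return [], False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test to keep only distinctive matches.
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

    # Coverage check: matched points should spread over enough image cells
    # before any extrinsic correction is attempted.
    h, w = ir_gray.shape[:2]
    occupied = set()
    for m in good:
        x, y = kp1[m.queryIdx].pt
        occupied.add((int(x * grid[0] / w), int(y * grid[1] / h)))
    covered = len(occupied) / (grid[0] * grid[1]) >= coverage_thresh
    return good, covered
```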

    DEPTH ESTIMATION AND COLOR CORRECTION METHOD FOR MONOCULAR UNDERWATER IMAGES BASED ON DEEP NEURAL NETWORK

    Publication Number: US20210390339A1

    Publication Date: 2021-12-16

    Application Number: US17112499

    Application Date: 2020-12-04

    Abstract: The invention discloses a depth estimation and color correction method for monocular underwater images based on a deep neural network, which belongs to the field of image processing and computer vision. The framework consists of two parts: a style transfer subnetwork and a task subnetwork. The style transfer subnetwork is built on a generative adversarial network and transfers the appearance of underwater images onto land images, producing abundant and effective synthetic labeled data. The task subnetwork combines the underwater depth estimation and color correction tasks in a stacked network structure and learns them collaboratively to improve their respective accuracies; a domain adaptation strategy reduces the gap between synthetic and real underwater images, improving the network's ability to process real underwater images.
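
    One common way to realize a domain adaptation strategy of this kind is a gradient reversal layer feeding a small domain classifier, sketched below; this is a generic illustration and not necessarily the adaptation scheme used in the patent.

```python
# Generic gradient-reversal-based domain adaptation component (sketch).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient so the feature extractor learns
        # features the domain classifier cannot separate.
        return -ctx.lambd * grad_output, None

class DomainClassifier(nn.Module):
    """Predicts whether features come from synthetic or real underwater images."""
    def __init__(self, in_ch, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 2),
        )

    def forward(self, feats):
        return self.head(GradReverse.apply(feats, self.lambd))
```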
