-
Publication No.: US20160350904A1
Publication Date: 2016-12-01
Application No.: US15232229
Filing Date: 2016-08-09
Applicant: Huawei Technologies Co., Ltd.
Inventor: Guofeng Zhang , Hujun Bao , Kangkan Wang , Jiong Zhou
CPC classification number: G06T7/85 , G06K9/6202 , G06T7/337 , G06T7/579 , G06T7/593 , G06T7/73 , G06T17/00 , G06T2207/10028 , G06T2207/20221 , G06T2207/30244
Abstract: Embodiments of the present disclosure disclose a static object reconstruction method and system applied to the field of graphics and image processing technologies. In these embodiments, when the static object reconstruction system fails to compute an extrinsic camera parameter from a three-dimensional feature point within a preset time, this indicates that the depth data collected by a depth camera is lost or damaged, and a two-dimensional feature point is used instead to compute the extrinsic camera parameter, so that the point clouds of an image frame are aligned according to that parameter. By mixing two-dimensional and three-dimensional feature points in this way, a static object can still be reconstructed successfully even when the depth data collected by the depth camera is lost or damaged.
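The fallback described in this abstract, preferring 3-D feature correspondences and dropping to a 2-D path when depth data is lost or damaged, can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names, the NaN/point-count test for "lost or damaged" depth, and the SVD-based (Kabsch) rigid alignment are all assumptions made for the sketch.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Estimate R, t aligning src -> dst (Kabsch/SVD); src, dst are (N, 3)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def estimate_extrinsics(pts3d_prev, pts3d_cur, min_valid=3):
    """Prefer 3-D correspondences; signal the 2-D fallback when depth is
    unusable (here modelled as NaN entries or too few valid points)."""
    valid = (np.isfinite(pts3d_prev).all(axis=1)
             & np.isfinite(pts3d_cur).all(axis=1))
    if valid.sum() >= min_valid:
        R, t = rigid_transform_3d(pts3d_prev[valid], pts3d_cur[valid])
        return "3d", R, t
    # The 2-D path (e.g. pose from 2-D feature matches) would run here;
    # a full solver is out of scope for this sketch, so it is only signalled.
    return "2d", None, None
```

For example, feeding in two point sets related by a known rotation and translation recovers that motion via the 3-D path, while an all-NaN depth frame triggers the 2-D fallback.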
-
Publication No.: US09830701B2
Publication Date: 2017-11-28
Application No.: US15232229
Filing Date: 2016-08-09
Applicant: Huawei Technologies Co., Ltd.
Inventor: Guofeng Zhang , Hujun Bao , Kangkan Wang , Jiong Zhou
IPC: G06K9/00 , G06T7/00 , G06T17/00 , G06K9/62 , G06T7/80 , G06T7/33 , G06T7/73 , G06T7/579 , G06T7/593
CPC classification number: G06T7/85 , G06K9/6202 , G06T7/337 , G06T7/579 , G06T7/593 , G06T7/73 , G06T17/00 , G06T2207/10028 , G06T2207/20221 , G06T2207/30244
Abstract: Embodiments of the present disclosure disclose a static object reconstruction method and system applied to the field of graphics and image processing technologies. In these embodiments, when the static object reconstruction system fails to compute an extrinsic camera parameter from a three-dimensional feature point within a preset time, this indicates that the depth data collected by a depth camera is lost or damaged, and a two-dimensional feature point is used instead to compute the extrinsic camera parameter, so that the point clouds of an image frame are aligned according to that parameter. By mixing two-dimensional and three-dimensional feature points in this way, a static object can still be reconstructed successfully even when the depth data collected by the depth camera is lost or damaged.
-
Publication No.: US20160379375A1
Publication Date: 2016-12-29
Application No.: US15263668
Filing Date: 2016-09-13
Applicant: Huawei Technologies Co., Ltd.
Inventor: Yadong Lu , Guofeng Zhang , Hujun Bao
CPC classification number: G06T7/246 , G01C11/06 , G06K9/00201 , G06K9/00664 , G06T7/579 , G06T2207/10021 , G06T2207/30244
Abstract: A camera tracking method includes obtaining an image set of a current frame captured by a binocular camera; separately extracting feature points of each image in the image set of the current frame; obtaining a matching feature point set of the image set according to the rule that scene depths of adjacent regions on an image are close to each other; separately estimating a three-dimensional location of the scene point corresponding to each pair of matching feature points in a local coordinate system of the current frame, and a three-dimensional location of that scene point in a local coordinate system of the next frame; estimating a motion parameter of the binocular camera for the next frame, using the invariance of center-of-mass coordinates under rigid transformation, according to the three-dimensional locations of the scene points corresponding to the matching feature points; and optimizing the motion parameter of the binocular camera for the next frame.
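The property this abstract relies on, that center-of-mass (barycentric) coordinates are invariant under rigid transformation, can be checked numerically. The sketch below is an illustration of the mathematical property only; the choice of control points, the function names, and the quaternion-based random rotation are assumptions of the sketch, not the patent's formulation.

```python
import numpy as np

def barycentric_coords(point, ctrl):
    """Coefficients a with point = a @ ctrl and a.sum() == 1; ctrl is (4, 3)
    and must be affinely independent (a non-degenerate tetrahedron)."""
    A = np.vstack([ctrl.T, np.ones(4)])      # 4x4 linear system
    b = np.append(point, 1.0)
    return np.linalg.solve(A, b)

def random_rigid(rng):
    """Random rotation (from a unit quaternion) and translation."""
    q = rng.standard_normal(4)
    w, x, y, z = q / np.linalg.norm(q)
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])
    return R, rng.standard_normal(3)
```

Since a rigid transform maps `p = sum(a_i * c_i)` to `R @ p + t = sum(a_i * (R @ c_i + t))` whenever the coefficients sum to one, the same coefficients `a` describe the point in both frames; solving for them before and after the transform yields identical values, which is what lets the method carry scene-point constraints across frames when estimating the camera motion.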
-