Robust tracking using point and line features
    2.
    Invention Grant
    Robust tracking using point and line features (In force)

    Publication Number: US09406137B2

    Publication Date: 2016-08-02

    Application Number: US14278928

    Filing Date: 2014-05-15

    Abstract: Disclosed embodiments pertain to apparatus, systems, and methods for robust feature based tracking. In some embodiments, a score may be computed for a camera captured current image comprising a target object. The score may be based on one or more metrics determined from a comparison of features in the current image and a prior image captured by the camera. The comparison may be based on an estimated camera pose for the current image. In some embodiments, one of a point based, an edge based, or a combined point and edge based feature correspondence method may be selected based on a comparison of the score with a point threshold and/or a line threshold, the point and line thresholds being obtained from a model of the target. The camera pose may be refined by establishing feature correspondences using the selected method between the current image and a model image.

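    The abstract describes a selection step that switches between point-based, edge-based, and combined point-and-edge correspondence depending on how the image score compares with thresholds obtained from the target model. The snippet below is a minimal, hypothetical sketch of that selection logic only; the function name, the threshold values, and the ordering of the checks are assumptions for illustration, not the patented implementation.

        # Hypothetical sketch of score-based method selection; the thresholds are
        # assumed to come from an offline model of the target object.
        def select_correspondence_method(score, point_threshold, line_threshold):
            if score >= point_threshold:
                return "point_based"           # frame is rich in stable keypoints
            if score >= line_threshold:
                return "combined_point_edge"   # supplement sparse points with edges
            return "edge_based"                # low-texture frame: rely on lines/edges

        # Example: a moderately textured frame selects the combined method.
        print(select_correspondence_method(score=0.42,
                                           point_threshold=0.70,
                                           line_threshold=0.30))
        # -> combined_point_edge

    The selected method is then used to establish correspondences between the current image and a model image when refining the camera pose, as the abstract states.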

    Online reference generation and tracking for multi-user augmented reality
    4.
    Invention Grant
    Online reference generation and tracking for multi-user augmented reality (In force)

    Publication Number: US09558557B2

    Publication Date: 2017-01-31

    Application Number: US14662555

    Filing Date: 2015-03-19

    CPC classification number: G06T7/74 G06T19/006 G06T2207/10004 G06T2207/30244

    Abstract: A multi-user augmented reality (AR) system operates without a previously acquired common reference by generating a reference image on the fly. The reference image is produced by capturing at least two images of a planar object and using the images to determine a pose (position and orientation) of a first mobile platform with respect to the planar object. Based on the orientation of the mobile platform, an image of the planar object, which may be one of the initial images or a subsequently captured image, is warped to produce the reference image of a front view of the planar object. The reference image may be produced by the mobile platform or by, e.g., a server. Other mobile platforms may determine their pose with respect to the planar object using the reference image to perform a multi-user augmented reality application.

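    The core of the abstract is the on-the-fly construction of a fronto-parallel reference image from two views of a planar target. The sketch below, assuming the OpenCV API, shows one plausible shape for that step: match features between the two captured views, estimate the plane-induced homography relating them (from which the relative pose can be recovered, e.g. with cv2.decomposeHomographyMat), and warp one view to a front view. The homography H_to_front that maps a captured view to the fronto-parallel reference depends on the recovered pose and the camera intrinsics; its derivation is omitted and it is treated as an input here.

        import cv2
        import numpy as np

        def estimate_plane_homography(img1, img2):
            """Homography induced by the planar target between two grayscale views."""
            orb = cv2.ORB_create(1000)
            k1, d1 = orb.detectAndCompute(img1, None)
            k2, d2 = orb.detectAndCompute(img2, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
            pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
            H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
            return H

        def warp_to_front_view(img, H_to_front, size=(640, 480)):
            """Produce the shared reference image: a front view of the planar object."""
            return cv2.warpPerspective(img, H_to_front, size)

    Whether this warp runs on the mobile platform or on a server is an implementation choice the abstract leaves open; either party can then distribute the resulting reference image to the other users.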

    ONLINE REFERENCE GENERATION AND TRACKING FOR MULTI-USER AUGMENTED REALITY
    5.
    Invention Application
    ONLINE REFERENCE GENERATION AND TRACKING FOR MULTI-USER AUGMENTED REALITY (In force)

    Publication Number: US20150193935A1

    Publication Date: 2015-07-09

    Application Number: US14662555

    Filing Date: 2015-03-19

    CPC classification number: G06T7/74 G06T19/006 G06T2207/10004 G06T2207/30244

    Abstract: A multi-user augmented reality (AR) system operates without a previously acquired common reference by generating a reference image on the fly. The reference image is produced by capturing at least two images of a planar object and using the images to determine a pose (position and orientation) of a first mobile platform with respect to the planar object. Based on the orientation of the mobile platform, an image of the planar object, which may be one of the initial images or a subsequently captured image, is warped to produce the reference image of a front view of the planar object. The reference image may be produced by the mobile platform or by, e.g., a server. Other mobile platforms may determine their pose with respect to the planar object using the reference image to perform a multi-user augmented reality application.

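    This entry is the published application for the same invention as the grant above, so rather than repeat the warping step, the sketch below (again assuming the OpenCV API) illustrates the last sentence of the abstract: a second mobile platform estimating its pose with respect to the planar object by matching its camera frame against the shared reference image and solving a planar PnP problem. The matching step, the camera intrinsics K, and the plane_scale value are assumptions for illustration.

        import cv2
        import numpy as np

        def pose_from_reference(ref_pts, frame_pts, K, plane_scale=0.001):
            """Pose of the planar target in this camera's frame.

            ref_pts, frame_pts: Nx2 arrays of matched pixel coordinates in the
            reference image and the current camera frame.
            K: 3x3 camera intrinsic matrix.
            plane_scale: metres per reference-image pixel (illustrative value).
            """
            # The reference image is a front view, so its pixels lie on the z = 0 plane.
            obj_pts = np.hstack([ref_pts * plane_scale,
                                 np.zeros((len(ref_pts), 1))]).astype(np.float32)
            ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts,
                                                   frame_pts.astype(np.float32),
                                                   K, None)
            R, _ = cv2.Rodrigues(rvec)
            return R, tvec   # rotation and translation of the plane w.r.t. the camera

    Because every user solves against the same reference image, the recovered poses share a common coordinate frame, which is what allows the multi-user AR application to proceed without a previously acquired common reference.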

    Systems and Methods for Feature-Based Tracking
    6.
    Invention Application
    Systems and Methods for Feature-Based Tracking (Under examination, published)

    Publication Number: US20140369557A1

    Publication Date: 2014-12-18

    Application Number: US14263866

    Filing Date: 2014-04-28

    Abstract: Disclosed embodiments pertain to feature based tracking. In some embodiments, a camera pose may be obtained relative to a tracked object in a first image and a predicted camera pose relative to the tracked object may be determined for a second image subsequent to the first image based, in part, on a motion model of the tracked object. An updated SE(3) camera pose may then be obtained based, in part on the predicted camera pose, by estimating a plane induced homography using an equation of a dominant plane of the tracked object, wherein the plane induced homography is used to align a first lower resolution version of the first image and a first lower resolution version of the second image by minimizing the sum of their squared intensity differences. A feature tracker may be initialized with the updated SE(3) camera pose.

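    Two pieces of this abstract lend themselves to a short worked sketch: the plane-induced homography H = K (R - t n^T / d) K^(-1) built from the predicted relative camera motion (R, t) and the tracked object's dominant plane n^T X = d, and the sum-of-squared-intensity-differences cost that is minimized over that homography on lower-resolution versions of the two frames. The code below, assuming NumPy and OpenCV, shows those two pieces only; the motion model that predicts (R, t) and the optimizer that actually minimizes the cost (e.g. Gauss-Newton over the homography parameters) are omitted, and all variable names are illustrative rather than taken from the patent.

        import cv2
        import numpy as np

        def plane_induced_homography(K, R, t, n, d):
            """Homography mapping pixels of the previous frame to the current one
            via the dominant plane n^T X = d (R, t: predicted relative motion)."""
            H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
            return H / H[2, 2]

        def ssd_alignment_cost(prev_gray, curr_gray, H, scale=0.25):
            """Sum of squared intensity differences between downsampled frames after
            warping the previous frame with H; this is the quantity the abstract says
            is minimized to obtain the updated SE(3) camera pose."""
            small_prev = cv2.resize(prev_gray, None, fx=scale, fy=scale)
            small_curr = cv2.resize(curr_gray, None, fx=scale, fy=scale)
            S = np.diag([scale, scale, 1.0])
            H_small = S @ H @ np.linalg.inv(S)   # express H in downsampled pixel units
            h, w = small_curr.shape[:2]
            warped = cv2.warpPerspective(small_prev, H_small, (w, h))
            diff = small_curr.astype(np.float32) - warped.astype(np.float32)
            return float(np.sum(diff * diff))

    Once the pose that minimizes this cost is found, it seeds the feature tracker for the new frame, as the final sentence of the abstract describes.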
