Abstract:
A first map comprising local features and 3D locations of the local features is generated, the local features comprising visible features in a current image and a corresponding set of covisible features. A second map comprising prior features and 3D locations of the prior features may be determined, where each prior feature: was first imaged at a time prior to the first imaging of any of the local features, and lies within a threshold distance of at least one local feature. A first subset comprising previously imaged local features in the first map and a corresponding second subset of the prior features in the second map is determined by comparing the first and second maps, where each local feature in the first subset corresponds to a distinct prior feature in the second subset. A transformation mapping a subset of local features to a subset of prior features is determined.
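The final step above, estimating a transformation that maps the matched subset of local 3D features onto the corresponding prior 3D features, can be sketched as a standard least-squares rigid alignment (the Kabsch/SVD method). This is an illustrative sketch, not the patent's claimed implementation; function and variable names are invented for the example.

```python
import numpy as np

def rigid_transform(local_pts, prior_pts):
    """Estimate rotation R and translation t mapping local_pts onto
    prior_pts (both N x 3 arrays of corresponding 3D feature locations)
    in the least-squares sense, via the Kabsch/SVD method."""
    c_local = local_pts.mean(axis=0)
    c_prior = prior_pts.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets
    H = (local_pts - c_local).T @ (prior_pts - c_prior)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_prior - R @ c_local
    return R, t
```

Given the one-to-one correspondence between the first and second subsets described above, `R @ p + t` carries each local feature `p` to its prior counterpart; in practice the correspondences would first be filtered for outliers (e.g. with RANSAC) before this closed-form fit.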
Abstract:
Disclosed embodiments pertain to apparatus, systems, and methods for robust feature based tracking. In some embodiments, a score may be computed for a camera captured current image comprising a target object. The score may be based on one or more metrics determined from a comparison of features in the current image and a prior image captured by the camera. The comparison may be based on an estimated camera pose for the current image. In some embodiments, one of a point based, an edge based, or a combined point and edge based feature correspondence method may be selected based on a comparison of the score with a point threshold and/or a line threshold, the point and line thresholds being obtained from a model of the target. The camera pose may be refined by establishing feature correspondences using the selected method between the current image and a model image.
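The selection logic described above, choosing among point based, edge based, and combined correspondence methods by comparing a score against point and line thresholds, can be sketched as a simple decision function. This is only an illustrative reading of the abstract: the names, the ordering of the thresholds, and the interpretation of the score as "higher means richer point texture" are assumptions, not the patent's specification.

```python
def select_correspondence_method(score, point_threshold, line_threshold):
    """Pick a feature-correspondence method for pose refinement by
    comparing an image score against per-target thresholds obtained
    from the target model. Assumes point_threshold >= line_threshold
    and that a higher score indicates stronger point features."""
    if score >= point_threshold:
        return "point"          # enough distinctive points alone
    if score >= line_threshold:
        return "combined"       # supplement points with edge features
    return "edge"               # fall back to edge correspondences
```

The chosen method would then be used to establish correspondences between the current image and a model image, refining the estimated camera pose.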

Abstract:
A multi-user augmented reality (AR) system operates without a previously acquired common reference by generating a reference image on the fly. The reference image is produced by capturing at least two images of a planar object and using the images to determine a pose (position and orientation) of a first mobile platform with respect to the planar object. Based on the orientation of the mobile platform, an image of the planar object, which may be one of the initial images or a subsequently captured image, is warped to produce the reference image of a front view of the planar object. The reference image may be produced by the mobile platform or by, e.g., a server. Other mobile platforms may determine their pose with respect to the planar object using the reference image to perform a multi-user augmented reality application.
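The warp described above, producing a front (fronto-parallel) view of the planar object from the estimated pose, amounts to applying a homography derived from the camera intrinsics and the pose with respect to the plane. The sketch below is a standard formulation under the assumption that the object lies in the plane z = 0 of its own frame; the names and the use of this exact parameterization are illustrative, not taken from the patent.

```python
import numpy as np

def frontal_warp_homography(K, R, t):
    """Homography mapping pixels of a captured image to front-view
    coordinates of a planar object in the plane z = 0 of the object
    frame. K is the 3x3 intrinsic matrix; (R, t) is the camera pose
    with respect to the object. A point (X, Y, 0) on the plane
    projects as K [r1 r2 t] (X, Y, 1)^T, so inverting that 3x3 map
    carries image pixels back to the fronto-parallel view."""
    H_plane_to_image = K @ np.column_stack((R[:, 0], R[:, 1], t))
    return np.linalg.inv(H_plane_to_image)
```

The reference image itself would then be produced by resampling the captured image through this homography (e.g. with `cv2.warpPerspective`), after which other mobile platforms can estimate their poses against the common reference.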
Abstract:
Disclosed embodiments pertain to feature based tracking. In some embodiments, a camera pose may be obtained relative to a tracked object in a first image, and a predicted camera pose relative to the tracked object may be determined for a second image subsequent to the first image based, in part, on a motion model of the tracked object. An updated SE(3) camera pose may then be obtained based, in part, on the predicted camera pose, by estimating a plane induced homography using an equation of a dominant plane of the tracked object, wherein the plane induced homography is used to align a first lower resolution version of the first image and a first lower resolution version of the second image by minimizing the sum of their squared intensity differences. A feature tracker may be initialized with the updated SE(3) camera pose.
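The plane induced homography mentioned above has a standard closed form: given the relative pose (R, t) between the two camera views and a dominant plane n·X = d expressed in the first camera's frame, corresponding pixels of the two images are related by H = K (R + t nᵀ / d) K⁻¹. The sketch below shows that formula only; the surrounding coarse-to-fine alignment that minimizes the sum of squared intensity differences (e.g. a Lucas-Kanade style iteration over the low-resolution images) is not shown, and all names are illustrative.

```python
import numpy as np

def plane_induced_homography(K, R, t, n, d):
    """Homography mapping pixels of image 1 to image 2 for points on
    the plane n.X = d (plane expressed in camera-1 coordinates).
    (R, t) is the relative pose taking camera-1 points to camera-2:
    X2 = R X1 + t. For X1 on the plane, t = t (n.X1)/d, so
    X2 = (R + t n^T / d) X1, which conjugated by K gives the pixel map."""
    return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)
```

During tracking, this homography warps the lower resolution version of the first image toward the second; the pose parameters that minimize the squared intensity differences of the aligned images yield the updated SE(3) camera pose used to initialize the feature tracker.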