Abstract:
A method generates a binary descriptor associated with a given point in a current frame of a succession of video frames obtained by an apparatus such as an image sensor. The method includes determining a pattern of point pairs around said given point in the current frame and performing an intensity comparison between the two points of each pair. The apparatus may rotate between the previous frame and the current frame. The method therefore includes processing the pattern of point pairs of the current frame with three-dimensional rotation information representative of the apparatus rotation between the previous frame and the current frame, obtained from inertial measurements provided by at least one inertial sensor.
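A minimal sketch of such a descriptor is given below, assuming a BRIEF-like pair pattern and assuming that only the in-plane component of the inertial rotation is used to re-orient the pattern; the names (`pattern`, `R`, `binary_descriptor`) are illustrative, not those of the patented implementation.

```python
import numpy as np

def binary_descriptor(image, keypoint, pattern, R):
    """image: 2D grayscale array; keypoint: (x, y) pixel coordinates;
    pattern: (N, 4) array of point-pair offsets (x1, y1, x2, y2);
    R: 3x3 rotation matrix of the apparatus between the previous and
    current frame, e.g. integrated from gyroscope samples."""
    # Use the in-plane component of the 3D rotation to re-orient the
    # pattern (an assumption made for this sketch).
    theta = np.arctan2(R[1, 0], R[0, 0])
    c, s = np.cos(theta), np.sin(theta)
    R2 = np.array([[c, -s], [s, c]])

    x0, y0 = keypoint
    bits = []
    for x1, y1, x2, y2 in pattern:
        # Rotate both offsets of the pair, then sample around the keypoint.
        u1 = R2 @ np.array([x1, y1])
        u2 = R2 @ np.array([x2, y2])
        p1 = image[int(round(y0 + u1[1])), int(round(x0 + u1[0]))]
        p2 = image[int(round(y0 + u2[1])), int(round(x0 + u2[0]))]
        # The intensity comparison between the two points of the pair
        # yields one bit of the descriptor.
        bits.append(1 if p1 < p2 else 0)
    return np.packbits(bits)
```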
Abstract:
A method determines a movement of an apparatus between capturing first and second images. The method includes testing model hypotheses of the movement by, for example, a RANSAC algorithm operating on a set of first points in the first image and assumed corresponding second points in the second image to deliver the best model hypothesis. The testing includes, for each first point, calculating a corresponding estimated point using the tested model hypothesis, determining the back-projection error between the estimated point and the second point in the second image, and comparing each back-projection error with a threshold. The testing further comprises, for each first point, determining a correction term based on an estimation of the depth of the first point in the first image and an estimation of the movement between the first and second images, and determining the threshold associated with the first point by using said correction term.
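A minimal sketch of this test phase is shown below, assuming a pinhole camera model and an assumed (illustrative) form for the correction term, in which nearby points under large translations tolerate a larger back-projection error; all function and variable names are hypothetical.

```python
import numpy as np

def count_inliers(hypothesis_R, hypothesis_t, pts1, pts2, depths, K,
                  motion_estimate_t, base_threshold=2.0):
    """pts1, pts2: (N, 2) matched pixel coordinates in the first/second image;
    depths: (N,) estimated depths of the first points; K: 3x3 intrinsics;
    motion_estimate_t: rough estimate of the translation between the images."""
    K_inv = np.linalg.inv(K)
    inliers = 0
    for (u1, v1), (u2, v2), z in zip(pts1, pts2, depths):
        # Back-project the first point to 3D using its estimated depth,
        # transform it with the tested hypothesis and re-project it.
        X = z * (K_inv @ np.array([u1, v1, 1.0]))
        X2 = hypothesis_R @ X + hypothesis_t
        proj = K @ X2
        est = proj[:2] / proj[2]
        error = np.linalg.norm(est - np.array([u2, v2]))

        # Correction term based on depth and motion estimates (assumed form).
        correction = np.linalg.norm(motion_estimate_t) / max(z, 1e-6)
        threshold = base_threshold * (1.0 + correction)
        if error < threshold:
            inliers += 1
    return inliers
```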
Abstract:
A method estimates the ego-motion of an apparatus between a first image and a second image of a succession of images captured by the apparatus, within a SLAM-type algorithm comprising a localization part, which includes the ego-motion estimation, and a mapping part. The ego-motion comprises a 3D rotation of the apparatus and a position variation of the apparatus in 3D space. The ego-motion estimation comprises performing a first part and then a second part: the first part includes estimating the 3D rotation of the apparatus, and the second part, the 3D rotation having been estimated, includes estimating the position variation of the apparatus in 3D space.
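The two-part split can be sketched as follows; the helper functions `estimate_rotation` and `estimate_translation` are placeholders for whatever rotation and translation estimators a concrete SLAM pipeline would use, not part of the described method.

```python
import numpy as np

def estimate_ego_motion(frame_prev, frame_curr,
                        estimate_rotation, estimate_translation):
    # First part: estimate only the 3D rotation of the apparatus
    # (e.g. from matched features and/or inertial measurements).
    R = estimate_rotation(frame_prev, frame_curr)          # 3x3 matrix

    # Second part: with the rotation now fixed, estimate the position
    # variation of the apparatus in 3D space.
    t = estimate_translation(frame_prev, frame_curr, R)    # 3-vector

    # The ego-motion is the rigid transform combining both parts.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```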
Abstract:
A method estimates a position variation of a motion of an apparatus between a first instant and a second instant, the motion including a rotation of the apparatus and said position variation, and the position variation including a position and a velocity. Estimating the position variation comprises performing particle filtering to estimate said position and velocity from the probabilistic-weighted average of the particles, the particle filter using a known estimation of said rotation and being parameterized to take into account the quality of that rotation estimation.
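A minimal sketch of one such particle filtering step follows, assuming a constant-acceleration motion model and assuming that the rotation quality simply scales the injected process noise; the state layout, noise model, and `likelihood_fn` are illustrative assumptions, not the parameterization of the described method.

```python
import numpy as np

def particle_filter_step(particles, weights, R_est, rotation_quality,
                         accel_meas, dt, likelihood_fn):
    """particles: (N, 6) array of [position(3), velocity(3)] states;
    weights: (N,) normalized weights; R_est: known 3x3 rotation estimate;
    rotation_quality: scalar in (0, 1], higher means more reliable;
    likelihood_fn: returns the measurement likelihood of each particle."""
    N = particles.shape[0]
    # Lower rotation quality -> more diffusion injected into the particles.
    noise_scale = 1.0 / rotation_quality

    # Propagate with the rotated acceleration measurement plus
    # quality-dependent process noise.
    acc_world = R_est @ accel_meas
    particles[:, :3] += particles[:, 3:] * dt + 0.5 * acc_world * dt**2
    particles[:, 3:] += acc_world * dt
    particles += np.random.randn(N, 6) * noise_scale * 0.01

    # Update the weights from the measurement likelihood and normalize.
    weights = weights * likelihood_fn(particles)
    weights /= weights.sum()

    # State estimate: probabilistic-weighted average of the particles.
    state = weights @ particles
    position, velocity = state[:3], state[3:]
    return particles, weights, position, velocity
```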
Abstract:
The method includes, for each current pair of first and second successive video images, determining a movement between the two images. The determining includes a phase of testing homography model hypotheses of the movement by a RANSAC-type algorithm operating on a set of points in the first image and assumed corresponding points in the second image, so as to deliver the homography model hypothesis that defines the movement. The test phase includes testing first homography model hypotheses of the movement obtained from a set of second points in the first image and second assumed corresponding points in the second image. At least one second homography model hypothesis is obtained from auxiliary information supplied by an inertial sensor and representative of a movement of the image sensor between the captures of the two successive images of the pair.
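A minimal sketch of this test phase is given below, assuming that the inertial auxiliary information is reduced to a rotation and turned into a pure-rotation homography K R K^-1; the use of OpenCV's `cv2.findHomography` and `cv2.perspectiveTransform`, the intrinsics handling, and all parameter names are assumptions for illustration.

```python
import numpy as np
import cv2

def best_homography(pts1, pts2, K, R_imu, n_random=100, threshold=3.0):
    """pts1, pts2: (N, 2) matched points in the two successive images;
    K: 3x3 camera intrinsics; R_imu: 3x3 rotation of the image sensor
    between the two captures, integrated from the inertial sensor."""
    hypotheses = []
    # First hypotheses: estimated from random minimal samples of 4 matches.
    for _ in range(n_random):
        idx = np.random.choice(len(pts1), 4, replace=False)
        H, _ = cv2.findHomography(pts1[idx].astype(np.float64),
                                  pts2[idx].astype(np.float64))
        if H is not None:
            hypotheses.append(H)
    # Second hypothesis: pure-rotation homography from the inertial data.
    hypotheses.append(K @ R_imu @ np.linalg.inv(K))

    def inlier_count(H):
        proj = cv2.perspectiveTransform(
            pts1.reshape(-1, 1, 2).astype(np.float64), H)
        err = np.linalg.norm(proj.reshape(-1, 2) - pts2, axis=1)
        return int(np.sum(err < threshold))

    # Keep the hypothesis with the most inliers; it defines the movement.
    return max(hypotheses, key=inlier_count)
```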