Abstract:
An accelerometer in a mobile device is calibrated by taking multiple measurements of acceleration vectors when the mobile device is held stationary at different orientations with respect to a plane normal. A circle is calculated that fits respective tips of measured acceleration vectors in the accelerometer coordinate system. The radius of the circle and the lengths of the measured acceleration vectors are used to calculate a rotation angle for aligning the accelerometer coordinate system with the mobile device surface. A gyroscope in the mobile device is calibrated by taking multiple measurements of a rotation axis when the mobile device is rotated at different rates with respect to the rotation axis. A line is calculated that fits the measurements. The angle between the line and an axis of the gyroscope coordinate system is used to align the gyroscope coordinate system with the mobile device surface.
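The geometry of the accelerometer calibration above can be sketched as follows. When the device rests on a surface and is rotated about the surface normal, the tips of the measured gravity vectors trace a circle whose radius, relative to the gravity magnitude, gives the misalignment angle. This is an illustrative sketch only: the abstract does not specify a fitting method, so a Kåsa algebraic circle fit on the x/y components is assumed here, and `misalignment_angle` is a hypothetical helper name.

```python
import numpy as np

def misalignment_angle(accel_vectors):
    """Estimate the tilt between the accelerometer z-axis and the
    device surface normal from gravity measurements taken at several
    rotations about the surface normal.

    accel_vectors: (N, 3) array of stationary accelerometer readings.
    (Illustrative sketch; the patented method's fit is unspecified.)"""
    a = np.asarray(accel_vectors, dtype=float)
    x, y = a[:, 0], a[:, 1]
    # Kasa fit: solve x^2 + y^2 = 2*cx*x + 2*cy*y + c for (cx, cy, c)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    radius = np.sqrt(c + cx**2 + cy**2)       # circle of vector tips
    g = np.linalg.norm(a, axis=1).mean()      # mean measured |g|
    return np.arcsin(radius / g)              # rotation angle (radians)
```

With a perfect misalignment of θ, the tip circle has radius |g|·sin θ, so the function recovers θ exactly on noise-free data.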
Abstract:
Embodiments disclosed pertain to the use of user equipment (UE) for the generation of a 3D exterior envelope of a structure based on captured images and a measurement set associated with each captured image. In some embodiments, a sequence of exterior images of a structure is captured and a corresponding measurement set comprising Inertial Measurement Unit (IMU) measurements, wireless measurements (including Global Navigation Satellite System (GNSS) measurements) and/or other non-wireless sensor measurements may be obtained concurrently. A closed-loop trajectory of the UE in global coordinates may be determined and a 3D structural envelope of the structure may be obtained based on the closed-loop trajectory and feature points in a subset of images selected from the sequence of exterior images of the structure.
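The closed-loop trajectory idea can be illustrated with a minimal drift-correction sketch: when the UE returns to its starting point, the end-to-start discrepancy is distributed along the path so the trajectory closes. This is an assumption for illustration; the abstract does not specify the correction scheme, and `close_loop` is a hypothetical helper.

```python
import numpy as np

def close_loop(positions):
    """Distribute the end-to-start drift linearly along a trajectory
    so that it forms a closed loop.  positions: (N, 3) array of UE
    positions in global coordinates, where the first and last entries
    should coincide.  (Simple sketch; the patented correction method
    is unspecified.)"""
    p = np.asarray(positions, dtype=float)
    drift = p[-1] - p[0]                       # accumulated drift
    weights = np.arange(len(p)) / (len(p) - 1) # 0 at start, 1 at end
    return p - np.outer(weights, drift)        # remove drift gradually
```

After correction the final position coincides with the starting position while earlier positions are only slightly adjusted.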
Abstract:
Embodiments disclosed pertain to systems, methods, and apparatus for the initialization of Computer Vision (CV) applications on user devices (UDs) comprising a camera and a display. In some embodiments, an optimal camera trajectory for initialization of a CV application may be determined based on an initial camera pose and an estimated pivot distance. For example, the initial camera pose may be estimated based on a first image captured by the camera. Further, the display may be updated in real-time with an indication of a desired movement direction for the camera. In some embodiments, the indication of desired movement direction may be based, in part, on a current camera pose and the optimal trajectory, where the current camera pose may be estimated based on a current image captured by the camera.
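One way to picture the desired movement direction is as the tangent of an orbit around the pivot point, which generates parallax for initialization. The following sketch assumes a fixed world up-axis and the hypothetical name `desired_direction`; it is not the patented computation.

```python
import numpy as np

def desired_direction(cam_pos, pivot_point):
    """Suggest a sideways movement direction that orbits the camera
    around an estimated pivot point, producing parallax useful for
    CV initialization.  (Illustrative; assumes a Y-up world frame.)"""
    to_pivot = pivot_point - cam_pos
    up = np.array([0.0, 1.0, 0.0])            # assumed world up-axis
    tangent = np.cross(up, to_pivot)          # perpendicular to view ray
    return tangent / np.linalg.norm(tangent)  # unit direction for the UI hint
```

For a camera at the origin looking at a pivot one unit ahead, the suggested direction is a pure sideways translation.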
Abstract:
Embodiments of the present invention are directed toward providing intelligent sampling strategies that make efficient use of an always-on camera. To do so, embodiments can utilize sensor information to determine contextual information regarding the mobile device and/or a user of the mobile device. A sampling rate of the always-on camera can then be modulated based on the contextual information.
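The modulation of the sampling rate by context can be sketched as a simple lookup. The context labels and interval values below are assumptions for illustration; the abstract leaves the concrete mapping open.

```python
def camera_sampling_interval(context):
    """Map an inferred device context to an always-on camera sampling
    interval in seconds.  (Illustrative values only; the patented
    modulation policy is unspecified.)"""
    intervals = {
        "in_pocket": 60.0,   # camera occluded: sample rarely
        "stationary": 10.0,  # scene changes slowly
        "walking": 2.0,
        "driving": 0.5,      # fast-changing scene: sample often
    }
    return intervals.get(context, 5.0)  # default for unknown contexts
```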
Abstract:
Systems, apparatus and methods for estimating gravity and/or scale in a mobile device are presented. A difference between an image-based pose and an inertia-based pose is used to update the estimates of gravity and/or scale. The image-based pose is computed from two poses and is scaled by the scale estimate before the difference is taken. The inertia-based pose is computed from accelerometer measurements, which are adjusted using the gravity estimate.
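The update driven by the pose difference can be sketched as one gradient-style step: scale the vision-derived translation, compare it with the translation integrated from gravity-compensated accelerometer data, and nudge both estimates to shrink the residual. This is a sketch of the idea, not the patented filter; the function name, learning rate, and gradient form are assumptions.

```python
import numpy as np

def update_gravity_and_scale(img_t, accel_t, gravity, scale, dt, lr=0.1):
    """One illustrative update of the gravity and scale estimates from
    the difference between an image-based translation (img_t, up to
    scale) and a double-integrated accelerometer translation (accel_t).
    (Sketch only; not the patented estimator.)"""
    inertial = accel_t - 0.5 * gravity * dt**2       # gravity-adjusted
    residual = scale * img_t - inertial              # pose difference
    scale = scale - lr * (residual @ img_t)          # descend w.r.t. scale
    gravity = gravity - lr * 0.5 * dt**2 * residual  # descend w.r.t. gravity
    return gravity, scale
```

Repeating this step with consistent measurements drives the residual, and hence the estimation error, toward zero.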
Abstract:
A Visual Inertial Tracker (VIT), such as a Simultaneous Localization And Mapping (SLAM) system based on an Extended Kalman Filter (EKF) framework (EKF-SLAM) can provide drift correction in calculations of a pose (translation and orientation) of a mobile device by obtaining location information regarding a target, obtaining an image of the target, estimating, from the image of the target, measurements relating to a pose of the mobile device based on the image and location information, and correcting a pose determination of the mobile device using an EKF, based, at least in part, on the measurements relating to the pose of the mobile device.
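The correction step in an EKF framework has the standard textbook form below: the target-derived measurement is compared with the predicted measurement, and the Kalman gain blends the innovation into the pose state. This is the generic EKF update, shown for orientation only; it is not the patent's specific state layout or measurement model.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic Extended Kalman Filter measurement update, the kind of
    step an EKF-SLAM tracker uses to correct its pose estimate with
    target-derived measurements.  (Standard textbook form.)
    x: state mean, P: state covariance, z: measurement,
    h: measurement function, H: its Jacobian at x, R: measurement noise."""
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y                     # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_new, P_new
```

In the scalar case with unit prior and measurement variance, the update moves the estimate halfway toward the measurement and halves the covariance.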
Abstract:
Methods and apparatus relating to enabling augmented reality applications using eye gaze tracking are disclosed. An exemplary method according to the disclosure includes displaying an image to a user of a scene viewable by the user, receiving information indicative of an eye gaze of the user, determining an area of interest within the image based on the eye gaze information, determining an image segment based on the area of interest, initiating an object recognition process on the image segment, and displaying results of the object recognition process.
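Determining the image segment from the area of interest can be as simple as clipping a box around the gaze point, which is then handed to the object recognizer. The helper name and box size below are assumptions for illustration.

```python
def gaze_segment(image_w, image_h, gaze_x, gaze_y, box=64):
    """Return a square region of interest (x0, y0, x1, y1) centered on
    the gaze point and clipped to the image bounds.  (Illustrative
    helper; the box size is an assumption.)"""
    x0 = max(0, min(gaze_x - box // 2, image_w - box))
    y0 = max(0, min(gaze_y - box // 2, image_h - box))
    return x0, y0, x0 + box, y0 + box
```

Cropping to this segment lets the recognition process run on a small window instead of the full frame.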
Abstract:
Exemplary methods, apparatuses, and systems infer a context of a user or device. A computer vision parameter is configured according to the inferred context, and a computer vision task is performed in accordance with the configured computer vision parameter. The computer vision task may be at least one of: a visual mapping of an environment of the device, a visual localization of the device or an object within the environment of the device, or a visual tracking of the device within the environment of the device.
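Configuring a computer vision parameter from an inferred context can be pictured as a context-to-parameter mapping. The contexts and values below are illustrative assumptions; the abstract leaves the mapping open.

```python
def configure_cv_params(context):
    """Pick computer-vision parameters from an inferred context.
    (Illustrative values only; not the patented configuration.)"""
    if context == "indoor":
        return {"max_features": 500, "search_radius_px": 15}
    if context == "outdoor":
        return {"max_features": 1000, "search_radius_px": 30}
    if context == "in_vehicle":
        return {"max_features": 300, "search_radius_px": 50}
    return {"max_features": 500, "search_radius_px": 20}  # default
```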
Abstract:
Method, apparatus, and computer program product for merging multiple maps for computer vision based tracking are disclosed. In one embodiment, a method of merging multiple maps for computer vision based tracking comprises receiving a plurality of maps of a scene in a venue from at least one mobile device, identifying multiple keyframes of the plurality of maps of the scene, and merging the multiple keyframes to generate a global map of the scene.
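The merging step can be sketched as collecting keyframes from each per-device map into one global map while dropping near-duplicates, here judged by pose proximity. The duplicate criterion and keyframe representation are assumptions for illustration; the abstract does not specify them.

```python
def merge_maps(maps):
    """Merge keyframes from several per-device maps into one global
    map, skipping keyframes whose pose nearly coincides with one
    already merged.  Each keyframe is (pose_xyz, features).
    (Simple sketch; the patented merging criteria are unspecified.)"""
    global_map = []
    for m in maps:
        for kf in m:
            pos = kf[0]
            dup = any(sum((a - b) ** 2 for a, b in zip(pos, g[0])) < 0.01
                      for g in global_map)   # near-duplicate pose check
            if not dup:
                global_map.append(kf)
    return global_map
```

Two maps that observed the same spot thus contribute only one keyframe for that pose to the global map.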