Abstract:
A mobile device determines a vision based pose using images captured by a camera and determines a sensor based pose using data from inertial sensors, such as accelerometers and gyroscopes. The vision based pose and sensor based pose are used separately in a visualization application, which displays separate graphics for the different poses. For example, the visualization application may be used to calibrate the inertial sensors, where the visualization application displays a graphic based on the vision based pose and a graphic based on the sensor based pose, and uses the displayed graphics to prompt a user to move the mobile device in a specific direction, thereby accelerating convergence of the calibration of the inertial sensors. Alternatively, the visualization application may be a motion based game or a photography application that displays separate graphics using the vision based pose and the sensor based pose.
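The calibration prompt described above can be sketched as a comparison of the two pose estimates: the axis along which they disagree most is the direction the user would be prompted to move. This is an illustrative simplification (positions only, no orientation; all names are assumptions, not the patented method):

```python
import numpy as np

def calibration_prompt(vision_pose, sensor_pose):
    """Compare a vision-based pose with a sensor-based pose and
    suggest the axis that contributes most to their disagreement.
    Poses are (x, y, z) positions here; the described method also
    involves orientation. Returns (axis_name, total_discrepancy)."""
    delta = np.asarray(sensor_pose, float) - np.asarray(vision_pose, float)
    axes = ["x", "y", "z"]
    worst = int(np.argmax(np.abs(delta)))       # axis of largest disagreement
    return axes[worst], float(np.linalg.norm(delta))
```

Moving along the suggested axis excites that degree of freedom, which is what lets the calibration filter converge faster.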
Abstract:
An accelerometer in a mobile device is calibrated by taking multiple measurements of acceleration vectors when the mobile device is held stationary at different orientations with respect to a plane normal. A circle is calculated that fits respective tips of measured acceleration vectors in the accelerometer coordinate system. The radius of the circle and the lengths of the measured acceleration vectors are used to calculate a rotation angle for aligning the accelerometer coordinate system with the mobile device surface. A gyroscope in the mobile device is calibrated by taking multiple measurements of a rotation axis when the mobile device is rotated at different rates with respect to the rotation axis. A line is calculated that fits the measurements. The angle between the line and an axis of the gyroscope coordinate system is used to align the gyroscope coordinate system with the mobile device surface.
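The accelerometer step can be sketched numerically: the tips of the measured gravity vectors trace a circle whose radius r, together with the mean vector length g, gives the misalignment angle via sin(θ) = r / g. A minimal sketch, assuming noiseless samples taken symmetrically about the plane normal (the simple centroid-based circle fit is illustrative):

```python
import numpy as np

def misalignment_angle(acc_vectors):
    """Estimate the accelerometer-to-device misalignment angle from
    acceleration vectors measured at several orientations about the
    plane normal. The vector tips ideally lie on a circle; its radius
    and the mean vector length give the tilt angle."""
    tips = np.asarray(acc_vectors, dtype=float)
    center = tips.mean(axis=0)
    radius = np.linalg.norm(tips - center, axis=1).mean()  # circle radius r
    g = np.linalg.norm(tips, axis=1).mean()                # vector length g
    return float(np.arcsin(radius / g))                    # rotation angle
```

The returned angle is what would be used to build the rotation aligning the accelerometer coordinate system with the device surface.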
Abstract:
Embodiments include devices and methods for automatically calibrating a camera. In various embodiments, an image sensor may capture an image. Locations of one or more points included in the captured image frames may be predicted and detected. Calibration parameters may be calculated based on differences between the predicted location of a selected point within an image frame and the observed location of the selected point within the captured image frame. The automatic camera calibration method may be repeated until the calibration parameters satisfy a calibration quality threshold.
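The quantity driving the loop is the mismatch between predicted and observed point locations, commonly measured as mean reprojection error. A minimal sketch of the stopping criterion (the function names and threshold are illustrative, not from the abstract):

```python
import numpy as np

def reprojection_error(predicted_pts, observed_pts):
    """Mean Euclidean distance between predicted and observed 2D point
    locations in an image frame."""
    p = np.asarray(predicted_pts, dtype=float)
    o = np.asarray(observed_pts, dtype=float)
    return float(np.linalg.norm(p - o, axis=1).mean())

def calibration_converged(predicted_pts, observed_pts, threshold=0.5):
    """True once the error satisfies the calibration quality threshold
    (threshold in pixels; value is an assumption)."""
    return reprojection_error(predicted_pts, observed_pts) < threshold
```

In the described method, calibration parameters would be re-estimated and the check repeated until `calibration_converged` holds.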
Abstract:
A mobile station improves its position estimate using dead reckoning and wireless signal distance estimates. The mobile station calculates a first round trip time (RTT) based distance at a first mobile station position between the first mobile station position and an access point. The mobile station moves to a second position and calculates a dead reckoning transition distance between the first mobile station position and the second mobile station position. The mobile station calculates a wireless signal transition distance between the first mobile station position and the second mobile station position based on a second RTT-based distance calculated between the access point and the second mobile station position. The mobile station computes an uncertainty associated with the first RTT-based distance and/or the second RTT-based distance using the dead reckoning transition distance and the wireless signal transition distance. The mobile station can correct the first RTT-based distance or the second RTT-based distance based on comparing the dead reckoning transition distance with the wireless signal transition distance.
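The two building blocks can be sketched as follows: an RTT is converted to a one-way distance at the speed of light, and the uncertainty proxy is the mismatch between the dead-reckoning transition distance and the RTT-derived transition distance. A simplified illustration (the processing-delay handling and the absolute-difference uncertainty measure are assumptions):

```python
def rtt_distance(rtt_seconds, processing_delay=0.0, c=299_792_458.0):
    """One-way distance from a round trip time, after subtracting the
    access point's turnaround (processing) delay."""
    return c * (rtt_seconds - processing_delay) / 2.0

def transition_uncertainty(dr_transition, rtt_transition):
    """Uncertainty proxy: disagreement between the dead-reckoning
    transition distance and the wireless (RTT-based) transition
    distance between the two positions."""
    return abs(dr_transition - rtt_transition)
```

A large disagreement flags one of the RTT-based distances as biased (e.g. by multipath), which is what the described correction step exploits.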
Abstract:
A first map comprising local features and 3D locations of the local features is generated, the local features comprising visible features in a current image and a corresponding set of covisible features. A second map comprising prior features and 3D locations of the prior features may be determined, where each prior feature: was first imaged at a time prior to the first imaging of any of the local features, and lies within a threshold distance of at least one local feature. A first subset comprising previously imaged local features in the first map and a corresponding second subset of the prior features in the second map is determined by comparing the first and second maps, where each local feature in the first subset corresponds to a distinct prior feature in the second subset. A transformation mapping the first subset of local features to the corresponding second subset of prior features is determined.
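The final step is a registration between matched 3D feature locations. A standard way to compute such a transformation is the least-squares rigid alignment (Kabsch/SVD); the sketch below is a conventional stand-in for the transformation the abstract leaves unspecified, not the patented method itself:

```python
import numpy as np

def rigid_transform(local_pts, prior_pts):
    """Least-squares rotation R and translation t with
    prior ≈ R @ local + t, via the Kabsch algorithm.
    Expects matched Nx3 arrays of corresponding 3D locations."""
    A = np.asarray(local_pts, dtype=float)
    B = np.asarray(prior_pts, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)        # centroids
    H = (A - ca).T @ (B - cb)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

Applying the recovered transformation to the first map aligns the locally built features with the prior map.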
Abstract:
An accelerometer located within a mobile device is used to estimate a gravity vector on a target plane in a world coordinate system. The accelerometer makes multiple measurements, each measurement being taken when the mobile device is held stationary on the target plane and a surface of the mobile device faces and is in contact with a planar portion of the target plane. An average of the measurements is calculated. A rotational transformation between an accelerometer coordinate system and a mobile device's coordinate system is retrieved from a memory in the mobile device, where the mobile device's coordinate system is aligned with the surface of the mobile device. The rotational transformation is applied to the averaged measurements to obtain an estimated gravity vector in a world coordinate system defined by the target plane.
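The two computational steps described, averaging the stationary measurements and rotating the mean into the plane-aligned device frame, can be sketched directly (the function name and identity-rotation example are illustrative):

```python
import numpy as np

def estimate_gravity(acc_samples, R_acc_to_device):
    """Average stationary accelerometer samples taken while the device
    rests on the target plane, then apply the stored rotational
    transformation to express the mean in the device coordinate
    system aligned with the device surface."""
    mean = np.asarray(acc_samples, dtype=float).mean(axis=0)
    return R_acc_to_device @ mean
```

Averaging suppresses zero-mean sensor noise, and the retrieved rotation removes the accelerometer-to-device mounting misalignment before the vector is interpreted in the world frame defined by the target plane.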
Abstract:
This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method of image processing includes receiving image data from an image sensor; extracting point features from light detection and ranging (LiDAR) data; partitioning the point features; performing bird's-eye-view (BEV) feature pooling based on the partitioned point features; and determining lane-boundary heads based on the BEV feature pooling, wherein the determining comprises row-wise classification with at least one of: offset correction regression processing; or vertex-wise height regression processing. Other aspects and features are also claimed and described.
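The row-wise classification with offset correction regression can be sketched as follows: for each row of the BEV grid, classification picks the most likely column for the lane boundary, and a regressed sub-cell offset refines it to a continuous position. Shapes, cell size, and names are illustrative assumptions, not details from the disclosure:

```python
import numpy as np

def rowwise_lane_positions(logits, offsets, cell_width=0.5):
    """Row-wise lane-boundary decoding on a BEV grid.
    logits:  (rows, cols) class scores per grid cell.
    offsets: (rows, cols) regressed sub-cell offsets, in cell units.
    Returns one metric lateral position per BEV row."""
    cols = logits.argmax(axis=1)                 # classification: coarse column
    rows = np.arange(logits.shape[0])
    refined = cols + offsets[rows, cols]         # offset correction regression
    return refined * cell_width                  # convert cells to meters
```

A vertex-wise height regression head would analogously attach a regressed height to each decoded boundary vertex.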
Abstract:
Embodiments disclosed pertain to the use of user equipment (UE) for the generation of a 3D exterior envelope of a structure based on captured images and a measurement set associated with each captured image. In some embodiments, a sequence of exterior images of a structure is captured and a corresponding measurement set comprising Inertial Measurement Unit (IMU) measurements, wireless measurements (including Global Navigation Satellite System (GNSS) measurements) and/or other non-wireless sensor measurements may be obtained concurrently. A closed-loop trajectory of the UE in global coordinates may be determined and a 3D structural envelope of the structure may be obtained based on the closed-loop trajectory and feature points in a subset of images selected from the sequence of exterior images of the structure.
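A trajectory recorded while walking around a structure should end where it began; a common way to enforce this is to distribute the accumulated end-to-start drift linearly along the path. The sketch below is one simple loop-closure correction in the spirit of the described closed-loop trajectory, not the specific method of the disclosure:

```python
import numpy as np

def close_loop(trajectory):
    """Distribute the end-to-start drift of a nominally closed-loop
    trajectory linearly along the path, so the corrected trajectory
    ends exactly at its starting point.
    trajectory: (N, d) array of positions in global coordinates."""
    traj = np.asarray(trajectory, dtype=float)
    drift = traj[-1] - traj[0]                       # accumulated error
    n = len(traj) - 1
    weights = np.arange(n + 1) / n                   # 0 at start, 1 at end
    return traj - np.outer(weights, drift)           # per-point correction
```

With a drift-free closed loop, the feature points triangulated from the selected images can be placed consistently around the 3D structural envelope.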