Abstract:
Methods and apparatus relating to enabling augmented reality applications using eye gaze tracking are disclosed. An exemplary method according to the disclosure includes displaying an image to a user of a scene viewable by the user, receiving information indicative of an eye gaze of the user, determining an area of interest within the image based on the eye gaze information, determining an image segment based on the area of interest, initiating an object recognition process on the image segment, and displaying results of the object recognition process.
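As a rough sketch of that flow (recognize_objects() below is a hypothetical stand-in for whatever recognition engine is used; the window size and helper names are ours, not the disclosure's):

def recognize_objects(segment):
    # Hypothetical placeholder for the actual object recognition engine.
    return []

def gaze_driven_recognition(image, gaze_xy, box=200):
    # Determine the area of interest: a window centered on the gaze point.
    h, w = image.shape[:2]
    x, y = gaze_xy
    x0, y0 = max(0, x - box // 2), max(0, y - box // 2)
    x1, y1 = min(w, x + box // 2), min(h, y + box // 2)
    # Determine the image segment and initiate recognition on it alone,
    # rather than on the full frame.
    segment = image[y0:y1, x0:x1]
    return recognize_objects(segment)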
Abstract:
Techniques are disclosed for estimating one or more parameters in a system. A device obtains measurements corresponding to a first set of features and a second set of features. The device estimates the parameters using an extended Kalman filter based on the measurements corresponding to the first set of features and the second set of features. The measurements corresponding to the first set of features are used to update the one or more parameters and the information corresponding to the first set of features. The measurements corresponding to the second set of features are used to update the parameters and the uncertainty corresponding to the parameters. In one example, information corresponding to the second set of features is not updated during the estimating. Moreover, the parameters are estimated without projecting the information corresponding to the second set of features into a null-space.
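The split between the two feature sets can be pictured with a plain EKF measurement update, where the Jacobian decides which states a measurement refines. A minimal NumPy sketch (our illustration of the idea, not the patent's implementation):

import numpy as np

def ekf_update(x, P, z, h_of_x, H, R):
    # Standard EKF measurement update. Which states it touches is governed
    # entirely by the Jacobian H:
    #  - first feature set: the feature states sit inside x and H has
    #    nonzero columns over them, so both the parameters and the feature
    #    information are refined;
    #  - second feature set: H is nonzero only over the parameter block, so
    #    the update refines the parameters and their covariance while the
    #    second-set feature estimates stay untouched, and no null-space
    #    projection is applied to the measurements.
    y = z - h_of_x(x)                      # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new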
Abstract:
Embodiments of the present invention are directed toward providing intelligent sampling strategies that make efficient use of an always-on camera. To do so, embodiments can utilize sensor information to determine contextual information regarding the mobile device and/or a user of the mobile device. A sampling rate of the always-on camera can then be modulated based on the contextual information.
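As a toy illustration of such modulation, where every context key and threshold below is an invented placeholder rather than a value from the disclosure:

def choose_sampling_interval(context):
    # Pick the time between always-on camera samples from contextual cues
    # derived from the device's other sensors.
    if context.get("in_pocket") or context.get("lights_off"):
        return 300.0  # seconds: the camera would see nothing useful
    if context.get("user_moving"):
        return 5.0    # the scene changes quickly while walking or driving
    return 60.0       # stationary default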
Abstract:
Embodiments disclosed pertain to the use of user equipment (UE) for the generation of a 3D exterior envelope of a structure based on captured images and a measurement set associated with each captured image. In some embodiments, a sequence of exterior images of a structure is captured, and a corresponding measurement set comprising Inertial Measurement Unit (IMU) measurements, wireless measurements (including Global Navigation Satellite System (GNSS) measurements), and/or other non-wireless sensor measurements may be obtained concurrently. A closed-loop trajectory of the UE in global coordinates may be determined, and a 3D structural envelope of the structure may be obtained based on the closed-loop trajectory and feature points in a subset of images selected from the sequence of exterior images of the structure.
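One concrete piece of such a pipeline is closing the loop. A simple, simplified way to do it is to spread the start-to-end drift along the path, sketched below with NumPy; the disclosure does not prescribe this particular correction:

import numpy as np

def close_loop(trajectory):
    # trajectory: (N, 3) array of UE positions in global coordinates.
    # Distribute the end-point drift linearly along the path so the
    # corrected trajectory ends where it began.
    drift = trajectory[-1] - trajectory[0]
    weights = np.linspace(0.0, 1.0, len(trajectory)).reshape(-1, 1)
    return trajectory - weights * drift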
Abstract:
Systems and methods for performing localization and mapping with a mobile device are disclosed. In one embodiment, a method for performing localization and mapping with a mobile device includes identifying geometric constraints associated with a current area at which the mobile device is located, obtaining at least one image of the current area captured by at least a first camera of the mobile device, obtaining data associated with the current area via at least one of a second camera of the mobile device or a sensor of the mobile device, and performing localization and mapping for the current area by applying the geometric constraints and the data associated with the current area to the at least one image.
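One simple example of such a geometric constraint, assuming a Manhattan-world prior (our choice of constraint for illustration, not the patent's full method):

import numpy as np

def snap_to_manhattan(normal):
    # Indoor walls, floors, and ceilings tend to align with three
    # orthogonal axes, so snap an estimated surface normal to the nearest
    # coordinate axis before feeding it to the localization/mapping update.
    idx = int(np.argmax(np.abs(normal)))
    snapped = np.zeros(3)
    snapped[idx] = np.sign(normal[idx])
    return snapped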
Abstract:
Systems, apparatus, and methods in a mobile device to enable and disable a depth sensor for tracking pose of the mobile device are presented. A mobile device relying on a camera without a depth sensor may provide inadequate pose estimates, for example, in low-light situations. A mobile device with a depth sensor uses substantial power when the depth sensor is enabled. Embodiments described herein enable a depth sensor only when images are expected to be inadequate: for example, when the device is accelerating or moving too fast, when inertial sensor measurements are too noisy, when light levels are too low or too high, when an image is too blurry, or when the rate of images is too slow. By using the depth sensor only when images are expected to be inadequate, battery power in the mobile device may be conserved while pose estimates are still maintained.
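The gating logic reduces to a simple predicate over those conditions; the thresholds below are illustrative placeholders, not values from the disclosure:

def should_enable_depth_sensor(angular_rate, imu_noise, lux, blur, fps):
    # Enable the power-hungry depth sensor only when camera images are
    # expected to be inadequate for pose tracking.
    too_fast   = angular_rate > 2.0       # rad/s: motion blur likely
    too_noisy  = imu_noise > 0.5          # inertial data unreliable
    bad_light  = lux < 10 or lux > 10000  # too dark or washed out
    too_blurry = blur > 0.8               # normalized blur metric
    too_slow   = fps < 15                 # image rate too low to track
    return too_fast or too_noisy or bad_light or too_blurry or too_slow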
Abstract:
Embodiments disclosed pertain to systems, methods, and apparatus for the initialization of Computer Vision (CV) applications on user devices (UDs) comprising a camera and a display. In some embodiments, an optimal camera trajectory for initialization of a CV application may be determined based on an initial camera pose and an estimated pivot distance. For example, the initial camera pose may be estimated based on a first image captured by the camera. Further, the display may be updated in real time with an indication of a desired movement direction for the camera. In some embodiments, the indication of the desired movement direction may be based, in part, on a current camera pose and the optimal trajectory, where the current camera pose may be estimated based on a current image captured by the camera.
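For initialization, the useful motion is typically a sideways sweep around the pivot, since it creates parallax. A sketch of turning the current pose into an on-screen direction hint (a simplification; the actual optimal-trajectory computation is not reproduced here):

import numpy as np

def movement_hint(camera_pos, pivot, up=np.array([0.0, 1.0, 0.0])):
    # Suggest a direction tangential to the circle around the estimated
    # pivot point: moving sideways relative to the line of sight produces
    # the parallax that initialization needs.
    radial = camera_pos - pivot
    tangent = np.cross(up, radial)
    return tangent / np.linalg.norm(tangent)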
Abstract:
An apparatus and method for generating parameters for an application, such as an augmented reality application (AR app), using camera pose and gyroscope rotation are disclosed. The parameters are estimated based on pose from images and rotation from a gyroscope (e.g., using least-squares estimation with QR factorization or a Kalman filter). The parameters indicate rotation, scale, and/or non-orthogonality, and optionally gyroscope bias errors. In addition, the scale and non-orthogonality parameters may be used to condition raw gyroscope measurements to compensate for scale and non-orthogonality.
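The least-squares variant can be sketched directly: stack camera-derived angular rates against raw gyroscope readings and fit a linear model whose matrix absorbs rotation, scale, and non-orthogonality, plus a bias term. A sketch under those assumptions, with np.linalg.lstsq standing in for the QR-based solver named in the disclosure:

import numpy as np

def calibrate_gyro(omega_cam, omega_gyro):
    # omega_cam:  (N, 3) angular rates differentiated from camera poses.
    # omega_gyro: (N, 3) raw gyroscope measurements at the same instants.
    # Fit omega_gyro ~ omega_cam @ A.T + b by linear least squares.
    X = np.hstack([omega_cam, np.ones((len(omega_cam), 1))])
    coeffs, *_ = np.linalg.lstsq(X, omega_gyro, rcond=None)
    A = coeffs[:3].T   # rotation, scale, and non-orthogonality combined
    b = coeffs[3]      # gyroscope bias
    return A, b

Raw measurements w can then be conditioned as np.linalg.inv(A) @ (w - b) to compensate for the estimated scale and non-orthogonality.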