Abstract:
Apparatuses and methods for fast visual simultaneous localization and mapping are described. In one embodiment, a three-dimensional (3D) target is initialized immediately from a first reference image, prior to processing any subsequent image. In one embodiment, one or more subsequent reference images are processed, and the 3D target is tracked in six degrees of freedom. In one embodiment, the 3D target is refined based on the one or more processed subsequent images.
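The immediate-initialization idea above can be sketched in code. The following is a minimal, hypothetical illustration (not the patented method): it back-projects 2D feature points from a single reference image at an assumed constant depth, so a 3D target exists and can be tracked before any second image arrives. The function name, intrinsics, and the planar-depth assumption are all illustrative.

```python
import numpy as np

def init_3d_target(features_2d, K, assumed_depth=1.0):
    """Back-project pixel features into 3D using camera intrinsics K.

    Each 2D feature is placed at an assumed depth along its viewing ray,
    giving an instant (if approximate) 3D target from one image.
    """
    K_inv = np.linalg.inv(K)
    pts = []
    for (u, v) in features_2d:
        ray = K_inv @ np.array([u, v, 1.0])   # viewing ray in camera frame
        pts.append(ray * assumed_depth)       # point at the assumed depth
    return np.array(pts)

# Illustrative intrinsics: focal length 500 px, principal point (320, 240)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
target = init_3d_target([(320, 240), (400, 260)], K)
print(target.shape)  # (2, 3)
```

Later reference images would then refine these approximate depths while the target is tracked in six degrees of freedom.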
Abstract:
Disclosed are a system, apparatus, and method for depth camera image re-mapping. A depth camera image from a depth camera may be received, and the depth camera's physical position may be determined. The depth camera's physical position may be determined relative to another physical position, such as the physical position of a color camera. The depth camera image may be transformed according to a processing order associated with the other physical position.
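A common form of this re-mapping warps each depth pixel into the color camera's viewpoint using the relative pose between the two cameras. The sketch below is an assumption-laden illustration, not the disclosed method: it processes pixels and keeps the nearest depth when several source pixels land on the same target pixel (a z-buffer test), which is one reason processing order matters.

```python
import numpy as np

def remap_depth_to_color(depth, K_d, K_c, R, t):
    """Warp a depth image into the color camera's image plane.

    K_d, K_c: intrinsics of the depth and color cameras.
    R, t: rotation and translation from the depth frame to the color frame.
    Keeps the nearest depth per target pixel (simple z-buffer).
    """
    h, w = depth.shape
    out = np.full((h, w), np.inf)
    K_d_inv = np.linalg.inv(K_d)
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            if z <= 0:
                continue
            p_d = K_d_inv @ np.array([u, v, 1.0]) * z   # 3D point, depth frame
            p_c = R @ p_d + t                           # 3D point, color frame
            if p_c[2] <= 0:
                continue                                # behind the color camera
            uv = K_c @ (p_c / p_c[2])                   # reproject into color image
            uc, vc = int(round(uv[0])), int(round(uv[1]))
            if 0 <= uc < w and 0 <= vc < h and p_c[2] < out[vc, uc]:
                out[vc, uc] = p_c[2]                    # keep the nearer surface
    out[np.isinf(out)] = 0.0                            # mark unfilled pixels
    return out
```

With identity rotation, zero translation, and matching intrinsics, the output reproduces the input, which makes the transform easy to sanity-check.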
Abstract:
Embodiments disclosed herein facilitate resource utilization efficiencies in Mobile Stations (MS) during 3D reconstruction. In some embodiments, camera pose information for a first color image captured by a camera on an MS may be obtained, and a determination may be made whether to extend or update a first 3-Dimensional (3D) model of an environment being modeled by the MS based, in part, on the first color image and associated camera pose information. The depth sensor, which provides depth information for images captured by the camera, may be disabled when the first 3D model is not extended or updated.
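The power-saving gate described above can be illustrated with a simple pose-based heuristic. This is a hypothetical sketch, not the disclosed determination logic: it enables the depth sensor only when the camera has moved far enough from all existing keyframe positions that new depth data could plausibly extend the model.

```python
import numpy as np

def should_enable_depth(current_pos, keyframe_positions, dist_thresh=0.2):
    """Return True if the camera is far from every keyframe position.

    current_pos: 3-vector camera position from the current pose.
    keyframe_positions: positions at which depth data was previously captured.
    dist_thresh: minimum translation (illustrative units) to justify new depth.
    """
    for kf_pos in keyframe_positions:
        if np.linalg.norm(current_pos - kf_pos) < dist_thresh:
            return False   # close to a known viewpoint: keep depth sensor off
    return True            # new viewpoint: depth data may extend the model
```

A real system would likely also consider rotation, image overlap, and model coverage; the translation check here just demonstrates the gating idea.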
Abstract:
One disclosed example method for view independent color equalized 3D scene texturing includes capturing a plurality of keyframes of an object; accessing a 3D representation of the object comprising a surface mesh model for the object, the surface mesh model comprising a plurality of polygons; for each polygon, assigning one of the plurality of keyframes to the polygon based on one or more image quality characteristics associated with a portion of the keyframe corresponding to the polygon; reducing a number of assigned keyframes by changing associations between assigned keyframes; and for each polygon of the surface mesh model having an assigned keyframe: equalizing a texture color of at least a portion of the polygon based at least in part on one or more image quality characteristics of the plurality of keyframes associated with the polygon; and assigning the equalized texture color to the 3D representation of the object.
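Two of the steps above, per-polygon keyframe assignment by image quality and color equalization, can be sketched compactly. The code below is an assumed simplification: it scores keyframes by how head-on they view each polygon (one possible quality characteristic) and equalizes colors by scaling toward a common mean brightness. Function names and the scoring choice are illustrative.

```python
import numpy as np

def assign_keyframes(normals, view_dirs):
    """For each polygon normal, pick the keyframe viewing it most head-on.

    normals: (n_polygons, 3) unit polygon normals.
    view_dirs: (n_keyframes, 3) unit viewing directions.
    Returns the index of the best keyframe per polygon.
    """
    scores = np.abs(normals @ view_dirs.T)   # |cos| of viewing angle
    return scores.argmax(axis=1)

def equalize_colors(colors):
    """Scale each polygon's sampled RGB color toward the mean brightness,
    reducing exposure differences between keyframes."""
    brightness = colors.mean(axis=1)
    gain = brightness.mean() / np.maximum(brightness, 1e-6)
    return colors * gain[:, None]
```

In practice the disclosed method also reduces the number of assigned keyframes and blends across seams; this sketch only shows the assignment and equalization primitives.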
Abstract:
A method of determining a reference coordinate system includes: obtaining information indicative of a direction of gravity relative to a device; and converting an orientation of a device coordinate system using the direction of gravity relative to the device to produce the reference coordinate system. The method may also include setting an origin of the reference coordinate system and/or determining a scale value of the reference coordinate system. The method may also include refining the reference coordinate system.
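The conversion step, aligning a device coordinate system with gravity, is a standard rotation construction. The sketch below (an illustration, not the claimed method) builds the rotation that maps the measured gravity direction onto the reference frame's -Z axis using Rodrigues' formula.

```python
import numpy as np

def gravity_aligned_rotation(g):
    """Rotation taking the measured gravity vector (device frame) to the
    reference frame's -Z axis, via Rodrigues' rotation formula."""
    g = np.asarray(g, dtype=float)
    g = g / np.linalg.norm(g)
    target = np.array([0.0, 0.0, -1.0])
    v = np.cross(g, target)                  # rotation axis (unnormalized)
    c = float(g @ target)                    # cosine of rotation angle
    if np.isclose(c, -1.0):                  # opposite vectors: 180-degree turn
        return np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])      # skew-symmetric cross-product matrix
    return np.eye(3) + vx + vx @ vx / (1.0 + c)
```

Setting the origin and scale, the other steps in the method, would then be applied on top of this gravity-aligned orientation.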
Abstract:
Embodiments disclosed pertain to systems, methods, and apparatus for the initialization of Computer Vision (CV) applications on user devices (UDs) comprising a camera and a display. In some embodiments, an optimal camera trajectory for initialization of a CV application may be determined based on an initial camera pose and an estimated pivot distance. For example, the initial camera pose may be estimated based on a first image captured by the camera. Further, the display may be updated in real-time with an indication of a desired movement direction for the camera. In some embodiments, the indication of desired movement direction may be based, in part, on a current camera pose and the optimal trajectory, where the current camera pose may be estimated based on a current image captured by the camera.
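One way such a desired movement direction could be computed, sketched here purely as an assumption, is to suggest motion tangent to a circle around the estimated pivot point: orbiting the pivot keeps it in view while building the stereo baseline that initialization needs. The pivot-on-optical-axis assumption and the world-up vector are illustrative.

```python
import numpy as np

def desired_move_direction(cam_pos, cam_forward, pivot_dist):
    """Suggest a sideways movement direction for initialization.

    cam_pos: current camera position (3-vector).
    cam_forward: unit optical-axis direction of the camera.
    pivot_dist: estimated distance to the pivot point along the optical axis.
    Assumes the world up vector is +Y and the camera is not looking straight up.
    """
    pivot = cam_pos + pivot_dist * np.asarray(cam_forward, dtype=float)
    to_pivot = pivot - cam_pos
    up = np.array([0.0, 1.0, 0.0])
    side = np.cross(up, to_pivot)            # tangent to the orbit circle
    return side / np.linalg.norm(side)
```

The resulting unit vector could drive the on-screen indication, recomputed each frame from the current pose estimate as the abstract describes.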