Abstract:
Disclosed are a system, apparatus, and method for depth and color camera image synchronization. Depth and color camera input images are received or otherwise obtained unsynchronized and without associated creation timestamps. An image of one type is compared with an image of the other type to determine a match for synchronization. Matches may be determined according to edge detection or depth coordinate detection. When a match is determined, a synchronized pair is formed for processing within an augmented reality output. Optionally, the synchronized pair may be transformed to improve the match between the image pair.
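The edge-detection matching the abstract mentions can be sketched as follows. This is a minimal, illustrative toy: the 1-D "frames", the threshold, and all function names are hypothetical stand-ins for real 2-D depth and color images, and the scoring simply counts positions where both edge maps fire.

```python
# Hypothetical sketch: pair an unsynchronized depth frame with the color
# frame whose edge map aligns best. Toy 1-D signals stand in for images.

def edges(signal, threshold=10):
    """Mark positions where adjacent samples differ sharply (a crude edge map)."""
    return [abs(b - a) > threshold for a, b in zip(signal, signal[1:])]

def edge_overlap(e1, e2):
    """Count positions where both edge maps fire."""
    return sum(a and b for a, b in zip(e1, e2))

def best_match(depth_frame, color_frames):
    """Index of the color frame whose edges best align with the depth frame."""
    depth_edges = edges(depth_frame)
    scores = [edge_overlap(depth_edges, edges(c)) for c in color_frames]
    return max(range(len(color_frames)), key=scores.__getitem__)

depth = [0, 0, 50, 50, 0, 0]          # one object boundary pair
colors = [
    [5, 5, 5, 5, 5, 5],               # flat frame: no edges
    [0, 0, 90, 90, 0, 0],             # edges in the same places as depth
]
print(best_match(depth, colors))      # -> 1
```

A production system would of course operate on 2-D edge maps and could apply the optional transform step to refine the chosen pair.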
Abstract:
The noise in an image having text is removed by convolving a shaped kernel centered on a pixel for each pixel in the image. The shaped kernel has a shape configured to identify pixels that are not part of the text. For example, the shaped kernel may be shaped with zeros in a center of the kernel to identify pixels that are not part of the text. A value for the pixel is set to erase the pixel when the resulting convolution value for the pixel is less than a threshold. The process may be repeated multiple times for differently shaped kernels, including kernels of different sizes and different configurations, such as having values greater than one in at least one of a row, column, and diagonal.
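The zero-center kernel idea above can be illustrated with a small sketch. All names, the 3x3 kernel, and the threshold are hypothetical: a zero at the kernel center means the pixel itself does not contribute, so an isolated speck convolves to a low value and is erased, while a pixel on a text stroke keeps enough neighbor support to survive.

```python
# Illustrative sketch of shaped-kernel denoising on a binary image.
# The zero center excludes the pixel itself from the convolution sum.

KERNEL = [[1, 1, 1],
          [1, 0, 1],   # zero center: the pixel does not vote for itself
          [1, 1, 1]]

def denoise(image, threshold=1):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if not image[y][x]:
                continue
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    iy, ix = y + ky - 1, x + kx - 1
                    if 0 <= iy < h and 0 <= ix < w:
                        acc += KERNEL[ky][kx] * image[iy][ix]
            if acc < threshold:      # too little neighbor support: erase
                out[y][x] = 0
    return out

img = [[0, 0, 0, 0, 1],   # lone speck at top right
       [1, 1, 1, 0, 0],   # a horizontal "stroke"
       [0, 0, 0, 0, 0]]
print(denoise(img))       # speck erased, stroke preserved
```

Repeating the pass with differently sized or configured kernels, as the abstract describes, would simply loop this convolution with other `KERNEL` shapes.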
Abstract:
A master device images an object device and uses the image to identify the object device. The master device then automatically interfaces with the identified object device, for example, by pairing with the object device. The master device interfaces with a second object device and initiates an interface between the first object device and the second object device. The master device may receive broadcast data from the object device, including information about the visual appearance of the object device, and use the broadcast data in the identification of the object device. The master device may retrieve data related to the object device and display the related data, which may be displayed over the displayed image of the object device. The master device may provide an interface to control the object device or be used to pass data to the object device.
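The identification step described above can be sketched as a nearest-descriptor lookup. Everything here is a hypothetical illustration: the descriptor format, the device names, and the distance function are invented stand-ins for whatever appearance data the object devices actually broadcast.

```python
# Illustrative-only sketch: each object device broadcasts an appearance
# descriptor; the master matches the descriptor extracted from its camera
# image against the broadcasts and selects the closest device to pair with.

def descriptor_distance(a, b):
    """Sum of absolute differences between two appearance descriptors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def identify(image_descriptor, broadcasts):
    """broadcasts: mapping of device id -> broadcast appearance descriptor."""
    return min(broadcasts,
               key=lambda dev: descriptor_distance(image_descriptor,
                                                   broadcasts[dev]))

broadcasts = {
    "thermostat": (0.9, 0.1, 0.3),
    "speaker":    (0.2, 0.8, 0.7),
}
seen = (0.85, 0.15, 0.25)            # descriptor from the master's camera image
device = identify(seen, broadcasts)
print(device)                        # -> thermostat
```

Once identified, the master would proceed to pair with the returned device and could overlay its retrieved data on the displayed image.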
Abstract:
Embodiments include detection or relocalization of an object in a current image from a reference image, such as using a simple, relatively fast, and invariant edge-orientation-based edge feature extraction, then a weak initial matching combined with a strong contextual filtering framework, and then a pose estimation framework based on edge segments. Embodiments include fast edge-based object detection using instant learning with a sufficiently large coverage area for object relocalization. Embodiments provide a good trade-off between the computational efficiency of the extraction process and that of the matching process.
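The weak-matching-plus-contextual-filtering pipeline can be sketched in miniature. This is a hedged toy, not the patented method: edge features are reduced to `(position, orientation_bin)` tuples, the "weak" stage matches on orientation alone, and the contextual filter keeps only matches whose implied shift between reference and current image agrees with at least one other match.

```python
# Illustrative sketch: cheap orientation-only candidate matching, then a
# contextual filter that discards candidates with an inconsistent shift.

# (position, orientation_bin) edge features; names and data are made up.
reference = [(10, 2), (20, 5), (30, 2), (40, 7)]
current   = [(13, 2), (23, 5), (33, 2), (70, 7)]  # first three shifted by +3

def weak_matches(ref, cur):
    """Candidate pairs whose orientation bins agree (weak initial matching)."""
    return [(r, c) for r in ref for c in cur if r[1] == c[1]]

def contextual_filter(matches, tol=1):
    """Keep matches whose implied shift agrees with at least one other match."""
    shifts = [c[0] - r[0] for r, c in matches]
    return [m for m, s in zip(matches, shifts)
            if sum(abs(s - t) <= tol for t in shifts) > 1]

kept = contextual_filter(weak_matches(reference, current))
print(kept)   # only the three consistent +3 matches survive
```

A real embodiment would follow the surviving matches with edge-segment-based pose estimation; here the consistent shift plays that role in one dimension.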
Abstract:
Method, apparatus, and computer program product for merging multiple maps for computer vision based tracking are disclosed. In one embodiment, a method of merging multiple maps for computer vision based tracking comprises receiving a plurality of maps of a scene in a venue from at least one mobile device, identifying multiple keyframes of the plurality of maps of the scene, and merging the multiple keyframes to generate a global map of the scene.
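The merging step can be sketched as collecting keyframes from each received map and collapsing near-duplicate ones into a single global map. This is a minimal illustration under invented assumptions: a keyframe is reduced to a 2-D pose tuple, and "the same keyframe" means poses within a tolerance.

```python
# Illustrative sketch: merge keyframes from multiple per-device maps,
# keeping only one copy of keyframes that observe (nearly) the same pose.

def merge_maps(maps, tol=0.5):
    """Merge keyframes from multiple maps into one global keyframe list."""
    global_map = []
    for keyframes in maps:
        for pose in keyframes:
            duplicate = any(all(abs(a - b) <= tol for a, b in zip(pose, kept))
                            for kept in global_map)
            if not duplicate:
                global_map.append(pose)
    return global_map

map_a = [(0.0, 0.0), (2.0, 1.0)]
map_b = [(2.1, 1.1), (5.0, 5.0)]   # (2.1, 1.1) overlaps map_a's (2.0, 1.0)
print(merge_maps([map_a, map_b]))  # -> [(0.0, 0.0), (2.0, 1.0), (5.0, 5.0)]
```

In a real system the overlap test would compare feature observations rather than raw poses, but the structure, per-device maps in and one deduplicated global map out, is the same.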