Abstract:
A system receives digital images of a geographic location, associates each digital image with ground control points in a set of reference stereo images, and associates each digital image with each other digital image via image-to-image tie points. The system updates a geometry of each image via a bundle adjustment, and uses a prioritized stacking order to establish piecewise-linear seam lines between the images. The system finally builds a prioritized map in a mosaic space specifying the source image pixels used in each region of the output mosaic, and forms the mosaic image using the prioritized map.
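As an illustrative sketch (not the patented implementation), the prioritized-map step can be modeled by walking coverage masks from lowest to highest priority so that higher-priority imagery overwrites lower-priority imagery. The function names and the boolean-mask representation are assumptions for the example:

```python
import numpy as np

def build_priority_map(coverage_masks):
    """Build a prioritized source map for a mosaic.

    coverage_masks: list of boolean (H, W) arrays ordered from highest to
    lowest priority; mask[i] is True where image i covers the mosaic space.
    Returns an (H, W) int array giving, per pixel, the index of the
    highest-priority covering image, or -1 where nothing covers.
    """
    h, w = coverage_masks[0].shape
    source = np.full((h, w), -1, dtype=int)
    # Walk from lowest to highest priority so higher-priority images
    # overwrite lower-priority ones.
    for idx in range(len(coverage_masks) - 1, -1, -1):
        source[coverage_masks[idx]] = idx
    return source

def form_mosaic(images, source):
    """Fill the output mosaic from the per-pixel source map."""
    mosaic = np.zeros(source.shape, dtype=float)
    for idx, img in enumerate(images):
        mosaic[source == idx] = img[source == idx]
    return mosaic
```

A real system would build the map along seam lines rather than raw coverage, but the per-pixel source bookkeeping is the same idea.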
Abstract:
A system identifies strong features in conjugate digital images and correlates the conjugate digital images for an identification of candidate tie points using only portions of the conjugate digital images that include the strong features.
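A minimal sketch of the idea, under assumed details: "strong features" are taken to be pixels of high gradient magnitude, and correlation is restricted to small windows around those pixels. Function names, window sizes, and the gradient-based strength measure are all illustrative choices, not the patented method:

```python
import numpy as np

def strong_feature_locations(image, top_k=5, border=4):
    """Rank pixels by local gradient magnitude and keep the strongest."""
    gy, gx = np.gradient(image.astype(float))
    strength = np.hypot(gx, gy)
    strength[:border] = strength[-border:] = 0      # ignore the border
    strength[:, :border] = strength[:, -border:] = 0
    flat = np.argsort(strength.ravel())[::-1][:top_k]
    return [np.unravel_index(i, image.shape) for i in flat]

def candidate_tie_point(ref, tgt, rc, win=3, search=5):
    """Correlate a window around one strong feature over a small search area."""
    r, c = rc
    patch = ref[r - win:r + win + 1, c - win:c + win + 1].astype(float)
    patch = patch - patch.mean()
    best, best_rc = -np.inf, None
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            cand = tgt[rr - win:rr + win + 1, cc - win:cc + win + 1].astype(float)
            if cand.shape != patch.shape:
                continue
            cand = cand - cand.mean()
            denom = np.linalg.norm(patch) * np.linalg.norm(cand)
            score = (patch * cand).sum() / denom if denom else -np.inf
            if score > best:
                best, best_rc = score, (rr, cc)
    return best_rc, best
```

Restricting correlation to windows around strong features is what keeps the candidate search cheap relative to correlating whole conjugate images.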
Abstract:
A method identifies a geolocation of an object in an image. The method comprises receiving data indicating a pixel coordinate of the image selected by a user, identifying a data point in a targetable three-dimensional (3D) data set corresponding to the selected pixel coordinate, and providing a 3D location of the identified data point.
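A toy sketch of the lookup step, assuming the targetable 3D data set is represented as paired arrays of image coordinates and 3D locations (that representation, and the nearest-neighbor rule, are assumptions for the example):

```python
import numpy as np

def geolocate(pixel_rc, pixel_coords, points_3d):
    """Return the 3D location of the data point nearest a selected pixel.

    pixel_coords: (N, 2) array of (row, col) image coordinates for the
    targetable 3D data set; points_3d: (N, 3) array of the corresponding
    3D locations (e.g. longitude, latitude, height).
    """
    d = np.linalg.norm(pixel_coords - np.asarray(pixel_rc, dtype=float), axis=1)
    return points_3d[np.argmin(d)]
```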
Abstract:
A system and methods can create a synthetic image of a target from a 3D data set, by using an electro-optical (EO) image and sun geometry associated with the EO image. In some examples, a 3D surface model is created from a 3D data set. The 3D surface model establishes a local surface orientation at each point in the 3D data set. A surface shaded relief (SSR) is produced from the local surface orientation, from an EO image, and from sun geometry associated with the EO image. Points in the SSR that are in shadows are shaded appropriately. The SSR is projected into the image plane of the EO image. Edge-based registration extracts tie points from the projected SSR. The 3D data set converts the tie points to ground control points. A geometric bundle adjustment aligns the EO image geometry to the 3D data set.
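The shading step of the SSR can be sketched with a simple Lambertian model: each point's brightness is the dot product of its local surface normal with the sun direction, with shadowed points forced to a low value. The function name, the Lambertian choice, and the fixed shadow value are assumptions for the example:

```python
import numpy as np

def surface_shaded_relief(normals, sun_dir, shadow_mask=None, shadow_val=0.05):
    """Lambertian shading of a surface from per-point unit normals.

    normals: (..., 3) unit normals from the local surface orientation.
    sun_dir: unit vector toward the sun, from the EO image's sun geometry.
    shadow_mask: optional boolean array; True where a point is in shadow.
    """
    shade = np.clip(normals @ np.asarray(sun_dir), 0.0, 1.0)
    if shadow_mask is not None:
        shade = np.where(shadow_mask, shadow_val, shade)  # darken shadows
    return shade
```

The resulting SSR image is what gets projected into the EO image plane for edge-based registration.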
Abstract:
A method includes obtaining training data having first image pairs, where each of the first image pairs includes (i) a first training image and (ii) a first ground truth image. The method also includes training a machine learning model to generate realistic images using the first image pairs. The method further includes obtaining additional training data having second image pairs, where each of the second image pairs includes (i) a second training image and (ii) a second ground truth image. At least some of the images in the second image pairs are less aligned or of lower quality than at least some of the images in the first image pairs. In addition, the method includes continuing to train the machine learning model to generate the realistic images using the second image pairs.
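The two-phase training schedule can be sketched with a deliberately tiny stand-in model (gradient descent on a linear map rather than an image-generation network, so the example stays self-contained); the function name and hyperparameters are illustrative:

```python
import numpy as np

def train_phase(w, pairs, lr=0.1, epochs=50):
    """One training phase: gradient descent on squared error.

    pairs: list of (x, y) arrays standing in for (training image,
    ground-truth image) pairs; w: current model weights, carried over
    between phases so phase 2 continues from phase 1's model.
    """
    for _ in range(epochs):
        for x, y in pairs:
            pred = x @ w
            grad = 2 * x.T @ (pred - y) / len(x)
            w = w - lr * grad
    return w

# Phase 1: well-aligned, high-quality pairs.
# Phase 2: call train_phase again with the SAME weights on the
# lower-quality / less-aligned pairs, continuing training.
```

The key point from the abstract is the curriculum: the same model is trained first on the cleaner pairs, then training continues on the noisier pairs.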
Abstract:
Systems, devices, methods, and computer-readable media for horizon-based navigation. A method can include receiving image data corresponding to a geographical region that is in a field of view of an imaging unit and in which a device is situated; based on the received image data, generating, by a processing unit, an image horizon corresponding to a horizon of the geographical region from the perspective of the imaging unit; projecting three-dimensional (3D) points of a 3D point set of the geographical region to an image space of the received image data, resulting in a synthetic image; generating, by the processing unit, a synthetic image horizon of the synthetic image; and, responsive to determining that the image horizon sufficiently correlates with the synthetic image horizon, providing a location corresponding to a perspective of the synthetic image as a location of the device.
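Two of the steps above can be sketched directly: extracting a per-column horizon profile from a sky/ground segmentation, and scoring how well two profiles correlate. The sky-mask input and the normalized-correlation score are assumptions for the example:

```python
import numpy as np

def horizon_profile(sky_mask):
    """Per-column horizon: the first non-sky row from the top of the image.

    sky_mask: boolean (H, W) array, True for sky pixels.
    """
    ground = ~sky_mask
    first = np.argmax(ground, axis=0)                 # first True per column
    first[~ground.any(axis=0)] = sky_mask.shape[0]    # column is all sky
    return first

def horizon_correlation(h_image, h_synth):
    """Normalized correlation between two horizon profiles."""
    a = h_image - h_image.mean()
    b = h_synth - h_synth.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0
```

In the abstract's flow, the image horizon would be compared against synthetic horizons rendered from candidate perspectives of the 3D point set until one correlates sufficiently.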
Abstract:
Discussed herein are devices, systems, and methods for merging point cloud data with error propagation. A method can include reducing a sum aggregate of discrepancies between respective tie points and associated 3D points in first and second 3D images, adjusting 3D error models of the first and second 3D images based on the reduced discrepancies to generate registered 3D images, and propagating an error of the first or second 3D images to the registered 3D images to generate an error of the registered 3D images.
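A deliberately simplified sketch: the adjustment is reduced to a least-squares translation between conjugate tie points, and the propagation to combining two image covariances as independent measurements. Real 3D error models are richer than a single translation and covariance; everything here is an illustrative assumption:

```python
import numpy as np

def register_and_propagate(tie_a, tie_b, cov_a, cov_b):
    """Least-squares translation aligning image B's tie points to image A's,
    with first-order error propagation.

    tie_a, tie_b: (N, 3) conjugate tie-point coordinates in the two 3D images.
    cov_a, cov_b: (3, 3) error covariances of the two images.
    """
    # The translation minimizing the sum of squared discrepancies is the
    # mean discrepancy.
    t = (tie_a - tie_b).mean(axis=0)
    # Treating the merge as fusing two independent measurements, the
    # propagated covariance is the standard information-weighted combination.
    cov_merged = np.linalg.inv(np.linalg.inv(cov_a) + np.linalg.inv(cov_b))
    return t, cov_merged
```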
Abstract:
A process improves accuracy in mapping geodetic coordinates to image sensor coordinates via an image rational function. The image rational function is fitted to an image-to-ground sensor model at an input grid of image u coordinates and image v coordinates, and further at geodetic longitudes, geodetic latitudes, and geodetic heights corresponding to the image u coordinates and the image v coordinates. This fitting generates fit residuals. The fit residuals are stored as a function of the image u coordinates, the image v coordinates, and the geodetic height coordinates. The fit residuals are applied to metadata associated with the image rational function. This application corrects for a residual error in the fit of the image rational function to the image-to-ground sensor model.
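A one-dimensional sketch of the residual-correction idea, using a low-order polynomial as a stand-in for the full rational function (the real process fits in u, v, and height; collapsing it to u-versus-height and the function names are assumptions for the example):

```python
import numpy as np

def fit_with_residuals(heights, u_true, degree=1):
    """Fit a low-order model of image coordinate u versus height
    (a stand-in for the rational-function fit to the sensor model),
    and store the fit residuals on the input grid."""
    coeffs = np.polyfit(heights, u_true, degree)
    fitted = np.polyval(coeffs, heights)
    residuals = u_true - fitted          # stored as a function of height
    return coeffs, residuals

def corrected_u(coeffs, heights_grid, residuals, h):
    """Evaluate the fit, then add back the interpolated stored residual."""
    return np.polyval(coeffs, h) + np.interp(h, heights_grid, residuals)
```

The point is that the stored residual grid recovers the accuracy the fitted function alone loses: at the grid points, fit plus residual reproduces the sensor model exactly.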
Abstract:
A system and method of generating point clouds from passive images. Image clusters are formed, wherein each image cluster includes two or more passive images selected from a set of passive images. The quality of the point cloud that could be generated from each image cluster is predicted based on a performance prediction score computed for that cluster. A subset of image clusters is selected for further processing based on their performance prediction scores. A mission-specific quality score is determined for each point cloud generated, and the point cloud with the highest quality score is selected for storage.
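The two-stage selection can be sketched generically; the callable parameters stand in for the scoring and generation machinery, which the abstract does not specify:

```python
def select_point_cloud(clusters, predict_score, build_cloud, mission_score,
                       keep=2):
    """Two-stage selection sketch.

    predict_score ranks clusters BEFORE any point cloud is built;
    build_cloud generates a point cloud from a cluster;
    mission_score scores a generated cloud.
    """
    # Stage 1: keep only the clusters predicted to yield the best clouds.
    shortlist = sorted(clusters, key=predict_score, reverse=True)[:keep]
    # Stage 2: build clouds only for the shortlist and keep the best one.
    clouds = [build_cloud(c) for c in shortlist]
    return max(clouds, key=mission_score)
```

Pruning on the cheap prediction score before running expensive point-cloud generation is the design point of the two stages.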
Abstract:
A system and a method for processing multi-linear image data by measuring a relative oscillatory motion from a first-imaged array of the multi-linear optical array to a second-imaged array of the multi-linear optical array as a first function in the time domain via image correlation; transforming the first function from the time domain to a second function in the frequency domain; converting the real and imaginary parts of the second function to polar coordinates to generate a magnitude and a phase; correcting the polar coordinates from the second function in the frequency domain to generate a third function; converting the third function to rectangular coordinates to generate a fourth function in the frequency domain; and transforming the fourth function from the frequency domain to a fifth function in the time domain.
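The chain of transforms above can be sketched end to end with a real FFT; the per-bin attenuation used as the "correction" step is an assumption for the example (the patent leaves the correction unspecified):

```python
import numpy as np

def correct_oscillation(motion, attenuate):
    """Frequency-domain correction of a measured oscillatory motion.

    motion: 1-D real samples of relative motion versus time (the first
    function, measured by image correlation between two arrays).
    attenuate: per-frequency-bin gain applied to the magnitude; the phase
    is preserved, mirroring the polar-coordinate correction step.
    """
    spec = np.fft.rfft(motion)                 # time -> frequency
    mag, phase = np.abs(spec), np.angle(spec)  # rectangular -> polar
    mag = mag * attenuate                      # correct in polar form
    spec = mag * np.exp(1j * phase)            # polar -> rectangular
    return np.fft.irfft(spec, n=len(motion))   # frequency -> time
```

Working in polar form lets the magnitude of an oscillation be suppressed at chosen frequencies without disturbing the phase of the remaining signal.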