Abstract:
An image processing method for measuring displacement of an object is provided. The method includes acquiring first sequential images and second sequential images, wherein two adjacent images of the first sequential images include first overlap portions, wherein two adjacent images of the second sequential images include second overlap portions, wherein the first sequential images correspond to a first three-dimensional (3D) surface on the object at a first state and the second sequential images correspond to a second 3D surface on the object at a second state. The method further includes deblurring the first sequential images and the second sequential images to obtain sharp focal plane images based on a blind deconvolution method, and stitching the sharpened first sequential images and the sharpened second sequential images into a first sharp 3D image and a second sharp 3D image, respectively, based on camera pose estimations obtained by solving a perspective-n-point (PnP) problem using a refined robust weighted Levenberg-Marquardt (RRWLM) algorithm. The method further includes forming a first two-dimensional (2D) image and a second 2D image by unfolding, respectively, the first sharp 3D image and the second sharp 3D image, and generating a displacement (strain) map image from the first 2D and second 2D images by performing a two-dimensional digital image correlation (DIC) method.
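The final correlation step can be illustrated with a minimal numeric sketch. This is not the patented pipeline (no blind deconvolution, PnP, or RRWLM stages): it assumes integer pixel shifts and uses FFT-based phase correlation per subset as a simplified stand-in for full subset-matching DIC. The function names `phase_correlation_shift` and `dic_displacement_map` are illustrative assumptions.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Integer (dy, dx) translation of `mov` relative to `ref` (circular)."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the image size wrap around to negative values
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def dic_displacement_map(img1, img2, subset=32, step=32):
    """Coarse displacement map: one (dy, dx) vector per image subset."""
    h, w = img1.shape
    disp = {}
    for y in range(0, h - subset + 1, step):
        for x in range(0, w - subset + 1, step):
            disp[(y, x)] = phase_correlation_shift(
                img1[y:y + subset, x:x + subset],
                img2[y:y + subset, x:x + subset])
    return disp
```

Per-subset shift vectors of this kind are what a strain map is derived from; a production DIC method would additionally resolve sub-pixel displacement by interpolating around the correlation peak.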
Abstract:
Systems and methods for multiple image registration of images of a scene or an object. Receiving image data, the image data includes images collected from different measurements of a single modality or multiple modalities, either at different rotation angles, horizontal shifts, or vertical shifts, of the scene or the object. Estimating registration parameters, using pairs of images, each pair of images includes a reference image and a floating image. Generating parameter matrices corresponding to registration parameters using an image registration process for all pairs of images. Decomposing each parameter matrix into a low-rank matrix of updated registration parameters and a sparse matrix corresponding to the registration parameter errors for each low-rank matrix, to obtain updated registration parameters for robust registration. Using the updated registration parameters to form a transformation matrix to register the images with at least one reference image, resulting in robust registration of the images.
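The low-rank plus sparse decomposition of the parameter matrices can be sketched with a generic robust principal component analysis routine. The inexact augmented Lagrange multiplier (IALM) scheme below is a standard stand-in, not the patented method; the function name `rpca` and its parameter defaults are illustrative assumptions.

```python
import numpy as np

def soft(X, t):
    """Elementwise soft-thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def rpca(M, iters=200, rho=1.5):
    """Split M into low-rank L plus sparse S via inexact ALM robust PCA."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))      # standard sparsity weight
    mu = 1.25 / np.linalg.norm(M, 2)    # initial penalty parameter
    Y = np.zeros_like(M)                # Lagrange multiplier
    S = np.zeros_like(M)
    for _ in range(iters):
        # singular value thresholding recovers the low-rank part
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * soft(sig, 1.0 / mu)) @ Vt
        # elementwise shrinkage recovers the sparse error part
        S = soft(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
        mu = min(mu * rho, 1e7)
    return L, S
```

In the abstract's terms, `L` holds the updated registration parameters and `S` the sparse registration-parameter errors.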
Abstract:
Systems, methods and apparatus for image processing for reconstructing a super resolution (SR) image from multispectral (MS) images. A processor iteratively fuses an MS image with an associated panchromatic (PAN) image of the scene. Each iteration includes using a gradient descent (GD) approach with a learned forward operator to generate an intermediate high-resolution multispectral (IHRMS) image with an increased spatial resolution and a smaller error to the DSRMS image compared to the stored MS image. The IHRMS image is projected using a trained convolutional neural network (CNN) to obtain an estimated synthesized high-resolution multispectral (ESHRMS) image for a first iteration. The ESHRMS image and the PAN image are used as an input to the GD approach for following iterations, and the updated IHRMS image is an input to another trained CNN for the following iterations. After a predetermined number of iterations, the fused high-spatial and high-spectral resolution MS image is output.
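The unrolled GD-plus-projection structure can be sketched numerically, with average pooling standing in for the learned forward operator and a box blur standing in for the trained CNN projection; the PAN branch is omitted for brevity. All function names and parameters here are illustrative assumptions, not the patented network.

```python
import numpy as np

def downsample(x, f=2):
    """Forward operator A: f-by-f average pooling (blur + decimation),
    a simple stand-in for the learned forward operator."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def adjoint(r, f=2):
    """Adjoint A^T of average pooling."""
    return np.repeat(np.repeat(r, f, axis=0), f, axis=1) / (f * f)

def cnn_stand_in(x):
    """Placeholder for the trained CNN projection: a mild edge-padded
    box blur plays the role of the learned image prior."""
    p = np.pad(x, 1, mode='edge')
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] + x) / 5.0

def fuse(y_lr, iters=4, f=2, prior_weight=0.1):
    """Unrolled loop: gradient step on ||A x - y||^2, then 'projection'."""
    step = f * f                                # fast stable step for this A
    x = np.repeat(np.repeat(y_lr, f, axis=0), f, axis=1)  # initial upsample
    for _ in range(iters):
        x = x - step * adjoint(downsample(x, f) - y_lr, f)  # GD step
        x = (1 - prior_weight) * x + prior_weight * cnn_stand_in(x)
    return x
```

The GD step keeps the high-resolution estimate consistent with the stored low-resolution MS image, while the projection step injects the (here, trivial) image prior.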
Abstract:
Systems and methods for a radar system to produce a radar image of a region of interest (ROI). A set of antennas transmits radar pulses to the ROI and measures a set of reflections from the ROI corresponding to the transmitted radar pulses. A processor acquires an estimate of the radar image by matching the reflections of the ROI measurements for each antenna, and determines a set of shifts of the radar image, wherein each shift corresponds to an antenna and is caused by an uncertainty in a position of the antenna. The estimate of the radar image is updated based on the determined set of shifts, wherein, for each antenna, the estimate of the radar image is shifted by the determined shift corresponding to that antenna that fits the reflections of the ROI measurements of the antenna.
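The shift-fitting idea can be illustrated in one dimension: estimate each antenna's image shift by circular cross-correlation against the current fused estimate, align, and re-fuse. This is a simplified sketch under integer-shift, noise-free assumptions, not the patented update rule; `best_shift` and `estimate_and_align` are illustrative names.

```python
import numpy as np

def best_shift(ref, sig):
    """Circular shift s such that np.roll(sig, s) best matches ref."""
    corr = np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(sig))).real
    return int(np.argmax(corr))

def estimate_and_align(images, iters=3):
    """Alternate between fusing the per-antenna images and re-estimating
    each antenna's shift against the current fused estimate."""
    est = np.mean(images, axis=0)
    shifts = [0] * len(images)
    for _ in range(iters):
        shifts = [best_shift(est, im) for im in images]
        est = np.mean([np.roll(im, s) for im, s in zip(images, shifts)],
                      axis=0)
    return est, shifts
```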
Abstract:
A method and system generates a three-dimensional (3D) image by first acquiring data from a scene using multiple parallel baselines and multiple different pulse repetition frequencies (PRF), wherein the multiple baselines are arranged in a hyperplane. Then, a 3D compressive sensing reconstruction procedure is applied to the data to generate the 3D image corresponding to the scene.
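The compressive sensing reconstruction can be sketched, in generic 1D form, as sparse recovery by iterative shrinkage-thresholding (ISTA); the random sensing matrix below is a stand-in for the actual multi-baseline, multi-PRF measurement operator, and the parameter values are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam=0.05, iters=2000):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - A.T @ (A @ x - y) / L    # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrinkage
    return x
```

With far fewer measurements than unknowns, the sparsity penalty lets the iteration recover a sparse reflectivity exactly where plain least squares could not.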
Abstract:
A system and method determines a noise free image of a scene located behind a wall. A transmit antenna emits a radar pulse from different locations in front of the wall, wherein the radar pulses propagate through the wall and are reflected by the scene as echoes. A set of stationary receive antennas acquire the echoes corresponding to each pulse transmitted from each different location. A radar imaging subsystem connected to the transmit antenna and the set of receive antennas determines a noisy image of the scene for each location of the transmit antenna. A total variation denoiser denoises each noisy image to produce a corresponding denoised image. A combiner combines incoherently the denoised images to produce the noise free image.
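The denoise-then-combine stage can be sketched with gradient descent on a smoothed total variation objective, followed by magnitude averaging for the incoherent combination; the step size, weight, and smoothing constant are illustrative assumptions, not the patented denoiser.

```python
import numpy as np

def tv(x):
    """Anisotropic total variation (sum of absolute finite differences)."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def tv_denoise(y, lam=0.15, step=0.02, iters=200, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * smoothed-TV(x)."""
    x = y.copy()
    for _ in range(iters):
        # forward differences; the appended copy makes the last diff zero
        dx = np.diff(x, axis=1, append=x[:, -1:])
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
        px, py = dx / mag, dy / mag
        # backward-difference divergence (exact adjoint: boundary p is zero)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x = x - step * ((x - y) - lam * div)
    return x

def combine_incoherently(images):
    """Incoherent combination: average the magnitudes of denoised images."""
    return np.mean([np.abs(im) for im in images], axis=0)
```

Incoherent (magnitude) averaging discards phase, so per-image phase errors from the different transmit locations do not cancel the combined signal.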
Abstract:
An image of a region of interest (ROI) is generated by a radar system including a set of one or more antennas. The radar system has unknown position perturbations. Pulses are transmitted, as a source signal, to the ROI using the set of antennas at different positions, and echoes are received, as a reflected signal, by the set of antennas at the different positions. The reflected signal is deconvolved with the source signal to produce deconvolved data. The deconvolved data are compensated according to a coherence between the reflected signals to produce compensated data. Then, a procedure is applied to the compensated data to produce reconstructed data, which are used to reconstruct autofocused images.
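The deconvolution step can be illustrated with standard frequency-domain Wiener deconvolution of the echo with the transmitted pulse; the `snr` regularizer is an assumed parameter, and the coherence-based compensation stage is not modeled here.

```python
import numpy as np

def wiener_deconvolve(received, source, snr=1e6):
    """Wiener deconvolution of the received echo with the transmitted
    pulse; the 1/snr term regularizes frequency bins where the pulse
    spectrum is weak, avoiding division by near-zero values."""
    n = len(received)
    S = np.fft.fft(source, n)   # pulse spectrum, zero-padded to echo length
    R = np.fft.fft(received)
    return np.fft.ifft(R * np.conj(S) / (np.abs(S) ** 2 + 1.0 / snr)).real
```

Applied to an echo, this collapses each pulse-shaped return back to an impulse at the corresponding range bin.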
Abstract:
A method reconstructs a scene behind a wall by transmitting a signal through the wall into the scene. Parameters of the wall are estimated from a reflected signal. A model of a permittivity of the wall is generated using the parameters, and then the scene is reconstructed as an image from the reflected signal using the model and sparse recovery.
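A simplified physical illustration of the wall parameter estimation: if the front-face and back-face reflections of a wall of known thickness can be separated in time, the relative permittivity follows from the two-way propagation delay. This is textbook wave physics, not the patented estimator; `wall_permittivity` is an illustrative name.

```python
import numpy as np

C = 299792458.0  # speed of light in vacuum, m/s

def wall_permittivity(t_front, t_back, thickness):
    """Relative permittivity from the two-way delay between the front-face
    and back-face wall reflections, for a wall of known thickness."""
    v = 2.0 * thickness / (t_back - t_front)   # wave speed inside the wall
    return (C / v) ** 2                        # eps_r = (c / v)^2
```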
Abstract:
A spotlight synthetic aperture radar (SAR) image is generated by randomly directing a beam of transmitted pulses at a set of two or more areas using a steerable array of antennas. Each area is illuminated by an approximately equal number of the transmitted pulses. Then, a reconstruction procedure is applied independently to received signals from each area due to reflecting the transmitted pulses to generate the image corresponding to the set of areas.
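The random beam-scheduling constraint (each area illuminated by an approximately equal number of pulses) can be sketched as a randomly permuted round-robin assignment; `randomized_schedule` is an illustrative name, not the patented scheduler.

```python
import numpy as np

def randomized_schedule(n_areas, n_pulses, rng=None):
    """Randomly ordered pulse-to-area assignment in which every area is
    illuminated by an approximately equal number of pulses."""
    if rng is None:
        rng = np.random.default_rng()
    base = np.arange(n_pulses) % n_areas   # equal counts up to a remainder
    return rng.permutation(base)           # randomize the pointing order
```

Permuting a balanced assignment keeps the per-area pulse counts exactly equal (up to the remainder) while making the pointing sequence unpredictable.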