Abstract:
A process identifies features in a probe image and a donor image. A similarity measure matches the features in the probe image with features in the donor image and forms pairs of matched features. The process then forms clusters of the pairs based on the pairs occupying similar locations in the probe image, and verifies that the clusters in the probe image are good fits for corresponding features in the donor image. Locations of the clusters and locations of the corresponding features are marked, and the extent to which the clusters and the corresponding features represent the same semantic class is determined. The process calculates a score based on the clusters having the good fit and the clusters in the probe image having a similar semantic interpretation as the corresponding clusters in the donor image.
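A minimal sketch of the matching and clustering steps, assuming ORB features, Hamming-distance matching, and DBSCAN for grouping matched probe keypoints by location; all function and parameter names here are illustrative, not taken from the described process.

# Hypothetical sketch: match features between a probe and a donor image,
# then cluster the matched probe keypoints by spatial proximity.
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def match_and_cluster(probe_path, donor_path, eps=40.0, min_samples=3):
    probe = cv2.imread(probe_path, cv2.IMREAD_GRAYSCALE)
    donor = cv2.imread(donor_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_p, des_p = orb.detectAndCompute(probe, None)
    kp_d, des_d = orb.detectAndCompute(donor, None)

    # Similarity measure: Hamming distance between binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_p, des_d)
    if len(matches) == 0:
        return {}

    # Cluster matched probe keypoints that occupy similar locations.
    pts = np.array([kp_p[m.queryIdx].pt for m in matches])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)

    clusters = {}
    for m, label in zip(matches, labels):
        if label >= 0:  # -1 marks unclustered points
            clusters.setdefault(label, []).append(m)
    return clusters

Verification of the fit, semantic labeling of the clusters, and the final score would operate on the returned clusters.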
Abstract:
An image forensics system estimates a camera response function (CRF) associated with a digital image, and compares the estimated CRF to a set of rules and compares the estimated CRF to a known CRF. The known CRF is associated with a make and a model of an image sensing device. The system applies a fusion analysis to results obtained from comparing the estimated CRF to a set of rules and from comparing the estimated CRF to the known CRF, and assesses the integrity of the digital image as a function of the fusion analysis.
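A minimal sketch of the comparison and fusion step, assuming the estimated and known CRFs are sampled as normalized brightness curves on [0, 1]; the rule checks and fusion weights are illustrative assumptions, not the system's actual criteria.

import numpy as np

def assess_crf(estimated_crf, known_crf, w_rules=0.5, w_reference=0.5):
    est = np.asarray(estimated_crf, dtype=float)
    ref = np.asarray(known_crf, dtype=float)

    # Rule-based score: a plausible CRF should be monotonically
    # non-decreasing and bounded by [0, 1].
    monotone = np.all(np.diff(est) >= 0)
    bounded = est.min() >= 0.0 and est.max() <= 1.0
    rule_score = float(monotone and bounded)

    # Reference score: similarity to the CRF known for the claimed
    # camera make and model (1 = identical, 0 = far apart).
    rmse = np.sqrt(np.mean((est - ref) ** 2))
    reference_score = max(0.0, 1.0 - rmse)

    # Fusion: weighted combination of the two analyses gives an
    # integrity score for the digital image.
    return w_rules * rule_score + w_reference * reference_score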
Abstract:
A method includes obtaining a gaze feature of a user of a device, wherein the device has already been unlocked using a second feature, the gaze feature being based on images of a pupil relative to a display screen of the device, comparing the obtained gaze feature to known gaze features of an authorized user of the device, and determining whether or not the user is authorized to use the device based on the comparison.
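A hedged sketch of the comparison step, assuming the gaze feature is a fixed-length vector summarizing pupil positions relative to the display screen; the enrollment data, feature layout, and threshold are assumptions for illustration only.

import numpy as np

def is_authorized(observed_gaze, enrolled_gazes, threshold=0.25):
    """Compare an observed gaze feature against an authorized user's
    enrolled gaze features; return True if any enrolled feature is close."""
    observed = np.asarray(observed_gaze, dtype=float)
    for enrolled in enrolled_gazes:
        dist = np.linalg.norm(observed - np.asarray(enrolled, dtype=float))
        if dist < threshold:
            return True
    return False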
Abstract:
A method includes using a point spread function to de-blur an original motion invariant image to create a modified motion invariant image; using an edge detector to find edges in the modified motion invariant image; determining the distances between the edges and corresponding artifacts in the modified motion invariant image; using those distances to estimate a velocity of an object in the modified motion invariant image; generating a corrected point spread function corresponding to the estimated velocity of the object; and using the corrected point spread function to de-blur the original motion invariant image and create a resulting image.
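A simplified sketch of the first steps of this pipeline, assuming a 1-D horizontal box point spread function and frequency-domain Wiener deconvolution; the step that converts edge-to-artifact distances into a velocity is left as a placeholder assumption.

import numpy as np
import cv2

def wiener_deblur(image, psf, k=0.01):
    """De-blur a grayscale image with a known PSF via Wiener filtering."""
    img = image.astype(float) / 255.0
    pad = np.zeros_like(img)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    H = np.fft.fft2(pad)
    G = np.fft.fft2(img)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    out = np.real(np.fft.ifft2(F))
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)

def deblur_and_find_edges(blurred, psf_length=15):
    psf = np.ones((1, psf_length)) / psf_length   # assumed initial PSF
    deblurred = wiener_deblur(blurred, psf)
    edges = cv2.Canny(deblurred, 50, 150)         # edges in modified image
    # Placeholder: in the described method, the distance between each edge
    # and its residual artifact would be measured here, converted to a
    # velocity estimate, and used to build a corrected PSF.
    return deblurred, edges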
Abstract:
Multiple classifiers can be applied independently to evaluate images or video. Where class distributions are heavily imbalanced, a local expert forest model for meta-level score fusion can be used for event detection. The model can adapt to performance variations of the classifiers in different regions of the score space. Multiple pairs of experts based on different partitions, or "trees," can form a "forest," balancing local adaptivity against over-fitting. Among ensemble learning methods, stacking with a meta-level classifier can be used to fuse the outputs of multiple base-level classifiers into a final score. A knowledge-transfer framework can reuse the base-training data for learning the meta-level classifier. By recycling the knowledge obtained during the base-classifier-training stage, efficient use can be made of all available information, which can lead to better fusion and better overall performance.
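A minimal stacking sketch in which base classifier scores are fused by a meta-level classifier; the local-expert-forest partitioning and knowledge-transfer details are omitted, and the toy data and model choice (logistic regression) are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

# base_scores: one column of scores per base classifier, one row per example.
rng = np.random.default_rng(0)
base_scores = rng.random((200, 3))
labels = (base_scores.mean(axis=1) > 0.5).astype(int)  # toy event labels

meta = LogisticRegression()
meta.fit(base_scores, labels)                  # meta-level fusion classifier
fused = meta.predict_proba(base_scores)[:, 1]  # final fused event scores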
Abstract:
Motion de-blurring systems and methods are described herein. One motion de-blurring system includes an image sensing element, one or more motion sensors in an imaging device, a lens element that undergoes motion during a capture of an image by the sensing element, and a de-blurring element to de-blur the image captured by the sensing element via de-convolving a Point Spread Function (PSF).
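A sketch of the de-blurring element under the assumption that the motion sensors report a net horizontal displacement (in pixels) during exposure; the PSF is then a uniform line of that length, and Richardson-Lucy deconvolution recovers the image. All parameter names are illustrative.

import numpy as np
from skimage import restoration

def deblur_from_motion(image, displacement_px):
    length = max(1, int(round(displacement_px)))
    psf = np.ones((1, length)) / length          # uniform motion-blur PSF
    img = image.astype(float) / 255.0
    restored = restoration.richardson_lucy(img, psf, 30)  # 30 iterations
    return np.clip(restored * 255.0, 0, 255).astype(np.uint8)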
Abstract:
Cargo presence detection devices, systems, and methods are described herein. One cargo presence detection system includes one or more sensors positioned in an interior space of a container and arranged to collect background image data about at least a portion of the interior space of the container and updated image data about that portion of the interior space, and a detection component that receives the image data from the one or more sensors and identifies whether one or more cargo items are present in the interior space of the container based on analysis of the background and updated image data.
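An illustrative sketch of the detection component, assuming the background and updated image data are BGR frames compared by absolute differencing; the threshold values and the "cargo present" criterion are assumptions.

import cv2
import numpy as np

def cargo_present(background, updated, diff_threshold=30, area_fraction=0.02):
    bg = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    up = cv2.cvtColor(updated, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(bg, up)
    _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    changed = np.count_nonzero(mask) / mask.size
    return changed > area_fraction   # enough of the interior has changed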
Abstract:
A system assesses the integrity of a digital image by detecting an edge in the digital image and defining a patch of pixels encompassing the edge. The system then generates data relating to intensity and gradient magnitude for pixels in the patch, analyzes the data relating to intensity and gradient magnitude, and determines that the digital image has been forged or the digital image has not been forged based on the analysis of the data relating to intensity and gradient magnitude.
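A hedged sketch of the data-generation step: detect edges, take a pixel patch around one edge point, and compute intensity and gradient-magnitude statistics for that patch. The decision rule that would label the image forged is not shown, and the patch size and statistics are assumptions.

import cv2
import numpy as np

def edge_patch_features(image_gray, patch_half=8):
    edges = cv2.Canny(image_gray, 50, 150)
    ys, xs = np.nonzero(edges)
    if len(xs) == 0:
        return None
    y, x = ys[0], xs[0]                              # one edge location
    patch = image_gray[max(0, y - patch_half):y + patch_half,
                       max(0, x - patch_half):x + patch_half]
    gx = cv2.Sobel(patch, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_64F, 0, 1)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    # Intensity and gradient-magnitude data the analysis step would examine.
    return {
        "intensity_mean": float(patch.mean()),
        "intensity_std": float(patch.std()),
        "gradient_mean": float(grad_mag.mean()),
        "gradient_max": float(grad_mag.max()),
    }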
Abstract:
Methods, devices, and systems for cross-sensor iris matching are described herein. One method includes capturing a first image of an iris using a first sensor, capturing a second image of an iris using a second sensor, and determining whether the iris in the first image matches the iris in the second image based on characteristics of the first sensor and the second sensor and image quality of the first image and the second image.
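A minimal sketch of the cross-sensor decision, assuming the irises are represented as binary iris codes and that sensor characteristics and image quality adjust the match threshold; the adjustment formula is purely illustrative.

import numpy as np

def irises_match(code_a, code_b, quality_a, quality_b, same_sensor,
                 base_threshold=0.32):
    a = np.asarray(code_a, dtype=bool)
    b = np.asarray(code_b, dtype=bool)
    hamming = np.count_nonzero(a != b) / a.size

    # Relax the threshold for cross-sensor pairs and for lower-quality images
    # (qualities assumed to lie in [0, 1]).
    threshold = base_threshold
    if not same_sensor:
        threshold += 0.03
    threshold += 0.02 * (2.0 - quality_a - quality_b)
    return hamming <= threshold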