Abstract:
A method includes: receiving or capturing an image comprising an identity document (ID) using a mobile device; classifying the ID; analyzing the ID based at least in part on the ID classification; determining at least some identifying information from the ID; at least one of building an ID profile and updating the ID profile, based at least in part on the analysis; providing at least one of the ID and the ID classification to a loan application workflow and/or a new financial account workflow; and driving at least a portion of the workflow based at least in part on the ID and the ID classification. Corresponding systems and computer program products are also disclosed.
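As a non-limiting illustration of driving a workflow from an ID classification, the following sketch maps a classification to the fields a loan-application step expects and reports what is still missing. The class names, field lists, and the IDProfile structure are assumptions for illustration only, not the disclosed implementation.

```python
# Sketch: drive a loan-application workflow step from an ID classification.
from dataclasses import dataclass, field

# Hypothetical mapping from document class to the fields the workflow step needs.
REQUIRED_FIELDS = {
    "drivers_license": ["name", "address", "date_of_birth", "license_number"],
    "passport":        ["name", "date_of_birth", "passport_number", "nationality"],
}

@dataclass
class IDProfile:
    classification: str
    fields: dict = field(default_factory=dict)

def drive_loan_workflow(profile: IDProfile) -> dict:
    """Prefill the workflow step matching the ID classification and flag gaps."""
    wanted = REQUIRED_FIELDS.get(profile.classification, [])
    return {
        "prefilled": {k: profile.fields[k] for k in wanted if k in profile.fields},
        # Missing fields let the workflow prompt for re-capture or manual entry.
        "missing":   [k for k in wanted if k not in profile.fields],
    }

step = drive_loan_workflow(IDProfile("drivers_license", {"name": "A. Sample"}))
```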
Abstract:
Techniques for improved binarization and extraction of information from digital image data are disclosed in accordance with various embodiments. The inventive concepts include independently binarizing portions of the image data on the basis of individual features, e.g. per connected component, and using multiple different binarization thresholds to obtain the best possible binarization result for each portion of the image data independently binarized. Determining the quality of each binarization result may be based on attempted recognition and/or extraction of information therefrom. Independently binarized portions may be assembled into a contiguous result. In one embodiment, a method includes: identifying a region of interest within a digital image; generating a plurality of binarized images based on the region of interest using different binarization thresholds; and extracting data from some or all of the plurality of binarized images. Corresponding systems and computer program products are also disclosed.
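A minimal sketch of the multi-threshold idea follows: a region of interest is binarized at several candidate thresholds, each result is scored, and the best-scoring result is kept. The threshold set is an assumption, and the confidence function is a stand-in for an actual recognition/extraction attempt.

```python
# Sketch: binarize a region of interest at multiple thresholds and keep the best result.
import numpy as np

def binarize_candidates(region: np.ndarray, thresholds=(96, 128, 160, 192)):
    """Yield one bi-tonal image per candidate threshold (hypothetical threshold set)."""
    for t in thresholds:
        yield t, (region > t).astype(np.uint8) * 255

def ocr_confidence(binary_img: np.ndarray) -> float:
    """Placeholder for a recognition engine; in a real pipeline this would attempt
    OCR/extraction on the bi-tonal image and return its confidence."""
    return float(binary_img.mean()) / 255.0  # stand-in heuristic only

def best_binarization(region: np.ndarray):
    """Pick the threshold whose binarization yields the highest recognition score."""
    return max(binarize_candidates(region), key=lambda tb: ocr_confidence(tb[1]))
```

Each independently binarized region selected this way could then be assembled with neighboring regions into a contiguous result.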
Abstract:
A method includes: displaying a digital image on a first portion of a display of a mobile device; receiving user feedback via the display of the mobile device; analyzing the user feedback to determine a meaning of the user feedback; based on the determined meaning of the user feedback, analyzing a portion of the digital image corresponding to either a point of interest or a region of interest indicated by the user feedback to detect one or more connected components depicted within the portion of the digital image; classifying each detected connected component depicted within the portion of the digital image; estimating an identity of each detected connected component based on the classification of the detected connected component; and one or more of: displaying the identity of each detected connected component on a second portion of the display of the mobile device; and providing the identity of each detected connected component to a workflow.
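For illustration, the sketch below analyzes a window around a tapped point, detects connected components in that window, and assigns each a coarse identity. The window size and the aspect-ratio "classifier" are illustrative assumptions standing in for the disclosed classification step; the input is assumed to be an 8-bit grayscale image.

```python
# Sketch: find and coarsely classify connected components near a user's tap.
import cv2
import numpy as np

def components_near_tap(gray: np.ndarray, tap_xy, window=120):
    x, y = tap_xy
    h, w = gray.shape
    roi = gray[max(0, y - window):min(h, y + window),
               max(0, x - window):min(w, x + window)]
    # Otsu binarization of the region of interest around the tap.
    binary = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    results = []
    for i in range(1, n):  # label 0 is the background
        cw, ch = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        # Stand-in identity estimate based only on aspect ratio.
        guess = "text-like" if 0.2 < cw / max(ch, 1) < 5 else "graphic-like"
        results.append({"bbox": stats[i, :4].tolist(), "identity": guess})
    return results
```

The returned identities could then be displayed on a second portion of the screen or handed to a workflow.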
Abstract:
Computer program products for discriminating hand and machine print from each other, and from signatures, are disclosed and include program code readable and/or executable by a processor to: receive an image; determine a color depth of the image; reduce the color depth of non-bi-tonal images to generate a bi-tonal representation of the image; identify a set of one or more graphical line candidates in either the bi-tonal image or the bi-tonal representation, the graphical line candidates including true graphical lines and/or false positives; discriminate any of the true graphical lines from any of the false positives; remove the true graphical lines from the bi-tonal image or the bi-tonal representation without removing the false positives to generate a component map comprising connected components and excluding graphical lines; identify one or more of the connected components in the component map; and output and/or display an indicator of each of the connected components.
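The following sketch illustrates one simple way to erase long horizontal graphical lines from a bi-tonal image before building a component map. The row-coverage heuristic and run-length threshold are assumptions; the disclosed approach additionally discriminates true lines from false positives so that genuine content is not removed.

```python
# Sketch: erase long horizontal line candidates before connected-component analysis.
import numpy as np

def remove_horizontal_lines(bitonal: np.ndarray, min_run_frac=0.6):
    """bitonal: 2-D array with foreground == 1. Returns a copy with long lines erased."""
    cleaned = bitonal.copy()
    width = bitonal.shape[1]
    for r, row in enumerate(bitonal):
        # A row whose foreground covers most of the image width is treated as a
        # graphical line candidate (stand-in for the disclosed candidate test).
        if row.sum() >= min_run_frac * width:
            cleaned[r, :] = 0
    return cleaned
```

The cleaned image would then be passed to a connected-component labeling step to produce the component map.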
Abstract:
Techniques for capturing long document images and generating composite images therefrom include: detecting a document depicted in image data; tracking a position of the detected document within the image data; selecting a plurality of images, wherein the selection is based at least in part on the tracked position of the detected document; and generating a composite image based on at least one of the selected plurality of images. The tracking and selection are optionally but preferably based in whole or in part on motion vectors estimated at least partially based on analyzing image data such as test and reference frames within the captured video data/images. Corresponding systems and computer program products are also disclosed.
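As an illustration of selecting frames by tracked document position and assembling a composite of a long document, the sketch below keeps frames whose tracked vertical offset has advanced by at least a fixed stride and stacks a strip from each. The (offset, image) representation, stride, and strip height are assumptions for illustration.

```python
# Sketch: select frames by tracked document position and stack them into a composite.
import numpy as np

def select_frames(frames, stride_px=300):
    """frames: list of (vertical_offset_px, image) pairs from the tracking step."""
    selected, last = [], None
    for offset, image in sorted(frames, key=lambda f: f[0]):
        if last is None or offset - last >= stride_px:
            selected.append((offset, image))
            last = offset
    return selected

def composite(selected, strip_px=300):
    # Naive composite: take a fixed-height strip from each selected frame and stack them.
    strips = [img[:strip_px] for _, img in selected]
    return np.vstack(strips)
```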
Abstract:
In one embodiment, a method includes receiving, at a mobile device, an image depicting a document; attempting to classify, using a processor of the mobile device, the document depicted in the image to one of a plurality of predetermined document classes, wherein attempting to classify the document results in an ambiguous classification result; determining, using the mobile device, location information identifying a geographic location of the mobile device at a particular time; and disambiguating, using the processor of the mobile device, the ambiguous classification result based on the location information. Exemplary systems and computer program products are also described.
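For illustration, the sketch below resolves an ambiguous classification by preferring the candidate document class issued in the country where the device currently is. The candidate class names, the issuing-region table, and the device-country input are hypothetical names used only to illustrate the idea.

```python
# Sketch: disambiguate an ambiguous classification result using device location.
ISSUING_REGION = {
    "CA_drivers_license": "US",      # California, United States
    "Canada_drivers_license": "CA",  # Canada
}

def disambiguate(candidates, device_country):
    """Prefer the candidate document class issued where the device currently is."""
    matches = [c for c in candidates if ISSUING_REGION.get(c) == device_country]
    return matches[0] if len(matches) == 1 else None  # otherwise still ambiguous

best = disambiguate(["CA_drivers_license", "Canada_drivers_license"], "US")
```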
Abstract:
According to one embodiment, a computer-implemented method includes: capturing an image of a document using a camera of a mobile device; performing optical character recognition (OCR) on the image of the document; extracting an identifier of the document from the image based at least in part on the OCR; comparing the identifier with content from one or more reference data sources, wherein the content from the one or more reference data sources comprises global address information; and determining whether the identifier is valid based at least in part on the comparison. The method may optionally include normalizing the extracted identifier, retrieving additional geographic information, correcting OCR errors, etc. based on comparing extracted information with reference content. Corresponding systems and computer program products are also disclosed.
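A minimal sketch of the validation idea follows: an extracted identifier (here a postal code) is normalized, common OCR confusions are corrected, and the result is checked against reference address content. The reference dictionary and confusion table are illustrative stand-ins for the global address information described above.

```python
# Sketch: normalize an OCR-extracted identifier and validate it against reference data.
OCR_CONFUSIONS = str.maketrans({"O": "0", "I": "1", "B": "8", "S": "5"})
REFERENCE_POSTAL_CODES = {"94025": "Menlo Park, US", "10115": "Berlin, DE"}  # stand-in data

def validate_identifier(raw: str):
    normalized = raw.strip().upper().translate(OCR_CONFUSIONS)
    place = REFERENCE_POSTAL_CODES.get(normalized)
    return {"identifier": normalized, "valid": place is not None, "location": place}

print(validate_identifier(" 9402S "))   # OCR read "S" for "5"; corrected, then validated
```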
Abstract:
Systems, methods, and computer program products are disclosed and include: initiating a capture operation using an image capture component of the mobile device, the capture operation comprising: capturing video data; and estimating a plurality of motion vectors corresponding to motion of the image capture component during the capture operation. The systems, methods, and computer program products also include detecting a document depicted in the video data; tracking a position of the detected document throughout the video data; selecting a plurality of images using the image capture component of the mobile device, wherein the selection is based at least in part on: the tracked position of the detected document; and the estimated motion vectors; and generating a composite image based on at least some of the selected plurality of images.
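To illustrate selecting images based on estimated motion, the sketch below approximates per-frame motion as the mean absolute difference between consecutive grayscale frames (a crude stand-in for true motion vectors) and keeps only frames captured while the device was nearly still. The motion threshold is an assumption.

```python
# Sketch: keep only frames captured with little apparent device motion.
import numpy as np

def low_motion_frames(frames, max_motion=3.0):
    """frames: list of 2-D grayscale arrays. Returns indices of stable frames."""
    stable = []
    for i in range(1, len(frames)):
        motion = np.mean(np.abs(frames[i].astype(np.int16) - frames[i - 1].astype(np.int16)))
        if motion <= max_motion:      # little apparent motion -> likely a sharp frame
            stable.append(i)
    return stable
```

Frames passing this test, combined with the tracked document position, would be candidates for the composite image.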
Abstract:
Systems, methods, and computer program products for smart, automated capture of textual information using optical sensors of a mobile device are disclosed. The textual information is provided to a mobile application or workflow without requiring the user to manually enter or transfer the data, e.g. via a copy/paste operation. The capture and provision are context-aware, and can normalize or validate the captured textual information prior to entry in the workflow or mobile application. Other information required by the workflow and available to the mobile device optical sensors may also be captured and provided, in a single automatic process. As a result, the overall process of capturing information from optical input using a mobile device is significantly simplified and improved in terms of accuracy of data transfer/entry, speed and efficiency of workflows, and user experience.
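As a small illustration of normalizing and validating captured text before it enters a workflow field, the sketch below reformats an OCR'd phone number into a canonical form and rejects values that fail validation. The field type and pattern are assumptions for illustration.

```python
# Sketch: normalize and validate captured text before handing it to a workflow field.
import re

def normalize_phone(ocr_text: str):
    digits = re.sub(r"\D", "", ocr_text)       # strip everything but digits
    if len(digits) == 10:                       # US-style 10-digit number assumed
        return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
    return None   # fails validation; the workflow could prompt for re-capture

form_value = normalize_phone("650.555.0123")    # -> "(650) 555-0123"
```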
Abstract:
Systems, methods, and computer program products for capturing and analyzing image data, preferably video data, are disclosed. The inventive concepts include using multiple frames of image data to generate a composite image, where the composite image may be characterized by a higher resolution than one or more of the individual frames used to generate the composite image, and/or absence of a blurred region present in one or more of the individual frames. Inventive techniques also include determining a minimum capture resolution appropriate for capturing images of particular objects for downstream processing, and optionally triggering generation of a composite image having sufficient resolution to facilitate the downstream processing in response to detecting one or more frames of image data are characterized by a resolution, and/or a region having a resolution, less than the minimum capture resolution appropriate for capturing images of those particular objects.
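For illustration, the sketch below checks whether a detected object's region meets a minimum capture resolution for downstream processing and, if not, flags the capture for composite-image generation. Estimating DPI from a known physical object width, and the specific minimum, are illustrative assumptions.

```python
# Sketch: decide whether to trigger composite generation based on capture resolution.
def needs_composite(region_width_px: int, object_width_inches: float, min_dpi: int = 200) -> bool:
    estimated_dpi = region_width_px / object_width_inches
    return estimated_dpi < min_dpi   # below the minimum -> generate a higher-resolution composite

# Example: a 3.37-inch-wide ID card imaged across 500 pixels (~148 DPI) is too coarse.
print(needs_composite(500, 3.37))
```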