Abstract:
A method includes receiving or capturing a digital image using a mobile device, and using a processor of the mobile device to: determine whether an object depicted in the digital image belongs to a particular object class among a plurality of object classes; determine one or more object features of the object based at least in part on the particular object class at least partially in response to determining the object belongs to the particular object class; build or select an extraction model based at least in part on the one or more determined object features; and extract data from the digital image using the extraction model. The extraction model excludes, and/or the extraction process does not utilize, optical character recognition (OCR) techniques. Related systems and computer program products are also disclosed.
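Below is a minimal, hypothetical Python sketch of the OCR-free flow this abstract describes: classify the depicted object, determine class-specific features, select an extraction model, and extract data geometrically rather than via OCR. The class names, region templates, and helper functions are illustrative assumptions, not details drawn from the disclosure.

from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class ExtractionModel:
    object_class: str
    # Named regions of interest (x, y, w, h) as fractions of image size.
    regions: Dict[str, Tuple[float, float, float, float]]

# Hypothetical registry mapping an object class to its extraction model.
MODEL_REGISTRY: Dict[str, ExtractionModel] = {
    "drivers_license": ExtractionModel(
        "drivers_license",
        {"photo": (0.05, 0.2, 0.3, 0.6), "name_block": (0.4, 0.15, 0.55, 0.2)},
    ),
}

def classify_object(image) -> Optional[str]:
    """Placeholder classifier: decide whether the depicted object belongs
    to a known object class (e.g., by comparing feature vectors)."""
    return "drivers_license"  # stub result for illustration

def determine_features(image, object_class: str) -> List[str]:
    """Determine class-specific features once the class is known."""
    return list(MODEL_REGISTRY[object_class].regions)

def extract_without_ocr(image) -> Dict[str, object]:
    """Classify, select an extraction model, and pull out image regions.
    Note: no OCR is performed; extraction here is purely geometric."""
    object_class = classify_object(image)
    if object_class is None:
        return {}
    features = determine_features(image, object_class)
    model = MODEL_REGISTRY[object_class]
    return {name: model.regions[name] for name in features}

if __name__ == "__main__":
    print(extract_without_ocr(image=None))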
Abstract:
In various embodiments, methods, systems, and computer program products for capturing and processing digital images captured by a mobile device are disclosed. The claimed algorithms are specifically configured to perform and facilitate loan application processing by capturing an image of a document using a mobile device, and analyzing the image (optionally in conjunction with additional data that may also be captured, determined, or otherwise provided to the loan application process) to determine loan-relevant information. Select loan-relevant information may be extracted, compiled, and/or analyzed to facilitate processing of the loan application. Feedback may be provided to streamline application processing, e.g., by ensuring all requisite information is submitted with the loan application. Image capture and document detection are preferably performed using the mobile device, while all other functions may be performed using the mobile device, a remote server, or some combination thereof.
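As an illustration only, the following Python sketch mocks the capture-analyze-feedback loop described above for a loan application. The required field names and the capture_image/extract_fields helpers are assumptions; in practice the analysis could run on the mobile device, a remote server, or both.

from typing import Dict, List, Optional

REQUIRED_LOAN_FIELDS = ["applicant_name", "income", "property_address"]

def capture_image() -> bytes:
    """Stub for on-device image capture."""
    return b"\x00"  # placeholder image bytes

def extract_fields(image: bytes) -> Dict[str, Optional[str]]:
    """Stub for analysis of the captured image to find loan-relevant information."""
    return {"applicant_name": "J. Doe", "income": None, "property_address": None}

def missing_fields(fields: Dict[str, Optional[str]]) -> List[str]:
    """Feedback step: report which loan-relevant fields are still absent."""
    return [k for k in REQUIRED_LOAN_FIELDS if not fields.get(k)]

if __name__ == "__main__":
    image = capture_image()
    fields = extract_fields(image)
    todo = missing_fields(fields)
    if todo:
        print("Please provide:", ", ".join(todo))
    else:
        print("Loan application ready for submission.")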
Abstract:
A method involves: receiving an image comprising an ID; iteratively classifying the ID; and driving at least a portion of a workflow based at least in part on the classifying; wherein at least some of the classification iterations are based at least in part on comparing feature vector data, wherein a first classification iteration comprises determining the ID belongs to a particular class, and wherein each classification iteration subsequent to the first classification iteration comprises determining whether the ID belongs to a subclass falling within the particular class to which the ID was determined to belong in a prior classification iteration. Related systems and computer program products are also disclosed.
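A minimal sketch of iterative, feature-vector-based classification of the kind described above: a first pass assigns a class, and each later pass considers only subclasses of the class chosen in the prior iteration. The class hierarchy, reference vectors, and distance metric below are illustrative assumptions.

from math import dist
from typing import Dict, List, Optional, Tuple

# Hypothetical hierarchy: class -> {subclass: reference feature vector}.
HIERARCHY: Dict[str, Dict[str, Tuple[float, ...]]] = {
    "": {"identity_document": (0.9, 0.1), "invoice": (0.1, 0.9)},
    "identity_document": {"passport": (0.8, 0.3), "drivers_license": (0.95, 0.05)},
    "drivers_license": {"us_ca_license": (0.96, 0.04), "us_ny_license": (0.9, 0.1)},
}

def nearest(feature_vector, candidates) -> Optional[str]:
    """Pick the candidate whose reference vector is closest to the input."""
    if not candidates:
        return None
    return min(candidates, key=lambda name: dist(feature_vector, candidates[name]))

def classify_iteratively(feature_vector) -> List[str]:
    """Drill down from class to subclass until no further subclasses exist."""
    path, current = [], ""
    while current in HIERARCHY:
        chosen = nearest(feature_vector, HIERARCHY[current])
        if chosen is None:
            break
        path.append(chosen)
        current = chosen
    return path

if __name__ == "__main__":
    print(classify_iteratively((0.93, 0.06)))
    # e.g. ['identity_document', 'drivers_license', 'us_ca_license']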
Abstract:
In various embodiments, methods, systems, and computer program products for processing digital images captured by a mobile device are disclosed. Myriad features enable and/or facilitate processing of such digital images using a mobile device that would otherwise be technically impossible or impractical, and furthermore address unique challenges presented by images captured using a camera rather than a traditional flat-bed scanner, paper-feed scanner or multifunction peripheral.
Abstract:
A method for leveraging location-based information to influence business workflows includes initiating a workflow; performing at least one operation within the workflow using a processor of a mobile device; receiving location information pertaining to the workflow; and influencing at least a portion of the workflow based on the location information. The workflow is configured to facilitate a business process. In some embodiments, the method includes determining location information corresponding to the mobile device, prompting a user to capture an image of a document, associating the location information with the captured image as location metadata, and storing the location metadata and captured image to a memory of the mobile device. Exemplary systems and computer program products are also described.
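A small, hypothetical sketch of the location-aware capture step described above: determine the device location, capture an image, attach the location as metadata, and let it influence a later workflow decision. The helper functions and the routing rule are assumptions for illustration only.

import json
import time
from typing import Dict, Tuple

def get_device_location() -> Tuple[float, float]:
    """Stub for a mobile location API (latitude, longitude)."""
    return (37.7749, -122.4194)

def capture_document_image() -> bytes:
    """Stub prompting the user to capture an image of a document."""
    return b"\x00"

def store_with_metadata(image: bytes, location: Tuple[float, float]) -> Dict:
    """Associate location metadata with the captured image and persist both."""
    record = {
        "captured_at": time.time(),
        "location": {"lat": location[0], "lon": location[1]},
        "image_bytes": len(image),  # stand-in for a reference to the stored image
    }
    with open("capture_metadata.json", "w") as fh:
        json.dump(record, fh)
    return record

def influence_workflow(record: Dict) -> str:
    """Example of location influencing the workflow: route by longitude."""
    return "us_processing_queue" if record["location"]["lon"] < 0 else "intl_processing_queue"

if __name__ == "__main__":
    rec = store_with_metadata(capture_document_image(), get_device_location())
    print(influence_workflow(rec))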
Abstract:
In one embodiment, a method includes receiving an image of a document; performing optical character recognition (OCR) on the image; extracting an address of a sender of the document from the image based on the OCR; comparing the extracted address with content in a first database; identifying complementary textual information in a second database based on the address; and at least one of: extracting additional content from the image of the document; correcting one or more OCR errors in the document using the complementary textual information; and normalizing data from the document prior to determining a validity of the document using at least one of the complementary textual information and predefined business rules. At least one of the aforementioned operations is performed using a processor of a mobile device. Exemplary systems and computer program products are also disclosed.
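A compact sketch of the lookup-and-correct flow in this abstract: take OCR text, pull a sender address, match it against known senders, and use the matched record as complementary information when correcting OCR errors. The sample databases, regular expressions, and correction rule below are illustrative assumptions.

import re
from typing import Dict, Optional

# Hypothetical "first database" of known sender addresses.
KNOWN_SENDERS = {"100 MAIN ST SPRINGFIELD": "ACME Utilities"}
# Hypothetical "second database" of complementary textual information.
COMPLEMENTARY_INFO = {"ACME Utilities": {"account_prefix": "AC-", "currency": "USD"}}

def extract_sender_address(ocr_text: str) -> Optional[str]:
    """Pull the first street-address-looking span from the OCR output."""
    match = re.search(r"\d+\s+[A-Z]+\s+ST\s+[A-Z]+", ocr_text.upper())
    return match.group(0) if match else None

def correct_with_complementary_info(ocr_text: str) -> Dict[str, str]:
    """Match the sender, then use its record to correct/normalize a field."""
    address = extract_sender_address(ocr_text)
    sender = KNOWN_SENDERS.get(address or "")
    info = COMPLEMENTARY_INFO.get(sender or "", {})
    # Example correction: OCR often confuses 'O' and '0'; normalize digits
    # in the account field and prepend the known account prefix.
    account = re.search(r"ACCOUNT[:\s]+(\S+)", ocr_text.upper())
    account_no = account.group(1).replace("O", "0") if account else ""
    return {"sender": sender or "", "account": info.get("account_prefix", "") + account_no}

if __name__ == "__main__":
    sample = "From: 100 Main St Springfield\nAccount: 12O45"
    print(correct_with_complementary_info(sample))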
Abstract:
In several embodiments, methods, systems, and computer program products for processing digital images captured by a mobile device are disclosed. The techniques include detecting medical documents and/or documents relevant to an insurance claim by defining candidate edge points based on the captured image data and defining four sides of a tetragon based on at least some of the candidate edge points. In the case of an insurance claim process, the techniques also include determining whether the document is relevant to an insurance claim; and in response to determining the document is relevant to the insurance claim, submitting the image data, information extracted from the image data, or both to a remote server for claims processing. The image capture and processing techniques further facilitate processing of medical documents and/or insurance claims with a plurality of additional features that may be used individually or in combination in various embodiments.
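A simplified sketch of the document-detection step described above: define candidate edge points from local intensity gradients, define four sides of a tetragon from those points, and decide whether to submit the result for claims processing. The gradient threshold, corner heuristic, and is_claim_relevant test are assumptions, not the disclosed algorithm.

from typing import List, Tuple

Point = Tuple[int, int]

def candidate_edge_points(gray: List[List[int]], threshold: int = 40) -> List[Point]:
    """Mark pixels whose horizontal or vertical intensity jump exceeds a threshold."""
    points = []
    for y in range(1, len(gray)):
        for x in range(1, len(gray[0])):
            dx = abs(gray[y][x] - gray[y][x - 1])
            dy = abs(gray[y][x] - gray[y - 1][x])
            if max(dx, dy) > threshold:
                points.append((x, y))
    return points

def tetragon_from_points(points: List[Point]) -> List[Tuple[Point, Point]]:
    """Approximate four corners from the candidate points and return four sides."""
    tl = min(points, key=lambda p: p[0] + p[1])
    br = max(points, key=lambda p: p[0] + p[1])
    tr = max(points, key=lambda p: p[0] - p[1])
    bl = min(points, key=lambda p: p[0] - p[1])
    corners = [tl, tr, br, bl]
    return [(corners[i], corners[(i + 1) % 4]) for i in range(4)]

def is_claim_relevant(extracted_text: str) -> bool:
    """Trivial relevance test standing in for the insurance-claim check."""
    return "claim" in extracted_text.lower()

if __name__ == "__main__":
    # Synthetic 6x6 image: a bright document region on a dark background.
    img = [[200 if 1 <= x <= 4 and 1 <= y <= 4 else 10 for x in range(6)] for y in range(6)]
    sides = tetragon_from_points(candidate_edge_points(img))
    print("Tetragon sides:", sides)
    if is_claim_relevant("Explanation of benefits - claim"):
        print("Would submit image data and extracted fields to the claims server.")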