Abstract:
Comparing extracted card data from a continuous scan comprises receiving, by one or more computing devices, a digital scan of a card; obtaining a plurality of images of the card from the digital scan; performing an optical character recognition algorithm on each of the plurality of images; comparing the results of the optical character recognition algorithm across the plurality of images; determining whether a configured threshold of the results for the plurality of images match each other; and verifying the results when they match. A threshold confidence level for the extracted card data can be employed to determine the accuracy of the extraction. Data is further extracted from blended images and three-dimensional models of the card. Embossed text and holograms in the images may be used to prevent fraud.
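The frame-agreement check described above can be sketched as follows. The function name and the default agreement fraction are illustrative assumptions, not parameters taken from the abstract; a minimal sketch assuming each frame's OCR result is already a string.

```python
from collections import Counter

def verify_ocr_results(results, threshold=0.8):
    """Accept the extraction only when a configured fraction of the
    per-frame OCR results agree with each other (threshold is an
    assumed default, not a value from the source)."""
    if not results:
        return None
    # Most common extraction across frames and its frame count.
    value, count = Counter(results).most_common(1)[0]
    if count / len(results) >= threshold:
        return value  # verified: enough frames agree
    return None      # agreement below threshold; do not verify
```

With four frames where three agree, a 0.7 threshold verifies the majority value while a 0.9 threshold rejects it.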
Abstract:
Extracting financial card information with relaxed alignment comprises a method to receive an image of a card, determine one or more edge finder zones in locations of the image, and identify lines in the one or more edge finder zones. The method further identifies one or more quadrilaterals formed by intersections of extrapolations of the identified lines, determines an aspect ratio of each of the one or more quadrilaterals, and compares the determined aspect ratios to an expected aspect ratio. The method then identifies a quadrilateral that matches the expected aspect ratio and performs an optical character recognition algorithm on a rectified model of that quadrilateral. A similar method is performed on multiple cards in an image, and the results of the analysis of each of the cards are compared to improve the accuracy of the data.
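The aspect-ratio comparison can be sketched as below. The expected ratio here assumes a standard ISO/IEC 7810 ID-1 payment card (85.60 mm × 53.98 mm); the tolerance and function names are illustrative, not values from the abstract.

```python
import math

# ISO/IEC 7810 ID-1 card: 85.60 mm x 53.98 mm, aspect ratio ~1.586.
ID1_ASPECT = 85.60 / 53.98

def quad_aspect_ratio(corners):
    """Aspect ratio of a quadrilateral given corners ordered
    top-left, top-right, bottom-right, bottom-left."""
    tl, tr, br, bl = corners
    width = (math.dist(tl, tr) + math.dist(bl, br)) / 2
    height = (math.dist(tl, bl) + math.dist(tr, br)) / 2
    return width / height

def best_card_quad(candidates, expected=ID1_ASPECT, tol=0.08):
    """Return the candidate quadrilateral whose aspect ratio is closest
    to the expected ratio, within a relative tolerance; None if no
    candidate matches."""
    matches = [q for q in candidates
               if abs(quad_aspect_ratio(q) - expected) / expected <= tol]
    return min(matches,
               key=lambda q: abs(quad_aspect_ratio(q) - expected),
               default=None)
```

A square candidate is rejected while a 1.6-ratio rectangle (close to the ID-1 ratio) is selected.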
Abstract:
Embodiments herein provide computer-implemented techniques for allowing a user computing device to extract financial card information using optical character recognition (“OCR”). Extracting financial card information may be improved by applying various classifiers and other transformations to the image data. For example, applying a linear classifier to the image to determine digit locations before applying the OCR algorithm allows the user computing device to use less processing capacity to extract accurate card data. The OCR application may train a classifier to use the wear patterns of a card to improve OCR algorithm performance. The OCR application may apply a linear classifier and then a nonlinear classifier to improve the performance and the accuracy of the OCR algorithm. The OCR application uses the known digit patterns used by typical credit and debit cards to improve the accuracy of the OCR algorithm.
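The two-stage idea above (a cheap linear classifier locating digit regions before the expensive OCR stage runs) can be sketched as follows. The feature representation, weights, and threshold are illustrative assumptions; a real pipeline would learn them from labeled card images.

```python
def linear_score(weights, bias, features):
    """Cheap linear decision function: w . x + b."""
    return sum(w * f for w, f in zip(weights, features)) + bias

def digit_candidate_windows(windows, weights, bias, threshold=0.0):
    """First-stage filter: return indices of image windows the linear
    classifier scores above the threshold. Only these windows are
    handed to the costlier OCR / nonlinear second stage."""
    return [i for i, feats in enumerate(windows)
            if linear_score(weights, bias, feats) > threshold]
```

With toy two-dimensional features, only the windows scoring positive survive the prefilter.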
Abstract:
Capturing information from payment instruments comprises receiving, using one or more computer devices, an image of a back side of a payment instrument, the payment instrument comprising information imprinted thereon such that the imprinted information protrudes from a front side of the payment instrument and the imprinted information is indented into the back side of the payment instrument; extracting sets of characters from the image of the back side of the payment instrument based on the imprinted information indented into the back side of the payment instrument and depicted in the image of the back side of the payment instrument; applying a first character recognition application to process the sets of characters extracted from the image of the back side of the payment instrument; and categorizing each of the sets of characters into one of a plurality of categories relating to information required to conduct a payment transaction.
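The final categorization step (sorting recognized character sets into payment-relevant categories) can be sketched with simple pattern matching. The category names and patterns are illustrative assumptions; the abstract does not specify how categories are distinguished.

```python
import re

# Assumed patterns for the character sets read off the card's back side.
CATEGORY_PATTERNS = {
    "account_number": re.compile(r"^\d{15,16}$"),
    "expiration_date": re.compile(r"^(0[1-9]|1[0-2])/\d{2}$"),
    "cardholder_name": re.compile(r"^[A-Z][A-Z .'-]+$"),
}

def categorize(char_sets):
    """Assign each extracted character set to the first category whose
    pattern it matches; keep only the first match per category."""
    result = {}
    for s in char_sets:
        for category, pattern in CATEGORY_PATTERNS.items():
            if pattern.match(s):
                result.setdefault(category, s)
                break
    return result
```

Note that imprinted characters seen from the back side are mirrored, so a real implementation would mirror the image before recognition; that step is omitted here.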
Abstract:
A computer-implemented method of providing personalized route information involves gathering a plurality of past location indicators over time for a wireless client device, determining a future driving objective using the previously gathered location indicators, obtaining near real-time traffic data for an area proximate to the determined driving objective, and generating a suggested route for the driving objective using the near real-time traffic data.
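The prediction step (inferring a future driving objective from previously gathered location indicators) can be sketched as a frequency lookup keyed by time of day. The bucket size and data shape are illustrative assumptions, not details from the abstract.

```python
from collections import Counter

def predict_objective(history, hour):
    """history: list of (hour_of_day, destination) pairs gathered over
    time for the device. Predict the destination most frequently visited
    in the same coarse time-of-day bucket (assumed 4-hour buckets)."""
    bucket = hour // 4
    matches = [dest for h, dest in history if h // 4 == bucket]
    if not matches:
        return None  # no past data for this time of day
    return Counter(matches).most_common(1)[0][0]
```

The predicted objective would then seed the traffic query and route generation steps.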
Abstract:
An application extracts a user name from a financial card image using optical character recognition (“OCR”) and compares segments of the user name to names stored in user data to refine the extracted name. The application performs an OCR algorithm on a card image and compares the extracted name with user data to identify likely matching names. The OCR application breaks the extracted name into one or more series of segments and compares the segments from the extracted name to segments from the stored names. The OCR application determines an edit distance between the extracted name and each potentially matching stored name. If the edit distance is below a configured threshold, the OCR application revises the extracted name to match the identified stored name. The refined name is presented to the user for verification.
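The edit-distance refinement can be sketched with a standard Levenshtein distance; the distance threshold and function names are illustrative assumptions.

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic two-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def refine_name(extracted, stored_names, max_distance=2):
    """Replace the OCR-extracted name with the closest stored name when
    the edit distance falls within the configured threshold."""
    best = min(stored_names,
               key=lambda n: edit_distance(extracted, n),
               default=None)
    if best is not None and edit_distance(extracted, best) <= max_distance:
        return best
    return extracted  # no stored name close enough; keep OCR output
```

A one-character OCR error ("JQNE DOE") is corrected to the stored "JANE DOE", while a name far from every stored name is left unchanged.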
Abstract:
Extracting card data comprises receiving, by one or more computing devices, a digital representation of a card; performing an image recognition process on the digital representation of the card; identifying an image in the digital representation of the card; comparing the identified image to an image database comprising a plurality of images and determining that the identified image matches a stored image in the image database; determining a card type associated with the stored image and associating the card type with the card based on the determination that the identified image matches the stored image; and performing a particular optical character recognition algorithm on the digital representation of the card, the particular optical character recognition algorithm being based on the determined card type. Another example uses an issuer identification number to improve data extraction. Another example compares extracted data with user data to improve accuracy.
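The issuer-identification-number variant can be sketched as a longest-prefix lookup over well-known IIN ranges, so that a card-type-specific recognition template can be chosen. The prefix table below covers only a few common issuers for illustration and is an assumption, not a list from the abstract.

```python
def card_type_from_iin(number):
    """Map the leading digits (the issuer identification number) of an
    extracted card number to a card type. Longest prefix wins."""
    prefixes = {
        "4": "visa",
        "34": "amex", "37": "amex",
        "51": "mastercard", "52": "mastercard", "53": "mastercard",
        "54": "mastercard", "55": "mastercard",
        "6011": "discover",
    }
    for length in (4, 2, 1):  # try longer prefixes first
        card_type = prefixes.get(number[:length])
        if card_type:
            return card_type
    return "unknown"
```

Knowing the type also fixes the expected digit count and layout, which narrows the OCR search.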
Abstract:
A computer-implemented method for generating results for a client-requested query involves receiving a query produced by a client communication device, generating a result for the query in response to receiving the query, determining one or more predictive follow-up requests before receiving an actual follow-up request from the client device, initiating retrieval of information associated with the one or more predictive follow-up requests, transmitting at least part of the result to the client device, and then transmitting to the client device at least part of the information associated with the one or more predictive follow-up requests.
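The prefetching flow can be sketched as below. The static follow-up table, cache shape, and function names are illustrative assumptions; a real system would predict follow-ups from query logs.

```python
# Assumed mapping from a query to its likely follow-up requests.
FOLLOW_UPS = {
    "weather seattle": ["weather seattle tomorrow", "seattle traffic"],
}

def answer_with_prefetch(query, answer_fn, cache):
    """Generate the result for the received query, then retrieve the
    predicted follow-up results into a cache before any actual
    follow-up request arrives."""
    result = answer_fn(query)
    for follow_up in FOLLOW_UPS.get(query, []):
        cache.setdefault(follow_up, answer_fn(follow_up))
    return result
```

When the client later issues one of the predicted follow-ups, its result is already in the cache and can be transmitted without recomputation.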
Abstract:
Methods and systems for recognizing Devanagari script handwriting are provided. A method may include receiving a handwritten input and determining that the handwritten input comprises a shirorekha stroke based on one or more shirorekha detection criteria. The shirorekha detection criteria may include at least one criterion such as the length of the shirorekha stroke, the horizontality of the shirorekha stroke, the straightness of the shirorekha stroke, the position in time at which the shirorekha stroke is made relative to one or more other strokes in the handwritten input, and the like. One or more recognized characters may then be provided corresponding to the handwritten input.
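The length and horizontality criteria can be sketched geometrically as below; the straightness of a near-horizontal stroke is approximated here by its vertical spread. All thresholds and the stroke representation (a list of (x, y) points) are illustrative assumptions.

```python
def is_shirorekha(points, input_width, min_len_frac=0.5, max_slope=0.15):
    """Heuristic shirorekha (headline) test for a handwritten stroke.

    points: (x, y) samples of the stroke; input_width: width of the
    whole handwritten input. Thresholds are assumed, not from the source.
    """
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    length = max(xs) - min(xs)
    # Length criterion: the headline spans a large fraction of the input.
    if length < min_len_frac * input_width:
        return False
    # Horizontality / straightness: small vertical spread per unit length.
    rise = max(ys) - min(ys)
    return length > 0 and rise / length <= max_slope
```

A long, nearly flat stroke passes, while a short stroke or a steep diagonal fails, and the surviving stroke would then guide character segmentation below the headline.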