Abstract:
Comparing extracted card data from a continuous scan comprises receiving, by one or more computing devices, a digital scan of a physical card; obtaining a plurality of images of the card from the digital scan; performing an optical character recognition algorithm on each of the plurality of images; comparing results of the application of the optical character recognition algorithm for each of the plurality of images; determining whether a configured threshold of the results for each of the plurality of images match each other; and verifying the results when the results for each of the plurality of images match each other. A threshold confidence level for the extracted card data can be employed to determine the accuracy of the extraction. Data is further extracted from blended images and three-dimensional models of the card. Embossed text and holograms in the images may be used to prevent fraud.
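The multi-frame agreement step above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the function name `verify_ocr_results` and the representation of per-frame results as strings are assumptions, and the threshold is the configured agreement fraction mentioned in the abstract.

```python
from collections import Counter

def verify_ocr_results(frame_results, threshold=0.8):
    """Accept extracted card data only when a configured fraction of
    per-frame OCR results agree on the same value.

    frame_results: list of OCR outputs, one per scanned frame.
    threshold: fraction of frames that must agree (hypothetical default).
    """
    if not frame_results:
        return None
    # Find the most common result across the scanned frames.
    value, count = Counter(frame_results).most_common(1)[0]
    # Verify the result only when agreement meets the configured threshold.
    if count / len(frame_results) >= threshold:
        return value
    return None
```

For example, if four of five frames yield the same card number, an 0.8 threshold is met and the value is verified; a two-frame disagreement is not.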
Abstract:
The technology of the present disclosure includes computer-implemented methods, computer program products, and systems to filter images before transmitting to a system for optical character recognition (“OCR”). A user computing device obtains a first image of the card from the digital scan of a physical card and analyzes features of the first image, the analysis being sufficient to determine if the first image is likely to be usable by an OCR algorithm. If the user computing device determines that the first image is likely to be usable, then the first image is transmitted to an OCR system associated with the OCR algorithm. Upon a determination that the first image is unlikely to be usable, a second image of the card from the digital scan of the physical card is analyzed. The optical character recognition system performs an optical character recognition algorithm on the filtered images.
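One common way to pre-filter frames for usability is a sharpness check; a sketch under that assumption is below. The abstract does not specify which image features are analyzed, so the Laplacian-variance blur measure, the function names, and the threshold value here are all illustrative, not the disclosed method.

```python
def laplacian_variance(gray):
    """Sharpness score: variance of a 3x3 Laplacian response.

    gray: 2D list of pixel intensities (0-255); blurry frames score low.
    """
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Discrete Laplacian: center minus the four neighbors.
            responses.append(
                4 * gray[y][x]
                - gray[y - 1][x] - gray[y + 1][x]
                - gray[y][x - 1] - gray[y][x + 1]
            )
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def is_likely_usable(gray, blur_threshold=50.0):
    """Forward a frame to the OCR system only if it is sharp enough
    that the OCR algorithm is likely to succeed (threshold is assumed)."""
    return laplacian_variance(gray) >= blur_threshold
```

A uniformly gray frame scores zero and is rejected; a high-contrast frame passes and would be transmitted to the OCR system.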
Abstract:
Embodiments herein provide computer-implemented techniques for allowing a user computing device to extract financial card information using optical character recognition (“OCR”). Extracting financial card information may be improved by applying various classifiers and other transformations to the image data. For example, applying a linear classifier to the image to determine digit locations before applying the OCR algorithm allows the user computing device to use less processing capacity to extract accurate card data. The OCR application may train a classifier to use the wear patterns of a card to improve OCR algorithm performance. The OCR application may apply a linear classifier and then a nonlinear classifier to improve the performance and the accuracy of the OCR algorithm. The OCR application uses the known digit patterns used by typical credit and debit cards to improve the accuracy of the OCR algorithm.
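The two-stage idea of running a cheap linear classifier first can be sketched as below. This is an assumed toy formulation: the weight vector, window representation, and function names are hypothetical, and a real system would learn the weights (and follow with a nonlinear classifier, as the abstract notes).

```python
def linear_digit_score(window, weights, bias):
    """Linear classifier: dot(weights, window) + bias.
    A positive score suggests the window contains a digit."""
    return sum(w * p for w, p in zip(weights, window)) + bias

def candidate_digit_locations(windows, weights, bias):
    """Cheap linear pass that narrows where the more expensive OCR
    algorithm must run, reducing processing on the user device.

    windows: list of flattened pixel windows scanned across the image.
    Returns the indices of windows classified as likely digit locations.
    """
    return [i for i, win in enumerate(windows)
            if linear_digit_score(win, weights, bias) > 0]
```

Only the returned windows would then be passed to the OCR algorithm (or to a second, nonlinear classifier), rather than the full image.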
Abstract:
Extracting card data comprises receiving, by one or more computing devices, a digital representation of a card; performing an image recognition process on the digital representation of the card; identifying an image in the digital representation of the card; comparing the identified image to an image database comprising a plurality of images and determining that the identified image matches a stored image in the image database; determining a card type associated with the stored image and associating the card type with the card based on the determination that the identified image matches the stored image; and performing a particular optical character recognition algorithm on the digital representation of the card, the particular optical character recognition algorithm being based on the determined card type. Another example superimposes the extracted data directly above, below, or beside the corresponding section on the displayed image.
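The match-then-dispatch flow can be sketched as a lookup from a matched image to a card type, and from card type to a type-specific OCR routine. Everything here is illustrative: the database contents, the string "fingerprints" standing in for image matching, and the placeholder OCR routines are assumptions, not the disclosed system.

```python
# Hypothetical image database: image fingerprint -> associated card type.
IMAGE_DATABASE = {
    "hash_bank_logo_a": "credit",
    "hash_store_logo_b": "loyalty",
}

def ocr_credit(card_image):
    # Placeholder for a credit-card-specific OCR algorithm.
    return {"fields": ["number", "name", "expiry"], "image": card_image}

def ocr_loyalty(card_image):
    # Placeholder for a loyalty-card-specific OCR algorithm.
    return {"fields": ["member_id"], "image": card_image}

OCR_BY_CARD_TYPE = {"credit": ocr_credit, "loyalty": ocr_loyalty}

def extract_card_data(identified_image_hash, card_image):
    """Match the identified image against the database, associate a card
    type, and dispatch the OCR algorithm particular to that type."""
    card_type = IMAGE_DATABASE.get(identified_image_hash)
    if card_type is None:
        return None  # no match: a generic OCR pass could be a fallback
    return OCR_BY_CARD_TYPE[card_type](card_image)
```

The benefit of the dispatch is that each type-specific routine can assume a known field layout for its card type.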
Abstract:
Extracting financial card information with relaxed alignment comprises a method to receive an image of a card, determine one or more edge finder zones in locations of the image, and identify lines in the one or more edge finder zones. The method further identifies one or more quadrilaterals formed by intersections of extrapolations of the identified lines, determines an aspect ratio of each of the one or more quadrilaterals, and compares the determined aspect ratios of the quadrilaterals to an expected aspect ratio. The method then identifies a quadrilateral that matches the expected aspect ratio and performs an optical character recognition algorithm on a rectified model of the matching quadrilateral. A similar method is performed on multiple cards in an image. The results of the analysis of each of the cards are compared to improve accuracy of the data.
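The aspect-ratio comparison step can be sketched as follows. Standard ISO/IEC 7810 ID-1 payment cards measure 85.60 mm by 53.98 mm, which fixes the expected ratio; the corner ordering, the averaging of opposite sides, and the tolerance value are illustrative assumptions.

```python
import math

# ISO/IEC 7810 ID-1 payment cards measure 85.60 mm x 53.98 mm.
EXPECTED_ASPECT_RATIO = 85.60 / 53.98  # ~1.586

def quad_aspect_ratio(corners):
    """Aspect ratio of a quadrilateral given corners ordered
    top-left, top-right, bottom-right, bottom-left.

    Opposite sides are averaged to tolerate mild perspective skew.
    """
    tl, tr, br, bl = corners
    width = (math.dist(tl, tr) + math.dist(bl, br)) / 2
    height = (math.dist(tl, bl) + math.dist(tr, br)) / 2
    return width / height

def matches_card_ratio(corners, tolerance=0.1):
    """Compare the measured ratio to the expected card aspect ratio."""
    return abs(quad_aspect_ratio(corners) - EXPECTED_ASPECT_RATIO) <= tolerance
```

A quadrilateral near the card's proportions passes; a square region (for example, a logo) is rejected, so OCR runs only on the candidate that looks like a card.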
Abstract:
Identifying the geolocation of POS terminals using non-payment events comprises predicting when the geolocation of a computing device, at the time the device detects the events, corresponds to the geolocation of the terminal. The device monitors for pre-selected events and transmits data to the account system. The account system determines a frequency of the events and, when the frequency reaches a pre-defined threshold, identifies the location of the terminal by identifying the common geolocation of the events. The identified geolocation is saved so that when a user then enters the location and transmits event data to the account system, the system can compare the geolocation of the event data to the saved geolocation to determine whether the computing device is located at the terminal. If the computing device is located at the terminal, the account system transmits offers or other content for display and use at the identified terminal.
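The frequency-threshold step can be sketched by bucketing event geolocations and flagging buckets that reach the threshold. The bucketing by coordinate rounding (three decimal places is roughly 100 m of latitude), the threshold value, and the function names are all assumptions for illustration; a production system would use proper spatial clustering.

```python
from collections import defaultdict

def identify_terminal_locations(events, threshold=3, precision=3):
    """Bucket non-payment event geolocations and flag buckets whose
    event frequency reaches the pre-defined threshold as likely POS
    terminal locations.

    events: list of (latitude, longitude) pairs reported by devices.
    precision: rounding in decimal degrees (3 ~ 100 m of latitude).
    """
    buckets = defaultdict(int)
    for lat, lon in events:
        buckets[(round(lat, precision), round(lon, precision))] += 1
    return {loc for loc, count in buckets.items() if count >= threshold}

def device_at_terminal(device_location, saved_terminals, precision=3):
    """Compare a new event's geolocation to the saved terminal locations."""
    lat, lon = device_location
    return (round(lat, precision), round(lon, precision)) in saved_terminals
```

Once a bucket is saved as a terminal location, later events from that bucket identify the device as being at the terminal, triggering delivery of offers or other content.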
Abstract:
A location of a network user computing device is determined relative to a location of a point of interest. If the user device is determined to be stationary, the user device is monitored for movement, the movement resulting in re-determining the location of the user device relative to a location of the point of interest. If the user device is determined to be moving, the velocity of the user device is matched with a predetermined velocity, and a preliminary estimated time of arrival to the point of interest is determined based on the predetermined velocity matched to the user device. At a later time that is based on a function of the preliminary estimated time of arrival, an estimated time of arrival to the point of interest is verified based on the predetermined velocity matched to the user device.
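The velocity-matching and ETA steps above can be sketched as below. The velocity profiles, their values, and the choice of re-checking at half the preliminary ETA are all hypothetical; the abstract only states that the later verification time is a function of the preliminary estimate.

```python
# Hypothetical predetermined velocity profiles (m/s) a moving device
# might be matched to; real values would be calibrated.
VELOCITY_PROFILES = {"walking": 1.4, "cycling": 5.0, "driving": 13.0}

def preliminary_eta(distance_m, matched_profile):
    """Preliminary estimated time of arrival (seconds) from the
    predetermined velocity matched to the user device."""
    return distance_m / VELOCITY_PROFILES[matched_profile]

def verification_time(eta_s, fraction=0.5):
    """Schedule the re-check at a later time that is a function of the
    preliminary ETA (here, illustratively, halfway to arrival)."""
    return eta_s * fraction
```

For instance, a device matched to the driving profile 1300 m from the point of interest gets a 100 s preliminary ETA, with verification scheduled 50 s later.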
Abstract:
A user captures an image of a payment card via a user computing device camera. An optical character recognition system receives the payment card image from the user computing device. The system performs optical character recognition and visual object recognition algorithms on the payment card image to extract text and visual objects from the payment card image, which are used by the system to identify a payment card type. The system may categorize the payment card as an open-loop card or a closed-loop card, or as a credit card or a non-credit card. In an example embodiment, the system allows or prohibits saving the extracted financial account information from the payment card in the digital wallet account based on the determined payment card category. In another example embodiment, the system transmits an advisement to the user based on the determined payment card category.
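The categorization and save-policy steps can be sketched as below. The keyword matching here is a stand-in for the trained visual object recognition the abstract describes, and the network list, function names, and policy set are assumptions.

```python
# Hypothetical network names; a real system would recognize logos and
# issuer identification numbers, not match strings.
OPEN_LOOP_NETWORKS = frozenset(
    {"visa", "mastercard", "american express", "discover"}
)

def categorize_payment_card(extracted_text, visual_objects):
    """Classify a scanned card as open-loop (major-network) or
    closed-loop (single-merchant) from OCR text and recognized objects."""
    tokens = {t.lower() for t in extracted_text}
    tokens |= {v.lower() for v in visual_objects}
    return "open-loop" if tokens & OPEN_LOOP_NETWORKS else "closed-loop"

def may_save_to_wallet(category, policy=frozenset({"open-loop"})):
    """Allow saving extracted account data to the digital wallet only
    for permitted card categories (policy set is illustrative)."""
    return category in policy
```

Under this sketch, an open-loop card's extracted account information may be saved to the wallet, while a closed-loop card instead triggers an advisement to the user.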