Abstract:
An electronic device includes a camera module, a display, and a processor. The processor is configured to display, using the camera module, a preview image including one or more objects; to display, on the display, a first user interface corresponding to the one or more objects or a second user interface; to receive an input selecting the first user interface or the second user interface; to obtain a first image in a first scheme using the camera module if the first user interface is selected; to obtain a second image using the camera module in a second scheme different from the first scheme if the second user interface is selected; and to provide information associated with the one or more objects using the first image and/or the second image obtained based at least on the input.
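As an illustration only, the following Python sketch mirrors the selection logic described above: the chosen user interface determines which capture scheme is used before the object information is provided. The Camera stub, the flash-on/flash-off distinction between the two schemes, and the analyze() hook are assumptions for the sketch, not details from the abstract.

```python
from typing import Callable, List

# A minimal sketch of the selection logic. The Camera stub, the flash-on/off
# distinction between the two capture schemes, and the analyze() hook are
# illustrative assumptions, not the disclosed implementation.
class Camera:
    def capture(self, flash: bool) -> str:
        return "image-with-flash" if flash else "image-without-flash"

def obtain_image(camera: Camera, selected_ui: str) -> str:
    if selected_ui == "first":           # first user interface -> first scheme
        return camera.capture(flash=False)
    return camera.capture(flash=True)    # second user interface -> second scheme

def provide_object_info(camera: Camera, selected_ui: str,
                        analyze: Callable[[str], List[str]]) -> List[str]:
    """Capture with the scheme tied to the selected UI, then describe the objects."""
    return analyze(obtain_image(camera, selected_ui))

print(provide_object_info(Camera(), "first", lambda image: [f"objects in {image}"]))
```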
Abstract:
The present disclosure provides a method for machine representation and tracking of contract terms over the lifetime of a contract, including a step of defining an object model having object model components. Object model components are associated with other object model components, and each object model component has an object model component type. Further, the words of the object model components are evaluated to identify whether they contain one or more core attributes pertaining to details of the contract terms. From the object model components and the terms they contain, the prevailing terms of the contract are evaluated, stored, and updated as changes are made to the object model components.
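A minimal sketch of such an object model, assuming hypothetical component types, keyword-based core attributes, and a "latest component wins" precedence rule, none of which are specified in the abstract:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative object model: each component has a type, text, child components,
# and an order used here as a stand-in precedence rule for prevailing terms.
@dataclass
class Component:
    component_type: str                      # e.g. "clause", "amendment" (assumed)
    text: str
    children: List["Component"] = field(default_factory=list)
    effective_order: int = 0                 # later components supersede earlier ones

CORE_ATTRIBUTES = {"payment": ["pay", "fee"], "termination": ["terminate", "expiry"]}

def extract_core_attributes(component: Component) -> Dict[str, str]:
    """Scan a component's words for core attributes of the contract terms."""
    found = {}
    words = component.text.lower()
    for attribute, keywords in CORE_ATTRIBUTES.items():
        if any(keyword in words for keyword in keywords):
            found[attribute] = component.text
    return found

def prevailing_terms(root: Component) -> Dict[str, str]:
    """Walk the object model and keep, per core attribute, the latest component's term."""
    ordered, stack = [], [root]
    while stack:
        component = stack.pop()
        ordered.append(component)
        stack.extend(component.children)
    terms: Dict[str, str] = {}
    for component in sorted(ordered, key=lambda c: c.effective_order):
        terms.update(extract_core_attributes(component))
    return terms

root = Component("contract", "Master agreement", children=[
    Component("clause", "Either party may terminate with 30 days notice", effective_order=1),
    Component("amendment", "Buyer shall pay a monthly fee", effective_order=2),
])
print(prevailing_terms(root))
```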
Abstract:
The present disclosure is directed to a method for optically recognizing a table and converting the recognized table to a digitized format. In particular, the present disclosure relates to a method of optically recognizing and identifying the table as a whole, the individual cells within the table, and the data embedded within each cell, as well as the original table format, including shading, cell borders, colors, and effects. Accordingly, such digitization of an optically recognized table, in whole or in part, as printed on a document or other media allows users to easily and quickly capture information as originally arranged, without having to manually re-create the table and re-enter its data.
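One plausible front end for locating the table grid and its cells, sketched with OpenCV; the kernel sizes and area threshold are illustrative, and the cell contents would still need OCR and style extraction (shading, borders, colors) to complete the digitization described above:

```python
import cv2
import numpy as np

def find_table_cells(page_bgr: np.ndarray):
    """Return (x, y, w, h) boxes for candidate table cells in a page image."""
    gray = cv2.cvtColor(page_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Extract long horizontal and vertical strokes (the cell borders).
    horizontal = cv2.morphologyEx(binary, cv2.MORPH_OPEN,
                                  cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1)))
    vertical = cv2.morphologyEx(binary, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_RECT, (1, 40)))
    grid = cv2.add(horizontal, vertical)

    # Each enclosed region between grid lines is a candidate cell; its contents
    # would next be passed to OCR, and its fill/border style sampled separately.
    contours, _ = cv2.findContours(255 - grid, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
```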
Abstract:
A method for cropping photo images captured by a user from an image of a page of a photo album is described. Corners in the page image are detected using a corner detection algorithm or by detecting intersections of line segments (and their extensions) in the image using edge, corner, or line detection techniques. Pairs of the detected corners are used to define all potential quads, which are then qualified according to various criteria. A correlation matrix is generated for each potential pair of the qualified quads, and candidate quads are selected based on the eigenvector of the correlation matrix. The content of the selected quads is checked using a salience map that may be based on a trained neural network, and the resulting photo images are extracted as individual files for further handling or manipulation by the user.
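The corner-detection and quad-qualification steps might look roughly like the following sketch; the correlation-matrix/eigenvector selection and the salience-map content check from the abstract are not reproduced here, and the size and aspect-ratio criteria are assumed values:

```python
import itertools
import cv2
import numpy as np

def candidate_quads(page_gray: np.ndarray, min_side: int = 100, max_aspect: float = 3.0):
    """Detect corners, then qualify axis-aligned candidate quads by size and aspect ratio."""
    corners = cv2.goodFeaturesToTrack(page_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=20)
    corners = corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))

    quads = []
    # Each pair of detected corners is treated as two opposite corners of a quad.
    for (x1, y1), (x2, y2) in itertools.combinations(corners, 2):
        width, height = abs(x2 - x1), abs(y2 - y1)
        if width < min_side or height < min_side:
            continue                                   # too small to be a photo
        if max(width, height) / max(min(width, height), 1) > max_aspect:
            continue                                   # implausible aspect ratio
        quads.append((min(x1, x2), min(y1, y2), width, height))
    return quads
```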
Abstract:
A method of generating a text line classifier includes generating text line samples by use of the system font reservoir of the present terminal. The method also includes extracting features from the text line samples and from pre-stored marked-up samples. The method further includes training models by use of the extracted features to generate a text line classifier for recognizing text regions. Because the system font reservoir is used to generate the text line samples, the generated text line classifiers can target different scenes or different requirements for text region recognition, with a high degree of applicability and wide application in addition to ease of implementation. Combined with the use of the marked-up samples when extracting features from the text line samples, the generated text line classifiers provide enhanced classification efficiency and accuracy.
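A toy version of the training flow, assuming PIL for rendering, crude ink-density profiles as features, and a linear SVM as the model (the abstract does not specify any of these):

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont
from sklearn.svm import LinearSVC

# Synthesize positive text-line images from a (stand-in) font reservoir, pair
# them with negative non-text patches, extract simple features, fit a model.
FONTS = [ImageFont.load_default()]          # stands in for the system font reservoir

def render_text_line(text: str, font) -> np.ndarray:
    image = Image.new("L", (128, 24), color=255)
    ImageDraw.Draw(image).text((2, 2), text, font=font, fill=0)
    return np.asarray(image, dtype=np.float32) / 255.0

def features(patch: np.ndarray) -> np.ndarray:
    # Horizontal and vertical ink-density profiles as a crude feature vector.
    return np.concatenate([1.0 - patch.mean(axis=0), 1.0 - patch.mean(axis=1)])

positives = [features(render_text_line(text, font)) for font in FONTS
             for text in ("invoice total", "shipping address", "order number")]
negatives = [features(np.random.rand(24, 128).astype(np.float32)) for _ in range(3)]

X = np.vstack(positives + negatives)
y = np.array([1] * len(positives) + [0] * len(negatives))
classifier = LinearSVC().fit(X, y)          # text-line vs. non-text classifier
```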
Abstract:
The invention relates to a mobile device (100) for capturing a text region on an identification document, the text region having a plurality of text characters in a predetermined arrangement according to a predetermined arrangement measure. The mobile device comprises an image camera (101), which is configured to capture an image of the identification document in order to obtain a document image, and a processor (103), which is configured to segment the document image in order to obtain a plurality of image segments; to select a plurality of text character image segments from the plurality of image segments, the text character image segments each representing a text character; to determine a plurality of text character groups on the basis of the plurality of text character image segments, the text character groups each comprising a sequence of text character image segments; and to compare a plurality of arrangement measures of the plurality of text character groups with the predetermined arrangement measure in order to capture the text region on the identification document.
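As one possible reading of the final comparison step, the sketch below treats the arrangement measure of a character group as its mean character pitch and keeps the groups whose pitch matches the predetermined measure; the pitch value and tolerance are assumptions, not values from the disclosure:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CharSegment:
    x: float          # horizontal position of the text character image segment
    width: float

EXPECTED_PITCH = 16.0  # predetermined arrangement measure (pixels per character), assumed
TOLERANCE = 2.0        # allowed deviation, assumed

def arrangement_measure(group: List[CharSegment]) -> float:
    """Mean horizontal pitch between consecutive character segments in a group."""
    xs = sorted(segment.x for segment in group)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    return sum(gaps) / len(gaps) if gaps else 0.0

def matching_text_regions(groups: List[List[CharSegment]]) -> List[List[CharSegment]]:
    """Keep the character groups whose arrangement matches the predetermined measure."""
    return [group for group in groups
            if abs(arrangement_measure(group) - EXPECTED_PITCH) <= TOLERANCE]
```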
Abstract:
A mobile computing device-implemented method of imaging an object to read information includes capturing, by running an image capturing thread, a plurality of raw images of the object with an image capturing component of the mobile computing device and placing them in an image queue in a first memory location; processing, by running an image processing thread, one or more raw images to extract one or more potential machine readable zone (MRZ) candidates and placing them in an MRZ candidate queue in a second memory location; analyzing, by running an image analysis thread, an MRZ candidate to detect an MRZ and placing it in an MRZ queue in a third memory location; and creating a composite MRZ if a timer has expired or the MRZ queue has reached a predetermined threshold.
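The three threads and three queues can be laid out schematically as below; the capture, candidate-extraction, and MRZ-detection functions are stubs, and the majority-vote composite is an assumption, so only the queue hand-offs and the timer/threshold trigger mirror the claimed method:

```python
import queue
import threading
import time

raw_images = queue.Queue()        # first memory location
mrz_candidates = queue.Queue()    # second memory location
mrz_results = queue.Queue()       # third memory location

MRZ_THRESHOLD = 5                 # composite when this many MRZ reads are queued
TIMEOUT_S = 2.0                   # timer for the composite step

def capture_thread(frames):
    for frame in frames:                           # stands in for the camera feed
        raw_images.put(frame)

def processing_thread():
    while True:
        frame = raw_images.get()
        mrz_candidates.put(f"candidate({frame})")  # extract potential MRZ regions

def analysis_thread():
    while True:
        candidate = mrz_candidates.get()
        mrz_results.put(f"mrz({candidate})")       # OCR/validate the candidate

def composite_mrz():
    deadline = time.time() + TIMEOUT_S
    while mrz_results.qsize() < MRZ_THRESHOLD and time.time() < deadline:
        time.sleep(0.05)
    reads = [mrz_results.get() for _ in range(mrz_results.qsize())]
    return max(set(reads), key=reads.count) if reads else None  # e.g. majority vote

for target, args in [(capture_thread, (["frame1", "frame2", "frame3"],)),
                     (processing_thread, ()), (analysis_thread, ())]:
    threading.Thread(target=target, args=args, daemon=True).start()
print(composite_mrz())
```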
Abstract:
Technologies are described herein for interpreting character arrangements. An image including an arrangement of characters may be received or captured by a computing device. Techniques described herein generate data representative of the characters. Characteristics and other information interpreted from the image may be processed to determine a data type. The data representative of the characters may be arranged into a data structure based on the data type, an arrangement type and/or other information interpreted from the image. The data type may indicate one or more attributes of the arranged data such as a format, font, date, language, or currency. The data type may also indicate how data is used in a process, equation or calculation. In addition, the data type may identify an anchor that may be used to merge data generated from the image with other data generated from another image.
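A small sketch of the data-type step, assuming regular expressions for a few illustrative types (date, currency, number); the abstract covers many more attributes, such as format, font, language, and anchors for merging:

```python
import re
from datetime import datetime

def classify(text: str):
    """Assign a data type to a recognized character string and convert it."""
    if re.fullmatch(r"\d{1,2}/\d{1,2}/\d{4}", text):
        return "date", datetime.strptime(text, "%m/%d/%Y").date()
    if re.fullmatch(r"[$€£]\s?\d+(\.\d{2})?", text):
        return "currency", float(re.sub(r"[^\d.]", "", text))
    if re.fullmatch(r"-?\d+(\.\d+)?", text):
        return "number", float(text)
    return "text", text

def arrange(rows):
    """Arrange recognized strings into a table-like structure tagged by data type."""
    return [[classify(cell) for cell in row] for row in rows]

print(arrange([["Item", "Price", "Date"],
               ["Widget", "$4.99", "3/14/2015"]]))
```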
Abstract:
A method of recording a photograph for use in a personal photo identification document, such as a passport, includes using a digital image capture system including a digital camera, a computer processor, and memory that stores the specifications and requirements a photo print must meet to be compliant for use in a user-selected photo ID, such as a passport for a selected country or jurisdiction; using the digital image capture system to capture a facial image; using facial image processing techniques to automatically detect a face and facial feature points on the facial image; processing the facial image and generating a visual indication of compliance; and, when compliant, generating the photograph based on the compliant facial image.
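A rough sketch of the automatic face detection and compliance check, using an OpenCV Haar cascade; the head-height and centering thresholds are illustrative stand-ins for the stored specifications, not any jurisdiction's actual requirements:

```python
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def check_compliance(image_bgr):
    """Detect the face and verify size/position against placeholder photo-ID rules."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) != 1:
        return False, "exactly one face required"
    x, y, w, h = faces[0]
    img_h, img_w = gray.shape
    head_ratio = h / img_h                         # face height relative to photo height
    centered = abs((x + w / 2) - img_w / 2) < 0.1 * img_w
    if not 0.5 <= head_ratio <= 0.7:               # assumed range, not a real spec
        return False, f"head height ratio {head_ratio:.2f} out of range"
    if not centered:
        return False, "face is not horizontally centered"
    return True, "compliant"
```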
Abstract:
A method includes receiving data representing an image captured of an object disposed on a surface in the presence of illumination by a flash light. The method includes processing the data to identify an object type associated with the object and further processing the data based at least in part on the identified object type.
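A bare-bones sketch of the two-stage flow, with a stub classifier and hypothetical per-type processing steps (the abstract names neither the object types nor the follow-on processing):

```python
def identify_object_type(image_data) -> str:
    # Stub classifier; the real method infers the type from the captured data
    # (e.g. how the flash illumination interacts with the object's surface).
    return "document"

# Hypothetical type-specific processing steps.
PROCESSORS = {
    "document":   lambda image: f"deskew+binarize({image})",
    "photo":      lambda image: f"remove_flash_glare({image})",
    "whiteboard": lambda image: f"enhance_strokes({image})",
}

def process(image_data):
    object_type = identify_object_type(image_data)
    return PROCESSORS[object_type](image_data)

print(process("captured-image"))
```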