Abstract:
Disclosed are methods and devices, among which is a system that includes one or more pattern-recognition processors, such as in a pattern-recognition cluster. The pattern-recognition processors may be activated to perform a search of a data stream individually using a chip select or in parallel using a universal select signal. In this manner, the plurality of pattern-recognition processors may be enabled concurrently for synchronized processing of the data stream.
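The selection scheme described in this abstract can be illustrated with a short sketch. The class and method names below (PatternRecognitionProcessor, Cluster, chip_select, universal_select) are hypothetical, not from the disclosure; the sketch only shows enabling one processor via a chip select versus enabling all processors via a universal select before a synchronized search.

    # Hypothetical sketch of individual (chip-select) versus universal selection
    # of pattern-recognition processors in a cluster; names are illustrative only.
    class PatternRecognitionProcessor:
        def __init__(self, chip_id):
            self.chip_id = chip_id
            self.enabled = False

        def search(self, data_stream, pattern):
            # Each enabled processor scans the same data stream for its pattern.
            return pattern in data_stream if self.enabled else None

    class Cluster:
        def __init__(self, count):
            self.processors = [PatternRecognitionProcessor(i) for i in range(count)]

        def chip_select(self, chip_id):
            # Activate a single processor by its chip-select identifier.
            for p in self.processors:
                p.enabled = (p.chip_id == chip_id)

        def universal_select(self):
            # Activate every processor concurrently for synchronized processing.
            for p in self.processors:
                p.enabled = True

    cluster = Cluster(4)
    cluster.universal_select()
    results = [p.search("abcXYZabc", "XYZ") for p in cluster.processors]
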
Abstract:
Apparatus for face recognition, the apparatus comprising: a face symmetry verifier, configured to verify symmetry of a face in at least one image, according to a predefined symmetry criterion, and a face identifier, associated with the face symmetry verifier, and configured to identify the face, provided the symmetry of the face is successfully verified.
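As a rough illustration of the two-stage flow in this abstract (symmetry verification gating identification), the sketch below uses hypothetical helpers verify_symmetry and identify_face; the actual symmetry criterion and identification method are not specified by the abstract and are assumed here for illustration only.

    # Illustrative only: verify facial symmetry first, identify only on success.
    def verify_symmetry(image, threshold=0.8):
        # Placeholder criterion: compare the input with its mirror image.
        mirrored = image[::-1]
        matches = sum(1 for a, b in zip(image, mirrored) if a == b)
        return matches / max(len(image), 1) >= threshold

    def identify_face(image):
        return "person-id-123"  # stand-in for a real identification step

    def recognize(image):
        if not verify_symmetry(image):
            return None  # identification is skipped when symmetry is not verified
        return identify_face(image)

    print(recognize("abccba"))  # symmetric input passes verification
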
Abstract:
A system and method for performing object recognition with a mobile computing device (110) and server (120) is disclosed. The mobile computing device (110), preferably a camera phone, is configured to capture (302) digital pictures or video, extract (304) visual features from the image data, and transmit (306) the visual features to a server via the cellular network or Internet, for example. Upon receipt, the server (120) compares the extracted features to the features of a plurality of known objects to identify (308) one or more items depicted in the image data. Depending on the item identified, the server may execute one or more predetermined actions, including transmitting (312) product information to the mobile phone. The product information may specify the price, quantity, availability, and location of the identified item.
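A minimal client/server sketch of the flow described above (capture, extract features, transmit, match, return product information). The function names, the set-based feature representation, and the sample catalog are assumptions, not the disclosed implementation.

    # Hypothetical sketch: the mobile client extracts features and sends them to
    # a server, which matches them against known objects and returns product info.
    KNOWN_OBJECTS = {
        "cereal-box": {"features": {"red", "logo", "rectangle"},
                       "info": {"price": 3.99, "quantity": 12, "availability": "in stock"}},
    }

    def extract_features(image_data):
        # Stand-in for real visual feature extraction on the phone.
        return set(image_data.split())

    def server_identify(features):
        # Compare the extracted features to the features of each known object.
        best, best_overlap = None, 0
        for name, obj in KNOWN_OBJECTS.items():
            overlap = len(features & obj["features"])
            if overlap > best_overlap:
                best, best_overlap = name, overlap
        return KNOWN_OBJECTS[best]["info"] if best else None

    product_info = server_identify(extract_features("red logo rectangle"))
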
Abstract:
A multimodal biometric identification or authentication system (11) includes a plurality of biometric clients (13a-13d). Each of the biometric clients may include devices for capturing biometric images of a plurality of types. The system includes a router (15) in communication with the biometric clients. The router receives biometric images from, and returns biometric scores or results to, the biometric clients. The system includes a plurality of biometric matching engines (21a-21d) in communication with the router. Each biometric matching engine includes multiple biometric processors. Each biometric processor is adapted to process biometric data of a particular type. The biometric matching engines transmit and receive biometric data to and from the router.
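The routing architecture can be illustrated roughly as follows; the class names and the scoring function are hypothetical and only show a router dispatching each biometric image to the matching engine adapted to its type and returning a score toward the client.

    # Illustrative router: dispatch each biometric image to a matching engine
    # that processes its particular type and return the score to the client.
    class MatchingEngine:
        def __init__(self, biometric_type):
            self.biometric_type = biometric_type

        def match(self, image):
            # Stand-in for a real biometric matcher; returns a similarity score.
            return 0.92

    class Router:
        def __init__(self, engines):
            # Index engines by the biometric type they are adapted to process.
            self.engines = {e.biometric_type: e for e in engines}

        def handle(self, biometric_type, image):
            engine = self.engines[biometric_type]
            return engine.match(image)

    router = Router([MatchingEngine("fingerprint"), MatchingEngine("iris")])
    score = router.handle("iris", image=b"...")
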
Abstract:
Upon determining (201) a need for image recognition facilitation content, a corresponding process (200) first determines (202 and 203) whether adequate local resources are available. When true, those local resources are used (204) to facilitate the desired image recognition. When false, however, one or more remote resources are accessed (205) and supplemental image recognition facilitation content is received (206) and locally used (207) to effect the desired image recognition process. Local memory management can optionally comprise, if desired, deletion (208) of some (or all) locally stored image recognition facilitation content and/or storage (209) of the remotely sourced image recognition facilitation content.
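The decision flow in this abstract (use local resources when adequate, otherwise access remote resources, then optionally store or delete local content) might be sketched as below; the cache structure and the fetch function are assumptions made for illustration.

    # Hypothetical sketch of the local-versus-remote decision for image
    # recognition facilitation content.
    local_content = {"faces": "face-model-v1"}          # locally stored content

    def fetch_remote(kind):
        return f"{kind}-model-remote"                    # stand-in for a network fetch

    def facilitate_recognition(kind, store_remote=True, evict=None):
        if kind in local_content:                        # adequate local resources?
            content = local_content[kind]                # use them directly
        else:
            content = fetch_remote(kind)                 # access a remote resource
            if store_remote:
                local_content[kind] = content            # optionally store it locally
        if evict:
            local_content.pop(evict, None)               # optional deletion step
        return content

    facilitate_recognition("landmarks", evict="faces")
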
Abstract:
Method and system for performing event detection and object tracking in image streams by installing, in the field, a set of image acquisition devices, where each device includes a local programmable processor for converting the acquired image stream, which consists of one or more images, to a digital format, and a local encoder for generating features from the image stream. These features are parameters related to attributes of objects in the image stream. The encoder also transmits a feature stream whenever the motion features exceed a corresponding threshold. Each image acquisition device is connected to a data network through a corresponding data communication channel. An image processing server that determines the threshold and processes the feature stream is also connected to the data network. Whenever the server receives features from a local encoder through its corresponding data communication channel and the data network, the server provides indications regarding events in the image streams by processing the feature stream and transmitting these indications to an operator.
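A compact sketch of the edge-side behaviour described above: the local encoder computes a motion feature per frame and transmits to the feature stream only when the feature exceeds the server-determined threshold. The feature definition used here (mean absolute frame difference) is an assumption, not the disclosed encoder.

    # Illustrative local encoder: transmit features only when motion exceeds
    # the threshold set by the image processing server.
    def motion_feature(prev_frame, frame):
        # Assumed feature: mean absolute difference between consecutive frames.
        diffs = [abs(a - b) for a, b in zip(prev_frame, frame)]
        return sum(diffs) / max(len(diffs), 1)

    def encode_stream(frames, threshold, transmit):
        prev = frames[0]
        for frame in frames[1:]:
            feature = motion_feature(prev, frame)
            if feature > threshold:          # only significant motion is sent
                transmit({"feature": feature})
            prev = frame

    sent = []
    encode_stream([[0, 0, 0], [0, 0, 0], [9, 9, 9]], threshold=2.0, transmit=sent.append)
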
Abstract:
A system for handling checks is provided that includes a sorter operable to retrieve MICR data from a plurality of checks. An emulator is coupled to the sorter. The emulator is operable to access the MICR data, to generate a process buffer based on the MICR data, and to generate a plurality of feature instructions for each check based on the process buffer. A communication engine is coupled between the emulator and a check processing system. The communication engine is operable to communicate between the emulator and the check processing system in real-time. The check processing system is operable to receive the process buffer from the emulator through the communication engine. The emulator is further operable to communicate the feature instructions to the sorter. The sorter is further operable to process the checks based on the feature instructions.
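The data flow between sorter, emulator, and sorter instructions might be illustrated as follows; the buffer layout, the instruction format, and the routing rule are assumptions made purely for illustration, not the disclosed protocol.

    # Hypothetical sketch: the emulator builds a process buffer from MICR data
    # and derives per-check feature instructions for the sorter.
    def build_process_buffer(micr_records):
        # One buffer entry per check, keyed by its MICR line and amount.
        return [{"micr": m, "amount": a} for m, a in micr_records]

    def feature_instructions(buffer_entry):
        # Stand-in rule: route high-value checks to an image-capture pocket.
        if buffer_entry["amount"] > 10000:
            return ["capture_image", "pocket_2"]
        return ["pocket_1"]

    micr_data = [("021000021 1234567", 250), ("021000021 7654321", 50000)]
    process_buffer = build_process_buffer(micr_data)
    instructions = [feature_instructions(entry) for entry in process_buffer]
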
Abstract:
The invention presented in this disclosure relates to augmented reality systems. The main purpose of the augmented reality system of this invention is to provide information to a user that helps to control, maintain, repair, and/or accomplish other tasks with different technical devices. Providing information remotely by means of augmented reality is far more effective than ordinary voice-only information or studying instructions and manuals in order to become familiar with a technical device. The main difference from available augmented reality systems dedicated to similar purposes is that the system of this invention is capable of generating new data, new scripts, and new instructions based on the available data and on data provided by users.
Abstract:
An inference system performs inference, such as object recognition, based on sensory inputs generated by sensors and control information associated with the sensory inputs. The sensory inputs describe one or more features of the objects. The control information describes movement of the sensors or known locations of the sensors relative to a reference point. For a particular object, an inference system learns a set of object-location representations of the object. An object-location representation is a unique characterization of an object-centric location relative to the particular object. The inference system also learns a set of feature-location representations associated with the object-location representation that indicate presence of features at the corresponding object-location pair. The inference system can perform inference on an unknown object by identifying candidate object-location representations consistent with feature-location representations observed from the sensory input data and control information.
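A toy version of the candidate-elimination step described in this abstract is sketched below; representing locations and features as plain strings and the learned pairs as a dictionary is an assumption made purely for illustration.

    # Illustrative inference step: keep only the objects whose learned
    # feature-location pairs are consistent with the observed pairs.
    LEARNED = {
        ("mug", "rim"): "curved_edge",
        ("mug", "handle"): "loop",
        ("box", "corner"): "right_angle",
    }

    def infer(observations):
        # observations: (location, feature) pairs combining sensory input with
        # control information about where the sensor was.
        candidates = {obj for obj, _ in LEARNED}
        for location, feature in observations:
            consistent = {obj for (obj, loc), feat in LEARNED.items()
                          if loc == location and feat == feature}
            candidates &= consistent       # eliminate inconsistent candidates
        return candidates

    print(infer([("rim", "curved_edge"), ("handle", "loop")]))   # {'mug'}
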