Abstract:
A method for detecting properties of sample tubes is provided that includes extracting image patches substantially centered on a tube slot of a tray or a tube top in a slot. For each image patch, the method may include assigning a first location group defining whether the image patch is an image center, a corner of an image or a middle edge of an image, selecting a trained classifier based on the first location group and determining whether each tube slot contains a tube. The method may also include assigning a second location group defining whether the image patch is from an image center, a left corner of the image, a right corner of the image, a left middle of the image, a center middle of the image or a right middle of the image, selecting a trained classifier based on the second location group and determining a tube property.
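A minimal sketch of the routing described above, assuming a rectangular grid of tube slots and pre-trained, scikit-learn-style classifiers keyed by location group; the helper names, group boundaries, and classifier objects are hypothetical placeholders, not the patented implementation.

```python
from enum import Enum

class LocationGroup(Enum):
    CENTER = "center"
    CORNER = "corner"
    MIDDLE_EDGE = "middle_edge"

def assign_location_group(row, col, n_rows, n_cols):
    """Assign a coarse location group based on the slot's position in the tray image."""
    on_row_edge = row in (0, n_rows - 1)
    on_col_edge = col in (0, n_cols - 1)
    if on_row_edge and on_col_edge:
        return LocationGroup.CORNER
    if on_row_edge or on_col_edge:
        return LocationGroup.MIDDLE_EDGE
    return LocationGroup.CENTER

def tube_present(patch, row, col, n_rows, n_cols, classifiers):
    """Pick the classifier trained for this location group and predict tube presence.

    `classifiers` is an assumed dict mapping LocationGroup -> a trained classifier
    with a scikit-learn-style predict() method.
    """
    group = assign_location_group(row, col, n_rows, n_cols)
    clf = classifiers[group]
    return bool(clf.predict([patch.ravel()])[0])
```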
Abstract:
Optimizing multi-class image classification by leveraging patch-based features extracted from weakly supervised images to train classifiers is described. A corpus of images associated with a set of labels may be received. One or more patches may be extracted from individual images in the corpus. Patch-based features may be extracted from the one or more patches and patch representations may be extracted from individual patches of the one or more patches. The patches may be arranged into clusters based at least in part on the patch-based features. At least some of the individual patches may be removed from individual clusters based at least in part on determined similarity values that are representative of similarity between the individual patches. The system may train classifiers based in part on patch-based features extracted from patches in the refined clusters. The classifiers may be used to accurately and efficiently classify new images.
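A sketch of the described pipeline under simplifying assumptions: raw pixel values stand in for the patch-based features, k-means provides the clusters, near-duplicate patches are pruned by a cosine-similarity threshold, and a logistic regression is trained on the refined patches. Parameter values are illustrative only.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.linear_model import LogisticRegression

def build_classifier(images, labels, patch_size=(16, 16), n_clusters=50, sim_thresh=0.98):
    # Weak supervision: every patch inherits its source image's label.
    feats, feat_labels = [], []
    for img, lab in zip(images, labels):
        patches = extract_patches_2d(img, patch_size, max_patches=20, random_state=0)
        feats.extend(p.ravel() for p in patches)
        feat_labels.extend([lab] * len(patches))
    X, y = np.array(feats, dtype=float), np.array(feat_labels)

    # Arrange patches into clusters based on their (here: raw-pixel) features.
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

    # Remove highly similar patches within each cluster to refine it.
    keep = np.ones(len(X), dtype=bool)
    for c in range(n_clusters):
        idx = np.where(clusters == c)[0]
        if len(idx) < 2:
            continue
        sim = cosine_similarity(X[idx])
        for i in range(len(idx)):
            for j in range(i + 1, len(idx)):
                if keep[idx[j]] and sim[i, j] > sim_thresh:
                    keep[idx[j]] = False

    # Train the classifier on features from the refined clusters.
    return LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
```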
Abstract:
A method for classifier generation includes a step of obtaining data for classification of a multitude of samples, the data for each of the samples consisting of a multitude of physical measurement feature values and a class label. Individual mini-classifiers are generated using sets of features from the samples. The performance of the mini-classifiers is tested, and those that meet a performance threshold are retained. A master classifier is generated by conducting a regularized ensemble training of the retained/filtered set of mini-classifiers with respect to the classification labels for the samples, e.g., by randomly selecting a small fraction of the filtered mini-classifiers (drop-out regularization) and conducting logistic regression training on such selected mini-classifiers. The set of samples is randomly separated into a test set and a training set. The steps of generating the mini-classifiers, filtering and generating a master classifier are repeated for different realizations of the separation of the set of samples into test and training sets, thereby generating a plurality of master classifiers. A final classifier is defined from one or a combination of more than one of the master classifiers.
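A hedged sketch of one train/test realization of this procedure, not the patent's exact algorithm: k-NN "mini-classifiers" are built on single features and feature pairs (assuming a modest feature count and binary 0/1 labels), filtered by an accuracy threshold, and combined by averaging logistic regressions fit on randomly dropped-out subsets of the filtered mini-classifiers.

```python
import numpy as np
from itertools import combinations
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def train_master_classifier(X, y, acc_threshold=0.6, n_dropout=100, keep_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=seed)

    # Generate mini-classifiers on small feature subsets and keep those that pass the threshold.
    feature_sets = list(combinations(range(X.shape[1]), 1)) + list(combinations(range(X.shape[1]), 2))
    minis = []
    for fs in feature_sets:
        clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr[:, fs], y_tr)
        if clf.score(X_te[:, fs], y_te) >= acc_threshold:
            minis.append((fs, clf))

    # Drop-out regularized logistic combination of the filtered mini-classifiers:
    # repeatedly fit a logistic regression on a small random subset and average the weights.
    preds = np.column_stack([clf.predict(X_tr[:, fs]) for fs, clf in minis]).astype(float)
    weights, intercept = np.zeros(len(minis)), 0.0
    for _ in range(n_dropout):
        subset = rng.choice(len(minis), size=max(1, int(keep_frac * len(minis))), replace=False)
        lr = LogisticRegression(max_iter=1000).fit(preds[:, subset], y_tr)
        weights[subset] += lr.coef_[0]
        intercept += lr.intercept_[0]
    weights /= n_dropout
    intercept /= n_dropout

    def master(X_new):
        p = np.column_stack([clf.predict(X_new[:, fs]) for fs, clf in minis]).astype(float)
        return (p @ weights + intercept > 0).astype(int)
    return master
```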
Abstract:
A mobile device having the capability of performing real-time location recognition with assistance from a server is provided. The approximate geophysical location of the mobile device is uploaded to the server. Based on the mobile device's approximate geophysical location, the server responds by sending the mobile device a message comprising a classifier and a set of feature descriptors. This can occur before an image is captured for visual querying. The classifier and feature descriptors are computed during an offline training stage using techniques to minimize computation at query time. The classifier and feature descriptors are used to perform visual recognition in real-time by performing the classification on the mobile device itself.
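An illustrative sketch only: the client/server message exchange is reduced to function calls, the per-cell classifiers and descriptor sets are assumed to have been trained offline, and the cell keys and payload fields are hypothetical.

```python
def server_lookup(lat, lon, models_by_cell, cell_size=0.01):
    """Return the precomputed payload for the device's coarse location cell.

    `models_by_cell` is an assumed dict mapping (row, col) grid cells to
    {"classifier": ..., "descriptors": ...} payloads built offline.
    """
    cell = (round(lat / cell_size), round(lon / cell_size))
    return models_by_cell.get(cell)

def recognize_on_device(image_features, payload):
    """Run visual recognition locally with the pre-fetched classifier; no round trip at query time."""
    if payload is None:
        return None
    return payload["classifier"].predict([image_features])[0]
```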
Abstract:
One or more facial recognition categories are assigned to a face region detected in an input image (24). Each of the facial recognition categories is associated with a respective set of one or more different feature extraction modules (66) and a respective set of one or more different facial recognition matching modules (76). For each of the facial recognition categories assigned to the face region, the input image (24) is processed with each of the feature extraction modules (66) associated with the facial recognition category to produce a respective facial region descriptor vector of facial region descriptor values characterizing the face region. A recognition result (96) between the face region and a reference face image (28) is determined based on application of the one or more facial recognition matching modules (76) associated with the facial recognition categories assigned to the face region to the facial region descriptor vectors produced for the face region detected in the input image (24).
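A minimal sketch of the described routing, with made-up module names: each assigned category carries its own feature-extraction and matching modules, and their scores are averaged into a single recognition result. The combination rule is an assumption for illustration.

```python
def recognize(face_region, reference_face, categories, modules):
    """Compute a recognition result between a detected face region and a reference face image.

    `modules` is an assumed dict mapping category -> {"extractors": [...], "matchers": [...]},
    where extractors return descriptor vectors and matchers return similarity scores.
    """
    scores = []
    for cat in categories:
        extractors = modules[cat]["extractors"]
        descriptors = [ex(face_region) for ex in extractors]       # descriptor vectors for the face region
        ref_descriptors = [ex(reference_face) for ex in extractors]
        for matcher in modules[cat]["matchers"]:
            scores.append(matcher(descriptors, ref_descriptors))
    return sum(scores) / len(scores)                               # combined match score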
Abstract:
A mobile device configured to perform gesture recognition for a vehicle information and/or entertainment system comprises a depth camera; an orientation sensor; and a processor configured to detect one or more gestures from images captured by the depth camera according to a gesture detection algorithm; in which the processor is configured to vary the gesture detection algorithm in dependence upon an orientation of the mobile device detected by the orientation sensor.
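A sketch under stated assumptions: the orientation sensor yields a roll angle, and varying the gesture detection algorithm is reduced to choosing orientation-specific parameters before running a hypothetical per-frame detector.

```python
def detect_gestures(depth_frames, orientation_deg, detector):
    """Vary detection parameters according to how the device is oriented in the vehicle."""
    if abs(orientation_deg["roll"]) > 45:
        params = {"frame_rotation": 90, "hand_axis": "vertical"}   # device held sideways
    else:
        params = {"frame_rotation": 0, "hand_axis": "horizontal"}  # device held upright
    return [detector(frame, **params) for frame in depth_frames]
```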
Abstract:
Systems and methods for object characterization include generating a classifier that defines a decision hyperplane separating a first classification region of a virtual feature space from a second classification region of the virtual feature space. Input information is provided to the classifier, and a number of classifications are received from the classifier. A distribution of the classifications is determined and used to generate a prediction.
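A minimal sketch, assuming a linear SVM supplies the separating hyperplane and the prediction is taken as the majority class over the distribution of per-input classifications; the actual characterization logic may differ.

```python
import numpy as np
from sklearn.svm import LinearSVC

def characterize(train_features, train_labels, input_features):
    clf = LinearSVC().fit(train_features, train_labels)      # decision hyperplane in the feature space
    classifications = clf.predict(input_features)             # one classification per input item
    values, counts = np.unique(classifications, return_counts=True)
    distribution = dict(zip(values.tolist(), (counts / counts.sum()).tolist()))
    prediction = values[np.argmax(counts)]                     # prediction derived from the distribution
    return prediction, distribution
```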
Abstract:
A human biological characteristic file corresponding to a particular identity is received and used as a base file. When an identity authentication request corresponding to the particular identity is received, a characteristic code to be authenticated is obtained from a human biological characteristic of the person who requests identity authentication. A base characteristic code is collected from the base file. The collection algorithm applied to collect the base characteristic code is the same as, or matches, the algorithm applied to obtain the characteristic code. The present techniques determine whether the base characteristic code and the characteristic code correspond to the same human biological characteristic. If the result is positive, the identity authentication request is verified. The present techniques enable communication between different terminal devices of different manufacturers and effectively improve user experience, thereby efficiently and conveniently implementing remote identity authentication.
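An illustrative sketch only: the characteristic codes are modeled as numeric vectors produced by the same hypothetical extraction algorithm applied to the base file and to the live sample, and "same biological characteristic" is decided by a cosine-similarity threshold chosen here for illustration.

```python
import numpy as np

def extract_code(biometric_sample, extractor):
    """Apply the same (or a matching) extraction algorithm to the base file and the live sample."""
    return np.asarray(extractor(biometric_sample), dtype=float)

def authenticate(base_file_sample, live_sample, extractor, threshold=0.9):
    base_code = extract_code(base_file_sample, extractor)
    live_code = extract_code(live_sample, extractor)
    cos = float(base_code @ live_code / (np.linalg.norm(base_code) * np.linalg.norm(live_code)))
    return cos >= threshold        # a positive result verifies the identity authentication request
```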