Abstract:
This document describes apparatuses and techniques for radar-enabled sensor fusion. In some aspects, a radar field is provided and reflection signals that correspond to a target in the radar field are received. The reflection signals are transformed to provide radar data, from which a radar feature indicating a physical characteristic of the target is extracted. Based on the radar feature, a sensor is activated to provide supplemental sensor data associated with the physical characteristic. The radar feature is then augmented with the supplemental sensor data, such as by increasing the accuracy or resolution of the radar feature. By so doing, performance of sensor-based applications that rely on the enhanced radar feature can be improved.
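The flow this abstract describes (extract a radar feature, activate a supplemental sensor, fuse the two) can be sketched as follows. All names, fields, and units here are illustrative assumptions, not details from the patent; `activate_sensor` is a hypothetical stand-in for the supplemental sensor.

```python
from dataclasses import dataclass, replace

@dataclass
class RadarFeature:
    kind: str          # physical characteristic, e.g. "motion"
    value: float       # measured value (illustrative units)
    resolution: float  # smaller is finer

def activate_sensor(feature: RadarFeature) -> dict:
    # Stand-in supplemental sensor: re-measures the same physical
    # characteristic at four times the radar's native resolution.
    return {"value": feature.value, "resolution": feature.resolution / 4}

def augment(feature: RadarFeature, supplemental: dict) -> RadarFeature:
    # Fuse the supplemental sensor data into the radar feature,
    # keeping the finer of the two resolutions.
    return replace(feature,
                   value=supplemental["value"],
                   resolution=min(feature.resolution, supplemental["resolution"]))

coarse = RadarFeature("motion", value=1.2, resolution=0.4)
enhanced = augment(coarse, activate_sensor(coarse))
```

The sensor is only activated once a radar feature exists, matching the abstract's feature-driven activation rather than running all sensors continuously.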
Abstract:
The present invention relates to an image analysis method for providing information to support illness development prediction regarding a neoplasm in a human or animal body. The method includes receiving first and second image data for the neoplasm at a first and a second moment in time, and deriving, for a plurality of image features, a first and a second image feature parameter value from the first and second image data, each feature parameter value being a quantitative representation of the respective image feature. The method further includes calculating, for each image feature, an image feature difference value as the difference between the first and second image feature parameter values, and deriving, based on a prediction model, a predictive value associated with the neoplasm for supporting treatment thereof. The prediction model includes a plurality of multiplier values associated with the image features. To calculate the predictive value, the method multiplies each image feature difference value by its associated multiplier value and combines the multiplied image feature difference values.
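The calculation described above is a linear model over per-feature differences. A minimal sketch, with entirely hypothetical feature names and multiplier values (the patent does not specify them):

```python
def predictive_value(first, second, multipliers):
    """Combine the difference of each image feature parameter value
    between two time points, weighted by the prediction model's
    multiplier values (a linear combination)."""
    return sum(multipliers[f] * (second[f] - first[f]) for f in multipliers)

# Hypothetical feature parameter values at two moments in time.
first  = {"volume": 10.0, "texture": 0.50}
second = {"volume": 12.5, "texture": 0.45}
weights = {"volume": 0.8, "texture": -2.0}  # illustrative multipliers

score = predictive_value(first, second, weights)  # 0.8*2.5 + (-2.0)*(-0.05)
```

Because only differences enter the model, per-scanner offsets that are constant across the two acquisitions cancel out, which is one plausible motivation for the difference-based design.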
Abstract:
A user recognition method and apparatus are disclosed. The user recognition apparatus may extract a user feature of a current user from input data and estimate an identifier of the current user based on the extracted user feature. Through unsupervised learning, the user recognition apparatus can perform user recognition without a separate enrollment procedure and can continuously update the user data.
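One simple way to realize enrollment-free, unsupervised recognition of this kind is nearest-centroid matching with online updates: close samples refine a known user's centroid, distant samples enroll a new identifier. This is a sketch under that assumption; the threshold, learning rate, and distance metric are illustrative, not from the abstract.

```python
import math

def recognize(feature, centroids, threshold=1.0, lr=0.2):
    """Estimate the current user's identifier from a feature vector.
    Known users' centroids drift toward new samples (continuous
    update); samples far from every centroid enroll a new user."""
    if centroids:
        uid, d = min(((u, math.dist(feature, c)) for u, c in centroids.items()),
                     key=lambda t: t[1])
        if d <= threshold:
            c = centroids[uid]
            centroids[uid] = [a + lr * (b - a) for a, b in zip(c, feature)]
            return uid
    uid = len(centroids)        # new identifier, no enrollment step
    centroids[uid] = list(feature)
    return uid

users = {}
first = recognize([0.0, 0.0], users)   # enrolls user 0
again = recognize([0.1, 0.0], users)   # close enough: still user 0
other = recognize([5.0, 5.0], users)   # far away: new user 1
```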
Abstract:
The invention relates to an apparatus for identifying a person, comprising a visible-light camera (10) for capturing a reflection image of the person's face (14) in the visible spectral range, a thermal imaging camera (12) for capturing an infrared emission image of the person's face, a visible-image evaluation unit (16), and an infrared-image evaluation unit (18). The visible-light camera is configured to transmit the reflection image in the form of reflection image data (20) to the visible-image evaluation unit. The thermal imaging camera is configured to transmit the infrared emission image in the form of infrared image data (22) to the infrared-image evaluation unit. The visible-image evaluation unit is configured to determine first biometric features (24) of the face from the reflection image data, to receive a reference image (30) with a person identification (36), to determine second biometric features (32) from this reference image, and, if the first and second biometric features match to a predetermined degree, to provide the person identification to an output controller (42). The infrared-image evaluation unit is configured to determine a temperature distribution (34) from the infrared image data and to determine whether the temperature distribution at least partially approximates the temperature distribution in the face of a living person, and, if so, to instruct the output controller to output the person identification.
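The two-channel gate described above (biometric match AND thermal liveness) can be sketched as below. The cosine-similarity comparison, thresholds, and temperature range are stand-in assumptions; the patent only requires "match to a predetermined degree" and a distribution "approximating a living face".

```python
def identify(visible_features, reference_features, temperatures,
             match_threshold=0.9, live_range=(30.0, 38.0)):
    """Return True (output the person identification) only when the
    visible-light biometric features match the reference image AND
    the infrared temperature distribution is consistent with a
    living face. All thresholds here are illustrative."""
    def similarity(a, b):
        # Cosine similarity as a stand-in biometric comparison.
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb)

    matched = similarity(visible_features, reference_features) >= match_threshold
    alive = any(live_range[0] <= t <= live_range[1] for t in temperatures)
    return matched and alive
```

The liveness check defeats presentation attacks with a printed photo of an authorized face: the photo may pass the visible-light comparison, but its temperature distribution will not resemble living skin.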
Abstract:
Systems and methods are provided for generating a model relating parameters generated via a first molecular imaging modality to parameters generated via a second molecular imaging modality. First and second feature extractors extract, from images of a region of interest obtained via respective first and second molecular imaging modalities, respective sets of parameters for respective first and second sets of locations. A mapping component associates respective locations of the first and second sets of locations according to their spatial relationship within the region of interest to produce a training set. Each example in the training set comprises a set of parameters associated with a location in the first set of locations and a set of parameters associated with a location in the second set. A modeling component generates a predictive model relating the parameters associated with the first modality to at least one parameter associated with the second modality.
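The mapping component's job, pairing co-located parameter sets from the two modalities into training examples, can be sketched as below. Representing a location as a grid coordinate shared by both modalities is an assumption for illustration; in practice the spatial relationship might involve registration and resampling.

```python
def build_training_set(params_a, params_b):
    """Associate locations of the two modalities by their shared
    spatial key, producing (modality-A parameters, modality-B
    parameters) training examples. Locations present in only one
    modality yield no example."""
    return [(params_a[loc], params_b[loc])
            for loc in params_a if loc in params_b]

# Hypothetical per-location parameter sets from two modalities.
a = {(0, 0): [1.0, 2.0], (0, 1): [1.5, 2.5]}
b = {(0, 0): [0.3],      (1, 1): [0.9]}

train = build_training_set(a, b)   # only the co-located (0, 0) pairs up
```

A regression model fit on `train` would then predict the second modality's parameter from the first modality's parameters, which is the role of the modeling component.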
Abstract:
A method for automatically classifying tissue includes obtaining training data including a plurality of microscope images that have been manually classified. A plurality of features is calculated from the training data, each of which is a texture feature, a network feature, or a morphometric feature. A subset of features is selected from the calculated features based on both maximum relevance and minimum redundancy. A classifier is trained based on the selected subset of features and the manual classifications. A diagnostic microscope image is classified in a computer-aided diagnostic system using the trained classifier.
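The maximum-relevance / minimum-redundancy (mRMR) selection step can be sketched greedily: repeatedly pick the feature most correlated with the labels, penalized by its average correlation with already-selected features. Using absolute Pearson correlation is a simplifying stand-in for the mutual-information scores typically used; feature names and data are hypothetical.

```python
def corr(x, y):
    # Pearson correlation coefficient (0.0 if either input is constant).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def mrmr(features, labels, k):
    """Greedy selection: maximize relevance to the labels while
    minimizing redundancy with features already selected."""
    selected, candidates = [], list(features)
    while candidates and len(selected) < k:
        def score(name):
            relevance = abs(corr(features[name], labels))
            redundancy = (sum(abs(corr(features[name], features[s]))
                              for s in selected) / len(selected)) if selected else 0.0
            return relevance - redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

feats = {
    "texture": [0.1, 0.2, 0.8, 0.9],   # informative
    "network": [0.1, 0.2, 0.8, 0.9],   # exact copy: redundant
    "morph":   [0.2, 0.3, 0.5, 0.4],   # weaker but complementary
}
labels = [0, 0, 1, 1]
chosen = mrmr(feats, labels, k=2)      # skips the redundant copy
```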
Abstract:
Classification of a potential target is accomplished by receiving image information, detecting a potential target within the image information, and determining a plurality of features forming a feature set associated with the potential target. The location of the potential target is compared with a detection database to determine whether it is close to an element in the detection database. If not, a single-pass classifier receives the potential target's feature set, classifies the potential target, and transmits the location, feature set, and classification to the detection database. If so, a fused multi-pass feature determiner determines fused multi-pass features of the potential target, and a multi-pass classifier receives the potential target's feature set and fused multi-pass features, classifies the potential target, and transmits its location, feature set, fused multi-pass features, and classification to the detection database.
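The routing logic above, single-pass for first sightings and multi-pass when a prior detection lies nearby, can be sketched as follows. The scalar location, proximity threshold, feature averaging, and sum-based classifiers are all illustrative stand-ins for the patent's unspecified components.

```python
def classify_target(location, feature_set, db, near=1.0):
    """Route a detection to the single-pass or multi-pass path
    depending on whether the detection database holds a prior
    detection close to this location."""
    prior = next((e for e in db if abs(e["location"] - location) <= near), None)
    if prior is None:
        # First sighting: single-pass classifier on this pass's features.
        label = "single-pass:" + ("target" if sum(feature_set) > 1.0 else "clutter")
        entry = {"location": location, "features": feature_set, "label": label}
    else:
        # Repeat sighting: fuse features across passes, then classify.
        fused = [(a + b) / 2 for a, b in zip(prior["features"], feature_set)]
        label = "multi-pass:" + ("target" if sum(fused) > 1.0 else "clutter")
        entry = {"location": location, "features": feature_set,
                 "fused": fused, "label": label}
    db.append(entry)       # every result feeds back into the database
    return label

db = []
r1 = classify_target(0.0, [0.6, 0.6], db)   # no prior nearby
r2 = classify_target(0.2, [0.8, 0.8], db)   # near the first detection
```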
Abstract:
A face recognition method for working with two or more collections of facial images is provided. A representation framework is determined for a first collection of facial images including at least principal component analysis (PCA) features. A representation of said first collection is stored using the representation framework. A modified representation framework is determined based on statistical properties of original facial image samples of a second collection of facial images and the stored representation of the first collection. The first and second collections are combined without using original facial image samples. A representation of the combined image collection (super-collection) is stored using the modified representation framework. A representation of a current facial image, determined in terms of the modified representation framework, is compared with one or more representations of facial images of the combined collection. Based on the comparing, it is determined which, if any, of the facial images within the combined collection matches the current facial image.
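Two pieces of this method lend themselves to a small sketch: combining collection statistics without revisiting original samples, and matching a query against stored representations of the super-collection. The mean-merging formula and Euclidean nearest-neighbor matching are illustrative assumptions; the patent's actual framework update involves full PCA bases, not just means.

```python
import math

def combine_means(n1, mean1, n2, mean2):
    # Merge per-dimension means of two collections from stored
    # statistics alone -- no original image samples are needed.
    n = n1 + n2
    return n, [(n1 * a + n2 * b) / n for a, b in zip(mean1, mean2)]

def best_match(query, gallery):
    # Nearest stored representation (Euclidean distance in the
    # shared feature space); None if the combined gallery is empty.
    if not gallery:
        return None
    return min(gallery, key=lambda name: math.dist(query, gallery[name]))

n, mean = combine_means(2, [0.0, 2.0], 2, [2.0, 0.0])
match = best_match([0.1, 0.1], {"alice": [0.0, 0.0], "bob": [5.0, 5.0]})
```

The same count-weighted merging idea extends to covariance statistics, which is what makes it possible to derive a modified PCA framework for the super-collection without the original images.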