Abstract:
A method to reinforce the capacity of deep neural networks to classify rare cases, which comprises the steps of: training a first deep neural network (DNN-A) to classify generic cases of original data (Data-A) into specified labels (Label-A); localizing discriminative class-specific features within the original data processed through DNN-A and mapping those features to spatial-probabilistic labels (Label-B); training a second deep neural network (DNN-C) to classify rare cases of the original data into the spatial-probabilistic labels; and training a combined deep neural network (DNN-D) to classify both generic and rare cases of the original data into combined specified and spatial-probabilistic labels (Label-A+B*).
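A minimal sketch of this staged pipeline follows, assuming PyTorch, toy multilayer perceptrons, and an input-gradient saliency stand-in for the feature-localization step; the losses, the network names (dnn_a, dnn_c, dnn_d), and the way Label-B is derived here are illustrative assumptions, not details specified by the abstract:

```python
import torch
import torch.nn as nn

def make_mlp(in_dim, out_dim):
    # Stand-in classifier; any DNN architecture could be substituted.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

def train(model, x, y, loss_fn, epochs=20):
    opt = torch.optim.Adam(model.parameters())
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

x_a = torch.randn(256, 32)            # original data (Data-A), 32 features
y_a = torch.randint(0, 5, (256,))     # specified labels (Label-A), 5 classes

# Step 1: train DNN-A to classify generic cases into Label-A.
dnn_a = make_mlp(32, 5)
train(dnn_a, x_a, y_a, nn.CrossEntropyLoss())

# Step 2 (stand-in): localize discriminative class-specific features via
# input gradients and normalize them into spatial-probabilistic labels
# (Label-B), one probability per input location.
x_a.requires_grad_(True)
dnn_a(x_a).gather(1, y_a[:, None]).sum().backward()
label_b = torch.softmax(x_a.grad.abs(), dim=1)
x_a = x_a.detach()

kl = lambda p, t: nn.functional.kl_div(torch.log_softmax(p, 1), t,
                                       reduction="batchmean")

# Step 3: train DNN-C to map rare cases onto the spatial-probabilistic labels.
rare = torch.arange(32)               # pretend the first 32 samples are rare
dnn_c = make_mlp(32, 32)
train(dnn_c, x_a[rare], label_b[rare], kl)

# Step 4: train DNN-D on combined labels (Label-A+B*): class logits plus
# the spatial-probabilistic target in a single output vector.
dnn_d = make_mlp(32, 5 + 32)
combined = lambda p, t: nn.functional.cross_entropy(p[:, :5], t[0]) + kl(p[:, 5:], t[1])
train(dnn_d, x_a, (y_a, label_b), combined)
print(dnn_d(x_a[:1]).shape)           # torch.Size([1, 37])
```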
Abstract:
An integrated inertial motion capture system includes external position aiding such as optical tracking. The integrated system addresses and corrects misalignment between the inertial (relative) and positioning reference (external) frames. This is especially beneficial when it is not feasible to rely on magnetometers to reference the heading of the inertial system. This alignment allows effective tracking of yaw in a common reference frame for all inertial sensing units, even those with no associated positional measurements. The disclosed systems and methods also enable an association to be formed between the inertial sensing units and positional measurements.
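One concrete way to correct the heading misalignment the abstract describes is to fit a single yaw rotation between horizontal trajectories observed in both frames. The sketch below is an illustration under stated assumptions (NumPy, a closed-form 2-D least-squares rotation fit, hypothetical function names); the patent does not prescribe this estimator:

```python
import numpy as np

def estimate_yaw_offset(p_inertial, p_optical):
    """Yaw rotation taking the inertial XY trajectory onto the optical one."""
    a = p_inertial - p_inertial.mean(axis=0)   # centred inertial positions
    b = p_optical - p_optical.mean(axis=0)     # centred optical positions
    num = np.sum(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0])
    den = np.sum(a[:, 0] * b[:, 0] + a[:, 1] * b[:, 1])
    return np.arctan2(num, den)                # least-squares rotation angle

def apply_yaw(points, yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])          # 2-D rotation about vertical
    return points @ rot.T

# Synthetic check: a trajectory rotated by 30 degrees is recovered, so the
# same correction could then be applied to every inertial sensing unit.
rng = np.random.default_rng(0)
traj = rng.normal(size=(100, 2))
rotated = apply_yaw(traj, np.deg2rad(30.0))
print(np.rad2deg(estimate_yaw_offset(traj, rotated)))  # ~30.0
```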
Abstract:
A computerized system and method for analyzing a digital photograph and identifying skin areas are disclosed; the method comprises three main steps. First, the digital photograph is transformed into a plurality of color-basis candidates, and the candidate with the highest-scoring plane is chosen. Then, the digital photograph is segmented into a plurality of objects based on maximal separation between different objects. Lastly, the objects are classified into one or more of the following categories: skin, background, or underwear.
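In outline, the three stages might look as follows. This sketch assumes NumPy, a small set of chrominance planes as the color-basis candidates, and an Otsu-style between-class variance as both the plane score and the "maximal separation" segmentation criterion; these choices, and the final labeling rule, are assumptions rather than the patented method:

```python
import numpy as np

def candidate_planes(rgb):
    """A few candidate color planes; the real system may use other bases."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cr = 0.5 * r - 0.4187 * g - 0.0813 * b        # Cr chrominance (YCbCr)
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b       # Cb chrominance (YCbCr)
    return {"Cr": cr, "Cb": cb, "R-G": r - g}

def otsu_threshold(plane, bins=64):
    """Threshold maximising between-class variance (maximal separation)."""
    hist, edges = np.histogram(plane, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                      # class-0 weight per candidate cut
    mu = np.cumsum(p * centers)            # class-0 first moment
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1 - w0))
    k = int(np.nanargmax(sigma_b))
    return centers[k], sigma_b[k]          # (threshold, separation score)

rgb = np.random.rand(120, 160, 3)          # stand-in photograph
# Step 1: choose the plane whose best split scores highest.
scored = {name: otsu_threshold(p) for name, p in candidate_planes(rgb).items()}
best = max(scored, key=lambda n: scored[n][1])
# Step 2: segment by the maximal-separation threshold on the chosen plane.
mask = candidate_planes(rgb)[best] > scored[best][0]
# Step 3 (stand-in rule): label segments; a real classifier would use
# trained category models for skin / background / underwear.
labels = np.where(mask, "skin", "background")
print(best, float(mask.mean()))
```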
Abstract:
An example system is provided in accordance with one implementation of the present disclosure. The system includes a data engine to receive data related to an object from at least one source in communication with a first leaf node from a plurality of leaf nodes. The system also includes a processing engine to extract data features from the data related to the object to create a visual signature of the object and to transform the data features to a coded signature of the object. The system also includes a re-identification analysis engine to perform a multi-level re-identification of the object, wherein the visual signature of the object is used in a first level of re-identification and the coded signature of the object is used in at least one different level of re-identification.
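One plausible reading of the two signature levels is sketched below, assuming NumPy: the visual signature is a normalized feature vector matched by cosine similarity at the first level, and the coded signature is a random-hyperplane binary hash matched by Hamming distance at a later level. The hashing scheme and function names are illustrative, not taken from the disclosure:

```python
import numpy as np

def visual_signature(data):
    """Stand-in feature extraction; a real system would use learned features."""
    return data / np.linalg.norm(data)

def coded_signature(features, planes):
    """Transform features into a binary code via random hyperplane hashing."""
    return (features @ planes > 0).astype(np.uint8)

rng = np.random.default_rng(1)
planes = rng.normal(size=(128, 32))              # shared hashing planes
gallery = [visual_signature(rng.normal(size=128)) for _ in range(50)]
query = gallery[7] + 0.05 * rng.normal(size=128) # noisy re-observation
query = query / np.linalg.norm(query)

# Level 1: cosine similarity on visual signatures (at the first leaf node).
sims = [float(g @ query) for g in gallery]
print("level-1 match:", int(np.argmax(sims)))

# Level 2: Hamming distance on coded signatures (cheap to share across nodes).
q_code = coded_signature(query, planes)
dists = [int(np.sum(coded_signature(g, planes) != q_code)) for g in gallery]
print("level-2 match:", int(np.argmin(dists)))
```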
Abstract:
A surround view system (1) comprising at least one camera pair formed by two cameras (2-i) with overlapping fields of view (FOV) adapted to generate camera images with an overlapping area (OA), and a processing unit (3) configured to compute surround view images including the overlapping areas (OAs) of each camera (2-i) and to extract features from the computed surround view images, resulting in a binary image (BI) for each overlapping area (OA), wherein priority in visualisation is given to the camera image (CI) of the overlapping area (OA) generated by the camera (2-i) of the respective camera pair for which the calculated sum of binary values within the resulting binary image (BI) is highest.
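The priority rule reduces to comparing per-camera sums over the binary feature images. A small sketch, assuming NumPy and a thresholded gradient magnitude as the stand-in feature extractor (the abstract does not fix the feature extraction):

```python
import numpy as np

def binary_feature_image(image, thresh=0.1):
    """Stand-in feature extraction: gradient magnitude thresholded to 0/1."""
    gy, gx = np.gradient(image)
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

rng = np.random.default_rng(2)
overlap_cam1 = rng.random((64, 64))        # overlap area seen by camera 1
overlap_cam2 = rng.random((64, 64)) * 0.2  # same area, less textured view

bi1 = binary_feature_image(overlap_cam1)
bi2 = binary_feature_image(overlap_cam2)
# Give visualisation priority to the camera with the highest binary sum.
priority = 1 if bi1.sum() >= bi2.sum() else 2
print(f"visualise camera {priority} in this overlapping area")
```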
Abstract:
A method includes receiving a user input (e.g., a one-touch user input), performing segmentation to generate multiple candidate regions of interest (ROIs) in response to the user input, and performing ROI fusion to generate a final ROI (e.g., for a computer vision application). In some cases, the segmentation may include motion-based segmentation, color-based segmentation, or a combination thereof. Further, in some cases, the ROI fusion may include intraframe (or spatial) ROI fusion, temporal ROI fusion, or a combination thereof.
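A sketch of intraframe ROI fusion under stated assumptions: candidate boxes come in as (x0, y0, x1, y1) tuples, the largest candidate seeds the fusion, and candidates are merged by an IoU overlap test. None of these specifics are mandated by the abstract:

```python
def iou(a, b):
    """Intersection-over-union of (x0, y0, x1, y1) boxes."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def fuse_rois(candidates, iou_min=0.3):
    """Merge every candidate that overlaps the largest one into a final ROI."""
    seed = max(candidates, key=lambda r: (r[2] - r[0]) * (r[3] - r[1]))
    group = [r for r in candidates if iou(r, seed) >= iou_min]
    xs0, ys0, xs1, ys1 = zip(*group)
    return (min(xs0), min(ys0), max(xs1), max(ys1))

# Candidate ROIs as might come from motion- and color-based segmentation
# around a one-touch point; the distant outlier fails the overlap test.
candidates = [(10, 10, 50, 60), (12, 8, 55, 58), (200, 200, 210, 210)]
print(fuse_rois(candidates))  # -> (10, 8, 55, 60)
```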
Abstract:
The subject disclosure presents systems and computer-implemented methods for assessing a risk of cancer recurrence in a patient based on a holistic integration of large amounts of prognostic information for said patient into a single comparative prognostic dataset. A risk classification system may be trained using the large amounts of information from a cohort of training slides from several patients, along with survival data for said patients. For example, a machine-learning-based binary classifier in the risk classification system may be trained using a set of granular image features computed from a plurality of slides corresponding to several cancer patients whose survival information is known and input into the system. The trained classifier may be used to classify image features from one or more test patients into a low-risk or high-risk group.
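The training step could look like the following sketch, assuming scikit-learn, a random-forest binary classifier, and a simple survival-derived label (event within five years); the feature layout and the five-year cut-off are illustrative assumptions, not details of the disclosure:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n_patients = 200
# Granular image features computed from each patient's training slides.
slide_features = rng.normal(size=(n_patients, 40))
years_to_event = rng.exponential(scale=6.0, size=n_patients)

# Binarize the survival data into low-risk / high-risk training labels.
high_risk = (years_to_event < 5.0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(slide_features, high_risk)

# Classify image features from a test patient into a risk group.
test_features = rng.normal(size=(1, 40))
group = "high-risk" if clf.predict(test_features)[0] else "low-risk"
print(group, clf.predict_proba(test_features)[0])
```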
Abstract:
A computer diagnostic system and related method are disclosed for automatically classifying tissue types in an original tissue image captured by an imaging device based on texture analysis. The system receives and divides the tissue image into multiple smaller tissue block images. In one non-limiting embodiment, a combination of local binary pattern (LBP) and average local binary pattern (ALBP) extractions is performed on each tissue block. Other texture analysis methods may be used. The extractions generate a set of LBP and ALBP features for each block, which are used to classify its tissue type. The classification results are visually displayed in a digitally enhanced map of the original tissue image. In one embodiment, a tissue type of interest is displayed in the original tissue image. In the same or another embodiment, the map displays each of the different tissue types present in the original tissue image.
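A compact sketch of LBP extraction on one tissue block, with one plausible ALBP variant in which neighbours are compared against the local neighbourhood mean instead of the centre pixel; this reading of ALBP, and the 8-neighbour histogram-of-codes setup, are assumptions rather than quotes from the patent:

```python
import numpy as np

def lbp_codes(block, albp=False):
    """8-neighbour binary pattern codes for the interior of a 2-D block."""
    c = block[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    neigh = np.stack([block[1 + dy:block.shape[0] - 1 + dy,
                            1 + dx:block.shape[1] - 1 + dx]
                      for dy, dx in shifts])
    ref = neigh.mean(axis=0) if albp else c   # ALBP swaps centre for mean
    bits = (neigh >= ref).astype(np.uint32)
    weights = (2 ** np.arange(8, dtype=np.uint32))[:, None, None]
    return (bits * weights).sum(axis=0)       # one 8-bit code per pixel

def histogram_features(codes, bins=256):
    """Normalized code histogram: the per-block texture descriptor."""
    h, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return h / h.sum()

block = np.random.rand(32, 32)                # one tissue block image
features = np.concatenate([histogram_features(lbp_codes(block)),
                           histogram_features(lbp_codes(block, albp=True))])
print(features.shape)                         # (512,) descriptor per block
```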
Abstract:
The device for real-time signaling of at least one object to a vehicle navigation module (60) comprises a first sensor (1) arranged to produce first-sensor data including a captured first position and first velocity of the object relative to the vehicle. The device further comprises: at least one second sensor (2, 3, 4) arranged to produce second-sensor data including a captured second position and second velocity of the object relative to the vehicle; a synchronization module (20) arranged to produce synchronized data including a first synchronized position derived from the captured first position and first velocity, and at least one second synchronized position derived from the captured second position and second velocity; and a fusion module (30) arranged to produce fused data including a fused position derived from the first synchronized position and the second synchronized position, so as to signal said at least one object to the navigation module (60) by communicating all or part of the fused data to it.
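The synchronization and fusion modules can be illustrated with a short sketch, assuming NumPy, constant-velocity extrapolation to a common timestamp, and inverse-variance weighting for the fusion; the data layout and both numerical choices are assumptions, since the abstract only requires synchronized and then fused positions:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Detection:
    t: float            # capture timestamp (s)
    pos: np.ndarray     # captured position relative to the vehicle (m)
    vel: np.ndarray     # captured velocity relative to the vehicle (m/s)
    var: float          # assumed position variance for this sensor (m^2)

def synchronise(d: Detection, t_sync: float) -> np.ndarray:
    """Constant-velocity extrapolation of the position to a common time."""
    return d.pos + d.vel * (t_sync - d.t)

def fuse(positions, variances):
    """Inverse-variance weighted average of the synchronised positions."""
    w = 1.0 / np.asarray(variances)
    return (w[:, None] * np.asarray(positions)).sum(axis=0) / w.sum()

first = Detection(0.00, np.array([12.0, 3.0]), np.array([-4.0, 0.0]), 0.25)
second = Detection(0.02, np.array([11.8, 3.1]), np.array([-4.1, 0.1]), 0.09)

t_sync = 0.05
synced = [synchronise(d, t_sync) for d in (first, second)]
fused_position = fuse(synced, [first.var, second.var])
print(fused_position)   # fused data handed to the navigation module (60)
```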
Abstract:
A device and a method facilitating the generation of one or more intuitive gesture sets for interpretation for a specific purpose are disclosed. Data is captured in scalar and vector forms, then fused and stored. The intuitive gesture sets generated after the fusion are further used by one or more components/devices/modules for one or more specific purposes. Also incorporated is a system for playing a game. The system receives one or more actions in scalar and vector form from one or more users in order to map each action to at least one prestored gesture, identify the user in control among a plurality of users, and interpret the user's action for playing the game. In accordance with the interpretation, an act is generated by one or more components of the system for playing the game.
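A sketch of the mapping step, assuming NumPy, a concatenated scalar+vector feature as the fused action, and nearest-template matching over prestored (user, gesture) pairs; the feature layout and the matching rule are illustrative assumptions, not the disclosed algorithm:

```python
import numpy as np

def fuse_action(scalar, vector):
    """Concatenate scalar readings (e.g. an audio level) with vector ones
    (e.g. a sampled 3-D hand trajectory) into one fused feature vector."""
    return np.concatenate([np.atleast_1d(scalar), np.ravel(vector)])

def match_gesture(action, templates):
    """Return the (user, gesture) key of the nearest prestored template."""
    return min(templates, key=lambda k: np.linalg.norm(templates[k] - action))

templates = {
    ("user-1", "swipe"): fuse_action(0.2, [[0, 0, 0], [1, 0, 0]]),
    ("user-2", "push"):  fuse_action(0.8, [[0, 0, 0], [0, 0, 1]]),
}
observed = fuse_action(0.75, [[0, 0, 0], [0.1, 0, 0.9]])
user, gesture = match_gesture(observed, templates)
print(f"{user} is in control; interpreted action: {gesture}")
```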