Abstract:
With respect to the model selection problem for a mixture model, the present invention performs high-speed model selection under an appropriate criterion, even though the number of model candidates increases exponentially as the number and the types of components to be mixed increase. A mixture model estimation device comprises: a data input unit to which data of a mixture model to be estimated, candidate values of the number of mixtures required for estimating the mixture model of the data, and the types of components configuring the mixture model and their parameters are input; a processing unit which sets the number of mixtures from the candidate values, calculates, for the set number of mixtures, a variational probability of a hidden variable for a random variable that is the target of mixture model estimation of the data, and estimates the optimal mixture model by optimizing the types of the components and their parameters using the calculated variational probability of the hidden variable so that the lower bound of the posterior probability of the model, separated for each component of the mixture model, is maximized; and a model estimation result output unit which outputs the model estimation result obtained by the processing unit.
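A minimal sketch of the selection loop this abstract describes, assuming the input data and the candidate values of the number of mixtures are given as arrays. scikit-learn's GaussianMixture and its BIC score stand in for the patented criterion (the lower bound of the model posterior separated per component), which is not reproduced here; the responsibilities returned by predict_proba play the role of the variational probability of the hidden variable.

```python
# Sketch: loop over candidate numbers of mixtures and keep the best model.
# Assumptions: data X and candidate counts are given; scikit-learn's
# GaussianMixture with the BIC score replaces the patented component-wise
# posterior lower bound, which is not reproduced here.
import numpy as np
from sklearn.mixture import GaussianMixture

def select_mixture_model(X, candidate_counts):
    """Fit one Gaussian mixture per candidate count and return the best one."""
    best_model, best_score = None, np.inf
    for k in candidate_counts:
        model = GaussianMixture(n_components=k, covariance_type="full",
                                random_state=0).fit(X)
        score = model.bic(X)  # stand-in for the model-selection criterion
        if score < best_score:
            best_model, best_score = model, score
    return best_model, best_score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-3, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
    model, score = select_mixture_model(X, candidate_counts=[1, 2, 3, 4, 5])
    print("selected number of mixtures:", model.n_components)
    # predict_proba gives the responsibilities, i.e. the variational
    # probabilities of the hidden assignment variable for each sample
    print(model.predict_proba(X[:1]))
```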
Abstract:
An abnormality score calculating means calculates abnormality scores, which are information indicating abnormality of medical data, based on the specificity of the medical data. An abnormality score vector generating means creates one or more abnormality score vectors, which are information obtained by integrating the abnormality scores. Further, a side effect detecting means decides a likelihood of a side effect indicated by an abnormality score vector based on a predetermined rule, and detects, as information indicating the side effect, an abnormality score vector whose likelihood satisfies conditions set in advance.
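As an illustration of this pipeline, the sketch below assumes that per-item reference means and standard deviations of the medical measurements are available; the absolute z-score, the layout of the abnormality score vector, and the two-parameter rule are illustrative stand-ins, not the patented definitions of specificity or likelihood.

```python
# Sketch of the score -> vector -> rule pipeline described above.
# Assumptions: reference means/standard deviations per measurement are known;
# the absolute z-score and the two-parameter rule are illustrative only.
import numpy as np

def abnormality_scores(values, ref_mean, ref_std):
    """Abnormality score per measurement (here: absolute z-score)."""
    return np.abs((values - ref_mean) / ref_std)

def detect_side_effect(score_vector, threshold=3.0, min_hits=2):
    """Rule: report a side effect if at least `min_hits` scores exceed `threshold`."""
    hits = score_vector > threshold
    likelihood = float(hits.mean())          # crude likelihood of a side effect
    return likelihood, int(hits.sum()) >= min_hits

if __name__ == "__main__":
    ref_mean = np.array([4.5, 140.0, 0.9])   # illustrative reference values
    ref_std = np.array([1.0, 3.0, 0.2])
    patient = np.array([9.8, 128.0, 2.1])    # one patient's measurements
    vector = abnormality_scores(patient, ref_mean, ref_std)  # abnormality score vector
    print(detect_side_effect(vector))
```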
Abstract:
To provide a discriminant model learning device capable of efficiently learning a discriminant model that reflects domain knowledge, i.e., the user's knowledge of or analysis intention for the model, while maintaining the fit to the data. A query candidate storage means 81 stores candidates of a query, that is, of a model to which domain knowledge indicating the user's intention is to be given. A regularization function generation means 82 generates a regularization function indicating compatibility with the domain knowledge, based on the domain knowledge given to the query candidates. A model learning means 83 learns a discriminant model by optimizing a function defined by the loss function predefined for each discriminant model and the regularization function.
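The following sketch illustrates the "loss function plus domain-knowledge regularizer" formulation, assuming binary labels in {0, 1} and domain knowledge of the form "these features should have non-negative weights". The logistic loss and the hinge-style penalty are illustrative choices, not the regularization function generated by means 82.

```python
# Sketch of learning by optimizing "loss + domain-knowledge regularizer".
# Assumptions: labels y are in {0, 1}; the domain knowledge is "these features
# should have non-negative weights"; logistic loss and the hinge-style penalty
# are illustrative, not the generated regularization function of means 82.
import numpy as np
from scipy.optimize import minimize

def learn_discriminant_model(X, y, nonnegative_features, lam=1.0):
    n, d = X.shape
    signs = 2 * y - 1  # map {0, 1} labels to {-1, +1}

    def objective(w):
        loss = np.mean(np.log1p(np.exp(-signs * (X @ w))))            # fit to data
        penalty = sum(max(0.0, -w[j]) for j in nonnegative_features)  # knowledge term
        return loss + lam * penalty

    return minimize(objective, np.zeros(d), method="Nelder-Mead").x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    print("learned weights:", learn_discriminant_model(X, y, nonnegative_features=[0]))
```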
Abstract:
To provide a latent variable model estimation apparatus capable of performing model selection at high speed even if the number of model candidates increases exponentially as the number of latent states and the kinds of observation probability increase. A variational probability calculating unit 71 calculates a variational probability by maximizing a reference value defined as a lower bound of an approximation obtained by applying Laplace approximation to a marginalized log-likelihood function with respect to an estimator for a complete variable. A model estimation unit 72 estimates an optimum latent variable model by estimating the kind and the parameter of the observation probability for each latent state. A convergence determination unit 73 determines whether the reference value used by the variational probability calculating unit 71 to calculate the variational probability has converged.
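The loop below mirrors the three units (variational probability calculation, model estimation, convergence determination) for a plain Gaussian mixture with a fixed number of latent states. The ordinary EM log-likelihood stands in for the reference value defined via the Laplace-approximated marginalized log-likelihood, and selection of the kind of observation probability per latent state is omitted.

```python
# Sketch mirroring units 71-73 for a plain Gaussian mixture with a fixed
# number of latent states.  The ordinary EM log-likelihood stands in for the
# Laplace-approximated lower bound, and selection of the kind of observation
# probability is omitted.
import numpy as np

def gaussian_density(X, mean, var):
    # diagonal-covariance Gaussian density; var is a vector of variances
    return np.exp(-0.5 * np.sum((X - mean) ** 2 / var, axis=1)) \
        / np.sqrt(np.prod(2.0 * np.pi * var))

def estimate_latent_variable_model(X, n_states, n_iter=100, tol=1e-6):
    n, d = X.shape
    rng = np.random.default_rng(0)
    weights = np.full(n_states, 1.0 / n_states)
    means = X[rng.choice(n, n_states, replace=False)]
    variances = np.ones((n_states, d))
    prev_ref = -np.inf
    for _ in range(n_iter):
        # variational probability calculating unit 71 (E-like step)
        dens = np.stack([w * gaussian_density(X, m, v)
                         for w, m, v in zip(weights, means, variances)], axis=1)
        q = dens / dens.sum(axis=1, keepdims=True)
        # model estimation unit 72 (M-like step)
        nk = q.sum(axis=0)
        weights = nk / n
        means = (q.T @ X) / nk[:, None]
        variances = np.stack([(q[:, k, None] * (X - means[k]) ** 2).sum(axis=0) / nk[k]
                              for k in range(n_states)]) + 1e-6
        # convergence determination unit 73
        ref = float(np.sum(np.log(dens.sum(axis=1))))  # stand-in reference value
        if ref - prev_ref < tol:
            break
        prev_ref = ref
    return q, weights, means, variances, ref

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-2, 0.5, (150, 2)), rng.normal(2, 0.5, (150, 2))])
    q, weights, means, variances, ref = estimate_latent_variable_model(X, n_states=2)
    print("mixing weights:", weights, "reference value:", ref)
```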
Abstract:
In order to learn an appropriate probability model in a probability model learning problem in which a first issue and a second issue manifest concurrently, by solving the two issues at the same time, provided is a probability model estimation device for obtaining a probability model estimation result from first to T-th (T ≧ 2) training data and test data. The probability model estimation device includes: first to T-th training data distribution estimation processing units for obtaining first to T-th training data marginal distributions with respect to the first to the T-th training data, respectively; a test data distribution estimation processing unit for obtaining a test data marginal distribution with respect to the test data; first to T-th density ratio calculation processing units for calculating first to T-th density ratios, which are the ratios of the test data marginal distribution to the first to the T-th training data marginal distributions, respectively; an objective function generation processing unit for generating an objective function used to estimate a probability model from the first to the T-th density ratios; and a probability model estimation processing unit for estimating the probability model by minimizing the objective function.
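A sketch of the density-ratio-weighted estimation flow for one-dimensional data and a simple Gaussian probability model: gaussian_kde stands in for the marginal-distribution estimation units, and the importance-weighted negative log-likelihood below stands in for the generated objective function; none of these are the patented constructions.

```python
# Sketch of the density-ratio-weighted estimation flow for 1-D data and a
# Gaussian probability model.  gaussian_kde stands in for the distribution
# estimation units, and the importance-weighted negative log-likelihood stands
# in for the generated objective function.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import minimize

def estimate_probability_model(training_sets, test_data):
    test_density = gaussian_kde(test_data)            # test data marginal distribution
    samples, weights = [], []
    for X_t in training_sets:                         # first to T-th training data
        train_density = gaussian_kde(X_t)             # t-th training data marginal
        weights.append(test_density(X_t) / train_density(X_t))  # t-th density ratio
        samples.append(X_t)
    x, w = np.concatenate(samples), np.concatenate(weights)

    def objective(theta):                             # weighted negative log-likelihood
        mu, log_sigma = theta
        return np.sum(w * (0.5 * ((x - mu) / np.exp(log_sigma)) ** 2 + log_sigma))

    mu, log_sigma = minimize(objective, x0=np.zeros(2)).x
    return mu, np.exp(log_sigma)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train1 = rng.normal(-1.0, 1.0, 300)   # first training data (shifted)
    train2 = rng.normal(2.0, 1.5, 300)    # second training data (shifted)
    test = rng.normal(0.5, 1.0, 300)      # test data
    print(estimate_probability_model([train1, train2], test))
```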
Abstract:
An abnormality information creating means creates one or more pieces of abnormality information, which is information indicating an abnormality of each piece of data, based on the specificity of medical data. A side effect detecting means decides a likelihood of a side effect indicated by the abnormality information according to a predetermined rule, and detects, as information indicating the side effect, abnormality information whose likelihood satisfies conditions set in advance. When information used to create the abnormality information is received as feedback information, the abnormality information creating means creates the abnormality information based on that information. Further, when information used to detect the side effect is received as feedback information, the side effect detecting means detects the side effect based on that information.
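A minimal sketch of the feedback path only, assuming the feedback consists of abnormality scores of confirmed and rejected side-effect cases; the threshold update rule is an illustrative assumption, not the patented handling of feedback information.

```python
# Sketch of the feedback path only.  Assumption: feedback arrives as abnormality
# scores of confirmed and rejected side-effect cases; the threshold update is an
# illustrative rule, not the patented handling of feedback information.
class SideEffectDetector:
    def __init__(self, threshold=3.0):
        self.threshold = threshold  # rule parameter used to judge the likelihood

    def apply_feedback(self, confirmed_scores, rejected_scores):
        """Move the detection threshold between confirmed and rejected cases."""
        if confirmed_scores and rejected_scores:
            self.threshold = 0.5 * (min(confirmed_scores) + max(rejected_scores))

    def detect(self, abnormality_scores):
        """Return the scores that exceed the current rule threshold."""
        return [s for s in abnormality_scores if s > self.threshold]

detector = SideEffectDetector()
detector.apply_feedback(confirmed_scores=[4.2, 5.0], rejected_scores=[2.8, 3.1])
print(detector.threshold, detector.detect([2.9, 3.8, 5.5]))
```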
Abstract:
A preset number of kernel functions are linearly combined to generate the kernel function most suitable for data classification. An element kernel generating unit 102 generates a plurality of element kernel functions K1-Kp by using a plurality of distance functions (distance scales) d1-dp prepared in advance. A kernel optimizing unit 103 generates an integrated kernel function K in which the element kernel functions K1-Kp are linearly combined, determines coupling coefficients so as to optimally separate the teacher data z, and thereby optimizes the integrated kernel function K. A kernel component display unit 104 displays, on a display device 150, each of the element kernel functions K1-Kp, its coupling coefficient, and the distance scale corresponding to each element kernel function.
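The sketch below illustrates building element kernels from several distance scales and choosing the coupling coefficients of their linear combination. Kernel-target alignment solved with SLSQP stands in for the patent's optimization that separates the teacher data z, the Gaussian form of each element kernel is an assumption, and the display unit 104 is not modeled.

```python
# Sketch: element kernels from several distance scales, linearly combined with
# optimized coupling coefficients.  Kernel-target alignment with SLSQP stands
# in for the patent's criterion of optimally separating the teacher data z,
# and the Gaussian form of each element kernel is assumed.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import minimize

def element_kernels(X, distance_metrics, gamma=1.0):
    """One Gaussian element kernel K_i per distance function d_i."""
    return [np.exp(-gamma * cdist(X, X, metric=m) ** 2) for m in distance_metrics]

def optimize_coupling(kernels, y):
    """Choose coupling coefficients maximizing alignment with the label kernel."""
    target = np.outer(y, y).astype(float)

    def neg_alignment(beta):
        K = sum(b * Ki for b, Ki in zip(beta, kernels))   # integrated kernel K
        return -np.sum(K * target) / (np.linalg.norm(K) * np.linalg.norm(target))

    p = len(kernels)
    constraints = [{"type": "eq", "fun": lambda b: np.sum(b) - 1.0}]
    return minimize(neg_alignment, np.full(p, 1.0 / p), bounds=[(0, 1)] * p,
                    constraints=constraints, method="SLSQP").x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 4))
    y = np.sign(X[:, 0])                                  # teacher data labels
    Ks = element_kernels(X, ["euclidean", "cityblock", "chebyshev"])
    print("coupling coefficients:", optimize_coupling(Ks, y))
```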