Abstract:
A neural network training device according to an exemplary aspect of the present invention includes: a memory that stores a set of instructions; and at least one central processing unit (CPU) configured to execute the set of instructions to: determine a regularization strength for each layer, based on an initialized network; and train a network, based on the initialized network and the determined regularization strength, wherein the at least one CPU is further configured to determine the regularization strength in such a way that a difference between magnitude of a parameter update amount calculated from a loss function and magnitude of a parameter update amount calculated from a regularization term falls within a predetermined range.
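The balancing criterion above can be sketched as follows: for an L2 regularization term R(w) = ‖w‖², the per-layer strength λ that makes the regularization update magnitude equal the loss-gradient update magnitude is λ = ‖∂L/∂w‖ / ‖2w‖. This is a minimal illustrative sketch, not the patented procedure itself; the function name and the choice of an L2 regularizer are assumptions.

```python
import math

def l2_norm(v):
    return math.sqrt(sum(x * x for x in v))

def layer_reg_strength(loss_grad, weights):
    """Choose lambda so that ||lambda * dR/dw|| equals ||dL/dw|| for one layer.

    For R(w) = ||w||^2 we have dR/dw = 2w, so
    lambda = ||loss_grad|| / ||2w|| equalizes the two update magnitudes
    (a zero-difference special case of "falls within a predetermined range").
    """
    reg_grad_norm = l2_norm([2.0 * w for w in weights])
    if reg_grad_norm == 0.0:
        return 0.0
    return l2_norm(loss_grad) / reg_grad_norm

# With this strength, the two update magnitudes coincide.
w = [0.5, -0.25, 1.0]
g = [0.1, 0.3, -0.2]
lam = layer_reg_strength(g, w)
loss_update = l2_norm(g)
reg_update = l2_norm([lam * 2.0 * wi for wi in w])
```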
Abstract:
This prediction model preparation device is provided with: a calculation means which calculates, from a datum in which a sample and a label are associated with each other, an importance level according to the difference between a first possibility that an event influencing the sample occurs in a source domain and a second possibility that the event occurs in a target domain; and a preparation means which prepares a prediction model relating to the target domain by calculating an association between the sample and the label included in the datum to which the importance level is added.
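The importance level described above resembles classical importance weighting under covariate shift. The sketch below uses the probability ratio between target and source as the weight and fits the simplest possible "prediction model" (a weighted mean of labels); both choices are illustrative assumptions, since the abstract only requires the importance to depend on the difference between the two probabilities.

```python
def importance_weight(p_source, p_target):
    """Importance of a datum: ratio of the target-domain probability of the
    influencing event to its source-domain probability (density-ratio style)."""
    return p_target / p_source

def weighted_mean_label(data):
    """Prepare a trivial prediction model for the target domain: a weighted
    mean of labels, where each datum is (sample, label, p_source, p_target)."""
    total_w = 0.0
    total = 0.0
    for _sample, label, p_s, p_t in data:
        w = importance_weight(p_s, p_t)
        total_w += w
        total += w * label
    return total / total_w

# Sample "b" is rare in the source but common in the target, so it dominates.
data = [("a", 1.0, 0.8, 0.4), ("b", 0.0, 0.2, 0.6)]
pred = weighted_mean_label(data)
```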
Abstract:
The purpose of the present invention is to prevent a reduction in authentication accuracy caused by identity fraud. A score calculation unit compares each of a plurality of types of biological information, acquired from a target person of identity verification as acquired biological information, with the same type of registered biological information registered in advance. Based on the comparison, the score calculation unit calculates, for each type of acquired biological information, an authentication score that expresses the degree of similarity between the acquired biological information and the registered biological information. For each type of acquired biological information, a probability calculation unit calculates an identity fraud probability using the calculated authentication score. Based on the identity fraud probability, a determination unit determines whether the target person of identity verification is the registered person, and/or determines whether the target person of identity verification is fraudulently pretending to be the registered person.
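The score-to-probability-to-decision pipeline can be sketched as follows. The logistic mapping from authentication score to fraud probability, the averaging across biological-information types, and the two decision thresholds are all illustrative assumptions; the abstract does not specify these functions.

```python
import math

def fraud_probability(score, midpoint=0.5, steepness=10.0):
    """Map an authentication score (higher = more similar to the registered
    information) to an identity fraud probability via a logistic curve.
    The curve parameters are illustrative, not from the patent."""
    return 1.0 / (1.0 + math.exp(steepness * (score - midpoint)))

def verify(scores, accept_below=0.2, reject_above=0.8):
    """Combine per-type fraud probabilities and make both determinations:
    is the target person the registered person, and/or an impostor?"""
    probs = [fraud_probability(s) for s in scores]
    combined = sum(probs) / len(probs)
    is_registered = combined < accept_below
    is_fraud = combined > reject_above
    return combined, is_registered, is_fraud

genuine = verify([0.9, 0.85, 0.95])   # e.g. face, fingerprint, iris scores
impostor = verify([0.1, 0.2, 0.05])
```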
Abstract:
The information processing apparatus (2000) of the example embodiment 1 includes an acquisition unit (2020), a modeling unit (2040), and an output unit (2060). The acquisition unit (2020) acquires a plurality of pieces of trajectory data. The trajectory data represents a time-sequence of observed positions of an object. The modeling unit (2040) assigns one of a plurality of groups to each piece of trajectory data. The modeling unit (2040) generates a generative model for each group. The generative model represents the trajectories assigned to the corresponding group by a common time-sequence of velocity transformations. The velocity transformation represents a transformation of the velocity of the object from a previous time frame, and is represented using a set of motion primitives defined in common for all groups. The output unit (2060) outputs the generated generative models.
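A group's generative model can be sketched as a shared primitive set plus a per-group time-sequence of primitive names. The concrete primitives below (scale-and-rotate transforms of the previous frame's 2D velocity) are an illustrative assumption; the abstract only requires that the primitive set be defined in common for all groups.

```python
import math

def apply_primitive(v, scale, angle):
    """One motion primitive: scale and rotate the previous frame's velocity."""
    c, s = math.cos(angle), math.sin(angle)
    return (scale * (c * v[0] - s * v[1]), scale * (s * v[0] + c * v[1]))

# Primitive set shared by every group (names and values are illustrative).
PRIMITIVES = {"keep": (1.0, 0.0), "turn_left": (1.0, math.pi / 2), "slow": (0.5, 0.0)}

def generate(start, v0, primitive_sequence):
    """Generative model of one group: a common time-sequence of primitive
    names reproduces a trajectory (time-sequence of positions)."""
    pos, v = start, v0
    trajectory = [pos]
    for name in primitive_sequence:
        v = apply_primitive(v, *PRIMITIVES[name])
        pos = (pos[0] + v[0], pos[1] + v[1])
        trajectory.append(pos)
    return trajectory

traj = generate((0.0, 0.0), (1.0, 0.0), ["keep", "turn_left", "slow"])
```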
Abstract:
A neural network learning device 20 is equipped with: a determination module 22 that determines, for each layer and on the basis of the structure of a neural network 21 containing multiple layers, the size of a local region in learning information 200 which is to be learned by the neural network 21; and a control module 25 that, on the basis of the size of the local region as determined by the determination module 22, extracts the local region from the learning information 200, and performs control such that the learning, by the neural network 21, of the learning information represented by the extracted local region is carried out repeatedly while the size of the extracted local region is changed. Thus, a reduction in the generalization performance of the neural network 21 can be avoided even when there is little learning data.
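The extract-and-repeat control loop can be sketched as below, with the actual learning step stubbed out: local regions (crops) of varying sizes are taken from the learning information, one size per pass. The function names and the list-of-lists "image" representation are illustrative assumptions.

```python
import random

def extract_local_region(image, size, rng):
    """Extract a random size x size local region from a 2D image
    represented as a list of lists."""
    h, w = len(image), len(image[0])
    top = rng.randrange(h - size + 1)
    left = rng.randrange(w - size + 1)
    return [row[left:left + size] for row in image[top:top + size]]

def training_schedule(image, sizes, rng):
    """Repeat learning while changing the local-region size, as the control
    module does; the learning step itself is stubbed out and we only
    collect the extracted regions."""
    regions = []
    for size in sizes:  # e.g. one size per determined layer / pass
        regions.append(extract_local_region(image, size, rng))
    return regions

image = [[r * 4 + c for c in range(4)] for r in range(4)]
regions = training_schedule(image, [2, 3, 4], random.Random(0))
```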
Abstract:
A learning device according to the present invention performs semi-supervised learning using domain information, and includes a memory and a processor. The processor performs operations including: providing a first neural network that outputs data after a predetermined conversion by using first data including the domain information and second data not including the domain information, a second neural network that outputs a result of predetermined processing by using the data after the conversion, and a third neural network that outputs a result of domain discrimination by using the data after the conversion; calculating a first loss being a loss of the domain discrimination; calculating a second loss being an unsupervised loss; calculating a third loss in the predetermined processing; and modifying a parameter of each of the first neural network to the third neural network in such a way as to decrease the second loss and the third loss and increase the first loss.
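The opposing update directions resemble adversarial (gradient-reversal style) training: the conversion network is trained to increase the domain-discrimination loss while decreasing the other two. A minimal sketch of the combined objectives, with illustrative weights, might look like this:

```python
def encoder_objective(domain_loss, unsupervised_loss, task_loss,
                      adversarial_weight=1.0):
    """Objective minimized by the first (conversion) network: decrease the
    second (unsupervised) and third (task) losses while increasing the
    first (domain-discrimination) loss, via a sign flip on the latter.
    The weighting is an illustrative assumption."""
    return unsupervised_loss + task_loss - adversarial_weight * domain_loss

def discriminator_objective(domain_loss):
    """The third (domain discrimination) network itself still minimizes
    the domain loss directly."""
    return domain_loss

obj = encoder_objective(domain_loss=0.7, unsupervised_loss=0.4, task_loss=0.3)
```

Note that a larger domain loss (a more confused discriminator) lowers the conversion network's objective, which is exactly the "increase the first loss" direction in the abstract.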
Abstract:
A classifier learning apparatus (100) includes: an object acquisition unit (101) that acquires a set of reference vectors and assigned category information of the respective reference vectors as a processing object; a specifying unit (102) that specifies an internal nearest neighbor reference vector nearest to a sample vector among the reference vectors assigned to the same category as the sample vector and specifies an external nearest neighbor reference vector nearest to the sample vector among the reference vectors assigned to a category different from that of the sample vector; a calculation unit (103) that calculates an evaluation value of the processing object using a distance between the sample vector and a classification boundary formed by the internal nearest neighbor reference vector and the external nearest neighbor reference vector; and an updating unit (104) that updates an original set of reference vectors and original assigned category information with the processing object based on the evaluation value.
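The evaluation value described above is reminiscent of the hypothesis margin used in learning vector quantization: the signed distance from the sample to the hyperplane bisecting the internal and external nearest-neighbor reference vectors. Treating that bisector as the "classification boundary" is an assumption; the sketch below computes it exactly for Euclidean distance.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def boundary_margin(x, refs, labels, x_label):
    """Specify the internal (same-category) and external (other-category)
    nearest reference vectors for sample x, then return the signed distance
    from x to the hyperplane bisecting them (positive = correct side).
    Using this distance as the evaluation value is an assumption."""
    internal = min((r for r, l in zip(refs, labels) if l == x_label),
                   key=lambda r: dist(x, r))
    external = min((r for r, l in zip(refs, labels) if l != x_label),
                   key=lambda r: dist(x, r))
    d_in, d_out = dist(x, internal), dist(x, external)
    return (d_out ** 2 - d_in ** 2) / (2.0 * dist(internal, external))

# The boundary between (0,0) and (2,0) is the line x = 1; a sample at
# (0.5, 0) labeled with the left category sits 0.5 inside the correct side.
refs = [(0.0, 0.0), (2.0, 0.0)]
labels = ["a", "b"]
margin = boundary_margin((0.5, 0.0), refs, labels, "a")
```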