Abstract:
An image identification apparatus includes the following components. A first generative model creation unit extracts feature information from identification-target images which belong to an identification-target category, and creates a first generative model on the basis of the feature information. A classification unit applies the first generative model to each not-identification-target image which belongs to a not-identification-target category so as to determine a probability of the not-identification-target image belonging to the identification-target category, and classifies the not-identification-target image into a corresponding one of not-identification-target groups in accordance with the probability. A second generative model creation unit extracts feature information from the not-identification-target images which belong to each of the not-identification-target groups, and creates a second generative model of each not-identification-target group on the basis of the corresponding feature information.
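A minimal sketch of this flow follows, assuming the image features have already been extracted as fixed-length vectors and using Gaussian mixture models as the generative models; the mixture model, the number of groups, and the quantile-based grouping rule are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
target_feats = rng.normal(0.0, 1.0, size=(200, 16))     # identification-target category
non_target_feats = rng.normal(2.0, 1.5, size=(300, 16))  # not-identification-target category

# First generative model: created from identification-target feature information.
first_model = GaussianMixture(n_components=2, random_state=0).fit(target_feats)

# Classify each not-identification-target image into a group in accordance with
# the probability (here, log-likelihood) of it belonging to the target category.
scores = first_model.score_samples(non_target_feats)
bins = np.quantile(scores, [0.33, 0.66])
groups = np.digitize(scores, bins)                        # 3 not-identification-target groups

# Second generative model: one per not-identification-target group.
second_models = {
    g: GaussianMixture(n_components=2, random_state=0).fit(non_target_feats[groups == g])
    for g in np.unique(groups)
}
print({g: m.converged_ for g, m in second_models.items()})
```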
Abstract:
An information processing apparatus includes an acquisition unit that acquires, from plural pieces of first information viewed by a user, content information describing the contents of the first information; an extraction unit that extracts, from second information that is a target of the user's work, a location at which the user has performed editing; and a specifying unit that specifies, from among the plural pieces of first information and on the basis of the content information, the first information viewed during a period in which the user performs editing at the location, or viewed before and after the editing.
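A minimal sketch of the specifying step follows, assuming simple timestamped records of view events for the first information and edit periods at locations in the second information; the record layout and the before/after margin are illustrative assumptions.

```python
from datetime import datetime, timedelta

views = [  # (first-information item viewed, time viewed)
    ("doc-A", datetime(2024, 5, 1, 10, 5)),
    ("doc-B", datetime(2024, 5, 1, 10, 40)),
    ("doc-C", datetime(2024, 5, 1, 12, 0)),
]
edits = [  # (location in the work target, edit start, edit end)
    ("section-3", datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 11, 0)),
]
margin = timedelta(minutes=15)  # "viewed before and after the editing"

def specify_viewed(edits, views, margin):
    # Specify, per edited location, the items viewed during or around the editing period.
    result = {}
    for location, start, end in edits:
        result[location] = [doc for doc, t in views
                            if start - margin <= t <= end + margin]
    return result

print(specify_viewed(edits, views, margin))  # {'section-3': ['doc-A', 'doc-B']}
```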
Abstract:
A non-transitory computer readable medium stores an information presenting program causing a computer to execute a process. The process includes: specifying a type of behavior of a working person within a structure; associating a customer within the structure with the working person; obtaining sales information concerning the customer; and registering the customer, the specified type of behavior of the working person, and the obtained sales information as information concerning the working person and presenting requested information from among the registered information.
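A minimal sketch of the registration and presentation steps follows, using in-memory records; the field names and the behavior label are illustrative assumptions.

```python
records = []

def register(working_person, behavior_type, customer, sales_info):
    # Register the customer, the specified behavior type, and the sales information
    # as information concerning the working person.
    records.append({
        "working_person": working_person,
        "behavior": behavior_type,   # specified type of behavior within the structure
        "customer": customer,        # customer associated with the working person
        "sales": sales_info,         # sales information concerning the customer
    })

def present(working_person):
    # Present requested information from among the registered information.
    return [r for r in records if r["working_person"] == working_person]

register("clerk-01", "serving", "customer-123", {"amount": 4200, "items": 3})
print(present("clerk-01"))
```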
Abstract:
An image processing apparatus includes a first acquiring unit that acquires an image to be processed; a setting unit that sets multiple partial image areas in the image to be processed; a second acquiring unit that acquires a first classification result indicating a possibility that an object of a specific kind is included in each of the multiple partial image areas; and a generating unit that generates a second classification result indicating a possibility that the object of the specific kind is included in the image to be processed on the basis of the first classification result of each of the multiple partial image areas.
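A minimal sketch of the two-stage classification follows, assuming a per-area classifier that returns a probability for the specific object kind; the window size, stride, and the noisy-OR aggregation rule are illustrative assumptions.

```python
import numpy as np

def set_partial_areas(h, w, win=64, stride=32):
    # Setting unit: multiple partial image areas over the image to be processed.
    return [(y, x, win, win)
            for y in range(0, h - win + 1, stride)
            for x in range(0, w - win + 1, stride)]

def first_classification(image, area, classify_window):
    # Second acquiring unit: possibility that the object is included in one area.
    y, x, ah, aw = area
    return classify_window(image[y:y + ah, x:x + aw])

def second_classification(image, classify_window):
    # Generating unit: combine per-area results into an image-level result.
    areas = set_partial_areas(*image.shape[:2])
    probs = np.array([first_classification(image, a, classify_window) for a in areas])
    return 1.0 - np.prod(1.0 - probs)  # noisy-OR over the partial image areas

rng = np.random.default_rng(0)
image = rng.random((128, 128))
dummy_classifier = lambda patch: float(patch.mean())  # stand-in per-area model
print(round(second_classification(image, dummy_classifier), 3))
```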
Abstract:
There is provided a non-transitory computer-readable medium storing a program causing a computer to execute a process. The process includes: acquiring posted information items, each of the posted information items including at least one of a text information item and an image information item; generating text information items that include text items by removing image information items from the posted information items, and classifying the text items into first categories; generating image information items that include images by removing text information items from the posted information items, and classifying the images into second categories; associating the classified text items and the classified images with each other on the basis of the first and second categories to obtain results; and outputting the text items and the images for each of the results.
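A minimal sketch of the separation, classification, and association steps follows, using toy keyword-based classifiers; the category names and classifier rules are illustrative assumptions.

```python
posts = [
    {"text": "great ramen downtown", "image": "ramen.jpg"},
    {"text": "trail run this morning", "image": None},
    {"text": None, "image": "mountain.jpg"},
]

def classify_text(text):
    return "food" if "ramen" in text else "outdoors"        # first categories

def classify_image(image_name):
    return "food" if "ramen" in image_name else "outdoors"  # second categories

# Generate text-only and image-only information items, then classify them.
texts = [(p["text"], classify_text(p["text"])) for p in posts if p["text"]]
images = [(p["image"], classify_image(p["image"])) for p in posts if p["image"]]

# Associate classified texts and images that share a category, then output each result.
for category in {c for _, c in texts} | {c for _, c in images}:
    matched_texts = [t for t, c in texts if c == category]
    matched_images = [i for i, c in images if c == category]
    print(category, matched_texts, matched_images)
```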
Abstract:
A person identification device includes: an extractor that, from an image containing a person, extracts a first characteristic quantity related to a face of the person and a second characteristic quantity related to a body of the person; and an identifying unit that identifies the person based on a third characteristic quantity calculated by assigning a weight to each of the first characteristic quantity and the second characteristic quantity extracted.
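A minimal sketch of the weighted combination and identification follows, assuming the face and body characteristic quantities are fixed-length vectors and that identification is a nearest-neighbor match against enrolled persons; the weights and the distance metric are illustrative assumptions.

```python
import numpy as np

def third_quantity(face_feat, body_feat, w_face=0.7, w_body=0.3):
    # Assign a weight to each extracted characteristic quantity and combine them.
    return np.concatenate([w_face * face_feat, w_body * body_feat])

enrolled = {  # person id -> (face characteristic quantity, body characteristic quantity)
    "alice": (np.array([0.9, 0.1]), np.array([0.2, 0.8])),
    "bob":   (np.array([0.1, 0.9]), np.array([0.7, 0.3])),
}

def identify(face_feat, body_feat):
    # Identify the person based on the third characteristic quantity.
    query = third_quantity(face_feat, body_feat)
    gallery = {pid: third_quantity(f, b) for pid, (f, b) in enrolled.items()}
    return min(gallery, key=lambda pid: np.linalg.norm(gallery[pid] - query))

print(identify(np.array([0.85, 0.15]), np.array([0.25, 0.75])))  # -> 'alice'
```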
Abstract:
An image processing apparatus includes a reception section, an image extraction section, a forming section, and a comparison section. The reception section receives a video. The image extraction section extracts target object images from multiple frames that constitute the video received by the reception section. The forming section forms, into one unit, multiple target object images that are temporally apart from each other, from among the target object images extracted by the image extraction section. The comparison section makes a comparison on the basis of the unit formed by the forming section.
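A minimal sketch of the forming and comparison steps follows, assuming each frame's target object image is represented by a feature vector and that "temporally apart" means sampling every n-th frame; the sampling interval and the cosine-similarity comparison are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = [rng.random(8) for _ in range(30)]  # stand-in per-frame target object features

def form_unit(object_feats, interval=5):
    # Forming section: group target object images that are temporally apart into one unit.
    return np.stack(object_feats[::interval])

def compare(unit_a, unit_b):
    # Comparison section: mean cosine similarity between two formed units.
    a = unit_a / np.linalg.norm(unit_a, axis=1, keepdims=True)
    b = unit_b / np.linalg.norm(unit_b, axis=1, keepdims=True)
    return float((a @ b.T).mean())

unit_1 = form_unit(frames[:15])   # frames 0, 5, 10
unit_2 = form_unit(frames[15:])   # frames 15, 20, 25
print(round(compare(unit_1, unit_2), 3))
```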
Abstract:
A data classification device includes an estimation unit that estimates, for each of one or more classes provided for learning data pieces in a feature-amount-data space that includes multiple learning data pieces, probability densities of learning data pieces belonging to the class and learning data pieces not belonging to the class around a judgment target data piece in the feature-amount-data space, a calculation unit that calculates, based on the probability densities, an index indicating how likely the judgment target data piece is to belong to the class, and a judgment unit that judges which class the judgment target data piece belongs to by using the index. Based on the distribution of positive data pieces belonging to the class and negative data pieces not belonging to the class around the judgment target data piece, the estimation unit determines a size of a region used for the estimation.
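A minimal sketch of the estimation and judgment follows, assuming a k-nearest-neighbor style density estimate in which the region size is taken from the distance to the k-th neighbor among positives and negatives combined, and the index is the ratio of positive to total counts inside that region; k and the ratio-based index are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
positives = rng.normal(0.0, 1.0, size=(100, 2))  # learning data belonging to the class
negatives = rng.normal(3.0, 1.0, size=(100, 2))  # learning data not belonging to the class
query = np.array([0.5, 0.2])                     # judgment target data piece

def judge(query, positives, negatives, k=15):
    # Region size is determined from the local distribution of positives and negatives.
    all_data = np.vstack([positives, negatives])
    dists = np.linalg.norm(all_data - query, axis=1)
    radius = np.sort(dists)[k - 1]
    in_pos = np.sum(np.linalg.norm(positives - query, axis=1) <= radius)
    in_neg = np.sum(np.linalg.norm(negatives - query, axis=1) <= radius)
    index = in_pos / max(in_pos + in_neg, 1)     # how likely the query belongs to the class
    return index, index >= 0.5

print(judge(query, positives, negatives))
```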
Abstract:
A device comprises: a nucleus-candidate-region extracting section that extracts, from a captured image obtained by image-capturing a sample piece including a target cell having a nucleus, a nucleus candidate region corresponding to the nucleus; a basic-probability-information acquiring section that acquires, for each of a plurality of determination subject regions determined on the basis of the nucleus candidate region extracted by the nucleus-candidate-region extracting section, basic probability information indicating probability that an image in the determination subject region is an image of the target cell, on the basis of a feature amount of the image of the determination subject region; and a probability-information calculating section that calculates probability information indicating probability that an image in a display subject region corresponding to the nucleus candidate region is the image of the target cell, on the basis of the basic probability information acquired for each of the plurality of determination subject regions.
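A minimal sketch of the probability-information calculation follows, assuming the basic probability for each determination subject region comes from some per-region scorer applied to a simple feature amount; the placement of the determination subject regions, the dummy scorer, and the max-style combination rule are illustrative assumptions.

```python
import numpy as np

def determination_regions(nucleus_candidate, offsets=(-8, 0, 8)):
    # Plural determination subject regions determined on the basis of a nucleus candidate region.
    y, x, h, w = nucleus_candidate
    return [(y + dy, x + dx, h, w) for dy in offsets for dx in offsets]

def basic_probability(image, region, score_region):
    # Basic probability that the region's image is an image of the target cell,
    # based on a feature amount of that region (here, mean intensity).
    y, x, h, w = region
    patch = image[max(y, 0):y + h, max(x, 0):x + w]
    return score_region(patch)

def probability_for_display_region(image, nucleus_candidate, score_region):
    # Probability information for the display subject region, calculated from the
    # basic probability information of each determination subject region.
    probs = [basic_probability(image, r, score_region)
             for r in determination_regions(nucleus_candidate)]
    return max(probs)

rng = np.random.default_rng(0)
captured = rng.random((256, 256))
dummy_scorer = lambda patch: float(patch.mean())  # stand-in for a trained scorer
print(round(probability_for_display_region(captured, (100, 120, 32, 32), dummy_scorer), 3))
```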
Abstract:
An image-feature-value calculating unit extracts an image feature value of an image of a cell candidate area. An NRBC discriminating unit uses a pre-trained discriminator to identify whether or not a target cell is shown in the cell candidate area, on the basis of the image feature value of the image of the cell candidate area. When the cell candidate area is identified as an area in which a target cell is shown, a discrimination result display unit displays the image of the cell candidate area, and a discriminator training unit trains the discriminator by using the image feature value of the image of the cell candidate area as a training sample, on the basis of a user input about whether or not a target cell is shown in the cell candidate area.
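A minimal sketch of the discrimination and user-feedback training loop follows, assuming an incrementally trainable linear classifier stands in for the pre-trained discriminator; the feature extraction and the source of the user input are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
# Pre-train the discriminator on initial labeled image feature values.
X0 = rng.normal(0.0, 1.0, size=(200, 10))
y0 = (X0[:, 0] > 0).astype(int)              # 1 = target cell shown
discriminator = SGDClassifier(random_state=0)
discriminator.partial_fit(X0, y0, classes=np.array([0, 1]))

def process_candidate(feature_value, user_says_target_shown):
    # NRBC discriminating unit: identify whether a target cell is shown in the area.
    if discriminator.predict(feature_value.reshape(1, -1))[0] == 1:
        print("display the image of the cell candidate area")  # discrimination result display
        # Discriminator training unit: retrain using the user's judgment as the label.
        discriminator.partial_fit(feature_value.reshape(1, -1),
                                  np.array([int(user_says_target_shown)]))

process_candidate(rng.normal(0.5, 1.0, size=10), user_says_target_shown=True)
```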