Abstract:
A feature amount acquisition device (100) includes an activation level calculator (12) and a feature amount acquirer (14). The activation level calculator (12) derives, as an activation level, the level at which a unit in a layer of a CNN classifier (11) influences a classification result; the CNN classifier (11) includes a plurality of layers and, by processing in the layers input data based on image data of an input image capturing a first target and a second target around the first target, outputs a classification result of the first target. The feature amount acquirer (14) acquires, based on the derived activation level and the image data of the input image, a feature amount of the input image such that the feature amount of a low activation level image region, that is, a region in the input image corresponding to a second unit having a lower activation level than a first unit, is smaller than the feature amount of a high activation level image region, that is, a region in the input image corresponding to the first unit.
Abstract:
A multi-class identifier identifies the kind of an image and performs detailed identification for a specified group of kinds. The multi-class identifier includes: an identification fault counter that provides a test image including any of the class labels to the kind identifiers so that the kind identifiers individually identify the kind of the provided image, and that counts, for each combination of an arbitrary number of kinds among the plurality of kinds, the number of incorrect determinations among the kinds belonging to the combination; and a grouping processor that, for a group formed from a combination whose count result is equal to or greater than a predetermined threshold, adds a group label corresponding to the group to each learning image that includes the class label corresponding to any of the kinds belonging to the group.
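The fault counting and grouping steps can be sketched as below. This is a simplified, assumed rendering restricted to pairs of kinds (the abstract allows combinations of an arbitrary number of kinds), with string class labels standing in for the labelled test images.

```python
from collections import Counter

def build_groups(true_labels, predicted_labels, threshold):
    """Count misidentifications between pairs of kinds and form a group for
    every pair confused at least `threshold` times, mirroring the
    identification fault counter and grouping processor described above."""
    faults = Counter()
    for t, p in zip(true_labels, predicted_labels):
        if t != p:
            # An unordered pair: confusing cat->dog and dog->cat both count.
            faults[frozenset((t, p))] += 1
    groups = [tuple(sorted(pair)) for pair, n in faults.items() if n >= threshold]
    return faults, groups
```

Kinds that are frequently confused with one another end up in a common group, for which a dedicated detailed identifier can then be trained using the added group labels.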
Abstract:
A state estimation apparatus includes: at least one processor; and a memory configured to store a program executable by the at least one processor; wherein the at least one processor is configured to: acquire a biological signal of a subject, set, in a certain period in which the biological signal is being acquired, a plurality of time windows having mutually different time lengths as a plurality of extraction time windows, extract a feature value of the biological signal in each of the plurality of extraction time windows, and estimate a state of the subject based on the extracted feature values.
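The multi-window feature extraction can be sketched as follows. The choice of mean and standard deviation as the "feature value" is an assumption for illustration; the abstract does not specify which feature is extracted per window.

```python
import numpy as np

def window_features(signal, window_lengths):
    """Extract a feature value (here assumed: mean and standard deviation)
    from the most recent samples of `signal` over several extraction time
    windows of mutually different lengths."""
    feats = []
    for n in window_lengths:
        seg = signal[-n:]  # the trailing window of length n
        feats.extend([float(np.mean(seg)), float(np.std(seg))])
    return feats
```

A downstream estimator would then map this concatenated feature vector to a subject state; using several window lengths lets it see both short-term and long-term behaviour of the signal.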
Abstract:
A similar image display control apparatus includes a processor configured to acquire similar images obtained as a result of a similar image search with respect to a query image, set categories into which the acquired similar images are to be classified, determine, in a space having a prescribed number of dimensions that is no less than two, coordinates indicating a position of each category region in accordance with attributes of types equal in number to the number of dimensions, the category region being a region indicating one of the set categories, classify the acquired similar images into the set categories, place the similar images classified into each category within the category region positioned at the position indicated by the determined coordinates, and cause a display to display the placed similar images.
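The classification-and-placement step can be sketched as below. The per-region row layout and the attribute-derived coordinates are assumptions; the abstract only fixes that each category region sits at coordinates determined from as many attribute types as there are dimensions.

```python
def place_similar_images(images_by_category, category_coords):
    """Place classified similar images inside their category regions.
    `category_coords` maps each category to its (x, y) region origin in a
    two-dimensional display space (two attribute axes assumed)."""
    placed = {}
    for cat, imgs in images_by_category.items():
        x, y = category_coords[cat]
        # Assumed layout rule: lay images out in a row from the region origin.
        placed.update({img: (x + i, y) for i, img in enumerate(imgs)})
    return placed
```

The resulting mapping from image to display coordinates is what the display control would render, grouping visually similar results by category position.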
Abstract:
An image acquisition unit of a machine learning device acquires n learning images assigned with labels to be used for categorization (n is a natural number larger than or equal to 2). A feature vector acquisition unit acquires a feature vector representing a feature from each of the n learning images. A vector conversion unit converts the feature vector for each of the n learning images to a similarity feature vector based on a similarity degree between the learning images. A classification condition learning unit learns a classification condition for categorizing the n learning images, based on the similarity feature vector converted by the vector conversion unit and the label assigned to each of the n learning images. A classification unit categorizes unlabeled testing images in accordance with the classification condition learned by the classification condition learning unit.
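The conversion to similarity feature vectors can be sketched as follows. Cosine similarity is an assumed choice of similarity degree; the abstract fixes only that each learning image's feature vector is re-expressed through its similarities to the learning images.

```python
import numpy as np

def to_similarity_vectors(features):
    """Convert each learning image's raw feature vector into a similarity
    feature vector: entry i of row k is the (here: cosine) similarity
    between learning image k and learning image i."""
    f = np.asarray(features, dtype=float)
    norms = np.linalg.norm(f, axis=1, keepdims=True)
    unit = f / np.clip(norms, 1e-12, None)
    return unit @ unit.T  # row k = similarity feature vector of image k
```

A classification condition (e.g. any off-the-shelf classifier) would then be learned on these rows together with the assigned labels; an unlabeled testing image is converted the same way, via its similarities to the n learning images, before classification.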
Abstract:
Provided are a diagnosis assisting device, an image processing method in the diagnosis assisting device, and a non-transitory storage medium having stored therein a program, which facilitate a grasp of a difference in a diseased area to perform highly precise diagnosis assistance. According to an image processing method in a diagnosis assisting device that diagnoses lesions from a picked-up image, a reference image corresponding to a known first picked-up image relating to lesions is registered in a database, and when diagnosis assistance is performed by comparing a query image corresponding to an unknown second picked-up image relating to lesions with the reference image in the database, an additional reference image is created from the reference image by geometric transformation, or an additional query image is created from the query image by geometric transformation.
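The geometric-transformation step can be sketched as below. The particular transformations (90-degree rotations and a horizontal flip) are assumed examples; the abstract does not enumerate which geometric transformations are applied.

```python
import numpy as np

def augment_by_geometry(image):
    """Create additional reference (or query) images from one picked-up
    image by simple geometric transformations, so that comparison is less
    sensitive to the orientation of the diseased area."""
    views = [image]
    for k in (1, 2, 3):
        views.append(np.rot90(image, k))  # counter-clockwise quarter turns
    views.append(np.fliplr(image))        # horizontal mirror
    return views
```

Comparing a query image against every transformed view of a reference image (or vice versa) lets the best-matching orientation drive the similarity score.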
Abstract:
A digital camera includes an image capturing unit, an image composition unit, and a display control unit. The image capturing unit captures frames at predetermined time intervals. The image composition unit sequentially combines at least part of the image data of a plurality of frames sequentially captured by the image capturing unit at the predetermined time intervals. The display control unit performs control to sequentially display the image data combined by the image composition unit while the image data of the frames are being captured by the image capturing unit at the predetermined time intervals.
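The sequential combine-and-display loop can be sketched as follows. Brightest-pixel compositing is an assumed combination rule (typical for live-composite exposures); the abstract only fixes that frames are combined sequentially and the running composite is displayed while capture continues.

```python
import numpy as np

def running_composite(frames):
    """Sequentially combine captured frames and yield the composite after
    each frame, so a display can show the partial result while the image
    capturing unit is still producing frames. Combination rule assumed:
    keep the brighter pixel of composite and new frame."""
    composite = None
    for frame in frames:
        composite = frame if composite is None else np.maximum(composite, frame)
        yield composite.copy()
```

Each yielded array is what the display control unit would show for that interval; the final yield is the fully combined image.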
Abstract:
A multi-class discriminating device for judging into which class a feature represented by data falls. The device has a first unit for generating plural first hierarchical discriminating devices for discriminating one from N, and a second unit for combining the score values output respectively from the plural first hierarchical discriminating devices to generate a second hierarchical feature vector, and for entering the second hierarchical feature vector to generate plural second hierarchical discriminating devices for discriminating one from N. When data is entered, the plural first hierarchical discriminating devices output score values, and these score values are combined together to generate the second hierarchical feature vector. When the second hierarchical feature vector is entered, the second hierarchical discriminating device that outputs the maximum score value is selected. The class corresponding to the selected second hierarchical discriminating device is discriminated as the class into which the feature represented by the entered data falls.
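The two-level discrimination at inference time can be sketched as below. The score functions are assumed callables (one one-versus-rest scorer per class at each level); the abstract does not specify how they are trained.

```python
def discriminate(x, first_level, second_level):
    """Two-level one-vs-rest discrimination: the first-level scorers map the
    input to a second hierarchical feature vector of score values, and the
    class whose second-level discriminator outputs the maximum score is
    returned. `first_level` and `second_level` map class -> score function
    (assumed interface)."""
    # First hierarchy: one score per class, combined into a feature vector.
    second_feature = [f(x) for f in first_level.values()]
    # Second hierarchy: score the combined vector, pick the maximum.
    scores = {cls: g(second_feature) for cls, g in second_level.items()}
    return max(scores, key=scores.get)
```

Feeding the first-level scores into a second discriminating layer lets the device correct systematic confusions that a single one-versus-rest stage would make.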
Abstract:
The image acquisition unit 41 acquires an image including an object. A primary selection unit 42 selects at least one flower type for the relevant natural object by comparing information related to the shape of the natural object included as the object in the image acquired by the image acquisition unit 41 with information related to the respective shapes of a plurality of types prepared in advance. The secondary selection unit 43 then selects, for each of the at least one flower type selected by the primary selection unit 42, data of a representative image from among data of a plurality of images of different colors of the same flower type prepared in advance, based on information related to the color of the relevant natural object included as the object in the image acquired by the image acquisition unit 41.
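The two-stage selection can be sketched as follows. The exact-match shape comparison, the scalar color values, and the dictionary layouts are all assumed simplifications of the shape and color information the abstract refers to.

```python
def select_representative(obj_shape, obj_color, shape_db, color_db):
    """Primary selection: pick flower types whose stored shape matches the
    object's shape. Secondary selection: for each candidate type, pick the
    stored image whose color is closest to the object's color.
    `color_db` maps type -> list of (color_value, image_name) pairs
    (assumed representation)."""
    candidates = [t for t, s in shape_db.items() if s == obj_shape]
    representative = {}
    for t in candidates:
        # Closest color wins; absolute difference is an assumed metric.
        representative[t] = min(color_db[t], key=lambda item: abs(item[0] - obj_color))[1]
    return representative
```

Splitting the decision this way keeps the expensive color comparison limited to the few types that already pass the shape filter.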
an inputter for allowing a user to input a query image to initiate a diagnostic process, a processor configured to input the query image into a disease identifier for generating a plurality of indexes, acquire a malignant index of the plurality of indexes representing a possibility that an attribute of a disease of a diagnosis target area is malignant and a first disease attribute index of the plurality of indexes representing a possibility that an attribute of the disease of the diagnosis target area is a prescribed first disease attribute, and cause the acquired malignant index and the acquired first disease attribute index to be displayed in association with each other on a display.