Abstract:
Provided are a diagnosis assisting device, an image processing method in the diagnosis assisting device, and a non-transitory storage medium storing a program that facilitate grasping a difference in a diseased area so as to perform highly precise diagnosis assistance. According to an image processing method in a diagnosis assisting device that diagnoses lesions from a picked-up image, a reference image corresponding to a known first picked-up image relating to lesions is registered in a database, and, when diagnosis assistance is performed by comparing a query image corresponding to an unknown second picked-up image relating to lesions with the reference image in the database, an additional reference image is created from the reference image by geometric transformation, or an additional query image is created from the query image by geometric transformation.
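As a minimal illustration of the geometric-transformation step, the following Python sketch creates additional images from one registered reference image (or, equally, from a query image); the function name and the particular set of rotations and flips are assumptions made for the example, not details taken from the abstract.

import numpy as np

def augment_by_geometric_transform(image: np.ndarray) -> list[np.ndarray]:
    """Create additional images from one reference (or query) image by
    simple geometric transformations: 90-degree rotations and mirror flips."""
    variants = []
    for k in range(4):                       # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # horizontal mirror of each rotation
    return variants

reference = np.random.rand(64, 64, 3)        # stand-in for a picked-up reference image
augmented_references = augment_by_geometric_transform(reference)
print(len(augmented_references))             # 8 geometric variants to compare against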
Abstract:
A multi-class identifier identifies the kind of an image, and identifies in further detail the kinds belonging to a specified group. The multi-class identifier includes: an identification fault counter that provides a test image carrying one of the class labels to the kind identifiers so that the kind identifiers individually identify the kind of the provided image, and that counts, for each combination of an arbitrary number of kinds among the plurality of kinds, the number of times the image is incorrectly determined as one of the kinds belonging to that combination; and a grouping processor that, for a group formed from a combination whose count result is equal to or greater than a predetermined threshold, adds a group label corresponding to the group to each learning image carrying a class label corresponding to any of the kinds belonging to the group.
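The counting and grouping steps can be pictured with the short Python sketch below; treating only pairs of kinds as the combinations, and the function and label names, are assumptions made for illustration.

from collections import Counter

def group_confusable_kinds(true_labels, predicted_labels, threshold):
    """Count incorrect determinations between each pair of kinds and form a
    group from every pair whose confusion count reaches the threshold."""
    confusion = Counter()
    for truth, pred in zip(true_labels, predicted_labels):
        if truth != pred:
            confusion[frozenset((truth, pred))] += 1   # unordered pair of confused kinds
    groups = [pair for pair, count in confusion.items() if count >= threshold]
    # A group label corresponding to each group would then be added to every
    # learning image whose class label belongs to that group.
    return {f"group_{i}": sorted(pair) for i, pair in enumerate(groups)}

print(group_confusable_kinds(
    ["tulip", "tulip", "poppy", "rose"],
    ["poppy", "poppy", "tulip", "rose"],
    threshold=2))                                      # {'group_0': ['poppy', 'tulip']}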
Abstract:
The image acquisition unit 41 acquires an image including an object. The primary selection unit 42 selects at least one flower type for the natural object in question by comparing information related to the shape of the natural object included as the object in the image acquired by the image acquisition unit 41 with information related to the respective shapes of a plurality of flower types prepared in advance. The secondary selection unit 43 then selects, for each of the at least one flower type selected by the primary selection unit 42, data of a representative image from among data of a plurality of differently colored images of the same flower type prepared in advance, based on information related to the color of the natural object included as the object in the image acquired by the image acquisition unit 41.
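A rough Python sketch of the two-stage selection is shown below; representing shape and color as feature vectors and comparing them by Euclidean distance are assumptions for the example, as are the data layout and function names.

import numpy as np

def primary_select_by_shape(object_shape, flower_types, top_k=3):
    """Select the flower types whose stored shape information is closest to
    the shape information of the photographed natural object."""
    ranked = sorted(flower_types,
                    key=lambda t: np.linalg.norm(object_shape - t["shape"]))
    return ranked[:top_k]

def secondary_select_by_color(object_color, flower_type, images_by_type):
    """Among the differently colored sample images prepared for one selected
    flower type, pick the representative image closest in color to the object."""
    return min(images_by_type[flower_type["name"]],
               key=lambda img: np.linalg.norm(object_color - img["color"]))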
Abstract:
An image processing method in a diagnosis assisting device that diagnoses lesions from a picked-up image, the method including (A) performing an image correction on the picked-up image for diagnosis, and (B) obtaining an input image to an identifier that identifies diseases based on the picked-up image having undergone the image correction. In (A), when a brightness correction is performed as the image correction, a peripheral area other than a diagnosis area that has a high probability of being diseased in the picked-up image is set as a measuring area, a brightness histogram is created for the measuring area, a correction gain value is calculated based on a peak value of the created brightness histogram, and each pixel in a color space is corrected using the calculated correction gain value.
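A minimal Python sketch of the brightness correction follows; it assumes an RGB image with values in [0, 255] and a binary lesion mask, and the target peak level and histogram binning are illustrative choices rather than values from the abstract.

import numpy as np

def brightness_gain_correction(image, lesion_mask, target_peak=190.0):
    """Measure brightness only in the peripheral (non-lesion) area, take the
    histogram peak as the reference level, and apply one gain to all pixels."""
    luminance = image.mean(axis=2)                        # simple brightness proxy
    peripheral = luminance[~lesion_mask]                  # measuring area = non-lesion pixels
    hist, edges = np.histogram(peripheral, bins=64, range=(0, 255))
    peak = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
    gain = target_peak / max(peak, 1e-6)                  # correction gain from the peak value
    return np.clip(image * gain, 0, 255).astype(np.uint8) # correct every pixel with the gain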
Abstract:
An identification apparatus includes a first one-vs.-rest identifier, a second one-vs.-rest identifier, and a corrector. The first one-vs.-rest identifier identifies a first class among a plurality of classes. The second one-vs.-rest identifier identifies a second class different from the first class among the plurality of classes. The corrector corrects an identification result provided by the first one-vs.-rest identifier using the identification result provided by the second one-vs.-rest identifier.
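One way to picture the corrector is the toy rule below; the specific correction (subtracting a weighted competing score) is purely an assumption for illustration, since the abstract does not state the rule.

def corrected_score(first_score, second_score, weight=0.5):
    """Lower the first-class score when the second one-vs.-rest identifier is
    also confident, so mutually confusable classes are disambiguated."""
    return first_score - weight * max(second_score, 0.0)

print(corrected_score(0.8, 0.7))   # 0.45: strong competing evidence lowers the first result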
Abstract:
An image acquisition unit of a machine learning device acquires n learning images to which labels used for categorization are assigned (n is a natural number larger than or equal to 2). A feature vector acquisition unit acquires a feature vector representing a feature from each of the n learning images. A vector conversion unit converts the feature vector of each of the n learning images into a similarity feature vector based on degrees of similarity among the learning images. A classification condition learning unit learns a classification condition for categorizing the n learning images, based on the similarity feature vectors converted by the vector conversion unit and the label assigned to each of the n learning images. A classification unit categorizes unlabeled testing images in accordance with the classification condition learned by the classification condition learning unit.
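The conversion to similarity feature vectors can be sketched in a few lines of Python; using cosine similarity is an assumption chosen for the example, not necessarily the similarity degree used by the device.

import numpy as np

def to_similarity_features(features):
    """Represent each of the n learning images by its similarities to all
    n learning images, giving an n-dimensional similarity feature vector."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normalized = features / np.clip(norms, 1e-12, None)
    return normalized @ normalized.T                      # n x n cosine similarities

features = np.random.rand(5, 16)                          # 5 learning images, 16-d raw features
similarity_vectors = to_similarity_features(features)     # each row is one converted vector
print(similarity_vectors.shape)                           # (5, 5)

An unlabeled testing image would likewise be described by its similarities to the n learning images before the learned classification condition is applied.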
Abstract:
An identification apparatus includes a processor and a memory configured to store a program to be executed by the processor. The processor acquires first image data obtained by capturing an image of an affected area included in skin or mucosa by receiving first reception light. The first reception light is reflection light reflected from the affected area irradiated with first irradiation light including white light. The processor further acquires second image data obtained by capturing an image of the affected area by receiving second reception light. The second reception light includes light generated by a fluorescent reaction in the affected area irradiated with second irradiation light. The second irradiation light includes light that causes the affected area to show a fluorescent reaction when the affected area is irradiated with it. The processor identifies the affected area based on the first image data and the second image data.
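As a loose illustration only, identification from the two kinds of image data might look like the sketch below once each image has been reduced to a feature vector; the concatenation and the linear decision rule are assumptions, not the apparatus's actual method.

import numpy as np

def identify_affected_area(white_light_features, fluorescence_features, weights, bias):
    """Combine features derived from the first (white-light) and second
    (fluorescence) image data and apply a simple linear decision rule."""
    combined = np.concatenate([white_light_features, fluorescence_features])
    return float(combined @ weights + bias) > 0.0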
Abstract:
A multi-class discriminating device for judging into which class a feature represented by data falls. The device has a first unit for generating plural first hierarchical discriminating devices, each discriminating one class from the N classes, and a second unit for combining the score values output respectively from the plural first hierarchical discriminating devices into a second hierarchical feature vector and for using the second hierarchical feature vector to generate plural second hierarchical discriminating devices, each likewise discriminating one class from the N classes. When data is entered, the plural first hierarchical discriminating devices output score values, and these score values are combined to generate the second hierarchical feature vector. When the second hierarchical feature vector is entered, the second hierarchical discriminating device that outputs the maximum score value is selected, and the class corresponding to the selected second hierarchical discriminating device is discriminated as the class into which the feature represented by the entered data falls.
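A compact Python sketch of the two-hierarchy arrangement is given below, assuming scikit-learn is available; using logistic regression at both hierarchies is an assumption, and the point is only that the first-hierarchy score values become the second-hierarchy feature vector.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X = np.random.rand(200, 10)                  # entered data (placeholder features)
y = np.random.randint(0, 4, size=200)        # N = 4 classes

first_hierarchy = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
scores = first_hierarchy.decision_function(X)          # first-hierarchy score values

second_hierarchy = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(scores, y)

# At discrimination time, the class of the second-hierarchy discriminator with
# the maximum score value is taken as the answer.
new_scores = first_hierarchy.decision_function(X[:1])
print(second_hierarchy.decision_function(new_scores).argmax(axis=1))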
Abstract:
In the present invention, a database has feature information stored in association with flower sample images, flower names, leaf sample images, and images indicating attention points for narrowing down the flower names. An extracting section extracts flower sample images having a high similarity to the image of the imaged flower as candidate images by comparing feature information of the image of the imaged flower with feature information stored in the database. A control section causes the image of the imaged flower, the extracted candidate images, flower names corresponding to the candidate images, and attention points for narrowing down the candidate images to be arranged and displayed on a display section, and changes the candidate images to their respective leaf sample images for display. The control section also changes the candidate images to images indicating their respective attention points and causes them to be displayed on the display section.
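The candidate extraction can be pictured with the small Python sketch below; comparing feature information by Euclidean distance and the layout of the database entries are assumptions for the example.

import numpy as np

def extract_candidates(query_features, database, top_k=3):
    """Return the database entries whose stored feature information is most
    similar to the feature information of the imaged flower."""
    ranked = sorted(database,
                    key=lambda entry: np.linalg.norm(query_features - entry["features"]))
    return ranked[:top_k]    # each entry also carries the flower name, leaf sample
                             # image, and attention points used when switching the display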
Abstract:
Provided are a diagnosis assisting device that enables a user to grasp a difference in an affected area so as to perform highly precise diagnosis assistance, an image processing method in the diagnosis assisting device, and a program. An image processing method in a diagnosis assisting device that diagnoses lesions from a picked-up image includes (A) performing image processing on the picked-up image. In (A), when an image correction is performed, a peripheral area other than a diagnosis area that has a high probability of being diseased in the picked-up image is set as a measuring area.