Abstract:
An identification apparatus includes a processor and a memory configured to store a program to be executed by the processor. The processor acquires first image data obtained by capturing an image of an affected area of skin or mucosa by receiving first reception light. The first reception light is reflection light reflected from the affected area irradiated with first irradiation light including white light. The processor further acquires second image data obtained by capturing an image of the affected area by receiving second reception light. The second reception light includes light generated by a fluorescent reaction in the affected area irradiated with second irradiation light. The second irradiation light includes light that causes the affected area to exhibit a fluorescent reaction when irradiated. The processor identifies the affected area based on the first image data and the second image data.
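A minimal sketch of the two-image identification idea, assuming both captures are available as float RGB arrays in [0, 1]. The function name, channel choices, and thresholds are illustrative assumptions, not the apparatus's actual implementation.

import numpy as np

def identify_affected_area(white_light_img, fluorescence_img,
                           redness_thresh=0.15, fluor_thresh=0.20):
    """Classify an affected area from a white-light image and a
    fluorescence image (both HxWx3 float arrays in [0, 1])."""
    # Feature from the first image: relative redness under white light.
    redness = white_light_img[..., 0] - white_light_img[..., 1]
    # Feature from the second image: mean fluorescence intensity (green channel assumed).
    fluorescence = fluorescence_img[..., 1]

    red_score = float(np.mean(redness))
    fluor_score = float(np.mean(fluorescence))

    # Combine both observations into a simple rule-based decision.
    if fluor_score > fluor_thresh and red_score > redness_thresh:
        return "affected"
    return "not affected"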
Abstract:
Provided are a diagnosis assisting device that helps a user grasp differences in an affected area so as to provide highly precise diagnosis assistance, an image processing method in the diagnosis assisting device, and a program. An image processing method in a diagnosis assisting device that diagnoses lesions from a picked-up image includes (A) performing image processing on the picked-up image. In (A), when an image correction is performed, a peripheral area other than a diagnosis area that has a high probability of being diseased in the picked-up image is set as the measuring area.
Abstract:
An image generating apparatus including a processor and a memory having instructions stored therein which are executable by the processor to cause the processor to: cause a display to separately and simultaneously display (i) a panoramic image in progress corresponding to images that have been captured by an image capturing unit, and (ii) a live-view image corresponding to an image newly captured by the image capturing unit while images are being captured; receive an instruction to terminate image capture processing for generating panoramic image data while the panoramic image in progress and the live-view image are being displayed; and perform, in response to reception of the instruction, control to record, as a completed panoramic image, panoramic image data generated from the images captured by the time reception of the instruction is finished, even if images have not yet been captured over the entire predetermined capture range set in advance for a full panoramic image.
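A rough sketch of the described behaviour: show the panorama-in-progress alongside the live view and let the user stop early, keeping whatever has been stitched so far. cv2.Stitcher is used here only as a stand-in for the apparatus's own composition logic; the window names and key bindings are assumptions.

import cv2

cap = cv2.VideoCapture(0)
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
captured = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("live view", frame)           # live-view image

    key = cv2.waitKey(30) & 0xFF
    if key == ord('c'):                      # capture one frame into the set
        captured.append(frame.copy())
        if len(captured) >= 2:
            status, pano = stitcher.stitch(captured)
            if status == cv2.Stitcher_OK:
                cv2.imshow("panorama in progress", pano)
    elif key == ord('q'):                    # terminate before the full range is covered
        break

# Record the panorama built from the images captured so far, even if the
# predetermined capture range was never completed.
if len(captured) >= 2:
    status, pano = stitcher.stitch(captured)
    if status == cv2.Stitcher_OK:
        cv2.imwrite("panorama.jpg", pano)
cap.release()
cv2.destroyAllWindows()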
Abstract:
In a histogram intersection calculating apparatus, a histogram intersection calculating unit calculates a histogram intersection that compares the histograms of query data and target data to obtain a score value of the histogram intersection. A calculation controlling unit causes the histogram intersection calculating unit to calculate the histogram intersection over the bins of the query data histogram in descending order of frequency value, starting from the bin having the maximum frequency value, using the frequency values in the bins of the query data and the frequency values in the corresponding bins of the target data, to obtain the score value of the histogram intersection.
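An illustrative sketch of histogram intersection computed bin by bin, visiting the query histogram's bins in descending order of frequency. The early-exit threshold is an assumption included only to show why this ordering can be useful; the apparatus's actual control logic is not specified here.

import numpy as np

def histogram_intersection(query, target, good_enough=None):
    """query, target: 1-D arrays of bin frequencies of equal length."""
    order = np.argsort(query)[::-1]   # query bins sorted by frequency, high to low
    score = 0.0
    for b in order:
        score += min(query[b], target[b])
        # Because the largest query bins are visited first, the score grows
        # quickly and a comparison can stop early once it is decided.
        if good_enough is not None and score >= good_enough:
            break
    return score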
Abstract:
In the present invention, a database stores feature information in association with flower sample images, flower names, leaf sample images, and images indicating attention points for narrowing down the flower names. An extracting section extracts flower sample images having a high similarity to the image of the imaged flower as candidate images by comparing feature information of the image of the imaged flower with the feature information stored in the database. A control section causes the image of the imaged flower, the extracted candidate images, the flower names corresponding to the candidate images, and the attention points for narrowing down the candidate images to be arranged and displayed on a display section, and changes the candidate images to their respective leaf sample images for display. The control section also changes the candidate images to images indicating their respective attention points and causes them to be displayed on the display section.
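A minimal sketch of the candidate-extraction step, assuming each database row holds a feature vector together with the associated sample images, flower name, and attention-point image. The field names and the cosine-similarity measure are illustrative assumptions, not the invention's stated feature comparison.

import numpy as np

def extract_candidates(query_features, database, top_k=5):
    """database: list of dicts with keys 'features', 'flower_name',
    'flower_image', 'leaf_image', 'attention_image'."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    scored = [(cosine(query_features, row['features']), row) for row in database]
    scored.sort(key=lambda s: s[0], reverse=True)
    # Return the most similar sample images together with the information the
    # display section shows alongside them (names, leaf images, attention points).
    return [row for _, row in scored[:top_k]]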
Abstract:
An image processing method in a diagnosis assisting device that diagnoses lesions from a picked-up image, the method including (A) performing an image correction on the picked-up image for diagnosis, and (B) obtaining an input image to an identifier that identifies diseases based on the picked-up image having undergone the image correction. In (A), when a brightness correction is performed as the image correction, a peripheral area other than a diagnosis area that has a high probability of being diseased in the picked-up image is set as the measuring area, a brightness histogram is created for the measuring area, a correction gain value is calculated based on a peak value of the created brightness histogram, and each pixel in a color space is corrected by using the calculated correction gain value.
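A sketch of the brightness-correction step, assuming a binary mask that marks the suspected diagnosis area and a target brightness level; the mask, the target value, and the choice of the V channel of HSV are assumptions made for illustration rather than the method's specified color space.

import numpy as np
import cv2

def brightness_correct(image_bgr, diagnosis_mask, target_peak=180):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[..., 2]

    # Measuring area = peripheral pixels outside the likely-diseased region.
    measuring = v[diagnosis_mask == 0]

    # Brightness histogram of the measuring area and its peak value.
    hist, edges = np.histogram(measuring, bins=256, range=(0, 256))
    peak = edges[np.argmax(hist)]

    # Correction gain derived from the histogram peak, applied to every pixel.
    gain = target_peak / max(peak, 1.0)
    hsv[..., 2] = np.clip(v * gain, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)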
Abstract:
A digital camera includes an image capturing unit, an image composition unit, and a display control unit. The image capturing unit captures frames at predetermined time intervals. The image composition unit sequentially combines at least a part of the image data from the image data of a plurality of frames sequentially captured by the image capturing unit at the predetermined time intervals. The display control unit performs control to sequentially display the image data combined by the image composition unit while the image data of the frames are being captured by the image capturing unit at the predetermined time intervals.
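A sketch of sequential combination and display while frames are still being captured. The lighten-composite rule and the 0.5-second interval are assumptions chosen for illustration; the camera's own composition method is not specified here.

import time
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
composite = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Combine at least part of each newly captured frame into the composite.
    composite = frame if composite is None else np.maximum(composite, frame)
    # Display the combined image while capturing continues.
    cv2.imshow("composite so far", composite)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    time.sleep(0.5)                      # predetermined capture interval

cap.release()
cv2.destroyAllWindows()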
Abstract:
An image capture apparatus includes: an image capturing unit; a first image capture controller configured to control the image capturing unit to capture a set of a plurality of images while changing the exposure time for each of the images; a second image capture controller configured to obtain a plurality of such sets of images; an addition combination section configured to perform image alignment on the images contained in each of the sets and to perform addition combination on the aligned images to generate a combined image; a combination controller configured to control the addition combination section to perform the image alignment and the addition combination for each of the sets to obtain a plurality of combined images; a selector configured to evaluate the combined images and to select the most highly evaluated one of the combined images as a selected image; and a storage configured to store the selected image.
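A sketch of the set-wise combine-then-select flow: align the bracketed frames in each set, add-combine them (here as a scaled average), and keep the most highly evaluated result. ECC-based alignment and a variance-of-Laplacian sharpness score are assumptions standing in for the apparatus's own alignment and evaluation criteria.

import numpy as np
import cv2

def align_to(ref, img):
    ref_g = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    img_g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)
    _, warp = cv2.findTransformECC(ref_g, img_g, warp, cv2.MOTION_TRANSLATION)
    return cv2.warpAffine(img, warp, (ref.shape[1], ref.shape[0]),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

def combine_set(frames):
    ref = frames[0]
    aligned = [ref] + [align_to(ref, f) for f in frames[1:]]
    # Addition combination, normalized back to the 8-bit range.
    return np.clip(np.mean(aligned, axis=0), 0, 255).astype(np.uint8)

def sharpness(img):
    return cv2.Laplacian(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), cv2.CV_64F).var()

def select_best(sets_of_frames):
    combined = [combine_set(s) for s in sets_of_frames]
    return max(combined, key=sharpness)   # store this as the selected image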
Abstract:
In an object searching apparatus for searching through a database of objects, an image pickup unit repeatedly shoots a subject while moving its optical axis to obtain plural pieces of image data. A distance from the image pickup unit to the subject is calculated based on the plural pieces of image data, and a main object of the subject is clipped from the obtained image data. A calculating unit calculates a real size of the main object of the subject based on the size of the clipped main object on the image data, the calculated distance from the image pickup unit to the subject, and the focal length of the image pickup unit. A searching unit accesses the database to search for the sort of the main object of the subject, using the calculated real size of the main object.
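A worked sketch of the real-size computation: similar triangles through the lens give real size = (object size on the sensor) x distance / focal length. The sensor pixel pitch below is an assumed example value, not a parameter given by the apparatus, and the search step that uses the result is omitted.

def real_object_size(object_px, distance_mm, focal_length_mm,
                     pixel_pitch_mm=0.0015):
    """object_px: size of the clipped main object in pixels on the image.
    Returns the estimated real size of the object in millimetres."""
    size_on_sensor_mm = object_px * pixel_pitch_mm
    return size_on_sensor_mm * distance_mm / focal_length_mm

# Example: an 800-pixel-wide object at 500 mm with a 6 mm lens
# -> 800 * 0.0015 mm = 1.2 mm on the sensor -> 1.2 * 500 / 6 = 100 mm wide.
print(real_object_size(800, 500, 6))   # 100.0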