Abstract:
The image processing apparatus includes a boundary line extraction means that extracts boundary lines of layers from an input image obtained by capturing an image of a target object composed of a plurality of layers. The boundary line extraction means is configured to first extract the boundary lines at the upper and lower ends of the target object, limit the search range using these upper- and lower-end boundary lines to extract another boundary line, limit the search range using the extraction result of that boundary line to extract still another boundary line, and sequentially repeat similar processing to extract the subsequent boundary lines. In another aspect, the image processing apparatus includes a boundary line extraction means that extracts a boundary line of a layer from an input image obtained by capturing an image of a target object composed of a plurality of layers, and a search range setting means that uses a boundary line already extracted by the boundary line extraction means to dynamically set the search range for another boundary line. According to such an image processing apparatus and image processing method, boundary lines of layers can be extracted with a high degree of accuracy from a captured image of a target object composed of a plurality of layers.
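A minimal sketch of the sequential, range-limited extraction idea is shown below. The abstract does not specify the boundary criterion, so a per-column gradient maximum is used as a stand-in, and the function names, the assumption of well-separated layers ordered from top to bottom, and the fixed margins are all illustrative assumptions.

```python
import numpy as np

def extract_boundary(image, top, bottom):
    # For each column, pick the row of strongest vertical gradient inside the
    # per-column search range [top, bottom); a stand-in for the (unspecified)
    # boundary criterion. Assumes the range is non-empty in every column.
    grad = np.abs(np.diff(image.astype(float), axis=0))   # shape (h-1, w)
    rows = np.empty(image.shape[1], dtype=int)
    for col in range(image.shape[1]):
        lo, hi = int(top[col]), int(bottom[col])
        rows[col] = lo + int(np.argmax(grad[lo:hi - 1, col]))
    return rows

def extract_all_boundaries(image, n_boundaries):
    h, w = image.shape
    # 1) Extract the boundary lines at the upper and lower ends of the object.
    upper = extract_boundary(image, np.zeros(w, int), np.full(w, h, int))
    lower = extract_boundary(image, upper + 2, np.full(w, h, int))
    boundaries = [upper, lower]
    # 2) Limit the search range with the already-extracted boundaries and
    #    repeat: each new result narrows the range for the next boundary
    #    (layers assumed to lie between `upper` and `lower`, top to bottom).
    search_top = upper
    for _ in range(n_boundaries - 2):
        b = extract_boundary(image, search_top + 2, lower - 1)
        boundaries.append(b)
        search_top = b
    return boundaries
```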
Abstract:
A three-dimensional tomographic image (B) is formed from a plurality of two-dimensional tomographic images obtained by scanning an ocular fundus. The contour of a certain 2D region (M1, M2, M3, M4) is determined for each tomographic image, and the volume of a certain 3D region is calculated by correcting each area of the 2D region defined by the determined contour, or the accumulated value of those areas, with an image correction coefficient that depends on the diopter of the subject's eye. Even for subjects' eyes of different diopters, the influence of the diopter is thereby eliminated, and a quantitative comparison between subjects' eyes of different diopters becomes possible.
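The volume calculation described here amounts to summing diopter-corrected slice areas and multiplying by the slice spacing. The sketch below assumes the region contours are already available as boolean masks, and `correction_coefficient` is a hypothetical placeholder for the device-specific image correction coefficient as a function of diopter.

```python
import numpy as np

def corrected_volume(region_masks, pixel_area_mm2, slice_spacing_mm,
                     diopter, correction_coefficient):
    """Sum the 2D region areas over all B-scans, applying a diopter-dependent
    correction, then multiply by the slice spacing to obtain a volume.

    region_masks: iterable of boolean 2D arrays, one per tomographic image,
                  True inside the determined contour of the region.
    correction_coefficient: callable mapping diopter -> area scale factor
                            (its exact form is device-specific; assumed here).
    """
    k = correction_coefficient(diopter)
    # Correct each slice area, then accumulate.
    total_area = sum(mask.sum() * pixel_area_mm2 * k for mask in region_masks)
    return total_area * slice_spacing_mm
```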
Abstract:
The tomographic image capturing device of the present invention includes a tomographic image capturing means that scans measurement light over a subject's eye fundus (E) to capture tomographic images of the fundus, and an image processing means that compresses a picture of the captured tomographic images in the scan direction to generate a new tomographic picture. The tomographic image capturing means scans at a second scan pitch (PL) narrower than a first scan pitch (PH) to capture the tomographic images of the subject's eye fundus. The image processing means compresses the picture (B11) of the tomographic images captured at the second scan pitch (PL) in the scan direction to generate the new tomographic picture (B12). The measurement width of the new tomographic picture (B12) in the scan direction corresponds to the measurement width, in the scan direction, of a tomographic picture (Bn (n=1 to 10)) obtained by scanning at the first scan pitch (PH).
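One plausible realization of "compressing the picture in the scan direction" is to average groups of adjacent A-scans so that the fine-pitch picture shrinks to the width of a coarse-pitch picture. The sketch below assumes an integer pitch ratio PH/PL and a 2D array layout with the scan direction along the columns; the averaging choice itself is an assumption, not stated in the abstract.

```python
import numpy as np

def compress_in_scan_direction(fine_picture, pitch_ratio):
    """Compress a picture captured at the narrow pitch PL in the scan (column)
    direction by averaging groups of `pitch_ratio` adjacent A-scans, so the
    result has the width of a picture scanned at the wide pitch PH
    (pitch_ratio = PH / PL, assumed to be an integer here)."""
    depth, width = fine_picture.shape
    usable = (width // pitch_ratio) * pitch_ratio          # drop any remainder
    grouped = fine_picture[:, :usable].reshape(depth, -1, pitch_ratio)
    return grouped.mean(axis=2)
```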
Abstract:
An average image producing means 52 produces an average image from all or some of a plurality of images captured at the same location. A noise extracting means 53 extracts noise pixels on the basis of a comparison between the pixel values of pixels in the captured images and the pixel values of the pixels at the same positions in the average image. An interpolating means 54 interpolates the pixel values of the noise pixels in the captured images using the pixel values of other pixels to produce a noise-eliminated image.
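A small sketch of this average-compare-interpolate flow follows. The deviation threshold and the choice to interpolate from the same position in the other captured images are assumptions; the abstract only says the comparison is against the average image and that "other pixels" are used for interpolation.

```python
import numpy as np

def remove_noise(stack, threshold):
    """stack: (n_images, h, w) array of images captured at the same location.

    A pixel is treated as noise when it deviates from the average image by
    more than `threshold`; its value is then interpolated from the same
    position in the remaining images (one simple interpolation choice)."""
    avg = stack.mean(axis=0)                         # average image (means 52)
    noise = np.abs(stack - avg) > threshold          # noise pixels (means 53)
    cleaned = stack.astype(float).copy()
    for i in range(stack.shape[0]):
        others = np.delete(stack, i, axis=0).mean(axis=0)
        cleaned[i][noise[i]] = others[noise[i]]      # interpolation (means 54)
    return cleaned
```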
Abstract:
Edges of layers are detected from an input image to create a boundary line candidate image that represents the detected edges. The luminance values of the input image are differentiated to create a luminance value-differentiated image that represents the luminance gradients of the layers. An evaluation score image is then created by weighting, at an optimum ratio, a boundary line position probability image and the luminance value-differentiated image. The boundary line position probability image is obtained from the boundary line candidate image and an existence probability image that represents the existence probability of the boundary line to be extracted. The route having the highest total evaluation score is extracted as the boundary line. According to such an image processing apparatus and image processing method, boundary lines of layers can be extracted with a high degree of accuracy from a captured image of a target object composed of a plurality of layers.
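The weighted combination and the extraction of the highest-scoring route can be sketched as below. The dynamic-programming search with a one-pixel step constraint between adjacent columns is a common way to find such a route, but it is an assumption here; the weight parameter and function names are likewise illustrative.

```python
import numpy as np

def evaluation_score_image(probability_image, gradient_image, weight):
    """Weighted combination of the boundary line position probability image
    and the luminance value-differentiated image (weight = assumed ratio)."""
    return weight * probability_image + (1.0 - weight) * gradient_image

def best_route(score_image):
    """Extract, column by column, the route with the highest total evaluation
    score, allowing the row to move by at most one pixel between adjacent
    columns (an assumed connectivity constraint)."""
    h, w = score_image.shape
    acc = score_image.astype(float).copy()
    back = np.zeros((h, w), dtype=int)
    for col in range(1, w):
        for row in range(h):
            lo, hi = max(0, row - 1), min(h, row + 2)
            prev = acc[lo:hi, col - 1]
            best = int(np.argmax(prev))
            acc[row, col] += prev[best]
            back[row, col] = lo + best
    # Trace the best route back from the highest final-column score.
    route = np.empty(w, dtype=int)
    route[-1] = int(np.argmax(acc[:, -1]))
    for col in range(w - 1, 0, -1):
        route[col - 1] = back[route[col], col]
    return route
```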
Abstract:
The tomographic image capturing device of the present invention splits light from a light source (11) into measurement light and reference light, causes the measurement light and the reference light to be incident on an object (E) and a reference object (49), respectively, and captures tomographic images of the object (E) on the basis of interference light generated by superposition of the measurement light reflected from the object (E) and the reference light reflected from the reference object (49). The device comprises a display means (18) that displays tomographic pictures of the object generated on the basis of the captured tomographic images. The tomographic image capturing device has a first image capturing mode and a second image capturing mode, in each of which the measurement light is two-dimensionally scanned over the object (E) by raster scan to capture the tomographic images of the object (E); the raster scan in the second image capturing mode is thinned relative to the raster scan in the first image capturing mode. The display means (18) is switchable between a first display mode and a second display mode. In the first display mode, a plurality of tomographic pictures including a region of interest of the object (E) is selected from among the tomographic pictures generated on the basis of the tomographic images captured in the second image capturing mode, and only the selected tomographic pictures are displayed. In the second display mode, all of the tomographic pictures generated on the basis of the tomographic images captured in the second image capturing mode are displayed in turn. The tomographic images in the first image capturing mode are captured after separately performing a first adjustment operation and a second adjustment operation for adjusting the image capturing conditions necessary for capturing those tomographic images: the first adjustment operation is based on the tomographic pictures displayed in the first display mode, and the second adjustment operation is based on the tomographic pictures displayed in the second display mode.
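The relationship between the two capturing modes and the two display modes can be illustrated with the small sketch below. Representing the thinning as "every n-th scan line" and selecting pictures with a predicate are simplifying assumptions, as are all names; the actual thinning pattern and region-of-interest selection are device-specific.

```python
def second_mode_scan_lines(first_mode_lines, thinning_factor):
    """Second image capturing mode: a raster scan thinned from the first
    mode's raster; here simply every `thinning_factor`-th line (assumed)."""
    return first_mode_lines[::thinning_factor]

def first_display_mode(pictures, contains_region_of_interest):
    """First display mode: display only the pictures that include the region
    of interest; `contains_region_of_interest` is a hypothetical predicate."""
    return [p for p in pictures if contains_region_of_interest(p)]

def second_display_mode(pictures):
    """Second display mode: all pictures from the second capturing mode are
    displayed in turn (here simply returned in order)."""
    return list(pictures)
```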
Abstract:
The image processing device (50) of the present invention comprises: an enhancement processing means (51) that enhances a speckle pattern in an ocular fundus tomographic image; a region-of-interest setting means (52) that sets a desired region in the ocular fundus tomographic image with the enhanced speckle pattern as a region-of-interest; a feature value extracting means (53) that extracts a feature value of the speckle pattern in the region-of-interest; and a disease determining means (54) that makes a disease determination for the ocular fundus on the basis of the feature value. The image processing method of the present invention comprises: a step (S3) of enhancing a speckle pattern in an ocular fundus tomographic image; a step (S4) of setting a desired region in the ocular fundus tomographic image with the enhanced speckle pattern as a region-of-interest; a step (S5) of extracting a feature value of the speckle pattern in the region-of-interest; and a step (S6) of making a disease determination for the ocular fundus on the basis of the feature value.
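A toy end-to-end sketch of this pipeline is given below. The concrete enhancement, feature, and determination algorithms are not specified in the abstract, so contrast normalization, speckle contrast (std/mean), and a simple threshold are used as stand-ins; the threshold value and all function names are illustrative assumptions.

```python
import numpy as np

def enhance_speckle(tomogram):
    """Crude speckle enhancement by global contrast normalization
    (stand-in for the enhancement processing means (51))."""
    img = tomogram.astype(float)
    return (img - img.mean()) / (img.std() + 1e-9)

def speckle_feature(region):
    """One simple speckle feature: intensity contrast (std / mean) inside the
    region-of-interest (stand-in for the feature value extracting means (53))."""
    return region.std() / (abs(region.mean()) + 1e-9)

def determine_disease(feature, threshold=1.0):
    """Toy disease determination (54): threshold on the feature value;
    the threshold is purely illustrative."""
    return feature > threshold

def process(tomogram, roi):
    """roi: (row_slice, col_slice) selecting the region-of-interest (52)."""
    enhanced = enhance_speckle(tomogram)   # step S3
    region = enhanced[roi]                 # step S4
    feature = speckle_feature(region)      # step S5
    return determine_disease(feature)      # step S6
```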