Abstract:
A plurality of candidate points are extracted from image data. The plurality of candidate points are normalized, and a set of representative points composing a form model that is most similar to a set form is selected from the plurality of candidate points. Further, the candidate points and the form model are compared with each other, and correction is performed, for example, by adding a region forming the structure or by deleting a region. Accordingly, the structure is detected in the image data.
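As an illustration of this workflow, the following is a minimal Python sketch of selecting representative points against a stored form model and correcting the result. The centroid-and-scale normalization, the nearest-point selection, and the threshold `dist_threshold` are illustrative assumptions rather than the method defined in the abstract.

```python
import numpy as np

def normalize(points):
    """Translate points to their centroid and scale to unit RMS radius
    (a hypothetical normalization for this sketch)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / (scale + 1e-9)

def select_representative_points(candidates, form_model):
    """For each model point, pick the index of the nearest normalized candidate point."""
    cand = normalize(candidates)
    model = normalize(form_model)
    return [int(np.argmin(np.linalg.norm(cand - m, axis=1))) for m in model]

def correct_by_model(candidates, form_model, selected, dist_threshold=0.5):
    """Compare selected candidates with the form model; a candidate that deviates
    too far is replaced by the model point (adding a missing region and, in effect,
    deleting the spurious detection). The threshold is an assumed value."""
    cand = normalize(candidates)
    model = normalize(form_model)
    corrected = []
    for idx, m in zip(selected, model):
        if np.linalg.norm(cand[idx] - m) > dist_threshold:
            corrected.append(m)          # add a point taken from the model
        else:
            corrected.append(cand[idx])  # keep the detected candidate
    return np.array(corrected)
```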
Abstract:
A processor sequentially acquires a plurality of radiation images of a subject having a body cavity into which an ultrasonic endoscope, to which an ultrasonic imaging device and a radiation-impermeable marker are attached, is inserted; sequentially acquires a plurality of two-dimensional ultrasound images corresponding to the plurality of radiation images, which are acquired by the ultrasonic imaging device; recognizes a position and a posture of the ultrasonic endoscope in the body cavity based on the marker included in each of the plurality of radiation images; and derives a three-dimensional ultrasound image from the plurality of two-dimensional ultrasound images based on the position and the posture of the ultrasonic endoscope recognized with respect to the plurality of radiation images.
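A compact sketch of the reconstruction step follows, assuming the recognized posture is available as a 3x3 rotation matrix and the position as a translation vector per frame, and that the output volume axes follow the world axes. The nearest-voxel compounding and the function name `place_slices_in_volume` are assumptions for illustration, not the processor's actual algorithm.

```python
import numpy as np

def place_slices_in_volume(slices, positions, rotations, volume_shape, voxel_size=1.0):
    """Scatter the pixels of each 2-D ultrasound slice into a 3-D volume using the
    position (3-vector) and posture (3x3 rotation matrix) recognized for the matching
    radiation frame. Nearest-voxel accumulation only; a real system would interpolate
    and compound overlapping samples."""
    volume = np.zeros(volume_shape, dtype=float)
    counts = np.zeros(volume_shape, dtype=float)
    for img, pos, rot in zip(slices, positions, rotations):
        h, w = img.shape
        ys, xs = np.mgrid[0:h, 0:w]                    # pixel grid in the slice plane
        pts = np.stack([xs.ravel(), ys.ravel(), np.zeros(h * w)], axis=1) * voxel_size
        world = pts @ rot.T + pos                      # probe -> world coordinates
        idx = np.round(world / voxel_size).astype(int) # nearest voxel index
        ok = np.all((idx >= 0) & (idx < np.array(volume_shape)), axis=1)
        idx, vals = idx[ok], img.ravel()[ok]
        np.add.at(volume, (idx[:, 0], idx[:, 1], idx[:, 2]), vals)
        np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return np.divide(volume, counts, out=volume, where=counts > 0)
```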
Abstract:
A region specification apparatus, a region specification method, and a region specification program efficiently specify any object included in an input image. A convolutional neural network of an object specification unit generates a convolutional feature map from the input image (S0). A first discriminator selects an anchor based on a similarity in shape and size to a ground truth box including an object candidate from among a plurality of anchors having various shapes and various sizes. The first discriminator specifies an object candidate region in the input image based on the selected anchor.
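For illustration, here is a sketch of anchor selection in which "similarity in shape and size" is approximated by intersection-over-union after placing each candidate anchor at a common center. The anchor shapes and the IoU criterion are assumptions, not necessarily the discriminator's actual measure.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def select_anchor(gt_box, anchor_shapes, center):
    """Among anchors of various widths and heights centered at `center`, pick the
    one most similar in shape and size to the ground-truth box (IoU as the proxy)."""
    cx, cy = center
    best, best_score = None, -1.0
    for w, h in anchor_shapes:
        anchor = (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
        score = iou(anchor, gt_box)
        if score > best_score:
            best, best_score = anchor, score
    return best, best_score
```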
Abstract:
An image acquisition unit sequentially acquires endoscope images of a tubular structure having a plurality of branch structures, and an image generation unit generates an image of the tubular structure. A first certainty factor calculation unit calculates a first certainty factor indicating a possibility of presence of the endoscope within the tubular structure. A second certainty factor calculation unit calculates a second certainty factor indicating a possibility of presence of the endoscope at each of a plurality of positions within the tubular structure by performing matching between the image of the tubular structure and each of the endoscope images. A current position specifying unit specifies the current position of the endoscope based on the first and second certainty factors.
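A minimal sketch of how the two certainty factors might be combined to specify the current position, assuming both are available per candidate position. The normalized-correlation matching and the multiplicative combination are illustrative choices, not the units' actual computations.

```python
import numpy as np

def second_certainty_from_matching(virtual_images, endoscope_image):
    """Second certainty factor per candidate position: normalized correlation between
    the real endoscope image and a virtual image of the tubular structure rendered at
    that position (clipped to non-negative values)."""
    e = (endoscope_image - endoscope_image.mean()) / (endoscope_image.std() + 1e-9)
    scores = []
    for v in virtual_images:
        vz = (v - v.mean()) / (v.std() + 1e-9)
        scores.append(max(0.0, float((e * vz).mean())))
    return np.array(scores)

def specify_current_position(first_certainty, second_certainty):
    """Combine the first and second certainty factors (here, simply by multiplying
    them position-wise) and return the index of the most likely position."""
    combined = np.asarray(first_certainty) * np.asarray(second_certainty)
    return int(np.argmax(combined)), combined
```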
Abstract:
A first image and a second image that represent a subject in different phases are employed. Based on a first insertion position and a first tip position in the first image and on deformation information for deforming the first image so as to be aligned with the second image, a second insertion position in the second image corresponding to the first insertion position, and a second tip position, are specified such that a direction corresponding to a first insertion direction from the first insertion position toward the first tip position becomes a second insertion direction from the second insertion position toward the second tip position. A second observation image is then generated by visualizing the inside of the subject in a phase corresponding to the second image with the second tip position as a viewpoint. Thereby, observation images of different phases as viewed through a virtual rigid surgical device are generated.
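The following sketch illustrates the geometric step, assuming the deformation information is a dense displacement field. The nearest-voxel lookup and the preservation of the first insertion depth (to model a rigid device) are simplifying assumptions for this example.

```python
import numpy as np

def map_point(deformation, point):
    """Map a point from the first image to the second using a dense displacement
    field of shape (Z, Y, X, 3); nearest-voxel lookup for brevity."""
    idx = tuple(np.round(np.asarray(point)).astype(int))
    return np.asarray(point, dtype=float) + deformation[idx]

def second_device_pose(deformation, first_insertion, first_tip):
    """Specify the second insertion position and second tip position so that the
    mapped insertion direction is preserved and the device keeps the same insertion
    depth as in the first image (rigid-device assumption)."""
    second_insertion = map_point(deformation, first_insertion)
    mapped_tip = map_point(deformation, first_tip)
    direction = mapped_tip - second_insertion
    direction /= (np.linalg.norm(direction) + 1e-9)
    depth = np.linalg.norm(np.asarray(first_tip, float) - np.asarray(first_insertion, float))
    second_tip = second_insertion + direction * depth
    return second_insertion, second_tip
```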
Abstract:
An image processing device includes a candidate point extracting unit configured to extract a plurality of candidate points belonging to a predetermined structure from image data, a shape model storing unit configured to store a shape model representing a known shape of the predetermined structure, the shape model being formed by a plurality of model labels having a predetermined connection relationship, and a corresponding point selecting unit configured to select a mapping relationship between the candidate points and the model labels from a set of candidate mapping relationships.
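As a toy illustration of selecting a mapping relationship, the sketch below scores each injective assignment of model labels to candidate points by how well it preserves the model's connection relationship, assuming both point sets are already expressed in a common coordinate frame. The exhaustive search and the particular cost function are assumptions made only for this example.

```python
import itertools
import numpy as np

def select_mapping(candidates, model_points, model_edges):
    """Choose, from all injective assignments of model labels to candidate points,
    the mapping that best preserves the model's connection relationship (scored here
    by squared differences of connected-pair distances plus point-to-label distances).
    Exhaustive search; practical only for small models."""
    candidates = np.asarray(candidates, dtype=float)
    model_points = np.asarray(model_points, dtype=float)
    n_labels = len(model_points)
    best_map, best_cost = None, np.inf
    for assign in itertools.permutations(range(len(candidates)), n_labels):
        pts = candidates[list(assign)]
        cost = np.sum((pts - model_points) ** 2)
        for i, j in model_edges:
            d_c = np.linalg.norm(pts[i] - pts[j])
            d_m = np.linalg.norm(model_points[i] - model_points[j])
            cost += (d_c - d_m) ** 2
        if cost < best_cost:
            best_map, best_cost = dict(zip(range(n_labels), assign)), cost
    return best_map, best_cost
```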
Abstract:
A problem inherent to radiographic images, which may occur when an independent component analysis technique is applied to energy subtraction carried out on radiographic images, is solved to achieve separation of the image components to be separated with higher accuracy. As preprocessing before the independent component analysis, a spatial frequency band which contains the components to be separated is extracted, pixels of the radiographic images are classified into two or more subsets for each radiographic image based on a value of a predetermined parameter, and/or nonlinear pixel value conversion is applied to the radiographic images based on the value of the predetermined parameter. Alternatively, nonlinear independent component analysis is carried out according to a model using the predetermined parameter.
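A rough sketch of the described preprocessing follows, assuming two dual-energy images and an estimated body-thickness map standing in for the "predetermined parameter". The difference-of-Gaussians band extraction, the log conversion, and the use of scikit-learn's FastICA are illustrative stand-ins, not the processing defined in the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import FastICA

def preprocess_and_separate(low_kv, high_kv, thickness, n_bins=3):
    """Preprocessing sketch: (1) extract a mid spatial-frequency band with a
    difference of Gaussians, (2) classify pixels into subsets by a parameter
    (an assumed body-thickness map), (3) apply a nonlinear (log) pixel-value
    conversion, then run ICA separately on each subset."""
    def band(img):
        return gaussian_filter(img, 1.0) - gaussian_filter(img, 4.0)

    x1 = band(np.log(low_kv.astype(float) + 1.0))   # nonlinear conversion + band extraction
    x2 = band(np.log(high_kv.astype(float) + 1.0))
    edges = np.quantile(thickness, np.linspace(0, 1, n_bins + 1))
    separated = np.zeros_like(x1)
    for b in range(n_bins):
        mask = (thickness >= edges[b]) & (thickness <= edges[b + 1])
        if mask.sum() < 16:                          # skip bins with too few pixels
            continue
        obs = np.stack([x1[mask], x2[mask]], axis=1)  # one observation per pixel
        sources = FastICA(n_components=2, random_state=0).fit_transform(obs)
        separated[mask] = sources[:, 0]               # keep one separated component
    return separated
```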