Abstract:
A hole portion detection unit detects a hole portion of the bronchus from at least one of a first endoscope image or a second endoscope image acquired temporally earlier than the first endoscope image. A first parameter calculation unit calculates a first parameter indicating the amount of parallel movement required to match the hole portions of the two endoscope images with each other. A second parameter calculation unit performs alignment between the two endoscope images based on the first parameter, and calculates a second parameter including an amount of enlargement or reduction. Based on the two parameters, a movement amount calculation unit calculates the amount of movement of the endoscope from the acquisition time of the second endoscope image to the acquisition time of the first endoscope image.
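As a rough illustration of the two parameters, the sketch below estimates a translation from the hole centroids and a scale from the hole areas of two frames. It is a simplified assumption, not the patented implementation: the dark-threshold hole detection, the threshold value, and the area-ratio scale are all illustrative choices.

```python
import numpy as np

def hole_centroid_and_area(img, thresh=0.2):
    """The bronchial lumen (hole) appears dark; threshold to isolate it."""
    mask = img < thresh
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()]), mask.sum()

def estimate_motion(img_prev, img_curr):
    """First parameter: centroid translation between the two holes.
    Second parameter: scale estimated from the hole-area ratio."""
    c_prev, a_prev = hole_centroid_and_area(img_prev)
    c_curr, a_curr = hole_centroid_and_area(img_curr)
    translation = c_curr - c_prev            # amount of parallel movement
    scale = float(np.sqrt(a_curr / a_prev))  # amount of enlargement/reduction
    return translation, scale
```

A scale greater than 1 would suggest the endoscope moved toward the hole between the two acquisition times.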
Abstract:
An extraction unit extracts a target object from a three-dimensional image, and a feature point detection unit detects at least one feature point included in the three-dimensional image. A reference axis setting unit sets a reference axis in the three-dimensional image based on the detected feature point, and a two-dimensional image generation unit generates a two-dimensional image by projecting the target object included in the three-dimensional image in a specific projection direction using the reference axis as a reference. A classification unit classifies each pixel of the target object into a plurality of classes based on the two-dimensional image.
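A minimal sketch of the projection step, assuming a maximum-intensity projection along an axis-aligned reference axis (in the abstract, the actual reference axis is set from the detected feature points and need not be axis-aligned):

```python
import numpy as np

def project_along_axis(volume, axis=0):
    """Generate a 2-D image from a 3-D volume by maximum-intensity
    projection along the given (reference) axis."""
    return volume.max(axis=axis)
```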
Abstract:
Three-dimensional data is created in which different three-dimensional patterns are respectively added to a plurality of positions of three-dimensional data representing a three-dimensional object in a three-dimensional coordinate system, and each added three-dimensional pattern is stored in association with the position in the three-dimensional data to which it has been added. A three-dimensional model is shaped using the created three-dimensional data. A pattern is recognized in a captured image obtained by imaging the shaped three-dimensional model, of which a desired part has been excised or incised; a three-dimensional pattern including the recognized pattern is searched for among the stored three-dimensional patterns; and the position in the three-dimensional data stored in association with the retrieved three-dimensional pattern is associated with the position on the captured image at which the pattern has been recognized.
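The association between added patterns and positions can be pictured as a simple lookup table. The sketch below, with made-up pattern strings and coordinates, stores each pattern with its position in the three-dimensional data and finds the position corresponding to a pattern fragment recognized in a captured image:

```python
# Hypothetical store: pattern string -> position in the 3-D data.
pattern_store = {
    "PAT-A7": (12, 34, 5),
    "PAT-B3": (40, 8, 22),
    "PAT-C9": (3, 55, 17),
}

def locate_from_fragment(fragment):
    """Search the stored patterns for one containing the recognized
    fragment and return its associated 3-D position (or None)."""
    for pattern, position in pattern_store.items():
        if fragment in pattern:
            return position
    return None
```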
Abstract:
[Objective] To enable a three-dimensional image to be accurately classified into a plurality of classes with a small amount of calculation, in an image classifying apparatus, an image classifying method, and an image classifying program. [Constitution] A three-dimensional image is classified into a plurality of classes by a convolutional neural network in which a plurality of processing layers are connected hierarchically. The convolutional neural network includes: a convolution layer that applies a convolution process to each of a plurality of two-dimensional images, which are generated by applying a projection process to the three-dimensional image according to a plurality of processing parameters; and a pooling layer that pools the values at the same position within each of the plurality of two-dimensional images that have undergone the convolution process.
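A minimal numpy sketch of the two layers described above, assuming a shared 2-D kernel applied to each projected image and a max-pool across the same spatial position of the projections (the actual network and its processing parameters are not specified here):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D convolution (correlation) of one projected image."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def pool_across_projections(projections, kernel):
    """Convolve each projection, then pool the values at the same
    position within each convolved image (max over projections)."""
    feature_maps = [conv2d(p, kernel) for p in projections]
    return np.maximum.reduce(feature_maps)
```

Pooling across projections rather than within one image is what lets the 2-D feature maps summarize the 3-D volume.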
Abstract:
When binary labeling is performed, an outline specification unit specifies a first outline present toward a target region and a second outline present toward a non-target region, both of which have shapes similar to the outline of the target region. A voxel selection unit selects N voxels constituting the whole of the first outline and the second outline. An energy setting unit sets the N-th order energy for the case in which all of the voxels of the first outline belong to the target region and all of the voxels of the second outline belong to the non-target region to be smaller than the N-th order energy for the case in which this condition is not satisfied. Labeling is then performed by minimizing the energy.
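The N-th order term can be pictured as a single bonus that fires only when the all-or-nothing condition holds. The sketch below is a toy formulation with an arbitrary bonus value, not the patented energy; it evaluates that term for a given labeling:

```python
def shape_prior_term(labels, first_outline, second_outline, bonus=5.0):
    """N-th order energy term over all outline voxels: lower energy only
    when every first-outline voxel is labeled target (1) and every
    second-outline voxel is labeled non-target (0)."""
    satisfied = (all(labels[v] == 1 for v in first_outline)
                 and all(labels[v] == 0 for v in second_outline))
    return -bonus if satisfied else 0.0
```

A graph-cut solver that minimizes the total energy is then rewarded for segmentations whose boundary falls between the two specified outlines.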
Abstract:
An extraction model is constituted of an encoder, two decoders, and two discriminators. The encoder extracts a feature amount of a first image of a first representation format to derive a feature map of the first image. A first decoder derives, on the basis of the feature map, a second virtual image of a second representation format different from the representation format of the first image. A first discriminator discriminates the representation format of an input image and whether the input image is a real image or a virtual image, and outputs a first discrimination result. A second decoder extracts a region of interest of the first image on the basis of the feature map. A second discriminator discriminates whether an extraction result of the region of interest by the second decoder is an extraction result of a first image with a ground-truth mask or of a first image without a ground-truth mask, and outputs a second discrimination result.
Abstract:
Provided are a learning method and a learning system for a generative model, a program, a learned model, and a super-resolution image generating device that can handle input data of any size and can suppress the amount of calculation at the time of image generation. A learning method according to an embodiment of the present disclosure performs machine learning of a generative model that estimates, from a first image, a second image including higher-resolution image information than the first image. The method comprises using a generative adversarial network including a generator, which is the generative model, and a discriminator, which is an identification model that identifies whether provided data is data of a correct image for learning or data derived from an output of the generator, and implementing a self-attention mechanism only in the network of the discriminator among the generator and the discriminator.
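The self-attention mechanism placed in the discriminator can be sketched in a few lines of numpy. This is a SAGAN-style layer with random, untrained weights; the actual discriminator architecture is not specified here:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """x: (n_positions, d) feature rows. Each position attends to every
    other position, so the layer's receptive field spans the whole input
    regardless of its size."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]), axis=-1)
    return x + attn @ v  # residual connection, as in SAGAN
```

Because attention cost grows with the square of the number of positions, confining it to the discriminator (which is discarded after training) keeps the generator cheap at image-generation time.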
Abstract:
A determination processing unit determines a disease region in a medical image including an axisymmetric structure. A first determination section of the determination processing unit generates a feature amount map of the medical image. A second determination section inverts the feature amount map with reference to a symmetry axis to generate an inverted feature amount map. A third determination section superimposes the feature amount map and the inverted feature amount map on each other and determines the disease region in the medical image using the superimposed maps.
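A minimal sketch of the inversion-and-superimposition idea, assuming the symmetry axis is the vertical center line of the map; asymmetric responses then stand out when the map and its mirror are compared:

```python
import numpy as np

def superimpose_with_mirror(feature_map, axis=1):
    """Invert the feature map about the symmetry axis and stack it with
    the original; the |original - mirror| channel highlights asymmetry,
    which is where disease candidates break the structure's symmetry."""
    mirrored = np.flip(feature_map, axis=axis)
    return np.stack([feature_map, mirrored, np.abs(feature_map - mirrored)])
```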
Abstract:
When using a graph cut process for binary labeling, a labeling unit selects N (> 3) pixels in image data in such a manner as to represent a predetermined shape in the image, minimizes a high-order energy of the N-th order or greater in which the pixel values of the N pixels are variables, and performs labeling.
Abstract:
A filtering unit calculates an evaluation matrix by filtering each pixel position of an image representing a hollow structure using second-order partial derivatives of a function representing a hollow sphere. An evaluation unit calculates, based on the calculated evaluation matrix, an evaluation value of at least one of point-like structureness, line-like structureness, and plane-like structureness at the pixel position.
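The eigenvalues of the evaluation matrix are what separate the three structure types. The sketch below uses a common eigenvalue-magnitude criterion in the spirit of the Sato/Frangi structure filters, not necessarily the patented formula:

```python
import numpy as np

def structureness(eval_matrix):
    """Given a 3x3 symmetric evaluation matrix, return (point, line,
    plane) structureness in [0, 1] from sorted eigenvalue magnitudes:
    point-like when |l1| ~ |l2| ~ |l3| are all large, line-like when
    |l1| ~ |l2| >> |l3|, plane-like when |l1| >> |l2| ~ |l3|."""
    w = np.sort(np.abs(np.linalg.eigvalsh(eval_matrix)))[::-1]
    denom = w[0] + 1e-12
    point = w[2] / denom
    line = (w[1] - w[2]) / denom
    plane = (w[0] - w[1]) / denom
    return point, line, plane
```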