Abstract:
A processor derives, at each pixel belonging to a plurality of tubular structures, a running vector representing the running direction of the structure, based on a medical image that includes the plurality of tubular structures, and separates the tubular structures from one another using the running vectors.
Abstract:
Provided are a linear structure extraction device, a method, a program, and a learned model that can detect a linear structure in an image. A linear structure extraction device according to an embodiment of the present disclosure includes a learning model trained to receive an image as input and to output, as a prediction result, one or more element points constituting the linear structure in the image. The learning model includes a first processing module that receives the image and generates, by convolution processing, a feature map representing a feature amount of the image, and a second processing module that, for each unit obtained by dividing the feature map into a grid of units of a predetermined size, calculates the shift amount from the unit center point to the element point of the linear structure closest to that center point.
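The abstract does not disclose an implementation, but the regression target it describes (per-unit shift from the unit center to the nearest element point) can be sketched as follows; the function name and coordinate conventions are illustrative assumptions:

```python
import numpy as np

def shift_targets(points, grid_h, grid_w, cell):
    """For each grid unit, compute the shift from the unit center to the
    nearest element point of the linear structure. A toy construction of
    the regression target described in the abstract."""
    pts = np.asarray(points, dtype=float)
    shifts = np.zeros((grid_h, grid_w, 2))
    for i in range(grid_h):
        for j in range(grid_w):
            center = np.array([(i + 0.5) * cell, (j + 0.5) * cell])
            d = pts - center                      # offsets to every element point
            shifts[i, j] = d[np.argmin((d ** 2).sum(axis=1))]  # keep the nearest
    return shifts
```

In a trained model these shifts would be predicted by the second processing module rather than computed from known points; the sketch only shows what the per-unit output encodes.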
Abstract:
Provided are a learning method, a learning device, a generative model, and a program that generate an image containing high-resolution information without parameter tuning or major changes to the network architecture, even when the body parts shown in the input images vary. Only a first image, having a relatively low resolution, is input to the generator of a generative adversarial network, which uses it to generate a virtual second image having a relatively high resolution. A discriminator that distinguishes a second image for learning from the virtual second image receives the second image for learning or the virtual second image, together with part information of that image.
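The abstract leaves the encoding of the "part information" open; one minimal way to condition a discriminator on it is to append a one-hot part vector to the image features. The following sketch is an assumption, not the patented method:

```python
import numpy as np

def discriminator_input(image, part_id, num_parts):
    """Illustrative conditioning: flatten the image and append a one-hot
    vector identifying the imaged body part, so the discriminator sees
    both the image and the part information."""
    one_hot = np.zeros(num_parts)
    one_hot[part_id] = 1.0
    return np.concatenate([np.asarray(image, dtype=float).ravel(), one_hot])
```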
Abstract:
A region specification apparatus specifies the region of an object that is included in an input image and comprises a plurality of subclass objects having different properties. The region specification apparatus includes a first discriminator that specifies an object candidate included in the input image; the first discriminator is configured to predict at least one of the movement or the transformation of a plurality of anchors according to the property of the subclass object, and to specify an object candidate region surrounding the object candidate.
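Predicting "movement or transformation" of anchors corresponds to the standard anchor-box regression used in detectors such as Faster R-CNN; a minimal decoding sketch, assuming the usual (dx, dy, dw, dh) parameterization, which the abstract does not fix:

```python
import numpy as np

def apply_anchor_deltas(anchors, deltas):
    """Decode predicted movement/transformation into candidate regions.
    anchors and outputs are (cx, cy, w, h); deltas are (dx, dy, dw, dh)
    in the common R-CNN parameterization (an assumption)."""
    cx = anchors[:, 0] + deltas[:, 0] * anchors[:, 2]   # move center by dx * width
    cy = anchors[:, 1] + deltas[:, 1] * anchors[:, 3]   # move center by dy * height
    w = anchors[:, 2] * np.exp(deltas[:, 2])            # rescale width
    h = anchors[:, 3] * np.exp(deltas[:, 3])            # rescale height
    return np.stack([cx, cy, w, h], axis=1)
```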
Abstract:
Provided are an image processing apparatus, an image processing method, and a program that can suppress errors in the segmentation of a medical image. The image processing apparatus includes: a segmentation unit (42) that applies deep learning to perform segmentation, classifying a medical image (200) into a specific class on the basis of local features of the medical image; and a global feature classification unit (46) that applies deep learning to classify the medical image into a global feature, which is an overall feature of the medical image. The segmentation unit shares the weights of its low-order layers (a first low-order layer) with the low-order layers (a second low-order layer) of the global feature classification unit.
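The weight sharing described above can be sketched with two task heads reading one set of low-order weights; the shapes and pooling choice are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
W_low = rng.standard_normal((16, 8))   # low-order layer, shared by both tasks
W_seg = rng.standard_normal((8, 3))    # segmentation head (per-pixel classes)
W_glob = rng.standard_normal((8, 2))   # global-feature head (whole-image classes)

def low_order(x):
    """Shared low-order layer: both branches read the same weights W_low."""
    return np.maximum(x @ W_low, 0.0)

def segment(x):
    """Per-pixel (local) classification from the shared features."""
    return low_order(x) @ W_seg

def classify_global(x):
    """Whole-image (global) classification from pooled shared features."""
    return low_order(x).mean(axis=0) @ W_glob
```

Because `W_low` is a single array, gradient updates from either task would modify the same low-order representation, which is the point of the sharing.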
Abstract:
Provided are a machine learning device and method that enable accurate machine learning of labeling in which a plurality of labels are attached to volume data in a single pass, using training data in which labeled and unlabeled portions are mixed. A probability calculation unit (14) calculates, for each voxel of a second slice image, a value (soft label) indicating the likelihood of the voxel belonging to a class Ci, by means of a learned teacher model (13a). A detection unit (15) detects "bronchus" and "blood vessel" in the voxels of the second slice image using a known method, such as a region growing method, and labels them accordingly. A correction probability setting unit (16) replaces the soft label with the hard label "bronchus" or "blood vessel" detected by the detection unit (15). A distillation unit (17) distills a student model (18a) from the teacher model (13a) using the soft labels corrected by the correction probability setting unit (16). With this, the learned student model (18a) is obtained.
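The soft-to-hard label correction step can be sketched as follows; the -1 "not detected" marker and class indexing are illustrative assumptions, not the patent's encoding:

```python
import numpy as np

def correct_soft_labels(soft, detected, n_classes):
    """Replace the teacher's soft label with a hard (one-hot) label wherever
    the rule-based detector assigned a class; -1 marks 'not detected', in
    which case the soft label is kept for distillation."""
    out = soft.copy()
    for i, cls in enumerate(detected):
        if cls >= 0:
            out[i] = np.eye(n_classes)[cls]   # overwrite with one-hot hard label
    return out
```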
Abstract:
To assign each pixel in an image a binary label indicating whether or not it belongs to a target region: a predicted shape of the target region is set; a pixel group of N pixels (N being a natural number of 4 or more) whose positional relationship represents the predicted shape is selected; and an energy function is set that includes an N-th order term whose variables are the labels of the pixels in the group, such that the value of the N-th order term is at its minimum when the combination of labels assigned to the pixels matches the pattern of the predicted shape, and increases in stages as the number of pixels labeled differently from the pattern increases. The labeling is performed by minimizing the energy function.
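One simple function with the stated property (minimum at the pattern, stepwise growth with the number of mismatching pixels) is a scaled Hamming distance; this sketch is illustrative, not the patent's exact potential:

```python
def higher_order_term(labels, pattern, step=1.0):
    """Toy N-th order potential: zero when the label combination equals the
    predicted-shape pattern, growing by `step` for each pixel whose label
    differs from the pattern."""
    mismatches = sum(1 for l, p in zip(labels, pattern) if l != p)
    return step * mismatches
```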
Abstract:
Candidate points belonging to a predetermined structure are extracted from image data DV. A shape model, which represents a known shape of the predetermined structure and is formed by model labels having a predetermined connection relationship, is obtained. Corresponding points for the model labels are selected from the extracted candidate points under the following constraints: (a) each model label is mapped to at most one candidate point; (b) each candidate point is mapped to at most one model label; and (c) when a path is determined between the two candidate points mapped to each pair of mutually connected model labels, each candidate point mapped to no model label is included in at most one of the determined paths.
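Constraints (a) and (b) together say the label-to-point mapping is a partial one-to-one correspondence; a minimal validity check, assuming the mapping is represented as a dict from model labels to candidate points (an illustrative representation, constraint (c) is omitted):

```python
def satisfies_ab(mapping):
    """Check constraints (a) and (b) for a partial mapping
    {model label: candidate point}. A dict already gives each label at
    most one point (constraint a); requiring the assigned points to be
    distinct gives each point at most one label (constraint b)."""
    points = list(mapping.values())
    return len(points) == len(set(points))
```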
Abstract:
An image generation device derives, for a subject including a specific structure, a subject model representing the subject, by deriving feature amounts of a target image having at least one representation format and combining the feature amounts, based on the target image. A latent variable derivation unit derives, based on target information and the subject model, a latent variable obtained by dimensionally compressing a feature of the subject model according to the target information. A virtual image derivation unit outputs, based on the target information, the subject model, and the latent variable, a virtual image having the representation format indicated by the target information.
Abstract:
A prediction apparatus includes a learning section that performs machine learning in which, for a combination of different types of captured images of the same subject, one captured image is set as the input and another captured image is set as the output, to generate a prediction model; and a controller that performs control for inputting a first image to the prediction model as the input captured image and outputting a predicted second image, that is, a captured image of a type different from that of the input captured image.