Abstract:
A system 100 for enabling interactive annotation of an image 102, comprising a user input 160 for receiving a placement command 162 from a user, the placement command being indicative of a first placement location of a marker 140 in the image 102, and a processor 180 arranged for (i) applying an image processing algorithm to a region 130 in the image, the region being based on the first placement location, and the image processing algorithm being responsive to image portions which visually correspond to the marker 140 for establishing a plurality of match degrees between, on the one hand, the marker, and, on the other hand, a plurality of image portions within the region, (ii) establishing a second placement location in dependence on the plurality of match degrees and the respective plurality of image portions for matching the marker 140 to the region in the image, and (iii) placing the marker 140 at the second placement location in the image 102.
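As an illustration of the matching step described above, the following minimal Python sketch snaps a marker to the best-matching image portion near the clicked location. Normalized cross-correlation is assumed as the match degree (the abstract does not fix a particular measure), and all function and parameter names are hypothetical.

```python
import numpy as np

def snap_marker(image, marker, click_yx, search_radius=20):
    """Match a marker template against a region around the clicked
    location and return the best-matching (snapped) placement.

    image, marker: 2D numpy arrays; click_yx: (row, col) of the user's click.
    """
    mh, mw = marker.shape
    cy, cx = click_yx
    # Region of interest based on the first placement location.
    y0 = max(cy - search_radius, 0)
    x0 = max(cx - search_radius, 0)
    y1 = min(cy + search_radius + mh, image.shape[0])
    x1 = min(cx + search_radius + mw, image.shape[1])
    region = image[y0:y1, x0:x1]

    best_score, best_pos = -np.inf, click_yx
    m = (marker - marker.mean()) / (marker.std() + 1e-9)
    for dy in range(region.shape[0] - mh + 1):
        for dx in range(region.shape[1] - mw + 1):
            patch = region[dy:dy + mh, dx:dx + mw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = float((m * p).mean())   # match degree for this image portion
            if score > best_score:
                best_score, best_pos = score, (y0 + dy, x0 + dx)
    return best_pos, best_score             # second placement location and its match degree
```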
Abstract:
A method for processing image data includes obtaining a first set of 3D volumetric image data. The 3D volumetric image data includes a volume of voxels. Each voxel has an intensity. The method further includes obtaining a local voxel noise estimate for each of the voxels of the volume. The method further includes processing the volume of voxels based at least on the intensity of the voxels and the local voxel noise estimates of the voxels. An image data processor (124) includes a computer processor that at least one of: generates a 2D direct volume rendering from first 3D volumetric image data based on voxel intensity and individual local voxel noise estimates of the first 3D volumetric image data, or registers second 3D volumetric image data and first 3D volumetric image data based at least on individual local voxel noise estimates of the second and first 3D volumetric image data sets.
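One plausible way to use per-voxel noise estimates, sketched below in Python, is to down-weight noisy voxels in a registration similarity measure; the weighting scheme and names are assumptions, not the patented method.

```python
import numpy as np

def noise_weighted_ssd(vol_a, vol_b, noise_a, noise_b):
    """Similarity of two co-registered 3D volumes in which voxels with high
    local noise estimates contribute less to the measure."""
    weights = 1.0 / (noise_a ** 2 + noise_b ** 2 + 1e-9)   # low weight for noisy voxels
    return float(np.sum(weights * (vol_a - vol_b) ** 2) / np.sum(weights))
```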
Abstract:
The invention provides for a medical apparatus (100, 300, 400) comprising a subject support (102) configured for moving a subject (106) from a first position (124) to a second position (130) along a linear path (134). The subject support comprises a support surface (108) for receiving the subject. The subject support is further configured for positioning the subject support in at least one intermediate position (128). The subject support is configured for measuring a displacement (132) along the linear path between the first position and the at least one intermediate position. Each of the at least one intermediate position is located between the first position and the second position. The medical apparatus further comprises a camera (110) configured for imaging the support surface in the first position. Execution of machine executable instructions causes a processor (116) controlling the medical apparatus to: acquire (200) an initial image (142) with the camera when the subject support is in the first position; control (202) the subject support to move the subject support from the first position to the second position; acquire (204) at least one intermediate image (144) with the camera and the displacement for each of the at least one intermediate image as the subject support is moved from the first position to the second position; and calculate (206) a height profile (150, 600, 604) of the subject by comparing the initial image and the at least one intermediate image. The height profile is at least partially calculated using the displacement. The height profile is descriptive of the spatially dependent height of the subject above the support surface.
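A minimal sketch of one parallax-based reading of this comparison, assuming a downward-looking pinhole camera at a known height above the support surface: points higher above the surface appear to move farther in the image for the same table displacement, so the ratio of apparent shifts yields a height estimate. The geometry and all names below are assumptions.

```python
import numpy as np

def height_from_parallax(subject_shift_px, surface_shift_px, camera_height):
    """Estimate height above the support surface from apparent image motion.

    subject_shift_px: per-pixel apparent displacement of the subject between the
        initial image and an intermediate image (same physical table displacement).
    surface_shift_px: apparent displacement of the support surface itself.
    camera_height:    distance from the camera to the support surface.

    By similar triangles, a point at height h moves in the image by a factor
    camera_height / (camera_height - h) more than a point on the surface.
    """
    ratio = subject_shift_px / np.maximum(surface_shift_px, 1e-9)
    return camera_height * (1.0 - 1.0 / np.maximum(ratio, 1e-9))
```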
Abstract:
The invention provides for a medical instrument (100) comprising a processor (134) and a memory (138) containing machine executable instructions (140). Execution of the machine executable instructions causes the processor to: receive (200) a first magnetic resonance image data set (146) descriptive of a first region of interest (122) of a subject (118) and receive (202) at least one second magnetic resonance image data set (152, 152′) descriptive of a second region of interest (124) of the subject. The first region of interest at least partially comprises the second region of interest. Execution of the machine executable instructions further causes the processor to receive (204) an analysis region (126) within both the first region of interest and within the second region of interest. Execution of the machine executable instructions further causes the processor to construct (206) a cost function comprising an intra-scan homogeneity measure separately for the first magnetic resonance image data set and separately for each of the at least one second magnetic resonance image data set. The cost function further comprises an inter-scan similarity measure calculated using both the first magnetic resonance image data set and each of the at least one second magnetic resonance image data set. Execution of the machine executable instructions further causes the processor to perform an optimization (208) of the cost function by calculating a first intensity correction map (154) for the first magnetic resonance image data set using an intensity correction algorithm within the analysis region and at least one second intensity correction map (156) for each of the at least one second magnetic resonance image data set within the analysis region. Execution of the machine executable instructions further causes the processor to calculate (210) a first corrected magnetic resonance image (158) descriptive of the analysis region using the first magnetic resonance image data set and the first intensity correction map. Execution of the machine executable instructions further causes the processor to calculate (212) at least one second corrected magnetic resonance image (160) descriptive of the analysis region using the at least one second magnetic resonance image data set and the at least one second intensity correction map.
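As a toy illustration of such a cost function, the sketch below uses image variance as the intra-scan homogeneity measure and a mean squared difference as the inter-scan similarity measure, for multiplicative correction maps evaluated within the analysis region. The actual measures, parameterization, and optimizer used by the instrument may differ.

```python
import numpy as np

def correction_cost(corr_a, corr_b, img_a, img_b, mask, w_similarity=1.0):
    """Toy cost combining intra-scan homogeneity and inter-scan similarity.

    corr_a, corr_b: candidate multiplicative intensity-correction maps.
    img_a, img_b:   the first and a second MR image data set (same grid).
    mask:           boolean analysis region common to both scans.
    """
    a = img_a[mask] * corr_a[mask]
    b = img_b[mask] * corr_b[mask]
    homogeneity = a.var() + b.var()          # intra-scan homogeneity measure, per scan
    similarity = np.mean((a - b) ** 2)       # inter-scan similarity measure
    return homogeneity + w_similarity * similarity
```

The correction maps minimizing this cost would then play the role of the first and second intensity correction maps applied to the image data sets.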
Abstract:
A system and method are provided for interactive editing of a mesh which has been applied to a three-dimensional (3D) image to segment an anatomical structure shown therein. To facilitate the interactive editing of the applied mesh, a view of the 3D image is generated which shows a mesh part to be edited, with the view being established based on a local orientation of the mesh part. Advantageously, the view may be generated to be substantially orthogonal to the mesh part, or to a centerline of the anatomical structure which is determined as a function of the mesh part. Accordingly, an orthogonal view is established which facilitates the user in carrying out the editing action with respect to the mesh part. The user therefore does not need to manually navigate through the 3D image to obtain a view suitable for mesh editing, which is typically time consuming.
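A minimal sketch of deriving such a view from the local orientation: average the face normals of the selected mesh part and look at its centroid along the negated average normal. The mesh representation and all names are assumptions.

```python
import numpy as np

def view_direction_for_mesh_part(vertices, triangles):
    """Derive a viewing direction approximately orthogonal to a mesh part.

    vertices:  (V, 3) array of vertex positions of the mesh part.
    triangles: (T, 3) array of vertex indices.
    Returns a look-at point (the part's centroid) and a view direction.
    """
    v0 = vertices[triangles[:, 0]]
    v1 = vertices[triangles[:, 1]]
    v2 = vertices[triangles[:, 2]]
    normals = np.cross(v1 - v0, v2 - v0)          # area-weighted face normals
    mean_normal = normals.sum(axis=0)
    mean_normal /= np.linalg.norm(mean_normal) + 1e-12
    center = vertices[np.unique(triangles)].mean(axis=0)
    return center, -mean_normal                   # look at the part, along its negated normal
```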
Abstract:
A digital image (40) comprises pixels with intensities relating to different energy levels. A method for processing the digital image (40) comprises the steps of: receiving first image data (42a) and second image data (42b) of the digital image (40), the first image data (42a) encoding a first energy level and the second image data (42b) encoding a second energy level; determining a regression model (44) from the first image data (42a) and the second image data (42b), the regression model (44) establishing a correlation between intensities of pixels of the first image data (42a) and intensities of pixels of the second image data (42b); and calculating residual mode image data (46) from the first image data (42a) and the second image data (42b), such that a pixel of the residual mode image data (46) has an intensity based on the difference of an intensity of the second image data (42b) at the pixel and a correlated intensity of the pixel of the first image data (42a), the correlated intensity being determined by applying the regression model to the intensity of the pixel of the first image data (42a).
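The regression and residual computation can be sketched compactly; the example below assumes a simple linear (first-order) regression model, which the abstract does not mandate, and hypothetical names.

```python
import numpy as np

def residual_mode_image(first, second):
    """Fit a regression model mapping first-energy intensities to
    second-energy intensities, then return the residual mode image."""
    x = first.ravel().astype(float)
    y = second.ravel().astype(float)
    slope, intercept = np.polyfit(x, y, deg=1)   # regression model between the two energy levels
    predicted = slope * first + intercept        # correlated intensity per pixel
    return second - predicted                    # residual mode image data
```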
Abstract:
Image processing apparatus 110 comprising a processor 120 for combining a time-series of three-dimensional [3D] images into a single 3D image using an encoding function, the encoding function being arranged for encoding, in voxels of the single 3D image, a change over time in respective co-located voxels of the time-series of 3D images, an input 130 for obtaining a first and second time-series of 3D images 132 for generating, using the processor, a respective first and second 3D image 122, and a renderer 140 for rendering, from a common viewpoint 154, the first and the second 3D image 122 in an output image 162 for enabling comparative display of the change over time of the first and the second time-series of 3D images.
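As one example of an encoding function, the sketch below collapses a time-series of co-registered volumes into a single volume of per-voxel temporal standard deviations; a temporal maximum, range, or slope would be equally valid encodings, and the names are assumptions.

```python
import numpy as np

def encode_change_over_time(series):
    """Collapse a time-series of co-located 3D volumes into one volume whose
    voxels encode the change over time of the respective co-located voxels.

    series: array of shape (T, Z, Y, X); here the encoding is the temporal
    standard deviation per voxel.
    """
    return series.std(axis=0)

# Two such encoded volumes (e.g. from a first and a second time-series) can
# then be rendered from a common viewpoint for comparative display.
```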
Abstract:
A planning tool, system and method include a processor (114) and memory (116) coupled to the processor which stores a planning module (144). A user interface (120) is coupled to the processor and configured to permit a user to select a path through a pathway system (148). The planning module is configured to upload one or more slices of an image volume (111) corresponding to a user-controlled cursor point (108) guided using the user interface such that as the path is navigated the one or more slices are updated in accordance with a depth of the cursor point in the path.
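A minimal sketch of the slice-update behaviour, assuming the planned path is available as a polyline of voxel coordinates; the names and the choice of slicing axis are hypothetical.

```python
import numpy as np

def slice_for_cursor(path_points, cursor_index, axis=0):
    """Return the image-volume slice index corresponding to the current
    user-controlled cursor point on the planned path.

    path_points:  (N, 3) array of voxel coordinates along the planned path.
    cursor_index: index of the cursor point on the path.
    axis:         volume axis along which slices are taken (0 = depth).
    """
    depth = path_points[cursor_index, axis]
    return int(round(depth))

# As the user moves the cursor along the path, the displayed slice
# volume[slice_for_cursor(path_points, i)] is updated to follow the cursor depth.
```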
Abstract:
The present invention relates to an X-ray radiography apparatus (10). It is described to place (110) an X-ray source (20) relative to an X-ray detector (30) to form an examination region for the accommodation of an object, wherein a reference spatial coordinate system is defined on the basis of geometry parameters of the X-ray radiography apparatus. A camera (40) is located (120) at a position and orientation to view the examination region. A depth image of the object is acquired (130) with the camera within a camera spatial coordinate system, wherein within the depth image pixel values represent distances for corresponding pixels. A processing unit (50) transforms (140), using a mapping function, the depth image of the object within the camera spatial coordinate system to the reference spatial coordinate system, wherein the camera position and orientation have been calibrated with respect to the reference spatial coordinate system to yield the mapping function that maps a spatial point within the camera spatial coordinate system to a corresponding spatial point in the reference spatial coordinate system. A synthetic image is generated (150) within the reference spatial coordinate system. The synthetic image is output (160) with an output unit (60).
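The transform step can be sketched as back-projecting the depth image with the camera intrinsics and applying the calibrated camera-to-reference transform; the pinhole model and all names below are assumptions.

```python
import numpy as np

def depth_to_reference(depth, intrinsics, cam_to_ref):
    """Back-project a depth image into 3D camera coordinates and map the
    points into the reference spatial coordinate system.

    depth:      (H, W) array; pixel values are distances along the optical axis.
    intrinsics: 3x3 camera matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    cam_to_ref: 4x4 homogeneous transform (the calibrated mapping function).
    """
    h, w = depth.shape
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts_cam = np.stack([x, y, depth, np.ones_like(depth)], axis=-1)  # (H, W, 4) homogeneous points
    pts_ref = pts_cam.reshape(-1, 4) @ cam_to_ref.T                  # map to the reference frame
    return pts_ref[:, :3].reshape(h, w, 3)
```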
Abstract:
A method and apparatus for segmenting a two-dimensional image of an anatomical structure includes acquiring (202) a three-dimensional model of the anatomical structure. The three-dimensional model includes a plurality of segments. The acquired three-dimensional model is adapted to align it with the two-dimensional image (204). The two-dimensional image is segmented by the plurality of segments of the adapted three-dimensional model.
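A minimal sketch of the final segmentation step, under the assumption (not required by the abstract) that the adapted model's segments have first been rasterized into a labeled voxel volume: sample that volume on the 2D image plane to obtain the 2D segmentation. All names are hypothetical.

```python
import numpy as np

def segment_2d_from_model(label_volume, plane_origin, axis_u, axis_v, shape):
    """Sample a labeled volume (rasterized from the adapted 3D model's
    segments) on the 2D image plane to obtain the 2D segmentation.

    label_volume: (Z, Y, X) integer array of segment labels.
    plane_origin: 3D voxel coordinate of the 2D image's (0, 0) pixel.
    axis_u, axis_v: 3D voxel-space step per pixel along the image's u and v axes.
    shape:        (rows, cols) of the 2D image.
    """
    rows, cols = shape
    v_idx, u_idx = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    coords = (plane_origin
              + u_idx[..., None] * axis_u
              + v_idx[..., None] * axis_v)                    # (rows, cols, 3) voxel coordinates
    zyx = np.clip(np.round(coords).astype(int), 0,
                  np.array(label_volume.shape) - 1)
    return label_volume[zyx[..., 0], zyx[..., 1], zyx[..., 2]]  # 2D array of segment labels
```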