Abstract:
The invention provides a method for switching between fields of view of an ultrasound probe. The method begins by obtaining an anatomical model representing a region of interest of a subject and establishing a first field of view relative to an ultrasonic probe, wherein the first field of view comprises an initial portion of the region of interest. Ultrasound data is then obtained from the first field of view by way of the ultrasonic probe, and a first anatomical feature is identified within the first field of view based on the ultrasound data. A location in digital space of the first field of view relative to the anatomical model is determined based on the first anatomical feature. A second field of view is then established based on the anatomical model and the first field of view, wherein the first field of view functions as a reference field of view. The field of view is then switched from the first field of view to the second field of view.
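A minimal sketch of the locate-then-retarget idea described above, not the patented implementation: it assumes an anatomical model reduced to named landmark positions and a purely translational relationship between probe (FOV) coordinates and model coordinates. All names, coordinates, and the steering-offset logic are illustrative assumptions.

```python
# Illustrative sketch: locate the first FOV relative to an anatomical model via
# a detected landmark, then derive a second FOV from the model.
import numpy as np

# Hypothetical anatomical model: landmark name -> position in model coordinates (mm).
ANATOMICAL_MODEL = {
    "mitral_valve": np.array([0.0, 0.0, 0.0]),
    "aortic_valve": np.array([25.0, 10.0, 5.0]),
}

def locate_fov(feature_name, feature_pos_in_fov):
    """Return the FOV origin in model coordinates, assuming a pure translation
    between probe (FOV) coordinates and model coordinates."""
    return ANATOMICAL_MODEL[feature_name] - feature_pos_in_fov

def second_fov(first_fov_origin, target_feature, fov_extent=np.array([40.0, 40.0, 60.0])):
    """Derive a second FOV (origin, extent) centred on a target feature of the
    model, expressed relative to the first (reference) FOV."""
    target_in_model = ANATOMICAL_MODEL[target_feature]
    origin = target_in_model - fov_extent / 2.0
    steering_offset = origin - first_fov_origin  # how far to shift/steer from the reference FOV
    return origin, fov_extent, steering_offset

# Example: the mitral valve is detected 10 mm deep on the probe axis of the first FOV.
fov1_origin = locate_fov("mitral_valve", np.array([0.0, 0.0, 10.0]))
origin2, extent2, offset = second_fov(fov1_origin, "aortic_valve")
print("second FOV origin (model coords):", origin2, "steering offset:", offset)
```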
Abstract:
A method is provided for generating an ultrasound image of an anatomical region having a volume. A first, low-resolution quantity of ultrasound image data is enhanced by adapting a 3D anatomical model to the image data to generate a second, greater quantity of ultrasound image data in respect of the anatomical region. The enhanced volumetric information is then displayed. An anatomical model is thus used to complete partial image data, thereby increasing the image resolution, so that a high-resolution volumetric image can be displayed with a reduced image capture time.
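A sketch of the completion step only, under stated assumptions: acquired voxels are kept where available and the remaining voxels are filled from a prediction derived from the adapted anatomical model. The model prediction here is a crude stand-in; a real system would first adapt a 3D shape model to the acquired data.

```python
# Illustrative only: complete a sparsely sampled ultrasound volume with values
# predicted from an adapted anatomical model, so a denser volume can be displayed.
import numpy as np

def enhance_volume(sparse_volume, acquired_mask, model_prediction):
    """Keep acquired voxels where available; fill the rest from the model.

    sparse_volume    : acquired image data (zeros where not sampled)
    acquired_mask    : boolean array, True where a voxel was actually acquired
    model_prediction : volume predicted from the adapted anatomical model
    """
    return np.where(acquired_mask, sparse_volume, model_prediction)

# Toy example: only every 4th slice was acquired.
rng = np.random.default_rng(0)
truth = rng.random((32, 32, 32))
mask = np.zeros_like(truth, dtype=bool)
mask[::4, :, :] = True
sparse = np.where(mask, truth, 0.0)
model_pred = np.full_like(truth, truth.mean())   # crude model-based fill-in (assumption)
enhanced = enhance_volume(sparse, mask, model_pred)
print("acquired fraction:", mask.mean(), "enhanced shape:", enhanced.shape)
```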
Abstract:
An image processing system and related method. The system comprises an input interface (IN) configured for receiving an n[≥2]-dimensional input image with a set of anchor points defined therein, said set of anchor points forming an input constellation. A constellation modifier (CM) is configured to modify said input constellation into a modified constellation. A constellation evaluator (CE) is configured to evaluate said input constellation based on a hyper-surface to produce a score. A comparator (COMP) is configured to compare said score against a quality criterion. Through an output interface (OUT), said constellation is output if the score meets said criterion. The constellation is suitable to define a segmentation for said input image.
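A minimal sketch of the modify/evaluate/compare loop described above. The scoring function (here: mean residual of the anchor points with respect to a hypothetical decision hyper-surface f(x)=0), the random perturbation used as the modifier, and the acceptance threshold are all assumptions for illustration.

```python
# Illustrative constellation refinement loop: modifier -> evaluator -> comparator -> output.
import numpy as np

def score_constellation(points, surface_fn):
    # Lower is better: mean absolute residual of anchor points w.r.t. the surface.
    return float(np.mean(np.abs([surface_fn(p) for p in points])))

def refine_constellation(points, surface_fn, threshold=0.05, iters=100, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    best = points.copy()
    best_score = score_constellation(best, surface_fn)
    for _ in range(iters):
        candidate = best + rng.normal(scale=step, size=best.shape)  # constellation modifier (CM)
        s = score_constellation(candidate, surface_fn)              # constellation evaluator (CE)
        if s < best_score:                                          # comparator (COMP)
            best, best_score = candidate, s
        if best_score <= threshold:                                 # quality criterion met -> output
            break
    return best, best_score

# Toy 2D example: anchor points should lie on the unit circle (surface g(x) = |x| - 1).
surface = lambda p: np.linalg.norm(p) - 1.0
initial = np.array([[1.2, 0.1], [0.0, 0.8], [-1.1, -0.2]])
pts, s = refine_constellation(initial, surface)
print("final score:", round(s, 4))
```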
Abstract:
An ultrasound imaging apparatus is disclosed comprising an ultrasound acquisition unit (10) connected to a plurality of ultrasound probes (42, 44, 46, 70, 72, 74) each for providing ultrasound data suitable for ultrasound imaging of a patient (12) in a field of view (32) of the ultrasound probes. The ultrasound imaging apparatus further comprises a detection unit (20) for detecting an anatomical object (36) of the patient in the field of view on the basis of the ultrasound data received from at least one of the ultrasound probes and for determining a spatial relationship of the anatomical object and each of the ultrasound probes. A selection unit (24) coupled to the detection unit selects at least one of the ultrasound probes on the basis of an acquisition quality of at least one physical parameter detectable from the ultrasound data, wherein the acquisition quality is determined on the basis of the spatial relationship (38) of the anatomical object and each of the ultrasound probes and at least one anatomical feature of the anatomical object. An evaluation unit coupled to the ultrasound acquisition unit receives the ultrasound data from the at least one selected ultrasound probe and determines the at least one physical parameter on the basis of the ultrasound data received from the selected ultrasound probe.
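A sketch of the probe-selection idea only, with illustrative assumptions throughout: each probe is reduced to a position and viewing axis, and a toy acquisition-quality score combines the spatial relationship to the detected anatomical object (alignment and distance) before the best-scoring probe is selected.

```python
# Illustrative ranking of ultrasound probes by a toy acquisition-quality score.
import numpy as np

def acquisition_quality(probe_pos, probe_axis, object_pos, preferred_depth=80.0):
    """Toy metric: favour probes whose axis points at the object and whose
    distance to the object is close to a preferred imaging depth (mm)."""
    to_object = object_pos - probe_pos
    distance = np.linalg.norm(to_object)
    alignment = float(np.dot(to_object / distance, probe_axis))      # 1.0 = perfectly aimed
    depth_penalty = abs(distance - preferred_depth) / preferred_depth
    return alignment - depth_penalty

def select_probe(probes, object_pos):
    scores = {name: acquisition_quality(p["pos"], p["axis"], object_pos)
              for name, p in probes.items()}
    return max(scores, key=scores.get), scores

# Hypothetical probe layout and detected object position.
probes = {
    "probe_42": {"pos": np.array([0.0, 0.0, 0.0]),  "axis": np.array([0.0, 0.0, 1.0])},
    "probe_44": {"pos": np.array([50.0, 0.0, 0.0]), "axis": np.array([0.0, 0.0, 1.0])},
}
best, scores = select_probe(probes, object_pos=np.array([0.0, 0.0, 75.0]))
print("selected:", best, "scores:", {k: round(v, 3) for k, v in scores.items()})
```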
Abstract:
The present invention relates to a method for segmenting MR Dixon image data. A processor and a computer program product are also disclosed for use in connection with the method. The invention finds application in the MR imaging field in general, and more specifically may be used in the generation of an attenuation map to correct for attenuation by cortical bone during the reconstruction of PET images. In the method, a surface mesh is adapted to a region of interest by, for each mesh element in the surface mesh: selecting a water target position based on a water image feature response in the MR Dixon water image; selecting a fat target position based on a fat image feature response in the MR Dixon fat image; and displacing the mesh element from its current position to a new position based on both its water target position and its corresponding fat target position.
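A minimal sketch of the per-element adaptation step, not the actual segmentation engine: each mesh element searches along its normal for the strongest feature response in the water image and in the fat image, and is then displaced toward a combination of the two targets. The equal weighting, the normal-line search, and the feature responses are assumptions.

```python
# Illustrative mesh adaptation combining water-image and fat-image targets.
import numpy as np

def find_target(element_pos, normal, image_response, search_offsets):
    """Pick the offset along the element normal with the strongest feature response."""
    responses = [image_response(element_pos + d * normal) for d in search_offsets]
    return element_pos + search_offsets[int(np.argmax(responses))] * normal

def adapt_mesh(elements, normals, water_response, fat_response, w_water=0.5, step=0.5):
    offsets = np.linspace(-5, 5, 21)                         # mm, assumed search range
    new_positions = []
    for pos, n in zip(elements, normals):
        water_target = find_target(pos, n, water_response, offsets)
        fat_target = find_target(pos, n, fat_response, offsets)
        combined = w_water * water_target + (1.0 - w_water) * fat_target
        new_positions.append(pos + step * (combined - pos))  # move part-way toward the combined target
    return np.array(new_positions)

# Toy example: water boundary at z = 10 mm, fat boundary at z = 12 mm.
water = lambda p: -abs(p[2] - 10.0)
fat = lambda p: -abs(p[2] - 12.0)
elements = np.array([[0.0, 0.0, 8.0], [1.0, 0.0, 14.0]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
print(adapt_mesh(elements, normals, water, fat))
```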
Abstract:
A method is provided for adapting a 3D field of view (FOV) in ultrasound data acquisition so as to minimize the FOV volume in a manner that is controlled and precise. The method comprises defining a volumetric region across which 3D ultrasound data is desired, and then adapting the data acquisition field of view (FOV) in dependence upon the defined volumetric region, so as to encompass the region. This is achieved by adapting the scan line length (or scan depth) of each individual scan line based on the defined volumetric region. In some embodiments, the volumetric region may be defined by anatomically segmenting a reference ultrasound dataset acquired in an initial step and setting the volumetric region in dependence upon boundaries of an identified object of interest. In a subset of embodiments, the volumetric region may be set as the region occupied by a detected anatomical object of interest.
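A sketch of the per-scan-line depth adaptation under simplifying assumptions: the volumetric region is an axis-aligned box, each scan line is a ray from the transducer face, and its depth is set where the ray exits the box (plus a margin). The ray/box slab test, margin, and maximum depth are illustrative choices.

```python
# Illustrative per-scan-line depth adaptation for a box-shaped volumetric region.
import numpy as np

def scan_line_depth(origin, direction, box_min, box_max, margin=2.0, max_depth=160.0):
    """Depth (mm) along a scan line at which it exits the volumetric region (slab test)."""
    direction = direction / np.linalg.norm(direction)
    with np.errstate(divide="ignore"):
        t1 = (box_min - origin) / direction
        t2 = (box_max - origin) / direction
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    if t_far < max(t_near, 0.0):
        return 0.0                                  # line misses the region: shortest possible line
    return float(min(t_far + margin, max_depth))

# Region of interest: a 40 mm cube starting 50 mm below the transducer face.
box_min, box_max = np.array([-20.0, -20.0, 50.0]), np.array([20.0, 20.0, 90.0])
for angle in np.deg2rad([-20, 0, 20]):
    d = np.array([np.sin(angle), 0.0, np.cos(angle)])
    depth = scan_line_depth(np.zeros(3), d, box_min, box_max)
    print(f"steer {np.rad2deg(angle):+5.1f} deg -> scan depth {depth:6.1f} mm")
```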
Abstract:
A mechanism for generating information usable for identifying a risk of arrhythmia. A plurality of heart models are constructed from medical imaging data of a subject's heart, each heart model being defined by a set of different values for one or more modifiable properties. The modifiable properties are those used in the generation or definition of the heart model, and they influence the simulation results obtained using the heart model. Each heart model undergoes a plurality of simulations, each simulation being a simulation of the response of the heart model to a (simulated) electrical stimulation at a different pacing location of the heart. Output data is generated containing indicators of all simulation results.
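A sketch of the simulation sweep described above: several heart-model variants (differing in values of modifiable properties) are each paced at several locations, and every result is collected. The property names, the pacing locations, and the toy "inducibility" rule inside `simulate` are placeholders, not a real electrophysiological model.

```python
# Illustrative sweep over heart-model variants and pacing locations.
from itertools import product

def simulate(model_params, pacing_location):
    """Placeholder for an electrophysiological simulation; returns a toy
    'arrhythmia induced' flag derived from the parameters."""
    conduction = model_params["conduction_velocity"]
    scar = model_params["scar_extent"]
    risky_site = pacing_location in ("lv_apex", "scar_border")
    return {"induced": bool(risky_site and scar / conduction > 0.04)}

model_variants = [
    {"conduction_velocity": cv, "scar_extent": s}
    for cv, s in product([0.5, 0.7], [0.02, 0.05])   # assumed modifiable-property values
]
pacing_locations = ["rv_apex", "lv_apex", "scar_border"]

results = []
for i, params in enumerate(model_variants):
    for loc in pacing_locations:
        out = simulate(params, loc)
        results.append({"model": i, "pacing_location": loc, **out})

print(sum(r["induced"] for r in results), "of", len(results), "simulations induced arrhythmia")
```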
Abstract:
A system (SYS) for supporting a medical procedure, comprising an interface (IN) for receiving at least one medical input signal that describes a state of a target anatomy. A signal analyzer (SA) is configured to analyze the medical input signal to determine a time window for deployment of a cardiovascular device (CL).
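A minimal sketch under stated assumptions of what such a signal analyzer might compute: here the medical input signal is a toy ECG-like trace, beats are detected by a simple threshold crossing, and a deployment window is placed in an assumed fraction of each beat-to-beat interval. None of these choices come from the abstract itself.

```python
# Illustrative deployment-window estimation from a cardiac signal.
import numpy as np

def deployment_windows(signal, fs, threshold=0.6, start_frac=0.3, end_frac=0.7):
    """Return (start, end) times in seconds of candidate deployment windows,
    one per detected beat, covering a fraction of each beat-to-beat interval."""
    above = signal > threshold
    peaks = np.flatnonzero(above[1:] & ~above[:-1])      # rising-edge detector
    windows = []
    for p0, p1 in zip(peaks[:-1], peaks[1:]):
        rr = (p1 - p0) / fs                              # beat-to-beat interval (s)
        t0 = p0 / fs
        windows.append((t0 + start_frac * rr, t0 + end_frac * rr))
    return windows

# Toy signal: one narrow "beat" per second, sampled at 100 Hz.
fs = 100
t = np.arange(0, 5, 1 / fs)
ecg = np.exp(-((t % 1.0 - 0.1) ** 2) / 0.001)
for w in deployment_windows(ecg, fs)[:3]:
    print(f"deployment window {w[0]:.2f}-{w[1]:.2f} s")
```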
Abstract:
Disclosed are an imaging system (10) or an interventional tool, such as a catheter (20), having a first ultrasound transducer array (23) and a second ultrasound transducer array (21) spaced by a fixed distance (D) from each other, wherein both arrays may be used to generate diagnostic images; and a processing arrangement (31, 32) to process a first sensor signal indicative of the first array imaging a reference location (X) at a first point in time, to process a second sensor signal indicative of the second array imaging the reference location at a second point in time, and to determine a translation (pullback) speed of the catheter from the fixed distance (D) and the difference between the first point in time and the second point in time. Alternatively, a catheter may be provided comprising an ultrasound transducer array at a distal end of the catheter, and two pressure sensors for determining the translation speed.
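The speed computation itself is a simple ratio; a worked example with illustrative values (the distance and timestamps are not from the abstract):

```python
# Pullback speed from the fixed inter-array distance D and the two times at
# which each array images the same reference location X (illustrative values).
D_MM = 10.0          # fixed distance D between the two transducer arrays, in mm (assumed)
t_first = 2.00       # s: first array images reference location X
t_second = 12.00     # s: second array images the same location X

pullback_speed = D_MM / (t_second - t_first)   # mm/s
print(f"pullback speed = {pullback_speed:.2f} mm/s")
```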
Abstract:
A model-based segmentation system includes a plurality of clusters (48), each cluster being formed to represent an orientation of a target to be segmented. One or more models (140) are associated with each cluster. The one or more models include an aspect associated with the orientation of the cluster, for example, the appearance of the target to be segmented. A comparison unit (124), stored in memory storage media, is configured to compare an ultrasound image to the clusters to determine a closest matching orientation and to select the one or more models based upon the cluster with the closest matching orientation. A model adaptation module (126) is configured to adapt the one or more models to the ultrasound image.
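A sketch of the comparison-and-selection step with assumed names and an assumed similarity measure (normalized cross-correlation against a per-cluster template image); the subsequent model adaptation is not shown.

```python
# Illustrative cluster matching: pick the closest orientation cluster, return its models.
import numpy as np

def normalized_cross_correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def select_model(image, clusters):
    """clusters: mapping of orientation name -> {'template': array, 'models': [...]}."""
    scores = {name: normalized_cross_correlation(image, c["template"])
              for name, c in clusters.items()}
    best = max(scores, key=scores.get)
    return best, clusters[best]["models"], scores

# Hypothetical clusters and a noisy query image matching the first one.
rng = np.random.default_rng(1)
clusters = {
    "apical_4ch":  {"template": rng.random((64, 64)), "models": ["model_4ch"]},
    "parasternal": {"template": rng.random((64, 64)), "models": ["model_psax"]},
}
image = clusters["apical_4ch"]["template"] + 0.1 * rng.random((64, 64))
orientation, models, _ = select_model(image, clusters)
print("closest orientation:", orientation, "-> models to adapt:", models)
```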