Abstract:
Ultrasound imaging devices, systems, and methods are provided. An ultrasound imaging system comprises a processor circuit in communication with an ultrasound probe comprising a transducer array, wherein the processor circuit is configured to: receive, from the ultrasound probe, a first image of a patient's anatomy; detect, from the first image, a first anatomical landmark at a first location along a scanning trajectory of the patient's anatomy; determine, based on the first anatomical landmark, a steering configuration for steering the ultrasound probe towards a second anatomical landmark at a second location along the scanning trajectory; and output, to a display in communication with the processor circuit, an instruction based on the steering configuration to steer the ultrasound probe towards the second anatomical landmark at the second location.
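A minimal sketch of the receive/detect/steer/instruct loop described above is given below. All names (SCAN_PLAN, detect_landmark, steering_instruction) and the trajectory data are illustrative assumptions, not the actual implementation.

import numpy as np

# Hypothetical ordered scanning trajectory: landmark name -> expected position (mm)
SCAN_PLAN = [("landmark_A", np.array([0.0, 0.0, 0.0])),
             ("landmark_B", np.array([25.0, 5.0, 0.0]))]

def detect_landmark(image: np.ndarray, name: str):
    """Placeholder detector; a real system might use a trained segmentation model."""
    return SCAN_PLAN[0][1] if name == "landmark_A" else None

def steering_instruction(current_pos: np.ndarray, target_pos: np.ndarray) -> str:
    """Derive a coarse steering instruction from the offset to the next landmark."""
    delta = target_pos - current_pos
    direction = "left/right" if abs(delta[0]) >= abs(delta[1]) else "up/down"
    return f"Translate probe {np.linalg.norm(delta):.1f} mm ({direction}) toward next landmark."

first_image = np.zeros((128, 128))            # stand-in for a received ultrasound frame
first_loc = detect_landmark(first_image, "landmark_A")
if first_loc is not None:
    print(steering_instruction(first_loc, SCAN_PLAN[1][1]))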
Abstract:
A controller for qualifying image registration includes a memory that stores instructions; and a processor that executes the instructions. When executed by the processor, the instructions cause the controller to execute a process that includes receiving first imagery of a first modality and receiving second imagery of a second modality. The process executed by the controller also includes registering the first imagery of the first modality to the second imagery of the second modality to obtain an image registration. The image registration is subjected to an automated analysis as a qualifying process to qualify the image registration. The image registration is variably qualified when the image registration passes the qualifying process, and is not qualified when the image registration does not pass the qualifying process.
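A sketch of the automated qualifying step follows, under the assumption that a single scalar similarity score (here, normalized cross-correlation) stands in for the analysis described above; the function names and threshold are illustrative.

import numpy as np

def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def qualify_registration(fixed: np.ndarray, registered_moving: np.ndarray,
                         threshold: float = 0.8):
    """Return (qualified, score); qualification may be graded by the score in practice."""
    score = normalized_cross_correlation(fixed, registered_moving)
    return score >= threshold, score

rng = np.random.default_rng(0)
fixed = rng.normal(size=(64, 64))                   # stand-in for first-modality imagery
moving = fixed + 0.1 * rng.normal(size=(64, 64))    # stand-in for registered second modality
print(qualify_registration(fixed, moving))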
Abstract:
An electromagnetic ("EM") tracking configuration system employs an EM quality assurance ("EMQA") system (30) and an EM data coordination ("EMDC") system (70). For the EMQA system (30), an EM sensor block (40) includes EM sensor(s) (22) positioned and oriented to represent a simulated electromagnetic tracking of interventional tool(s) inserted through the EM sensor block (40) into an anatomical region. As an EM field generator (20) generates an EM field (21) encircling the EM sensor(s) (22), an EMQA workstation (50) tests an EM tracking accuracy of an insertion of the interventional tool(s) through the EM sensor block (40) into the anatomical region. For the EMDC system (70), as the EM field generator (20) generates the EM field (21) encircling a mechanical interaction of EM calibration tool(s) (80) with a grid (120) for guiding interventional tool(s) through the grid (120) into an anatomical region, an EMDC workstation (90) establishes a coordination system for electromagnetically tracking an insertion of the interventional tool(s) through the grid (120) into the anatomical region.
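A sketch of the tracking-accuracy check an EMQA workstation might perform is shown below: EM-reported sensor positions are compared against the known geometry of the sensor block. The tolerance and the sample data are illustrative assumptions.

import numpy as np

KNOWN_SENSOR_POSITIONS = np.array([[0.0, 0.0, 0.0],
                                   [10.0, 0.0, 0.0],
                                   [0.0, 10.0, 0.0]])    # mm, from the block design

def em_tracking_error(tracked: np.ndarray, tolerance_mm: float = 1.0):
    """Per-sensor position error and pass/fail against a tolerance."""
    errors = np.linalg.norm(tracked - KNOWN_SENSOR_POSITIONS, axis=1)
    return errors, bool(np.all(errors <= tolerance_mm))

tracked = KNOWN_SENSOR_POSITIONS + np.array([[0.2, -0.1, 0.0],
                                             [0.3, 0.1, -0.2],
                                             [-0.1, 0.4, 0.1]])
errors, passed = em_tracking_error(tracked)
print(errors.round(2), "PASS" if passed else "FAIL")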
Abstract:
The invention relates to a determination apparatus for determining the pose and shape of an introduction element like a catheter within a living being, wherein the introduction element (12) is adapted to be used by a brachytherapy apparatus for introducing a radiation source (10) close to a target object (11) to be treated. A position determination element (27), like a guidewire (20) with an electromagnetic tracking element (16), is introduced into the introduction element (12) such that it is arranged at different locations within the introduction element (12), wherein the positions of the position determination element (27) within the introduction element (12) are determined. The pose and shape of the introduction element within the living being are then determined depending on the determined positions. This can lead to a determination procedure with reduced user interaction, thereby simplifying the determination procedure for the user.
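The sketch below illustrates reconstructing the shape of an introduction element from positions of a tracked element recorded at different insertion depths; a simple piecewise-linear resampling stands in for whatever curve model is actually used, and the sample depths and positions are assumptions.

import numpy as np

# (insertion depth in mm, measured 3-D position in mm) -- illustrative samples
depths = np.array([0.0, 20.0, 40.0, 60.0])
positions = np.array([[0.0, 0.0, 0.0],
                      [1.0, 19.5, 2.0],
                      [3.0, 38.0, 5.0],
                      [6.0, 55.0, 9.0]])

def reconstruct_shape(depths, positions, n_samples: int = 50) -> np.ndarray:
    """Resample the measured positions onto a dense set of depths along the element."""
    dense = np.linspace(depths.min(), depths.max(), n_samples)
    return np.stack([np.interp(dense, depths, positions[:, k]) for k in range(3)], axis=1)

shape = reconstruct_shape(depths, positions)
print(shape.shape)          # (50, 3): densely sampled centerline of the introduction element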
Abstract:
A system, apparatus and method for mesh registration including: an extraction of a preoperative anatomical mesh from a preoperative anatomical image based on a base topology of an anatomical mesh template; an extraction of an intraoperative anatomical mesh from an intraoperative anatomical image based on a preoperative topology of the preoperative anatomical mesh derived from the base topology of the anatomical mesh template; and a registration of the preoperative anatomical image and the intraoperative anatomical image based on a mapping correspondence between the preoperative anatomical mesh and the intraoperative anatomical mesh, established by an intraoperative topology of the intraoperative anatomical mesh derived from the preoperative topology of the preoperative anatomical mesh.
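The sketch below illustrates the mapping-correspondence idea: because the intraoperative mesh inherits the preoperative mesh's topology, vertex i in one mesh corresponds to vertex i in the other, so a transform between the two image spaces can be estimated directly. A rigid (Kabsch) fit is used here purely for illustration; the real registration may be non-rigid.

import numpy as np

def rigid_fit(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t mapping src vertices onto dst."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

preop_vertices = np.random.default_rng(1).normal(size=(100, 3))     # shared topology
intraop_vertices = preop_vertices + np.array([5.0, 0.0, 0.0])        # same vertex order
R, t = rigid_fit(preop_vertices, intraop_vertices)
print(np.round(t, 2))       # recovered translation between the two image spaces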
Abstract:
The invention relates to a brachytherapy apparatus (1) for applying brachytherapy to a living object. The brachytherapy apparatus comprises a planning unit (14) for determining a placing plan defining placing positions and placing times for one or several radiation sources within the living object and close to a target region. The placing plan is determined such that the placing times are within a treatment time window determined by a treatment time window determination unit (13), wherein within the treatment time window a change of a spatial parameter of the living object caused by swelling is minimized. An adverse influence on the brachytherapy due to swelling can thereby be minimized, which improves the quality of the brachytherapy.
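A sketch of the window-selection step follows: given a predicted swelling curve, find the window of a given length over which the change of the swelling-related spatial parameter is smallest. The swelling curve and window length are illustrative assumptions.

import numpy as np

hours = np.arange(0, 48)                               # time after implantation (h)
swelling_mm = 5.0 * np.exp(-hours / 12.0)              # assumed decaying swelling curve

def best_window(values: np.ndarray, window_len: int):
    """Start and end index of the window with the smallest absolute change."""
    changes = [abs(values[i + window_len - 1] - values[i])
               for i in range(len(values) - window_len + 1)]
    start = int(np.argmin(changes))
    return start, start + window_len - 1

start, end = best_window(swelling_mm, window_len=6)
print(f"Plan placing times within hours {start}-{end} after implantation.")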
Abstract:
A system and method include a shape sensing enabled device (120) including one or more imaging devices (202), the shape sensing enabled device being coupled to at least one optical fiber (122). A shape sensing module (132) is configured to receive optical signals from the at least one optical fiber within a structure and interpret the optical signals to determine a shape of the shape sensing enabled device. A device positioning module (134) is configured to determine position information of the one or more imaging devices based upon one or more relationships between the at least one optical fiber and the one or more imaging devices. A mapping module (136) is configured to register frames of reference of the at least one optical fiber, the shape sensing enabled device, and a mapping system of a target device (124) to provide an adjusted position of the target device based on the position information.
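The sketch below shows one way the frame-of-reference registrations could be chained as 4x4 homogeneous transforms to report a target-device position in the optical fiber's frame; the transform values and frame names are illustrative assumptions.

import numpy as np

def make_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Assumed registrations: fiber <- device and device <- target mapping system
T_fiber_device = make_transform(np.eye(3), np.array([0.0, 0.0, 10.0]))
T_device_map = make_transform(np.eye(3), np.array([2.0, 1.0, 0.0]))

def adjusted_position(p_map: np.ndarray) -> np.ndarray:
    """Map a target position from the mapping system's frame into the fiber frame."""
    p = np.append(p_map, 1.0)
    return (T_fiber_device @ T_device_map @ p)[:3]

print(adjusted_position(np.array([5.0, 0.0, 0.0])))    # -> [ 7.  1. 10.]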
Abstract:
An intervention instrument (60) employs a shaft (61) and a shaft tracker (62) partially or completely encircling the shaft (61) and movable to a primary tracking position along the shaft (61) between a distal tip and a proximal hub of the shaft (61). The primary tracking position is derived from a distance from an entry point of the distal tip into an anatomical region to a target location of the distal tip within the anatomical region. The shaft tracker (62) includes a primary position sensor (63) operable for tracking the shaft tracker (62) relative to the anatomical region at or offset from the primary tracking position.
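A small sketch of deriving the primary tracking position along the shaft from the entry-point-to-target distance follows; the clearance margin and function name are illustrative assumptions rather than the instrument's actual rule.

def primary_tracking_position(shaft_length_mm: float,
                              entry_to_target_mm: float,
                              clearance_mm: float = 10.0) -> float:
    """Distance from the distal tip at which to place the shaft tracker."""
    # Insertion depth when the tip reaches the target, plus an assumed clearance,
    # capped at the shaft length between distal tip and proximal hub.
    return min(shaft_length_mm, entry_to_target_mm + clearance_mm)

print(primary_tracking_position(shaft_length_mm=150.0, entry_to_target_mm=80.0))   # 90.0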
Abstract:
Various embodiments of the present disclosure include a thermal ablation probabilistic controller (30) employing an ablation probability model (32) trained to render a pixel ablation probability for each pixel of an ablation scan image illustrative of a static anatomical ablation. In operation, the thermal ablation probabilistic controller (30) spatially aligns a temporal sequence of ablation scan datasets representative of a dynamic anatomical ablation, and applies the ablation probability model (32) to the spatial alignment of the temporal sequence of ablation scan datasets to render the pixel ablation probability for each pixel of the ablation scan image illustrative of the static anatomical ablation.
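The sketch below separates the two stages described above: spatially aligning a temporal sequence of ablation scans (a simple integer-shift alignment stands in for whatever registration is used) and applying a per-pixel probability model (a logistic function of the aligned intensities, purely as an illustrative stand-in for the trained ablation probability model).

import numpy as np

def align_to_reference(frames: np.ndarray, shifts: list) -> np.ndarray:
    """Undo known integer (row, col) shifts so all frames share one pixel grid."""
    return np.stack([np.roll(f, (-dr, -dc), axis=(0, 1))
                     for f, (dr, dc) in zip(frames, shifts)])

def pixel_ablation_probability(aligned: np.ndarray) -> np.ndarray:
    """Logistic model on the temporal mean intensity of each pixel."""
    mean_intensity = aligned.mean(axis=0)
    return 1.0 / (1.0 + np.exp(-(mean_intensity - 0.5) * 10.0))

rng = np.random.default_rng(2)
frames = rng.random((4, 32, 32))                    # stand-in ablation scan datasets
prob_map = pixel_ablation_probability(align_to_reference(frames, [(0, 0)] * 4))
print(prob_map.shape, float(prob_map.max()))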
Abstract:
The present disclosure describes systems configured to recognize indicators of a medical condition within a diagnostic image and predict the progression of the medical condition based on the recognized indicators. The systems can include neural networks trained to extract disease features from diagnostic images and neural networks configured to model the progression of such features at future time points selectable by a user. Modeling the progression may involve factoring in various treatment options and patient-specific information. The predicted outcomes can be displayed on a user interface customized to specific representations of the predicted outcomes generated by one or more of the underlying neural networks. Representations of the predicted outcomes include synthesized future images, probabilities of clinical outcomes, and/or descriptors of disease features that may be likely to develop over time.
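A sketch of the prediction step alone is given below: given disease features extracted from a diagnostic image, estimate a clinical-outcome probability at a user-selected future time point under a chosen treatment. The logistic model, treatment effects, and all coefficients are illustrative stand-ins for the trained neural networks described above.

import numpy as np

TREATMENT_EFFECT = {"none": 0.0, "drug_A": -0.4, "surgery": -0.8}    # assumed values

def predict_outcome_probability(features: np.ndarray, months_ahead: float,
                                treatment: str = "none") -> float:
    """Probability of disease progression at a future time point (illustrative model)."""
    severity = float(features.mean())
    logit = severity + 0.05 * months_ahead + TREATMENT_EFFECT[treatment]
    return 1.0 / (1.0 + np.exp(-logit))

features = np.array([0.3, 0.7, 0.5])          # stand-in extracted disease features
print(predict_outcome_probability(features, months_ahead=12, treatment="drug_A"))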