Abstract:
The following relates generally to systems and methods of transesophageal echocardiography (TEE) automation. Some aspects relate to a TEE probe with ultrasonic transducers on a distal end of the TEE probe. In some implementations, if a target is in the field of view (FOV) of the ultrasonic transducers, the probe's electronic beam steering is adjusted; if the target is at an edge of the FOV, both the electronic beam steering and the probe's mechanical joints are adjusted; and if the target is not in the FOV, only the mechanical joints of the probe are adjusted.
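The three-way control policy described above can be sketched as a simple dispatch on the target's FOV state. All names here are illustrative, not taken from the patent:

```python
from enum import Enum, auto

class TargetState(Enum):
    """Where the target lies relative to the transducers' field of view."""
    IN_FOV = auto()
    EDGE_OF_FOV = auto()
    OUT_OF_FOV = auto()

def select_adjustments(state):
    """Map the target's FOV state to the set of probe actuations to apply:
    electronic steering alone, both mechanisms, or mechanical joints alone."""
    if state is TargetState.IN_FOV:
        return {"electronic_beam_steering"}
    if state is TargetState.EDGE_OF_FOV:
        return {"electronic_beam_steering", "mechanical_joints"}
    return {"mechanical_joints"}
```

The split reflects the abstract's logic: fine, fast corrections are electronic; gross repositioning requires the mechanical joints.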
Abstract:
An OSS guiding and monitoring system employs an interventional device (40) including an integration of an OSS sensor (20) and one or more interventional tools (30), the OSS sensor (20) for generating shape sensing data informative of a shape of the OSS sensor (20) as the interventional device (40) is navigated within an anatomical region. The OSS guiding and monitoring system further employs an OSS guiding controller (90) for controlling a reconstruction of a shape of the interventional device (40) within the anatomical region responsive to a generation of the shape sensing data by the OSS sensor (20), and an OSS monitoring controller (100) for controlling a monitoring of a degree of folding and/or a degree of twisting of the interventional device (40) within the anatomical region.
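One plausible way a monitoring controller could quantify a degree of folding from reconstructed shape points is the largest bend angle between successive segments of the device; this is a hypothetical metric, not the patent's method:

```python
import numpy as np

def max_fold_angle(points):
    """Largest bend angle (radians) between consecutive segments of a
    reconstructed device shape given as an (N x 3) array of points."""
    p = np.asarray(points, dtype=float)
    seg = np.diff(p, axis=0)                                 # segment vectors
    unit = seg / np.linalg.norm(seg, axis=1, keepdims=True)  # unit directions
    cos_ang = np.clip(np.einsum("ij,ij->i", unit[:-1], unit[1:]), -1.0, 1.0)
    return float(np.arccos(cos_ang).max())
```

A straight device yields 0; a fold approaching 180 degrees would flag excessive folding.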
Abstract:
A system and method for determining the position of a non-shape-sensed guidewire (102) and for visualizing the guidewire. The system includes a shape-sensed catheter (104) having a lumen (103) that is configured to receive the non-shape-sensed guidewire. A measurement module (122) is configured to measure a distance that the non-shape-sensed guidewire moves. The measurement module may receive signals from a sensor (124) associated with a measurement assembly that is configured to receive at least a portion of the non-shape-sensed guidewire and/or the shape-sensed catheter. A location module (126) is configured to determine a position of the non-shape-sensed guidewire. The system is configured to generate a virtual image (101) of the guidewire, including a portion of the non-shape-sensed guidewire that does not extend along a shape-sensing optical fiber.
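A minimal sketch of how a location module might estimate the unsensed guidewire tip: extrapolate from the shape-sensed catheter's distal tip along its distal direction by the advance distance reported by the measurement module. The function name and interface are assumptions for illustration:

```python
import numpy as np

def guidewire_tip_position(catheter_tip, tip_direction, advance_mm):
    """Estimate the non-shape-sensed guidewire tip by extrapolating from the
    shape-sensed catheter tip along its distal direction by the measured
    advance distance (in the same units as the positions)."""
    d = np.asarray(tip_direction, dtype=float)
    d = d / np.linalg.norm(d)  # normalize the distal direction
    return np.asarray(catheter_tip, dtype=float) + advance_mm * d
```

The returned point could then be appended to the sensed shape to render the virtual image of the protruding guidewire segment.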
Abstract:
A system and method for registering a structure (104) of a subject (103) to a coordinate system which includes a deformable registration device (102) having a deformable body (105) that features an optical fiber (108). A FORS system (110) measures a shape of the optical fiber when the deformable body contacts the structure of the subject to obtain positional points (106) for the structure. A pre-processing module (112) analyzes the positional points and determines which positional points are on-surface and off-surface points with respect to the structure. A registration module (118) deletes the off-surface positional points and registers the device to the structure using the on-surface positional points. Positional points (142) from a shape (137) generated on a model, which approximates points that the deformable registration device would measure on the structure, may be combined with the positional points (106) acquired from the deformable body to provide an improved registration.
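The on-surface/off-surface split performed by the pre-processing module could be sketched as thresholding each measured point against a signed-distance description of the structure's surface; the interface below is hypothetical:

```python
import numpy as np

def split_surface_points(points, signed_distance, tol=1.0):
    """Partition measured positional points into on-surface and off-surface
    sets. `signed_distance` is an assumed callable giving each point's signed
    distance to the structure's surface; points within `tol` are kept as
    on-surface, the rest are discarded before registration."""
    pts = np.asarray(points, dtype=float)
    dist = np.abs(np.array([signed_distance(p) for p in pts]))
    return pts[dist <= tol], pts[dist > tol]
```

Only the first returned set would be passed on to the registration module; a rigid or deformable registration is then fit to those points alone.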
Abstract:
An ultrasound device (10) comprises a probe (12) including a tube (14) sized for in vivo insertion into a patient and an ultrasound transducer (18) disposed at a distal end (16) of the tube. A camera (20) is mounted at the distal end of the tube in a spatial relationship to the ultrasound transducer. At least one electronic processor (28) is programmed to: control the ultrasound transducer and the camera to acquire ultrasound images (19) and camera images (21) respectively while the ultrasound transducer is disposed in vivo; construct keyframes (36) during in vivo movement of the ultrasound transducer, each keyframe representing an in vivo position of the ultrasound transducer and including at least ultrasound image features (38) extracted from at least one of the ultrasound images acquired at the in vivo position of the ultrasound transducer and camera image features (40) extracted from at least one of the camera images acquired at the in vivo position of the ultrasound transducer; generate a navigation map (45) of the in vivo movement of the ultrasound transducer comprising the keyframes; and output navigational guidance (49) based on comparison of current ultrasound and camera images acquired by the ultrasound transducer and camera with the navigation map.
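Navigational guidance against a keyframe map can be sketched as nearest-neighbor lookup in the combined ultrasound/camera feature space; the data layout and distance measure below are illustrative assumptions:

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class Keyframe:
    """One node of the navigation map (field names are illustrative)."""
    label: str
    us_features: np.ndarray   # features extracted from the ultrasound image here
    cam_features: np.ndarray  # features extracted from the camera image here

def nearest_keyframe(nav_map, us_now, cam_now):
    """Return the keyframe whose combined ultrasound + camera feature distance
    to the currently acquired images is smallest."""
    def distance(kf):
        return (np.linalg.norm(kf.us_features - us_now)
                + np.linalg.norm(kf.cam_features - cam_now))
    return min(nav_map, key=distance)
```

The matched keyframe's recorded in vivo position would anchor the guidance output, e.g. directing the operator back toward a previously visited view.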
Abstract:
A controller (122) includes a memory (12220) that stores instructions and a processor (12210) that executes the instructions. When executed, the instructions cause the controller (122) to implement a process that includes obtaining (S405) pre-operative imagery of the tissue in a first modality, registering (S425) the pre-operative imagery of the tissue in the first modality with a set of sensors (195 - 199) adhered to the tissue, and receiving (S435), from the set of sensors (195 - 199), sets of electronic signals for positions of the set of sensors (195 - 199). The process also includes computing (S440) geometry of the positions of the set of sensors (195 - 199) for each set of the sets of electronic signals and computing (S450) movement of the set of sensors (195 - 199) based on changes in the geometry of the positions of the set of sensors (195 - 199) between sets of electronic signals from the set of sensors (195 - 199). The pre-operative imagery is updated to reflect changes in the tissue based on movement of the set of sensors (195 - 199).
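The geometry and movement computations could be sketched as follows: describe each signal set's geometry by the inter-sensor distance matrix, then measure tissue movement as the change in that matrix between signal sets (a rigid shift of all sensors leaves it unchanged). This is an illustrative formulation, not the patented one:

```python
import numpy as np

def sensor_geometry(positions):
    """Pairwise inter-sensor distance matrix for one set of reported positions."""
    p = np.asarray(positions, dtype=float)
    return np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)

def geometry_change(positions_prev, positions_curr):
    """Largest change in any inter-sensor distance between two signal sets:
    zero for rigid motion of the whole set, positive when the tissue deforms."""
    return float(np.abs(sensor_geometry(positions_curr)
                        - sensor_geometry(positions_prev)).max())
```

A nonzero change would drive the update of the registered pre-operative imagery.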
Abstract:
A registration system for medical navigation includes a shape sensing device (SSD) (104, 504) having at least one sensor (450, 505) for providing corresponding sensor information (SI) indicative of at least a position of the at least one sensor (450, 505); a registration fixture (106) having a channel (130) configured to receive at least part of the SSD and defining a registration path (P). The registration fixture may be configured to be attached to a registrant object (RO) (119) defining a workspace. A controller (110) may be configured to: sense a shape of a path traversed by the SSD based upon the SI when the at least one sensor is situated within the channel (130), determine whether the sensed shape of the path corresponds with a known shape selected from one or more known shapes, and perform a coordinate registration based upon the determination.
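The controller's shape-matching step could be sketched as comparing the sensed path against each known template by RMS point distance, assuming the paths have been resampled to the same number of points; everything here is a hypothetical stand-in for the patented determination:

```python
import numpy as np

def match_known_shape(sensed_path, known_shapes, tol=2.0):
    """Return the name of the known shape closest to the sensed path
    (N x 3, assumed resampled to match the templates), or None if no
    template is within `tol` RMS distance."""
    sensed = np.asarray(sensed_path, dtype=float)
    best_name, best_rms = None, float("inf")
    for name, template in known_shapes.items():
        diff = sensed - np.asarray(template, dtype=float)
        rms = float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
        if rms < best_rms:
            best_name, best_rms = name, rms
    return best_name if best_rms <= tol else None
```

A successful match would tell the controller which registration fixture (and hence which registration path) the SSD traversed, triggering the coordinate registration.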
Abstract:
A triggering device includes an optical fiber (126) configured for optical shape sensing. A supporting element (104) is configured to support a portion of the optical fiber. An interface element (106) is configured to interact with the optical fiber associated with the supporting element to cause a change in a property of the fiber. A sensing module (115) is configured to interpret an optical signal to determine changes in the property of the fiber and accordingly generate a corresponding trigger signal.
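The sensing module's behavior reduces to thresholding a change in the interpreted fiber property; a minimal sketch, with the baseline/threshold interface assumed for illustration:

```python
def make_trigger(baseline, threshold):
    """Return a hypothetical sensing-module check: given one interpreted
    fiber-property sample (e.g. local strain at the supported portion),
    report whether the deviation from baseline is large enough that a
    trigger signal should be generated."""
    def check(sample):
        return abs(sample - baseline) > threshold
    return check
```

In practice the interface element's interaction with the fiber (pressing, bending) produces the property change that this check converts into a trigger signal.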
Abstract:
The following relates generally to systems and methods of transesophageal echocardiography (TEE) automation. Some aspects relate to a TEE probe with ultrasonic transducers on a distal end of the TEE probe. In other aspects, the described techniques are employed in intracardiac echo (ICE) probes, intravascular ultrasound (IVUS) probes, or the like. In some implementations, probe behavior states are identified based on probe gear motion, image motion, and measured tissue elasticity. Identified behavior states are displayed on a user interface, and probe control is performed based on the current behavior state.
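A toy rule set for the behavior-state identification, combining the three cues named above; the states and rules are entirely illustrative and not the patented classifier:

```python
def classify_probe_state(gear_motion, image_motion, stiff_tissue):
    """Infer a probe behavior state from whether the probe gears are moving,
    whether the image content is moving, and whether the contacted tissue
    measures as stiff (all inputs are simplified booleans)."""
    if not gear_motion and not image_motion:
        return "idle"
    if gear_motion and not image_motion:
        # gears turn but the view does not change: the probe is not advancing
        return "buckling" if stiff_tissue else "slipping"
    if not gear_motion and image_motion:
        return "external_motion"
    return "advancing"
```

The identified state would be shown on the user interface and used to gate or adapt probe control, per the abstract.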
Abstract:
Various embodiments of the present disclosure encompass a manipulative endoscopic guidance device employing an endoscopic viewing controller (20) for controlling a display of an endoscopic view (11) of an anatomical structure, and a manipulative guidance controller (30) for controlling a display of one or more guided manipulation anchors (50-52) within the display of the endoscopic view (11) of the anatomical structure. A guided manipulation anchor (50-52) is representative of a location marking and/or a motion directive of a guided manipulation of the anatomical structure (e.g., grasping, pulling, pushing, sliding, reorienting, tilting, removing, or repositioning of the anatomical structure). The manipulative guidance controller (30) may generate an anchor by analyzing a correlation of the endoscopic view (11) of the anatomical structure with a knowledge base of image(s), model(s) and/or details corresponding to the anatomical structure and by deriving the anchor based on a degree of correlation of the endoscopic view (11) of the anatomical structure with the knowledge base.
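The anchor-derivation step could be sketched as thresholding correlation scores between the current view and knowledge-base entries; the scores and threshold here are hypothetical placeholders for whatever correlation measure an embodiment uses:

```python
def derive_anchors(view_correlations, min_correlation=0.8):
    """Keep a guided-manipulation anchor for each knowledge-base entry whose
    correlation score with the current endoscopic view is high enough.
    `view_correlations` is an assumed list of (entry_name, score) pairs."""
    return [name for name, score in view_correlations if score >= min_correlation]
```

Surviving anchors would then be overlaid on the endoscopic view as location markings or motion directives.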