Abstract:
A system and method for providing image guidance for placement of one or more medical devices at a target location. The system can determine one or more intersections between a medical device and an image region based at least in part on first emplacement data and second emplacement data. Using the determined intersections, the system can cause one or more displays to display perspective views of image guidance cues, including an intersection indicator in a virtual 3D space.
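The intersection determination described above can be illustrated with a minimal sketch (all names here are hypothetical; the abstract does not specify an algorithm): a tracked medical device is modeled as a ray from its tip along its axis, and the image region is modeled as a plane, with the intersection found by a standard ray-plane test.

```python
import numpy as np

def ray_plane_intersection(ray_origin, ray_dir, plane_point, plane_normal):
    """Return the point where a ray meets a plane, or None if there is none.

    The ray stands in for a tracked medical device (first emplacement
    data); the plane stands in for the image region, e.g. an ultrasound
    slice (second emplacement data).
    """
    ray_origin = np.asarray(ray_origin, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)
    denom = ray_dir.dot(plane_normal)
    if abs(denom) < 1e-9:          # device axis parallel to the image plane
        return None
    t = (np.asarray(plane_point) - ray_origin).dot(plane_normal) / denom
    if t < 0:                      # intersection lies behind the device tip
        return None
    return ray_origin + t * ray_dir

# needle at the origin pointing along +z; image plane at z = 5
hit = ray_plane_intersection([0, 0, 0], [0, 0, 1], [0, 0, 5], [0, 0, 1])
```

In a rendered scene, `hit` would be where an intersection indicator is drawn in the virtual 3D space.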
Abstract:
A system and method for image guidance providing improved perception of a display object in a rendered scene for medical device navigation. The system can receive emplacement information associated with a medical device and determine an emplacement of a display object associated with the medical device. The system can further identify a selected surface of the display object and cause a display to display a selective-transparency rendering of the selected surface.
Abstract:
A system and method for providing image guidance for planning approach paths for one or more medical devices at a target location. The system can receive a volumetric medical image, determine a density of content within the volumetric medical image, receive an indication of a target location within the volumetric medical image, identify obstructing objects within the volumetric medical image, and determine a plurality of pathways from an approach region of the volumetric medical image to the target location. The system can cause a display to concurrently display the volumetric medical image and a plurality of emplacements for one or more medical imaging devices.
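One way to picture the pathway determination above is as a screening step: candidate entry points in the approach region are kept only if the straight-line path to the target misses every obstructing object. The sketch below (hypothetical names; obstacles simplified to spheres) is an illustration of that idea, not the patented method.

```python
import numpy as np

def clear_pathways(entry_points, target, obstacles, margin=0.0):
    """Return entry points whose straight path to `target` misses all obstacles.

    `obstacles` is a list of (center, radius) spheres standing in for
    obstructing objects (e.g. bone or vessels) identified in the volume.
    """
    target = np.asarray(target, dtype=float)
    clear = []
    for entry in entry_points:
        entry = np.asarray(entry, dtype=float)
        d = target - entry
        length_sq = d.dot(d)
        blocked = False
        for center, radius in obstacles:
            # closest point on the entry->target segment to the obstacle
            t = np.clip(np.dot(np.asarray(center) - entry, d) / length_sq, 0.0, 1.0)
            closest = entry + t * d
            if np.linalg.norm(np.asarray(center) - closest) < radius + margin:
                blocked = True
                break
        if not blocked:
            clear.append(entry)
    return clear

entries = [np.array([0., 0., 0.]), np.array([10., 0., 0.])]
target = [5., 10., 0.]
obstacles = [(np.array([2.5, 5., 0.]), 1.0)]   # lies on the first path
paths = clear_pathways(entries, target, obstacles)
```

Here only the second entry point survives, since the obstacle sphere sits directly on the first path.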
Abstract:
Presented herein are methods, systems, devices, and computer-readable media for image guided surgery. The systems herein allow a physician to use multiple instruments for a surgery and simultaneously provide image-guidance data for those instruments. Various embodiments disclosed herein provide information to physicians about procedures they are performing, the devices (such as ablation needles, ultrasound wands or probes, scalpels, cauterizers, etc.) they are using during the procedure, the relative emplacements or poses of these devices, prediction information for those devices, and other information. Some embodiments provide useful information about 3D data sets. Additionally, some embodiments provide for quickly calibratable surgical instruments or attachments for surgical instruments.
Abstract:
A system and method for providing image guidance for placement of one or more medical devices at a target location. The system can be used to determine one or more affected regions corresponding to the operation of one or more medical devices and display at least a portion of the one or more affected regions. The affected regions can correspond to predicted affected regions and/or dynamic affected regions and can be based at least in part on a variance parameter of the medical device.
Abstract:
Presented herein are methods, systems, devices, and computer-readable media for image annotation in image-guided medical procedures. Some embodiments herein allow physicians or other operators to use one or more medical devices in order to define annotations in 3D space. These annotations may later be displayed to the physician or operator in 3D space in the position in which they were first drawn or otherwise generated. In some embodiments, the operator may use various available medical devices, such as needles, scalpels, or even a finger in order to define the annotation. Embodiments herein may allow an operator to more conveniently and efficiently annotate visualizable medical data.
Abstract:
A system and method for providing image guidance for placement of one or more medical devices at a target location. The system can be used to display portions of a display object, such as a rendered medical image, at different transparency levels. The system can also be used to resolve co-located display objects, such as a co-located image guidance cue and rendered medical image. The system can further be used to adjust a point-of-view location for one or more medical display objects within a virtual 3D space.
Abstract:
Presented herein are methods, systems, and computer-readable media for presenting imaging data related to an anatomical site. These include obtaining a first set of imaging data related to the anatomical site and to tracking units at the anatomical site and, thereafter, optionally, obtaining a second set of imaging data related to the anatomical site. A deformed version of the first set of imaging data is then determined based on the relative arrangements of one or more of the tracking units at the time the first set of imaging data is obtained and at the time the second set of imaging data is obtained. The relative emplacements of the second set of imaging data and of the deformed version of the first set of imaging data are then determined and used, along with the second set of imaging data and the deformed version of the first set of imaging data, as a basis for displaying image guidance data.
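A deformation driven by tracking-unit displacements, as described above, can be sketched very simply: each image point is warped by a blend of the trackers' observed movements between the two acquisitions. The inverse-distance weighting below is a hypothetical stand-in for whatever deformation model an implementation would actually use.

```python
import numpy as np

def deform_points(points, trackers_t0, trackers_t1, eps=1e-6):
    """Warp image-space points using tracking-unit displacements.

    Each tracking unit moved from trackers_t0[i] (first acquisition) to
    trackers_t1[i] (second acquisition); every point is displaced by an
    inverse-distance-weighted blend of those displacements.
    """
    t0 = np.asarray(trackers_t0, dtype=float)
    disp = np.asarray(trackers_t1, dtype=float) - t0
    warped = []
    for p in np.asarray(points, dtype=float):
        d = np.linalg.norm(t0 - p, axis=1)
        w = 1.0 / (d + eps)        # nearer trackers influence the point more
        w /= w.sum()
        warped.append(p + w @ disp)
    return np.array(warped)

# both trackers shift +2 in x, so every point shifts +2 in x
t0 = [[0., 0., 0.], [10., 0., 0.]]
t1 = [[2., 0., 0.], [12., 0., 0.]]
out = deform_points([[5., 5., 0.]], t0, t1)
```

The deformed first image set can then be displayed in registration with the second set.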
Abstract:
A system and method for providing image guidance for placement of one or more medical devices at a target location. The system receives emplacement information of a plurality of medical devices within a predetermined area. The system calculates a viewing angle in a virtual 3D space for a plurality of virtual medical devices corresponding to the plurality of medical devices. The system also causes a display device to display the plurality of virtual medical devices based at least in part on the calculated viewing angle.
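One plausible reading of the viewing-angle calculation above is the angle between the viewer's line of sight to a virtual medical device and that device's axis; the sketch below (hypothetical names, not the patented formulation) computes exactly that.

```python
import numpy as np

def viewing_angle_deg(camera_pos, device_pos, device_axis):
    """Angle, in degrees, between the camera's line of sight to a virtual
    device and the device's own axis."""
    sight = np.asarray(device_pos, dtype=float) - np.asarray(camera_pos, dtype=float)
    sight /= np.linalg.norm(sight)
    axis = np.asarray(device_axis, dtype=float)
    axis /= np.linalg.norm(axis)
    cos_ang = np.clip(sight.dot(axis), -1.0, 1.0)
    return np.degrees(np.arccos(cos_ang))

# camera above on +y, looking down at a device pointing along +z
ang = viewing_angle_deg([0, 10, 0], [0, 0, 0], [0, 0, 1])
```

A renderer could use such an angle to orient or re-pose the virtual devices on the display.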
Abstract:
Disclosed are a system and method of providing composite real-time dynamic imagery of a medical procedure site from multiple modalities which continuously and immediately depicts the current state and condition of the medical procedure site synchronously with respect to each modality and without undue latency. The composite real-time dynamic imagery may be provided by spatially registering multiple real-time dynamic video streams from the multiple modalities to each other. Spatially registering the multiple real-time dynamic video streams to each other may provide a continuous and immediate depiction of the medical procedure site with an unobstructed and detailed view of a region of interest at the medical procedure site at multiple depths. As such, a surgeon, or other medical practitioner, may view a single, accurate, and current composite real-time dynamic image of a region of interest at the medical procedure site as he or she performs a medical procedure, and thereby may properly and effectively implement the medical procedure.
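Spatial registration of two modalities can be sketched with homogeneous transforms: if each stream's pose is tracked in a common world frame, the transform taking modality-A coordinates into modality-B coordinates is a simple composition. The names below are hypothetical; this is an illustration of the registration idea, not the disclosed system.

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def register_a_to_b(T_world_a, T_world_b):
    """Transform mapping modality-A coordinates into modality-B coordinates,
    given each stream's tracked pose in a shared (tracker) world frame."""
    return np.linalg.inv(T_world_b) @ T_world_a

# modality A's frame sits +1 in x from the world origin; B coincides with world
T_wa = make_pose(np.eye(3), [1., 0., 0.])
T_wb = make_pose(np.eye(3), [0., 0., 0.])
T_ba = register_a_to_b(T_wa, T_wb)

p_a = np.array([0., 0., 0., 1.])   # a point at the origin of A's frame
p_b = T_ba @ p_a                   # the same point expressed in B's frame
```

Applying such a transform per frame keeps the two video streams aligned as the devices move.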