Abstract:
A method for enabling distributed compilation of a multi-layered ultrasound data record for a patient, through a system having multiple input devices at different locations, each having access to a database, and each able to contribute to the ultrasound data record in accordance with a certain privilege or access status.
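The abstract describes privilege-gated contributions to a shared, layered record. Below is a minimal sketch of that idea; the record layout, the privilege levels, and the contribute helper are illustrative assumptions, not the patented method.

```python
from dataclasses import dataclass, field
from enum import Enum

class Access(Enum):
    READ = 1        # may view the record only
    ANNOTATE = 2    # may add notes to existing layers
    AUTHOR = 3      # may add new imaging/report layers

@dataclass
class UltrasoundRecord:
    patient_id: str
    layers: list = field(default_factory=list)  # each layer: (device_id, kind, payload)

def contribute(record, device_id, access, kind, payload):
    """Append a layer if the device's access status permits it (illustrative rule)."""
    if access is Access.READ:
        raise PermissionError(f"{device_id} has read-only access")
    if access is Access.ANNOTATE and kind != "note":
        raise PermissionError(f"{device_id} may only add notes")
    record.layers.append((device_id, kind, payload))

record = UltrasoundRecord("patient-001")
contribute(record, "scanner-A", Access.AUTHOR, "b-mode image", "<image bytes>")
contribute(record, "clinic-B", Access.ANNOTATE, "note", "Follow-up in 6 weeks")
print(record.layers)
```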
Abstract:
A graphical ultrasound user evaluation tool is described. The evaluation tool employs a predictive model and log files recorded by the ultrasound scanner to determine one or more ultrasound user performance scores. The log files may be processed to extract actual (or recorded) performance metrics from the information (e.g., timed events such as button clicks) recorded in the log file, which are compared against predicted (or expected) metrics to determine the one or more performance scores. The predicted metrics may be obtained from a predictive model, which may be implemented by an analytical (e.g., regression or other) model or by a trained neural network. The ultrasound user performance scores are then graphically presented in a user-friendly manner, e.g., on a graphical dashboard which, responsive to user input, can provide a summary screen and further detailed reports or screens, and/or update the scores based on comparison with a user-specified ultrasound user experience level.
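A minimal sketch of the score computation follows, assuming the log file yields timestamped button-click events and the predictive model returns an expected task duration for a given experience level. The names (parse_events, predicted_duration) and the toy linear model are assumptions, not the actual tool.

```python
from datetime import datetime

def parse_events(log_lines):
    """Extract (timestamp, event) pairs from 'ISO-time,event' log lines."""
    events = []
    for line in log_lines:
        stamp, event = line.strip().split(",", 1)
        events.append((datetime.fromisoformat(stamp), event))
    return events

def actual_duration(events, start="exam_start", end="exam_end"):
    """Recorded metric: seconds between the start and end events."""
    times = {e: t for t, e in events}
    return (times[end] - times[start]).total_seconds()

def predicted_duration(experience_years):
    """Stand-in for the predictive model (could be regression or a network)."""
    return 300.0 - 20.0 * min(experience_years, 10)   # toy linear model

log = ["2024-05-01T10:00:00,exam_start",
       "2024-05-01T10:03:30,freeze_button",
       "2024-05-01T10:04:10,exam_end"]
events = parse_events(log)
actual = actual_duration(events)
expected = predicted_duration(experience_years=5)
score = expected / actual          # >1 means faster than predicted
print(f"actual={actual:.0f}s expected={expected:.0f}s score={score:.2f}")
```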
Abstract:
A steerable multi-plane ultrasound imaging system (MPUIS) for steering a plurality of intersecting image planes (PL1..n) of a beamforming ultrasound imaging probe (BUIP) based on ultrasound signals transmitted between the beamforming ultrasound imaging probe (BUIP) and an ultrasound transducer (S) disposed within a field of view (FOV) of the probe (BUIP). An ultrasound tracking system (UTS) causes the beamforming ultrasound imaging probe (BUIP) to adjust an orientation of the first image plane (PL1) such that the first image plane passes through a position (POS) of the ultrasound transducer (S) by maximizing a magnitude of ultrasound signals transmitted between the beamforming ultrasound imaging probe (BUIP) and the ultrasound transducer (S). An orientation of a second image plane (PL2) is adjusted such that an intersection (AZ) between the first image plane and the second image plane passes through the position of the ultrasound transducer (S).
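A minimal sketch of the magnitude-maximization step: sweep candidate orientations of the first image plane and keep the one that maximizes the sensor signal. The angular signal model below is a toy assumption for illustration, not the disclosed beamforming.

```python
import math

def sensor_signal(plane_angle_deg, true_angle_deg=23.0):
    """Toy model: signal falls off with angular distance from the sensor."""
    return math.exp(-((plane_angle_deg - true_angle_deg) ** 2) / 50.0)

def steer_first_plane(angles):
    """Pick the orientation PL1 that maximizes the transmitted-signal magnitude."""
    return max(angles, key=sensor_signal)

candidates = [a * 0.5 for a in range(0, 180)]   # 0..89.5 degrees in 0.5-deg steps
best = steer_first_plane(candidates)
print(f"PL1 steered to {best:.1f} deg; PL2 is then oriented so the "
      "intersection axis passes through the same sensor position")
```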
Abstract:
A controller (250) for differentiating passive ultrasound sensors for interventional medical procedures includes a memory (291) and a processor (292). When executed by the processor (292), instructions from the memory (291) cause a system (200) that includes the controller (250) to implement a process that includes receiving first signals from a first passive ultrasound sensor (S1) and receiving second signals from a second passive ultrasound sensor (S2). The first signals and second signals are generated by the passive ultrasound sensors responsive to beams emitted from an ultrasound imaging probe (210). The process also includes identifying a characteristic of the first signals and the second signals. The characteristic includes shapes of the first signals and the second signals and/or times at which the first signals and the second signals are generated as the beams from the ultrasound imaging probe are received. The first passive ultrasound sensor (S1) and the second passive ultrasound sensor (S2) are differentiated based on the characteristic.
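A minimal sketch of differentiating two passive sensors by one of the named characteristics, the time at which each responds within a beam sweep. The peak-time rule and the S1/S2 labeling convention are illustrative assumptions, not the claimed controller logic.

```python
def peak_time(samples, dt=1e-6):
    """Time (s) of the maximum-amplitude sample in a sensor trace."""
    idx = max(range(len(samples)), key=lambda i: abs(samples[i]))
    return idx * dt

def differentiate(trace_a, trace_b):
    """Label traces S1/S2 by which responds earlier as the beams sweep past."""
    ta, tb = peak_time(trace_a), peak_time(trace_b)
    return ("S1", "S2") if ta <= tb else ("S2", "S1")

trace_a = [0.0, 0.1, 0.9, 0.2, 0.0, 0.0]   # peaks early in the sweep
trace_b = [0.0, 0.0, 0.1, 0.3, 0.8, 0.1]   # peaks later
print(differentiate(trace_a, trace_b))      # -> ('S1', 'S2')
```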
Abstract:
The invention provides a method for guiding the acquisition of ultrasound data within a 3D field of view. The method begins by obtaining initial 2D B-mode ultrasound data of a cranial region of a subject from a reduced field of view at a first imaging location and determining whether a vessel of interest is located within the 3D field of view based on the initial 2D B-mode ultrasound data. If the vessel of interest is not located within the 3D field of view, a guidance instruction is generated based on the initial 2D B-mode ultrasound data, wherein the guidance instruction is adapted to indicate a second imaging location to obtain further ultrasound data. If the vessel of interest is located within the 3D field of view, 3D Doppler ultrasound data is obtained of the cranial region from the 3D field of view.
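A minimal sketch of the guidance loop described above. The vessel detector and the instruction generator are placeholders; the real method derives both from the 2D B-mode data.

```python
def vessel_in_view(b_mode_frame):
    """Placeholder detector: report whether the vessel of interest is seen."""
    return b_mode_frame.get("vessel_visible", False)

def guidance_instruction(b_mode_frame):
    """Placeholder: suggest a second imaging location from the current frame."""
    return "translate probe 1 cm toward the temporal window"

def acquire(frames):
    for frame in frames:
        if vessel_in_view(frame):
            return "acquire 3D Doppler data of the cranial region"
        print("guidance:", guidance_instruction(frame))
    return "vessel not found; repeat survey"

frames = [{"vessel_visible": False}, {"vessel_visible": True}]
print(acquire(frames))
```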
Abstract:
An acoustic imaging apparatus and method: produce acoustic images of an area of interest in response to one or more receive signals received from an acoustic probe in response to acoustic echoes received by the acoustic probe from the area of interest; identify one or more candidate locations for a passive sensor disposed on a surface of an intervention device in the area of interest based on magnitudes of the acoustic echoes received by the acoustic probe from the candidate locations in the area of interest; use intra-procedural context-specific information to identify the one of the candidate locations which best matches the intra-procedural context-specific information as the estimated location of the passive sensor; display the acoustic images on a display device; and display on the display device a marker in the acoustic images to indicate the estimated location of the passive sensor.
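A minimal sketch of the two-stage estimate: take echo-magnitude candidates, then pick the one that best matches intra-procedural context, here represented by an expected device-tip position. The scoring rule combining magnitude and distance is an assumption for illustration.

```python
def pick_sensor_location(candidates, expected_xy):
    """candidates: list of ((x, y), echo_magnitude) above a magnitude threshold."""
    def context_score(candidate):
        (x, y), mag = candidate
        dist2 = (x - expected_xy[0]) ** 2 + (y - expected_xy[1]) ** 2
        return mag / (1.0 + dist2)        # favor strong echoes near the context
    best_xy, _ = max(candidates, key=context_score)
    return best_xy

candidates = [((10.0, 4.0), 0.9), ((11.0, 4.5), 0.7), ((30.0, 20.0), 0.95)]
expected = (11.0, 4.0)                    # e.g., from the procedure plan
print("marker at", pick_sensor_location(candidates, expected))
```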
Abstract:
A controller includes a memory that stores instructions, and a processor that executes the instructions. When executed by the processor, the instructions cause the controller to execute a process that includes controlling an imaging probe. The imaging probe is controlled to activate imaging elements to emit imaging signals to generate three or more imaging planes, so as to simultaneously capture both an interventional device and anatomy targeted by the interventional device. The imaging probe is controlled to capture at least one of the interventional device and the anatomy targeted by the interventional device in at least two of the three or more imaging planes, and to capture the other of the interventional device and the anatomy targeted by the interventional device in at least one of the three or more imaging planes.
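A minimal sketch of the plane-assignment constraint: with three or more planes, at least two must capture one target (device or anatomy) and at least one must capture the other. The set-based assignment check is illustrative.

```python
def valid_assignment(planes):
    """planes: list of sets naming what each imaging plane captures."""
    device = sum(1 for p in planes if "device" in p)
    anatomy = sum(1 for p in planes if "anatomy" in p)
    return (len(planes) >= 3 and
            ((device >= 2 and anatomy >= 1) or (anatomy >= 2 and device >= 1)))

planes = [{"device"}, {"device", "anatomy"}, {"anatomy"}]
print(valid_assignment(planes))   # True: device in 2 planes, anatomy in 2
```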
Abstract:
A multi-patch ultrasound array according to the present disclosure includes a plurality of patches of transducer elements, the plurality of patches arranged along a first direction of the array, and a frame connecting the plurality of patches such that adjacent ones of the plurality of patches are slidable relative to one another in a second direction of the array which is perpendicular to the first direction. An ultrasound system according to the present disclosure includes a multi-patch ultrasonic array comprising a plurality of ultrasound patches, each individually operable to transmit ultrasound toward a region of interest and receive echoes from the region of interest, wherein each patch is slidable in relation to an adjacent patch, an ultrasound scanner configured to generate an ultrasound image from the received echoes, and a communication link configured to selectively couple each of the plurality of patches to the ultrasound scanner.
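A minimal sketch of the selective-coupling behavior of the communication link: one patch at a time is coupled to the scanner for transmit/receive. The Patch and Scanner classes are illustrative stand-ins, not the disclosed hardware.

```python
class Patch:
    def __init__(self, patch_id):
        self.patch_id = patch_id
    def transmit_receive(self):
        return f"echoes from patch {self.patch_id}"

class Scanner:
    def __init__(self, patches):
        self.patches = patches            # arranged along the first direction
    def scan(self, patch_id):
        """Couple one patch via the link and acquire its echoes."""
        echoes = self.patches[patch_id].transmit_receive()
        return f"image formed from: {echoes}"

scanner = Scanner({i: Patch(i) for i in range(4)})
for pid in range(4):                      # step the coupling across patches
    print(scanner.scan(pid))
```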
Abstract:
A system (100) and method (190) for determining the reliability of an ultrasonic tracking device (102) include a determination device (132) that is configured to receive signals from the ultrasonic tracking device and determine a quantity of sensors (105) of an interventional device (103) in a field of view of the ultrasonic imaging device. An evaluation device (136) correlates the quantity of the sensors in the field of view with a reliability level for the orientation of the interventional device determined by the ultrasonic tracking device and generates a control signal (142) to a feedback device. The feedback device provides feedback to the user concerning the reliability level for the orientation of the interventional device determined by the ultrasonic tracking device. The feedback may be visual and/or audible feedback.
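A minimal sketch of correlating the number of in-view sensors with a reliability level and emitting feedback; the thresholds and the feedback wording are assumptions for illustration.

```python
def reliability_level(sensors_in_view):
    """Map the sensor count in the field of view to a reliability level."""
    if sensors_in_view >= 3:
        return "high"
    if sensors_in_view == 2:
        return "medium"
    return "low"

def feedback(sensors_in_view):
    """Build the user feedback message driven by the control signal."""
    level = reliability_level(sensors_in_view)
    message = f"tracking reliability: {level} ({sensors_in_view} sensors in view)"
    if level == "low":
        message += " -- audible alert"
    return message

for n in (3, 2, 1):
    print(feedback(n))
```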
Abstract:
An ultrasound system for performing cancer grade mapping includes an ultrasound imaging device (10) that acquires ultrasound imaging data. An electronic data processing device (30) is programmed to generate an ultrasound image (34) from the ultrasound imaging data, and to generate a cancer grade map (42) by (i) extracting sets of local features from the ultrasound imaging data that represent map pixels of the cancer grade map and (ii) classifying the sets of local features using a cancer grading classifier (46) to generate cancer grades for the map pixels of the cancer grade map. A display component (20) displays the cancer grade map, for example overlaid on the ultrasound image as a color-coded cancer grade map overlay. The cancer grading classifier is learned from a training data set (64) comprising sets of local features extracted from ultrasound imaging data at biopsy locations and labeled with histopathology cancer grades.
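A minimal sketch of steps (i) and (ii): extract a local feature vector per map pixel and classify it, here with a nearest-centroid stand-in for the trained cancer grading classifier. The features, centroids, and grades are toy assumptions, not the learned model.

```python
def local_features(image, x, y):
    """Toy features for one map pixel: local mean and range of intensity."""
    patch = [image[j][i] for j in (y - 1, y, y + 1) for i in (x - 1, x, x + 1)]
    return (sum(patch) / len(patch), max(patch) - min(patch))

CENTROIDS = {"benign": (0.2, 0.1), "grade-1": (0.5, 0.3), "grade-2": (0.8, 0.5)}

def classify(features):
    """Assign the grade whose training centroid is nearest in feature space."""
    def dist2(c):
        return sum((f - v) ** 2 for f, v in zip(features, CENTROIDS[c]))
    return min(CENTROIDS, key=dist2)

image = [[0.1, 0.2, 0.2, 0.1],
         [0.2, 0.8, 0.9, 0.2],
         [0.1, 0.9, 0.8, 0.2],
         [0.1, 0.2, 0.2, 0.1]]
grade_map = {(x, y): classify(local_features(image, x, y))
             for x in (1, 2) for y in (1, 2)}
print(grade_map)   # map pixels would be color-coded by grade in the overlay
```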