INTERACTIVE ENDOSCOPY FOR INTRAOPERATIVE VIRTUAL ANNOTATION IN VATS AND MINIMALLY INVASIVE SURGERY

    Publication No.: US20220358773A1

    Publication Date: 2022-11-10

    Application No.: US17641940

    Filing Date: 2020-09-11

    Abstract: A controller (522) for live annotation of interventional imagery includes a memory (52220) that stores software instructions and a processor (52210) that executes the software instructions. When executed by the processor (52210), the software instructions cause the controller (522) to implement a process that includes receiving (S210) interventional imagery during an intraoperative intervention and automatically analyzing (S220) the interventional imagery for detectable features. The process executed when the processor (52210) executes the software instructions also includes detecting (S230) a detectable feature and determining (S240) to add an annotation to the interventional imagery for the detectable feature. The process further includes identifying (S250) a location for the annotation as an identified location in the interventional imagery and adding (S260) the annotation to the interventional imagery at the identified location to correspond to the detectable feature. During the intraoperative intervention, a video is output (S270) as video output based on the interventional imagery and the annotation, including the annotation overlaid on the interventional imagery at the identified location.
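The claimed process (steps S210 through S270) can be illustrated with a minimal sketch. Everything below is hypothetical: the threshold-based `detect_features` stub stands in for whatever detector the patent contemplates, and the `-1` sentinel stands in for a rendered graphical overlay; frames are represented as nested lists of pixel values.

```python
def detect_features(frame):
    """Hypothetical detector: return (label, location) pairs for bright pixels.

    A real system would run a trained feature detector on each frame.
    """
    return [("feature", (r, c))
            for r, row in enumerate(frame)
            for c, v in enumerate(row)
            if v > 200]  # toy rule standing in for S220/S230

def annotate_frame(frame, annotations):
    """S260: overlay an annotation marker at each identified location."""
    out = [row[:] for row in frame]  # copy so the input frame is untouched
    for _label, (r, c) in annotations:
        out[r][c] = -1  # sentinel value standing in for a rendered overlay
    return out

def process_stream(frames):
    """Run the S210-S270 loop over a stream of interventional image frames."""
    video_out = []
    for frame in frames:                       # S210: receive imagery
        detections = detect_features(frame)    # S220/S230: analyze and detect
        annotations = list(detections)         # S240/S250: decide and locate
        video_out.append(annotate_frame(frame, annotations))  # S260/S270
    return video_out
```

The sketch keeps detection, placement, and overlay as separate functions, mirroring the separately enumerated steps of the claimed process.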

    SYSTEMS AND METHODS FOR GUIDING AN ULTRASOUND PROBE

    Publication No.: US20230010773A1

    Publication Date: 2023-01-12

    Application No.: US17783370

    Filing Date: 2020-12-04

    Abstract: An ultrasound device (10) comprises a probe (12) including a tube (14) sized for in vivo insertion into a patient and an ultrasound transducer (18) disposed at a distal end (16) of the tube. A camera (20) is mounted at the distal end of the tube in a spatial relationship to the ultrasound transducer. At least one electronic processor (28) is programmed to: control the ultrasound transducer and the camera to acquire ultrasound images (19) and camera images (21) respectively while the ultrasound transducer is disposed in vivo; construct keyframes (36) during in vivo movement of the ultrasound transducer, each keyframe representing an in vivo position of the ultrasound transducer and including at least ultrasound image features (38) extracted from at least one of the ultrasound images acquired at the in vivo position of the ultrasound transducer and camera image features (40) extracted from at least one of the camera images acquired at the in vivo position of the ultrasound transducer; generate a navigation map (45) of the in vivo movement of the ultrasound transducer comprising the keyframes; and output navigational guidance (49) based on comparison of current ultrasound and camera images acquired by the ultrasound transducer and camera with the navigation map.
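The keyframe-based navigation map described above can be sketched as follows. This is an illustrative toy only: `extract_features` reduces each image to its mean value, whereas the patent describes extracting ultrasound image features (38) and camera image features (40); the nearest-keyframe lookup stands in for whatever comparison the device uses to generate navigational guidance (49).

```python
def extract_features(us_image, cam_image):
    """Hypothetical feature extraction: summarize each image by its mean value."""
    return (sum(us_image) / len(us_image),
            sum(cam_image) / len(cam_image))

class NavigationMap:
    """Navigation map (45) built from keyframes (36), each pairing an
    in vivo probe position with features of the images acquired there."""

    def __init__(self):
        self.keyframes = []  # list of (position, feature_vector)

    def add_keyframe(self, position, us_image, cam_image):
        """Construct a keyframe during in vivo movement of the transducer."""
        self.keyframes.append((position, extract_features(us_image, cam_image)))

    def guidance(self, us_image, cam_image):
        """Return the mapped position whose stored features best match the
        current ultrasound and camera images (squared-distance comparison)."""
        current = extract_features(us_image, cam_image)
        def dist(keyframe):
            return sum((a - b) ** 2 for a, b in zip(keyframe[1], current))
        best_position, _features = min(self.keyframes, key=dist)
        return best_position
```

Storing only compact feature vectors per keyframe, rather than full images, keeps the map small enough to compare against every incoming frame in real time.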
