Abstract:
Methods for guiding a surgical procedure include accessing information relating to a surgical procedure, accessing at least one image of a surgical site captured by an endoscope during the surgical procedure, identifying, by a machine learning system, a tool captured in the at least one image, determining whether the tool should be changed based on comparing the information relating to the surgical procedure with the tool identified by the machine learning system, and providing an indication when the determining indicates that the tool should be changed.
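The compare-and-indicate step described above can be sketched as follows. This is an illustrative assumption, not the patent's implementation: the machine learning classifier is stubbed out, and the names `detect_tool`, `tool_change_indication`, and the plan/frame dictionaries are hypothetical.

```python
def detect_tool(image):
    """Stand-in for a machine learning classifier over an endoscope image.

    A real system would run an image classifier here; this stub simply
    reads a label planted in the test image dictionary.
    """
    return image["detected_tool"]

def tool_change_indication(plan_step, image):
    """Return an indication string when the detected tool mismatches the plan,
    or None when the identified tool matches the procedure information."""
    detected = detect_tool(image)
    expected = plan_step["expected_tool"]
    if detected != expected:
        return f"Tool change suggested: expected {expected}, found {detected}"
    return None

# Hypothetical procedure-plan step and endoscope frames for illustration.
step = {"name": "dissection", "expected_tool": "monopolar shears"}
frame_ok = {"detected_tool": "monopolar shears"}
frame_bad = {"detected_tool": "grasper"}
```

In use, the indication (when not `None`) would be surfaced to the clinician, e.g. as an on-screen prompt.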
Abstract:
A surgical robotic system includes a surgical console having a display and a user input device configured to generate a user input, and a surgical robotic arm, which includes a surgical instrument configured to treat tissue and actuatable in response to the user input and a video camera configured to capture video data that is displayed on the display. The system also includes a control tower coupled to the surgical console and the surgical robotic arm. The control tower is configured to process the user input to control the surgical instrument and to record the user input as input data; communicate the input data and the video data to at least one machine learning system configured to generate a surgical process evaluator; and execute the surgical process evaluator to determine whether the surgical instrument is properly positioned relative to the tissue.
Abstract:
A surgical robotic system includes: a surgical console having a display and a user input device configured to generate a user input; a surgical robotic arm having a surgical instrument configured to treat tissue and being actuatable in response to the user input; and a video camera configured to capture video data that is displayed on the display. The system also includes a control tower coupled to the surgical console and the surgical robotic arm. The control tower is configured to: process the user input to control the surgical instrument and to record the user input as input data; train at least one machine learning system using the input data and the video data; and execute the at least one machine learning system to determine a probability of failure of the surgical instrument.
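The failure-probability step above could take many forms; a minimal sketch, assuming a logistic model over features derived from the recorded input and video data, is shown below. The feature names, weights, and bias are illustrative assumptions, not the patent's model.

```python
import math

# Hypothetical feature weights for a logistic failure model; in practice
# these would be learned from the recorded input data and video data.
WEIGHTS = {"actuation_count": 0.002, "peak_grip_force": 0.05, "hours_in_use": 0.1}
BIAS = -4.0

def failure_probability(features):
    """Logistic model: probability in (0, 1) that the instrument fails.

    `features` maps feature names to values; missing features count as 0.
    """
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))
```

A lightly used instrument should score near 0, while a heavily worn one should score near 1, which is the behavior a control tower could threshold against.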
Abstract:
Provided in accordance with embodiments of the present disclosure are systems, methods, and computer-readable media for providing stereoscopic visual perception notifications and/or recommendations during a robotic surgical procedure. An exemplary method includes receiving a right-eye view image captured by way of a right-eye lens of a patient image capture device disposed at a surgical site, receiving a left-eye view image captured by way of a left-eye lens of the patient image capture device, analyzing the right-eye view and left-eye view images, determining, based on a result of the analyzing, whether the right-eye view image or the left-eye view image includes a characteristic, generating a stereoscopic visual perception notification, when it is determined that the right-eye view image or the left-eye view image includes the characteristic, and displaying a stereoscopic image based on the right-eye view image and the left-eye view image, the stereoscopic image including the stereoscopic visual perception notification.
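One concrete "characteristic" the per-eye analysis above might detect is a large brightness imbalance between the two views (e.g. from a smudged lens). The sketch below is a hedged illustration under that assumption: images are plain nested lists of pixel intensities, and the threshold and heuristic are invented for the example, not taken from the patent.

```python
def mean_brightness(image):
    """Mean pixel intensity of an image given as a list of rows."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def stereo_perception_notification(left, right, max_imbalance=30.0):
    """Return a notification string if one eye's view is much darker, else None.

    A large mean-brightness gap between the left-eye and right-eye views is
    treated as the detected characteristic warranting a notification.
    """
    diff = mean_brightness(left) - mean_brightness(right)
    if abs(diff) > max_imbalance:
        eye = "left" if diff < 0 else "right"
        return f"Stereoscopic visual perception notice: {eye}-eye view appears degraded"
    return None
```

When a notification is returned, it would be composited into the displayed stereoscopic image; when `None`, the stereoscopic image is displayed unannotated.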
Abstract:
An endoscope includes a tube and a pair of image sensors. The tube includes a proximal portion, and a distal portion pivotably coupled to the proximal portion. The distal portion defines a longitudinal axis. The image sensors are disposed in a linear array along the longitudinal axis defined by the distal portion.
Abstract:
A controller includes an attachment member and an interface. The attachment member is configured to secure the controller to a hand of a clinician. The interface includes a first controller that is configured to operate an imaging device of a surgical system. The controller can secure about a finger of a clinician and be engagable by a thumb of the same hand of the clinician. Alternatively, the controller can be disposed in a palm of a hand of a clinician and engaged by a finger of the hand.
Abstract:
The present disclosure is directed to an augmented reality surgical system. The system includes an endoscope that captures an image of a region of interest of a patient and an ECG device that records an ECG of the patient. A controller receives the image and applies at least one image processing filter to the image. The image processing filter includes a decomposition filter that decomposes the image into frequency bands. A temporal filter is applied to the frequency bands to generate temporally filtered bands. An adder adds each frequency band to a corresponding temporally filtered band to generate augmented bands. A reconstruction filter generates an augmented image by collapsing the augmented bands. The controller also receives the ECG and processes the augmented image with the ECG to generate an ECG filtered augmented image. A display displays the ECG filtered augmented image to a user.
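The decompose / temporally filter / add / reconstruct chain above resembles Eulerian-style video magnification, and can be sketched minimally as follows. This is a dependency-free illustration under stated assumptions, not the patent's filters: frames are 1-D lists of intensities, the decomposition is a two-band split via a 3-tap moving average, and the temporal filter is a plain mean across frames standing in for a bandpass.

```python
def decompose(frame):
    """Split a frame into a low band (3-tap moving average) and a high band."""
    low = []
    for i in range(len(frame)):
        window = frame[max(0, i - 1):i + 2]
        low.append(sum(window) / len(window))
    high = [f - l for f, l in zip(frame, low)]
    return low, high

def temporal_filter(band_history):
    """Temporal mean across frames for one band (stand-in for a bandpass)."""
    n = len(band_history)
    return [sum(vals) / n for vals in zip(*band_history)]

def augment(frames, gain=2.0):
    """Decompose each frame, temporally filter each band, add the amplified
    filtered band back to the latest frame's band, and collapse the bands."""
    lows, highs = zip(*(decompose(f) for f in frames))
    low_t, high_t = temporal_filter(lows), temporal_filter(highs)
    aug_low = [b + gain * t for b, t in zip(lows[-1], low_t)]
    aug_high = [b + gain * t for b, t in zip(highs[-1], high_t)]
    # Reconstruction: collapse the augmented bands by summation.
    return [l + h for l, h in zip(aug_low, aug_high)]
```

A real system would operate on 2-D frames with a spatial pyramid and a proper temporal bandpass, then gate or synchronize the result against the ECG signal.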
Abstract:
The present disclosure is directed to an augmented reality head mounted device worn by a user. The device includes an image capture device configured to capture an image of a surgical environment and a transparent lens configured to display an augmented image based on the image of the surgical environment. An eye tracking module coupled to the transparent lens is configured to determine a direction of a gaze of an eye of the user, and the direction of the gaze determined by the eye tracking module is used to manipulate the augmented image.
Abstract:
A robotic surgical system includes an access port having a plurality of sensors incorporated therein and a surgical instrument having an end effector for use with, and connection to, a robot arm. The sensors measure a loading parameter between tissue and the access port when a shaft of the surgical instrument is inserted through the access port. The sensors wirelessly transmit signals indicative of the loading parameter to a processing unit. Estimated loads at the tip of the end effector of the surgical instrument are provided in real time to a haptic feedback interface for communicating haptic feedback to a user. The haptic feedback increases or decreases in intensity in response to a specific event, occurrence, or operational characteristic related to the estimated loads.
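The load-to-intensity mapping described above can be sketched as a simple piecewise-linear function. The threshold, saturation point, and linear ramp are illustrative assumptions for the example, not the patent's control law.

```python
def haptic_intensity(load_newtons, threshold=2.0, max_load=10.0):
    """Return haptic feedback intensity in [0, 1] from an estimated tip load.

    Quiet below `threshold`, ramping linearly above it, and saturating at
    `max_load` so intensity increases with the estimated load as the event
    of interest develops.
    """
    if load_newtons <= threshold:
        return 0.0
    if load_newtons >= max_load:
        return 1.0
    return (load_newtons - threshold) / (max_load - threshold)
```

A real haptic interface would likely shape this further (e.g. vibration frequency as well as amplitude), but the monotone ramp captures the increase/decrease-with-load behavior the abstract describes.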