Abstract:
A computer-implemented method of object enhancement in endoscopy images is presented. The computer-implemented method includes capturing an image of an object within a surgical operative site, by an imaging device. The image includes a plurality of pixels. Each of the plurality of pixels includes color information. The computer-implemented method further includes accessing the image, accessing data relating to depth information about each of the pixels in the image, inputting the depth information to a machine learning algorithm, emphasizing a feature of the image based on an output of the machine learning algorithm, generating an augmented image based on the emphasized feature, and displaying the augmented image on a display.
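The depth-driven emphasis step described above might be sketched as follows. This is a minimal illustration only: the function name is hypothetical, and a depth-gradient edge boost stands in for the learned model's output, which the abstract does not specify.

```python
import numpy as np

def emphasize_depth_edges(color, depth, weight=0.5):
    """Brighten pixels where the depth map changes sharply, standing in
    for the learned feature-emphasis step (illustrative sketch only)."""
    # Gradient magnitude of the depth map marks object boundaries.
    gy, gx = np.gradient(depth.astype(float))
    edges = np.sqrt(gx**2 + gy**2)
    if edges.max() > 0:
        edges = edges / edges.max()
    # Blend the edge emphasis into each color channel of the image.
    augmented = color.astype(float) * (1.0 + weight * edges[..., None])
    return np.clip(augmented, 0, 255).astype(np.uint8)
```

In practice the emphasis signal would come from the trained model rather than a raw gradient, but the blend-and-clip structure of the augmentation is the same.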
Abstract:
Systems, methods, and computer-readable media are provided for robotic surgical device control. A system is provided including a robotic arm, an autostereoscopic display, a user image capture device, an image processor, and a controller. The robotic arm is coupled to a patient image capture device. The autostereoscopic display is configured to display an image of a surgical site obtained from the patient image capture device. The image processor is configured to identify a location of at least part of a user in an image obtained from the user image capture device. The controller is configured to, in a first mode, adjust a three-dimensional aspect of the image displayed on the autostereoscopic display based on the identified location, and, in a second mode, move the robotic arm or instrument based on a relationship between the identified location and the surgical site image.
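The first-mode adjustment might be sketched as a mapping from the user's tracked position to a parallax shift of the displayed view. The function and parameter names below are assumptions for illustration; the abstract does not specify the mapping.

```python
def adjust_view(head_x, image_width, max_shift=20):
    """Map the user's horizontal position in the user-camera image to a
    lateral parallax shift of the stereoscopic view (first mode).
    Illustrative sketch; the real mapping is not specified."""
    # Normalize head position to [-1, 1] around the image center.
    offset = (head_x - image_width / 2) / (image_width / 2)
    # Scale to a pixel shift applied to the displayed 3D view.
    return int(round(offset * max_shift))
```

A second-mode controller would instead translate the same tracked location into arm or instrument motion commands.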
Abstract:
A surgical robotic system includes a control tower, a mobile cart, and a surgical console. The mobile cart is coupled to the control tower and includes a surgical robotic arm. The surgical robotic arm includes a surgical instrument and an image capture device. The surgical instrument is actuatable in response to a user input and configured to treat a target tissue in real-time. The image capture device captures at least one of images or video of the target tissue in real-time. The surgical console includes a user input device for generating a user input, and a controller. The controller is operably coupled to the user input device and configured to switch, based on the user input, from a first mode to a second mode, and from the second mode to the first mode.
Abstract:
A method for mapping and fusing endoscopy images includes capturing a first image of an object within a surgical operative site, by a first imaging device; and capturing a second image of the object, by a second imaging device. The first image includes first light radiating from the object, and a first reference point. The second image includes second light radiating from the object, and a second reference point. The method further includes comparing a first location of the first reference point in the first image to a second location of the second reference point in the second image, determining a relative pose of the first imaging device to the second imaging device based on the comparing, generating an augmented image fusing the first image and the second image based on the determined relative pose, and displaying the augmented image on a display.
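The compare-align-fuse sequence might be sketched as below. This is a deliberate simplification: a full relative pose is 6-DOF and generally needs multiple correspondences and camera calibration, so a planar translation between the two reference points stands in here, and the function name is hypothetical.

```python
import numpy as np

def fuse_by_reference_points(img1, img2, ref1, ref2):
    """Translate img2 so its reference point lands on img1's reference
    point, then average the two images -- a planar stand-in for the
    full relative-pose-based fusion."""
    # Offset between the two reference point locations (row, col).
    dy, dx = ref1[0] - ref2[0], ref1[1] - ref2[1]
    # np.roll wraps at the borders; acceptable for this sketch.
    shifted = np.roll(np.roll(img2, dy, axis=0), dx, axis=1)
    # Fuse the aligned images by simple averaging.
    return ((img1.astype(float) + shifted.astype(float)) / 2).astype(img1.dtype)
```

With calibrated cameras the same comparison step would instead feed a pose solver, and the fusion would warp one image into the other's frame using the recovered pose.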
Abstract:
A surgical cart includes a vertically-extending support column, a carriage movably coupled to the support column for carrying a robotic arm, and a damper assembly for dampening vibrations induced in the surgical cart.
Abstract:
A system for detecting subsurface blood in a region of interest during a surgical procedure includes an image capture device that captures an image stream of the region of interest and a light source that illuminates the region of interest. A controller applies at least one image processing filter to the image stream that decomposes the image stream into a plurality of color space frequency bands, generates a plurality of color filtered bands from the plurality of color space frequency bands, and adds each band in the plurality of color space frequency bands to a corresponding band in the plurality of color filtered bands to generate a plurality of augmented bands. A reconstruction filter generates an augmented image stream from the plurality of augmented bands, which is displayed to a user during the surgical procedure.
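The decompose-filter-add-reconstruct pipeline might be sketched as follows. The specifics are assumptions for illustration: a two-level box-blur split stands in for the band decomposition, and a red-channel gain stands in for the color filter; the abstract does not name either.

```python
import numpy as np

def augment_stream(frames, gain=2.0):
    """Split each frame into low/high spatial-frequency bands, amplify
    the red channel of each band as a crude 'color filter', add the
    filtered bands back, and collapse into the augmented stream."""
    def box_blur(img, k=3):
        # Simple k-by-k mean filter with edge padding.
        pad = k // 2
        padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
        out = np.zeros_like(img, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    augmented = []
    for frame in frames:
        low = box_blur(frame.astype(float))
        high = frame.astype(float) - low          # two frequency bands
        bands = [low, high]
        filtered = []
        for band in bands:
            f = np.zeros_like(band)
            f[..., 0] = gain * band[..., 0]       # boost the red component
            filtered.append(f)
        # Add each band to its color-filtered counterpart, then collapse
        # the augmented bands back into a single frame (reconstruction).
        recombined = sum(b + f for b, f in zip(bands, filtered))
        augmented.append(np.clip(recombined, 0, 255).astype(np.uint8))
    return augmented
```

The effect is that regions rich in the filtered color (here, red, as a proxy for blood) are visibly amplified in the reconstructed stream.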
Abstract:
An endoscope includes a tube and a pair of image sensors. The tube includes a proximal portion, and a distal portion pivotably coupled to the proximal portion. The distal portion defines a longitudinal axis. The image sensors are disposed in a linear array along the longitudinal axis defined by the distal portion.
Abstract:
A controller includes an attachment member and an interface. The attachment member is configured to secure the controller to a hand of a clinician. The interface includes a first controller that is configured to operate an imaging device of a surgical system. The controller can be secured about a finger of a clinician and be engageable by a thumb of the same hand of the clinician. Alternatively, the controller can be disposed in a palm of a hand of a clinician and engaged by a finger of the hand.
Abstract:
The present disclosure is directed to an augmented reality surgical system. The system includes an endoscope that captures an image of the region of interest of a patient and an ECG device that records an ECG of the patient. A controller receives the image and applies at least one image processing filter to the image. The image processing filter includes a decomposition filter that decomposes the image into frequency bands. A temporal filter is applied to the frequency bands to generate temporally filtered bands. An adder adds each frequency band to a corresponding temporally filtered band to generate augmented bands. A reconstruction filter generates an augmented image by collapsing the augmented bands. The controller also receives the ECG and processes the augmented image with the ECG to generate an ECG filtered augmented image. A display displays the ECG filtered augmented image to a user.
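The temporal-filter-amplify-gate chain might be sketched as below. The details are assumptions: subtracting a running mean stands in for the temporal band-pass, a simple threshold stands in for the ECG processing, and the function name is hypothetical.

```python
import numpy as np

def ecg_gated_magnification(frames, ecg, alpha=5.0, threshold=0.5):
    """Temporally filter each pixel's time series (deviation from the
    mean as a crude band-pass), amplify it, add it back (the 'adder'),
    and keep the magnified result only on frames where the ECG sample
    exceeds a threshold. Illustrative sketch only."""
    stack = np.stack([f.astype(float) for f in frames])   # (T, H, W)
    mean = stack.mean(axis=0, keepdims=True)
    temporal = stack - mean                # temporally filtered bands
    augmented = stack + alpha * temporal   # band + filtered band
    # ECG gating: fall back to the unamplified frame off-pulse.
    gate = (np.asarray(ecg) > threshold)[:, None, None]
    out = np.where(gate, augmented, stack)
    return [np.clip(f, 0, 255).astype(np.uint8) for f in out]
```

Gating on the ECG ties the magnified variation to the cardiac cycle, suppressing amplified noise between beats.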
Abstract:
Robotic surgical systems and methods of controlling robotic surgical systems are disclosed herein. One disclosed method includes visually capturing a tool pose of a tool within a surgical site with an imaging device in a fixed frame of reference, determining an arm pose of a linkage supporting the tool from known geometries of the linkage in the fixed frame of reference, and manipulating the linkage to move the tool to a desired tool pose in response to a control signal in the fixed frame of reference.
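The move-to-desired-pose step might be sketched as one proportional control update in the fixed frame. This simplifies the abstract's method considerably: poses are reduced to (x, y, z) positions rather than full 6-DOF poses, and the function name and gain are assumptions.

```python
import numpy as np

def step_toward_pose(tool_pose, desired_pose, gain=0.5):
    """One proportional control step: move the observed tool position a
    fraction of the way toward the desired position, all expressed in
    the fixed frame of reference. Positions only, not full poses."""
    tool = np.asarray(tool_pose, dtype=float)
    desired = np.asarray(desired_pose, dtype=float)
    command = gain * (desired - tool)   # control signal in the fixed frame
    return tool + command
```

A full implementation would map this Cartesian command through the linkage's known geometry (inverse kinematics) to joint motions, and include orientation as well as position.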