Abstract:
Methods, systems, and computer-readable media for detecting image degradation during a surgical procedure are provided. A method includes receiving images of a surgical instrument; obtaining baseline images of an edge of the surgical instrument; comparing a characteristic of the images of the surgical instrument to a characteristic of the baseline images of the edge of the surgical instrument, the images of the surgical instrument being received subsequent to obtaining the baseline images of the edge of the surgical instrument and being received while the surgical instrument is disposed at a surgical site in a patient; determining whether the images of the surgical instrument are degraded, based on the comparing of the characteristic of the images of the surgical instrument and the characteristic of the baseline images of the surgical instrument; and generating an image degradation notification, in response to a determination that the images of the surgical instrument are degraded.
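As a rough illustration of the comparison step described above, the sketch below scores edge sharpness with a Laplacian-energy metric and flags degradation when the live image's score falls below a fraction of the baseline's. The function names (`sharpness`, `is_degraded`), the choice of metric, and the 0.5 ratio threshold are illustrative assumptions, not the claimed method.

```python
import numpy as np

def sharpness(img):
    """Mean squared Laplacian response over the image interior;
    higher values indicate sharper instrument edges."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(np.mean(lap ** 2))

def is_degraded(baseline_img, live_img, ratio_threshold=0.5):
    """Flag degradation when live edge sharpness drops below a
    fraction of the baseline sharpness."""
    return sharpness(live_img) < ratio_threshold * sharpness(baseline_img)
```

In a real pipeline the baseline would be captured before insertion and the live frames sampled continuously at the surgical site, with the notification raised on a `True` result.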
Abstract:
A method of placing a surgical robotic cart assembly includes determining a first position of a first surgical robotic cart assembly relative to a surgical table; calculating a path for the first surgical robotic cart assembly towards a second position of the first surgical robotic cart assembly relative to the surgical table, wherein in the second position the first surgical robotic cart assembly is spaced apart a first safe distance from the surgical table; moving the first surgical robotic cart assembly autonomously towards the second position thereof; and detecting a potential collision along the path of the first surgical robotic cart assembly as the first surgical robotic cart assembly moves towards the second position thereof.
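A minimal sketch of the path-and-collision logic, assuming a straight-line path in 2-D floor coordinates and point obstacles with a fixed clearance radius; the names `plan_path` and `first_collision` and all parameter values are hypothetical, not drawn from the disclosure.

```python
import math

def plan_path(start, goal, step=0.1):
    """Straight-line waypoints from start to goal (2-D floor coordinates)."""
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    n = max(1, int(math.hypot(dx, dy) / step))
    return [(start[0] + dx * i / n, start[1] + dy * i / n) for i in range(n + 1)]

def first_collision(path, obstacles, clearance=0.3):
    """Index of the first waypoint within `clearance` of any obstacle,
    or None if the path is clear."""
    for i, (x, y) in enumerate(path):
        for ox, oy in obstacles:
            if math.hypot(x - ox, y - oy) < clearance:
                return i
    return None
```

A cart controller could stop or replan when `first_collision` returns a waypoint index instead of `None`.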
Abstract:
Robotic surgical systems and methods of controlling robotic surgical systems are disclosed herein. One disclosed method includes visually capturing a tool pose of a tool within a surgical site with an imaging device in a fixed frame of reference, determining an arm pose of a linkage supporting the tool from known geometries of the linkage in the fixed frame of reference, and manipulating the linkage to move the tool to a desired tool pose in response to a control signal in the fixed frame of reference.
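The "manipulate toward a desired pose" step resembles a visual-servoing loop. Below is a deliberately simplified proportional controller over translation components only, expressed in the fixed frame; the function name `servo_to`, the gain, and the tolerance are assumptions for illustration.

```python
def servo_to(current, desired, gain=0.5, tol=1e-3, max_steps=100):
    """Iteratively reduce tool-pose error in the fixed frame with a
    proportional control law (translation components only)."""
    for _ in range(max_steps):
        err = [d - c for c, d in zip(current, desired)]
        if max(abs(e) for e in err) < tol:
            break  # within tolerance of the desired pose
        current = tuple(c + gain * e for c, e in zip(current, err))
    return tuple(current)
```

A full controller would also servo orientation and map the pose correction through the linkage's inverse kinematics.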
Abstract:
A surgical imaging system includes a camera, a first imager, and a processing unit. The camera is configured to capture optical images of a surgical site along a first optical path. The first imager is configured to capture first functional images of the surgical site along a second path that is separate from the first optical path. The processing unit is configured to generate a combined view of the surgical site from the captured first functional images and the captured optical images and to transmit the combined view to a display.
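One simple way to realize the "combined view" is an alpha blend of the normalized functional image over the optical image. The sketch below assumes co-registered, same-sized arrays with values in [0, 1]; `combine_views` and its `alpha` weight are illustrative, not the patented processing unit's algorithm.

```python
import numpy as np

def combine_views(optical, functional, alpha=0.4):
    """Overlay a normalized functional image on an optical image.
    Both arrays share the same height and width; values lie in [0, 1]."""
    f = functional.astype(float)
    rng = f.max() - f.min()
    if rng > 0:
        f = (f - f.min()) / rng          # normalize the functional signal
    return (1 - alpha) * optical + alpha * f
```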
Abstract:
A system for enhancing an image during a surgical procedure includes an image capture device configured to be inserted into a patient and capture an image inside the patient. The system also includes a controller that applies at least one image processing filter to the image to generate an enhanced image. The image processing filter includes a spatial decomposition filter that decomposes the image into a plurality of spatial frequency bands, a frequency filter that filters the plurality of spatial frequency bands to generate a plurality of filtered enhanced bands, and a recombination filter that generates the enhanced image to be displayed by a display.
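The spatial decomposition, per-band filtering, and recombination pipeline can be sketched with a simple blur-based band split; the sum of bands plus the low-pass residual reconstructs the input exactly, and per-band gains stand in for the frequency filter. The functions `box_blur`, `decompose`, and `recombine` are illustrative assumptions.

```python
import numpy as np

def box_blur(img, k=1):
    """Small 5-tap cross blur, repeated k times, edges held constant."""
    out = img.astype(float).copy()
    for _ in range(k):
        p = np.pad(out, 1, mode='edge')
        out = (p[:-2, 1:-1] + p[1:-1, 1:-1] + p[2:, 1:-1]
               + p[1:-1, :-2] + p[1:-1, 2:]) / 5
    return out

def decompose(img, levels=3):
    """Split an image into spatial-frequency bands plus a low-pass residual."""
    bands, current = [], img.astype(float)
    for _ in range(levels):
        low = box_blur(current)
        bands.append(current - low)   # band = detail removed by the blur
        current = low
    return bands, current

def recombine(bands, residual, gains=None):
    """Reassemble the image, optionally amplifying selected bands."""
    gains = gains or [1.0] * len(bands)
    out = residual.copy()
    for b, g in zip(bands, gains):
        out += g * b
    return out
```

With unit gains the pipeline is lossless; setting a gain above 1 on a high-frequency band enhances fine detail in the displayed image.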
Abstract:
Disclosed are systems, devices, and methods for training a user of a robotic surgical system including a surgical robot using a virtual or augmented reality interface, an example method comprising localizing a three-dimensional (3D) model of the surgical robot relative to the interface, displaying an aligned view of the 3D model of the surgical robot using the virtual or augmented reality interface, continuously sampling a position and orientation of a head of the user as the head of the user is moved, and updating the pose of the 3D model of the surgical robot based on the sampled position and orientation of the head of the user.
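The pose-update step amounts to re-expressing the model's vertices in the user's current view frame. The sketch below handles yaw-only head rotation to keep the math short; `yaw_matrix` and `update_model_pose` are hypothetical names, and a real headset would use a full 6-DoF transform.

```python
import numpy as np

def yaw_matrix(yaw):
    """Rotation about the vertical (z) axis by `yaw` radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def update_model_pose(model_points, head_yaw, head_position):
    """Re-express Nx3 model vertices in the user's view frame:
    translate by the head position, then counter-rotate against head yaw."""
    R = yaw_matrix(-head_yaw)
    return (R @ (model_points - head_position).T).T
```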
Abstract:
A method for object detection in endoscopy images includes capturing an image of an object, by an imaging device, the image including a first light and a second light. The object includes an infrared (IR) marking. The method further includes accessing the image, performing real time image recognition on the image to detect the IR marking, performing real time image recognition on the image to detect the object and classify the object, based on the IR marking, generating an augmented image based on removing the IR marking from the image, and displaying the augmented image on a display.
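A toy version of the IR-marking steps: threshold an IR channel to locate the marking, then paint the marked pixels out of the visible image to produce the augmented frame. The threshold value and the mean-fill "inpainting" are crude stand-ins for the real-time recognition described above; `detect_ir_mask` and `remove_marking` are assumed names.

```python
import numpy as np

def detect_ir_mask(ir_channel, threshold=0.8):
    """Boolean mask of pixels where the IR marking responds strongly."""
    return ir_channel > threshold

def remove_marking(visible, mask):
    """Replace marked pixels with the mean of unmarked pixels
    (a crude inpainting stand-in for display purposes)."""
    out = visible.astype(float).copy()
    out[mask] = out[~mask].mean()
    return out
```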
Abstract:
Systems and methods are provided to mitigate potential collisions between a person and a robotic system. In various embodiments, a robotic surgical system includes a robotic linkage including joints, an endoscope coupled to a distal portion of the robotic linkage and configured to capture stereoscopic images, and a controller in communication with the endoscope. The controller executes instructions to analyze the stereoscopic images from the endoscope to identify a human-held tool in the stereoscopic images and to estimate a type and/or pose of the human-held tool, infer a position of a person holding the human-held tool based on the type and/or pose of the human-held tool, determine a spatial relationship between the person and the robotic linkage based on the inferred position of the person, and generate a warning of potential collision between the person and the robotic linkage based on the determined spatial relationship.
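Once the person's position has been inferred, the spatial-relationship check reduces to a distance query against the linkage geometry. A 2-D sketch, modeling each link as a line segment and warning inside a fixed safe distance; the function names and the 0.5 m default are illustrative assumptions.

```python
import math

def point_segment_distance(p, a, b):
    """Shortest distance from point p to segment ab (2-D)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def collision_warning(person_pos, linkage_segments, safe_distance=0.5):
    """Warn when the inferred person position comes within `safe_distance`
    of any link of the robotic arm."""
    d = min(point_segment_distance(person_pos, a, b) for a, b in linkage_segments)
    return d < safe_distance
```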
Abstract:
A computer-implemented method of object enhancement in endoscopy images is presented. The computer-implemented method includes capturing an image of an object within a surgical operative site, by an imaging device. The image includes a plurality of pixels. Each of the plurality of pixels includes color information. The computer-implemented method further includes accessing the image, accessing data relating to depth information about each of the pixels in the image, inputting the depth information to a machine learning algorithm, emphasizing a feature of the image based on an output of the machine learning algorithm, generating an augmented image based on the emphasized feature, and displaying the augmented image on a display.
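As a non-learned stand-in for the depth-driven emphasis step, the sketch below brightens pixels where the depth map changes sharply, which tends to highlight object boundaries. The name `emphasize_depth_edges` and the gradient-magnitude heuristic are assumptions replacing the machine learning algorithm described above.

```python
import numpy as np

def emphasize_depth_edges(image, depth, strength=1.0):
    """Brighten pixels where the depth map changes sharply, a heuristic
    stand-in for a learned feature-emphasis step. Values stay in [0, 1]."""
    gy, gx = np.gradient(depth.astype(float))
    edge = np.hypot(gx, gy)
    if edge.max() > 0:
        edge = edge / edge.max()         # normalize edge response
    return np.clip(image + strength * edge, 0.0, 1.0)
```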
Abstract:
The present disclosure is directed to a robotic surgical system and a corresponding method. The system includes at least one robot arm and a radiation source coupled to the robot arm. The system also includes a surgical table having a digital imaging receiver configured to output an electrical signal based on radiation received from the radiation source. A controller having a processor and a memory is configured to receive the electrical signal and generate an initial image of a patient on the surgical table based on the electrical signal. The controller transforms the initial image to a transformed image based on an orientation of the radiation source.
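The final orientation-based transform can be as simple as rotating the detector image to compensate for the radiation source's mounting angle. The sketch below snaps to the nearest 90-degree step; `orient_image` is a hypothetical name, and a full system would apply an arbitrary-angle or projective warp instead.

```python
import numpy as np

def orient_image(raw, source_angle_deg):
    """Rotate the detector image so it appears upright on screen,
    compensating for the radiation source's angle
    (nearest 90-degree step only, for this sketch)."""
    k = int(round(source_angle_deg / 90.0)) % 4
    return np.rot90(raw, k)   # k counterclockwise quarter-turns
```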