Abstract:
A robotic control system is placed in clutch mode so that a slave manipulator holding a surgical instrument is temporarily disengaged from control by a master manipulator in order to allow manual positioning of the surgical instrument at a surgical site within a patient. Control systems implemented in a processor compensate for internally generated frictional and inertial resistance experienced during the positioning, thereby making movement more comfortable for the person performing the positioning and more stable from a control standpoint. Each control system drives a joint motor in the slave manipulator with a saturated torque command signal generated to compensate for non-linear viscous forces, Coulomb friction, cogging effects, and inertial forces to which the joint is subjected, using estimated joint angular velocities, accelerations, and externally applied torques generated by an observer in the control system from sampled displacement measurements received from a sensor associated with the joint.
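As an illustration of the compensation described above, the following is a minimal per-joint sketch in Python, assuming a simple finite-difference observer and placeholder friction, cogging, inertia, and saturation values; the names and numbers are illustrative and are not taken from the described system.

```python
import numpy as np

def compensation_torque(theta, theta_prev, theta_dot_prev, dt,
                        inertia=0.01, viscous=0.05, coulomb=0.02,
                        cogging_amp=0.005, cogging_cycles=12,
                        torque_limit=0.5):
    """Sketch of a saturated compensation torque command for one joint.

    theta, theta_prev : current and previous sampled joint displacements (rad)
    theta_dot_prev    : previous velocity estimate (rad/s)
    dt                : sample period (s)
    All gains and limits are placeholders, not values from the system above.
    """
    # Simple observer stand-in: estimate velocity and acceleration from
    # sampled displacement measurements.
    theta_dot = (theta - theta_prev) / dt
    theta_ddot = (theta_dot - theta_dot_prev) / dt

    # Compensate non-linear viscous drag, Coulomb friction, cogging ripple,
    # and inertial resistance.
    tau = (viscous * theta_dot * abs(theta_dot)
           + coulomb * np.sign(theta_dot)
           + cogging_amp * np.sin(cogging_cycles * theta)
           + inertia * theta_ddot)

    # Saturate the torque command before driving the joint motor.
    return float(np.clip(tau, -torque_limit, torque_limit))
```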
Abstract:
In a robotic endoscope system, the orientation of a camera view captured at a distal tip of a robotic endoscope and displayed on a screen viewable by an operator of the endoscope is automatically maintained at a roll orientation associated with a setpoint so as not to disorient the operator as the endoscope is moved, flexed, and its tip turned in different orientations. A processor generates a current commanded state of the tip from operator input and modifies it to maintain the setpoint roll orientation. To generate the modified current commanded state, the current commanded roll position and velocity are constrained to a modified current commanded roll position and velocity that have been adjusted according to a roll angular adjustment indicated by the commanded state of the tip from the prior process period and the setpoint. The processor then commands the robotic endoscope to be driven to the modified commanded state.
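A minimal sketch of the roll-constraint step, with invented state names and a rate limit added purely for illustration; the actual commanded-state representation and adjustment law are not specified above.

```python
def constrain_roll(cmd_state, prev_cmd_state, setpoint, dt, max_roll_rate=0.5):
    """Sketch: modify the current commanded tip state so the displayed view
    rolls back toward the setpoint orientation at a bounded rate.
    cmd_state / prev_cmd_state are dicts with "roll" and "roll_velocity"
    entries (an illustrative representation, not the system's)."""
    # Roll angular adjustment indicated by the prior-period commanded state
    # and the setpoint.
    adjustment = setpoint - prev_cmd_state["roll"]

    # Apply only a bounded portion of the correction this process period.
    step = max(-max_roll_rate * dt, min(max_roll_rate * dt, adjustment))

    modified = dict(cmd_state)
    modified["roll"] = prev_cmd_state["roll"] + step   # modified roll position
    modified["roll_velocity"] = step / dt              # modified roll velocity
    return modified
```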
Abstract:
A system comprises a manually operated instrument, a reference fixture positioned in a fixed world reference frame, and a shape sensor coupled to the manually operated instrument. The shape sensor extends from the manually operated instrument to the reference fixture. The system further comprises a controller configured to receive, from the shape sensor, shape information for the manually operated instrument. The controller is further configured to determine a pose of the manually operated instrument in the fixed world reference frame based on the shape information for the manually operated instrument.
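A minimal sketch of the pose determination, assuming the shape information has already been reduced to per-segment 4x4 homogeneous transforms along the sensor; that preprocessing step and the names here are assumptions, not details from the description above.

```python
import numpy as np

def pose_in_world(T_world_fixture, segment_transforms):
    """Sketch: chain shape-derived segment transforms from the reference
    fixture (whose pose in the fixed world frame is known) out to the
    manually operated instrument.

    T_world_fixture    : 4x4 pose of the reference fixture in the world frame
    segment_transforms : list of 4x4 relative transforms, one per sensed
                         segment of the shape sensor
    """
    T = np.asarray(T_world_fixture, dtype=float)
    for T_seg in segment_transforms:
        T = T @ np.asarray(T_seg, dtype=float)  # accumulate along the sensor
    return T  # pose of the instrument in the fixed world reference frame
```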
Abstract:
A surgical access port comprises an instrument guide. The instrument guide comprises a proximal end, a distal end, a plurality of instrument guide channels between the proximal and distal ends, and an outside surface that fits closely to an inner wall surface of a cannula into which the instrument guide is inserted. The instrument guide also comprises a first guide channel opening defined along a length of a first guide channel of the plurality of instrument guide channels. The instrument guide also comprises a second guide channel opening defined along a length of a second guide channel of the plurality of instrument guide channels. The instrument guide also comprises an insufflation channel defined between the first and second guide channel openings. The insufflation channel extends from a position between the proximal and distal ends of the instrument guide to the distal end of the instrument guide.
Abstract:
A system includes a control device, a manipulator configured to support a tool having a tool frame, and at least one processor coupled to the control device and the manipulator. The at least one processor is configured to perform a method. The method includes receiving one or more images captured by an image-capturing system, the image-capturing system having an image frame. The tool is visible in the one or more images. The method also includes determining an estimated frame transform based on the one or more images. The estimated frame transform is used in defining an unknown frame transform between the image frame and the tool frame. The method also includes determining, in response to an input received at the control device, an output movement for the tool based on the estimated frame transform. The method also includes causing movement of the tool based on the output movement.
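A minimal sketch of the final steps, assuming the estimated frame transform has already been reduced to a 3x3 rotation relating the image frame to the tool frame; the image-based estimation itself is not shown, and the names are illustrative.

```python
import numpy as np

def tool_motion_from_input(delta_image, R_image_to_tool):
    """Sketch: map an input received at the control device, expressed in the
    image frame, into an output movement for the tool using the estimated
    frame transform between the image frame and the tool frame."""
    return np.asarray(R_image_to_tool, dtype=float) @ np.asarray(delta_image, dtype=float)
```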
Abstract:
A mechanical interface for a robotic medical instrument permits engagement of the instrument and a drive system without causing movement of an actuated portion of the instrument. An instrument interface can include a symmetrical, tapered or cylindrical projection on one of a medical instrument or a drive system and a complementary bore in the other of the drive system or the medical instrument. Symmetry of the projection and the bore allows the projection to be compression fit to the bore regardless of the rotation angle of the drive system relative to the medical instrument.
Abstract:
A controller operates in different operating modes to control movement of a distal tip of a medical instrument when inserting and retracting the medical instrument through linked body passages. When inserting the medical instrument, the controller normally operates in an automatic navigation mode unless manually overridden to operate in a manual mode. When retracting the medical instrument, the controller normally operates in a zero-force mode to allow the distal tip to move freely so that it may comply with the shape of the passages as the medical instrument is being retracted through the linked body passages, unless manually overridden to operate in a manual mode.
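A minimal sketch of the mode selection described above, with illustrative mode and flag names.

```python
from enum import Enum, auto

class Mode(Enum):
    AUTOMATIC_NAVIGATION = auto()
    ZERO_FORCE = auto()
    MANUAL = auto()

def select_mode(inserting, manual_override):
    """Sketch: pick the operating mode from the motion direction and an
    operator override flag (both names are illustrative)."""
    if manual_override:
        return Mode.MANUAL
    # Insertion defaults to automatic navigation; retraction defaults to a
    # zero-force mode that lets the tip comply with the passage shape.
    return Mode.AUTOMATIC_NAVIGATION if inserting else Mode.ZERO_FORCE
```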
Abstract:
Vision systems on catheters, cannulas, or similar devices with guiding lumens include receptors distributed in annular areas around respective lumens. Each of the receptors has a field of view covering only a portion of an object environment, and the field of view of each of the receptors overlaps with at least one of the fields of view of the other receptors. A processing system can receive image data from the receptors of the vision systems and combine the image data to construct a visual representation of the object environment.
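A minimal sketch of combining the receptor images, assuming each receptor's placement in the composite is already known and that overlapping regions are simply averaged; real registration and warping of the overlapping fields of view is not shown.

```python
import numpy as np

def composite_view(tiles, offsets, width):
    """Sketch: merge grayscale receptor tiles into one visual representation.

    tiles   : list of 2-D arrays, one per receptor (same height)
    offsets : known starting column of each tile in the composite
    width   : total column count of the composite (wraps around the annulus)
    """
    height = tiles[0].shape[0]
    acc = np.zeros((height, width))
    weight = np.zeros((height, width))
    for tile, col in zip(tiles, offsets):
        cols = (col + np.arange(tile.shape[1])) % width  # wrap around
        acc[:, cols] += tile
        weight[:, cols] += 1.0
    return acc / np.maximum(weight, 1.0)  # average where fields of view overlap
```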
Abstract:
Information extracted from sequential images captured from the perspective of a distal end of a medical device moving through an anatomical structure is compared with corresponding information extracted from a computer model of the anatomical structure. A most likely match between the information extracted from the sequential images and the corresponding information extracted from the computer model is then determined using probabilities associated with a set of potential matches so as to register the computer model of the anatomical structure to the medical device and thereby determine the lumen of the anatomical structure in which the medical device is currently located. Sensor information may be used to limit the set of potential matches. Feature attributes associated with the sequence of images and the set of potential matches may be quantitatively compared as part of the determination of the most likely match.
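A minimal sketch of selecting the most likely match, with invented inputs standing in for the image-derived feature attributes, the model-derived candidates, their probabilities, and the sensor constraint.

```python
import numpy as np

def most_likely_lumen(image_features, candidate_features, candidate_priors,
                      allowed_by_sensor):
    """Sketch: pick the model lumen whose feature attributes best match
    those extracted from the sequential images.

    image_features     : 1-D array of attributes from the image sequence
    candidate_features : {lumen_id: 1-D array} from the computer model
    candidate_priors   : {lumen_id: prior probability (> 0)}
    allowed_by_sensor  : set of lumen_ids consistent with sensor information
    """
    best_id, best_score = None, -np.inf
    for lumen_id, feats in candidate_features.items():
        if lumen_id not in allowed_by_sensor:  # sensor info limits the set
            continue
        # Quantitative comparison of feature attributes: Gaussian-style
        # log-likelihood of the feature differences, plus the prior.
        diff = np.asarray(image_features, float) - np.asarray(feats, float)
        score = -0.5 * float(diff @ diff) + np.log(candidate_priors[lumen_id])
        if score > best_score:
            best_id, best_score = lumen_id, score
    return best_id
```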
Abstract:
Methods and systems perform tool tracking during minimally invasive robotic surgery. Tool states are determined using triangulation techniques or a Bayesian filter from either or both of non-endoscopically derived and endoscopically derived tool state information, or from either or both of non-visually derived and visually derived tool state information. The non-endoscopically derived tool state information is derived from sensor data provided by sensors associated with a mechanism for manipulating the tool, by sensors capable of detecting identifiable signals emanating or reflecting from the tool and indicative of its position, or by external cameras viewing an end of the tool extending out of the body. The endoscopically derived tool state information is derived from image data provided by an endoscope inserted in the body so as to view the tool.
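A minimal sketch of the fusion step using a scalar Kalman-style update, which is one form of Bayesian filter; the variable names and measurement variances are illustrative, and the triangulation alternative is not shown.

```python
def fuse_tool_state(x_pred, P_pred, z_kin, R_kin, z_vis, R_vis):
    """Sketch: fuse a non-endoscopically derived (kinematics/sensor) and an
    endoscopically derived (image-based) estimate of one tool-state component.

    x_pred, P_pred : predicted tool state and its variance
    z_kin, R_kin   : kinematics-derived measurement and its variance
    z_vis, R_vis   : endoscope-image-derived measurement and its variance
    """
    x, P = float(x_pred), float(P_pred)
    for z, R in ((z_kin, R_kin), (z_vis, R_vis)):
        K = P / (P + R)        # Kalman gain
        x = x + K * (z - x)    # blend the measurement into the estimate
        P = (1.0 - K) * P      # reduce uncertainty after the update
    return x, P
```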