Abstract:
A system and method for an articulated arm based tool guide includes an elongated body having a guide hole, a first joint attached to a first end of the body, a second joint attached to a second end of the body opposite the first end, a first mounting arm coupled to the body using the first joint, and a second mounting arm coupled to the body using the second joint. The first mounting arm is configured to be attached to a first articulated arm of a computer-assisted medical device. The second mounting arm is configured to be attached to a second articulated arm of the computer-assisted medical device. The guide hole is adapted to receive a medical tool and maintain a working end of the medical tool in alignment with the guide. In some embodiments, the first mounting arm includes identifying information identifying the tool guide as a tool guide.
Abstract:
A system and method for movement control includes a controller coupled to a computer-assisted surgical device having a first movable arm coupled to a manipulatable device having a working end and a second movable arm coupled to an image capturing device. The controller is configured to receive first configurations for the first movable arm; receive second configurations for the second movable arm; receive a plurality of images of the working end from the image capturing device; determine a position and an orientation of the working end; determine a first movable arm position and trajectory for the first movable arm; determine a second movable arm position and trajectory for the second movable arm; determine whether motion of the movable arms will result in an undesirable relationship between the movable arms; and send a movement command to the first or second movable arm to avoid the undesirable relationship.
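The collision-avoidance step described above can be sketched in Python. This is a minimal illustration, not the patented method: the arm positions, constant-velocity trajectories, prediction horizon, and `MIN_CLEARANCE` threshold are all assumed for the example, and "undesirable relationship" is reduced to predicted separation falling below a clearance distance.

```python
import math

MIN_CLEARANCE = 0.05  # metres; assumed safety threshold

def predict_positions(position, velocity, steps=10, dt=0.1):
    """Extrapolate an arm's position along a constant-velocity trajectory."""
    return [tuple(p + v * dt * k for p, v in zip(position, velocity))
            for k in range(steps)]

def undesirable_relationship(pos1, vel1, pos2, vel2):
    """Return True if the two arms' predicted paths come too close."""
    for p1, p2 in zip(predict_positions(pos1, vel1),
                      predict_positions(pos2, vel2)):
        if math.dist(p1, p2) < MIN_CLEARANCE:
            return True
    return False

# Arms moving toward each other along x collide; offset parallel arms do not.
print(undesirable_relationship((0.0, 0.0, 0.0), (0.5, 0.0, 0.0),
                               (0.5, 0.0, 0.0), (-0.5, 0.0, 0.0)))  # True
print(undesirable_relationship((0.0, 0.0, 0.0), (0.5, 0.0, 0.0),
                               (0.0, 1.0, 0.0), (0.5, 0.0, 0.0)))   # False
```

In a real controller the prediction would come from the received arm configurations and full kinematics rather than straight-line extrapolation, and the movement command would modify rather than merely flag the trajectory.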
Abstract:
A system may access medical session data for a medical session and determine, based on the medical session data, that an event occurs within a region of interest associated with the medical session. The medical session includes performance of one or more operations by a computer-assisted medical system. The system may identify a physical location within the region of interest and associated with the event, identify content captured by an imaging device and depicting the physical location when the event occurred, and associate the content with the physical location. After the event occurs, the system may provide a user with access to the content when the physical location is within a field of view of the imaging device.
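The tagging flow above can be illustrated with a small sketch. All names here are assumed: event content is keyed by its physical location, and access is granted only while that location falls inside a crude radial model of the imaging device's field of view.

```python
import math

tagged_content = {}  # physical location (x, y, z) -> captured content

def record_event(location, content):
    """Associate content depicting the event with its physical location."""
    tagged_content[location] = content

def in_field_of_view(location, camera_center, fov_radius):
    """Crude field-of-view test: location within a radius of the view centre."""
    return math.dist(location, camera_center) <= fov_radius

def accessible_content(camera_center, fov_radius=1.0):
    """Content the user may access given the imaging device's current view."""
    return [content for loc, content in tagged_content.items()
            if in_field_of_view(loc, camera_center, fov_radius)]

record_event((0.2, 0.1, 0.0), "clip applied at vessel")
record_event((5.0, 5.0, 0.0), "bleeding observed")
print(accessible_content(camera_center=(0.0, 0.0, 0.0)))
# ['clip applied at vessel']
```

A deployed system would derive the field of view from camera intrinsics and pose rather than a fixed radius, but the association step (location as the key for later retrieval) is the core of the idea.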
Abstract:
An exemplary image registration system identifies a subsurface structure at a surgical site based on subsurface imaging data from a subsurface image scan at the surgical site. The image registration system uses the identified subsurface structure at the surgical site for a registration of endoscopic imaging data from an endoscopic imaging modality with additional imaging data from an additional imaging modality. Corresponding systems and methods are also disclosed.
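A toy version of using a shared subsurface structure as the registration reference is shown below. This is a deliberately reduced sketch with assumed names: corresponding structure points from the two modalities are aligned by a centroid translation only, whereas a real registration would solve for a full rigid or deformable transform.

```python
def centroid(points):
    """Mean point of a list of equal-dimension tuples."""
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def register_by_structure(endoscopic_pts, additional_pts):
    """Offset mapping additional-modality points onto endoscopic ones."""
    ce = centroid(endoscopic_pts)
    ca = centroid(additional_pts)
    return tuple(e - a for e, a in zip(ce, ca))

def apply_offset(points, offset):
    return [tuple(p + o for p, o in zip(pt, offset)) for pt in points]

# The same subsurface structure, seen shifted by (1, 2) in the other modality.
endo = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]
scan = [(1.0, 2.0), (3.0, 2.0), (2.0, 5.0)]
offset = register_by_structure(endo, scan)
print(offset)                     # (-1.0, -2.0)
print(apply_offset(scan, offset)) # matches the endoscopic points
```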
Abstract:
A first display image includes an augmentation region and a first captured image captured by an imaging device, the first captured image showing a first view of a surgical area that includes surface anatomy and an object located at the surgical area. The augmentation region occludes at least a portion of the first view. Subsequent to capture of the first captured image, the system detects an overlap between the object and the augmentation region and determines whether the overlap is due to motion of the object. The system directs the display device to display a second display image including a second captured image captured by the imaging device. Inclusion of the augmentation region in the second display image and/or an extent of the occlusion within the overlap in the second display image is based on the determination of whether the overlap is due to motion of the object.
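The overlap decision can be sketched as follows. The rectangle model, motion threshold, and function names are illustrative assumptions: the augmentation region and detected object are axis-aligned boxes, and the occluding augmentation is kept only when the object itself moved into the region.

```python
def boxes_overlap(a, b):
    """Each box is (x_min, y_min, x_max, y_max)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def object_moved(prev_box, curr_box, threshold=1.0):
    """True if any box coordinate changed by more than the threshold."""
    return any(abs(p - c) > threshold for p, c in zip(prev_box, curr_box))

def keep_augmentation(augmentation, prev_object, curr_object):
    """Occlude the object only when the overlap is due to object motion."""
    if not boxes_overlap(augmentation, curr_object):
        return True  # no overlap: render the augmentation as-is
    return object_moved(prev_object, curr_object)

aug = (10, 10, 20, 20)
print(keep_augmentation(aug, (0, 0, 5, 5), (12, 12, 16, 16)))    # True
print(keep_augmentation(aug, (12, 12, 16, 16), (12, 12, 16, 16)))  # False
```

The second call returns False because the object is static inside the region, so the system would suppress or shrink the augmentation rather than let it occlude the object.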
Abstract:
A method is provided to model a 3D structure, comprising: producing a surface mesh representation of the 3D structure; producing a volume mesh representation of the 3D structure based upon the surface mesh; sorting vertices of the volume mesh into a first sub-list that includes only surface vertices and a second sub-list that includes only internal vertices; applying shading to the surface mesh by accessing only surface vertices in the first sub-list; and determining deformation of the volume mesh by accessing both surface vertices in the first sub-list and internal vertices in the second sub-list.
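The vertex-sorting scheme above can be sketched in a few lines. The meshes and the "shading" and "deformation" bodies are placeholders: the point illustrated is only that shading touches the surface sub-list alone, while deformation iterates over both sub-lists (the whole volume mesh).

```python
def sort_vertices(volume_vertices, surface_vertex_ids):
    """Split volume-mesh vertex ids into surface and internal sub-lists."""
    surface, internal = [], []
    for vid in range(len(volume_vertices)):
        (surface if vid in surface_vertex_ids else internal).append(vid)
    return surface, internal

def apply_shading(surface_list):
    """Shading accesses only the surface sub-list."""
    return {vid: "shaded" for vid in surface_list}

def apply_deformation(vertices, surface_list, internal_list, offset):
    """Deformation accesses both sub-lists, i.e. every volume-mesh vertex."""
    return {vid: tuple(c + offset for c in vertices[vid])
            for vid in surface_list + internal_list}

verts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.5)]   # last vertex is internal
surface, internal = sort_vertices(verts, {0, 1})
print(surface, internal)                        # [0, 1] [2]
print(len(apply_shading(surface)))              # 2
print(apply_deformation(verts, surface, internal, 1.0)[2])  # (1.5, 1.5)
```

Keeping the two sub-lists separate lets the renderer skip internal vertices entirely, which is the efficiency the sorting step buys.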
Abstract:
A teleoperated surgical system is provided comprising: a first robotic surgical instrument; an image capture device; a user display; a user input command device coupled to receive user input commands to control movement of the first robotic surgical instrument; and a movement controller coupled to scale a rate of movement of the first robotic surgical instrument from the rate of movement indicated by the user input commands received at the user input command device, based at least in part upon the skill level, at using the first robotic surgical instrument, of the user providing the received user input commands.
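The rate-scaling idea reduces to multiplying the commanded rate by a skill-dependent factor. The skill levels and scale factors below are assumed values for illustration only; a novice's inputs produce slower instrument motion than an expert's.

```python
SKILL_SCALE = {"novice": 0.25, "intermediate": 0.5, "expert": 1.0}

def scaled_rate(commanded_rate, skill_level):
    """Scale the user-commanded rate of movement by the user's skill level."""
    return commanded_rate * SKILL_SCALE[skill_level]

print(scaled_rate(10.0, "novice"))  # 2.5
print(scaled_rate(10.0, "expert"))  # 10.0
```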
Abstract:
A system may render, within a graphical user interface associated with a computer-assisted medical system, a graphical tag element associated with a physical location within a region of interest. The system may detect a user interaction with the graphical tag element. The system may further direct, in response to the detecting of the user interaction with the graphical tag element, the computer-assisted medical system to adjust a pose of an instrument based on the physical location within the region of interest.
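The tag-interaction flow can be sketched with assumed class and handler names: a graphical tag element carries the physical location it marks, and a detected user interaction with it directs the medical system to retarget the instrument pose at that location.

```python
class GraphicalTag:
    """A rendered tag element bound to a physical location."""
    def __init__(self, tag_id, physical_location):
        self.tag_id = tag_id
        self.physical_location = physical_location

class MedicalSystem:
    """Stand-in for the computer-assisted medical system."""
    def __init__(self):
        self.instrument_target = None  # pose target for the instrument

    def adjust_pose(self, location):
        self.instrument_target = location

def on_tag_clicked(system, tag):
    """Handler run when a user interaction with the tag is detected."""
    system.adjust_pose(tag.physical_location)

system = MedicalSystem()
tag = GraphicalTag("vessel-margin", (0.12, -0.03, 0.40))
on_tag_clicked(system, tag)
print(system.instrument_target)  # (0.12, -0.03, 0.4)
```

The essential binding is tag-to-location, so any interaction gesture (click, gaze, voice selection) can reuse the same handler.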