Abstract:
A system comprises a first robotic arm adapted to support and move a tool and a second robotic arm adapted to support and move a camera. The system also comprises an input device, a display, and a processor. The processor is configured to, in a first mode, command the second robotic arm to move the camera in response to a first input received from the input device, to capture an image of the tool, and to present the image as a displayed image on the display. The processor is configured to, in a second mode, display a synthetic image of the first robotic arm in a boundary area around the captured image on the display and, in response to a second input, change a size of the boundary area relative to a size of the displayed image.
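The second mode's layout arithmetic can be illustrated with a minimal sketch, assuming the boundary area and displayed image share a fixed viewport; the function name and the 0..1 "boundary_fraction" parameterization are hypothetical, not the patent's interface.

# Hypothetical sketch: resizing the boundary area around a displayed
# camera image inside a fixed viewport. Increasing boundary_fraction
# shrinks the image and widens the band in which the synthetic view
# of the first robotic arm would be drawn.

def layout_viewport(viewport_w, viewport_h, boundary_fraction):
    """Return (image_rect, border_thickness) for a centered camera image."""
    assert 0.0 <= boundary_fraction < 1.0
    img_w = int(viewport_w * (1.0 - boundary_fraction))
    img_h = int(viewport_h * (1.0 - boundary_fraction))
    x0 = (viewport_w - img_w) // 2
    y0 = (viewport_h - img_h) // 2
    return (x0, y0, img_w, img_h), (x0, y0)

# A second input grows the boundary area relative to the displayed image:
print(layout_viewport(1920, 1080, 0.10))  # narrow boundary band
print(layout_viewport(1920, 1080, 0.30))  # wider band, smaller image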
Abstract:
A laparoscopic ultrasound (LUS) robotic surgical system is trainable by a surgeon to automatically move a LUS probe in a desired fashion upon command, so that the surgeon does not have to do so manually during a minimally invasive surgical procedure. A sequence of 2D ultrasound image slices captured by the LUS probe according to stored instructions is processable into a 3D ultrasound computer model of an anatomic structure, which may be displayed as a 3D or 2D overlay to a camera view or in a picture-in-picture (PIP) window, as selected by the surgeon or programmed to assist the surgeon in inspecting an anatomic structure for abnormalities. Virtual fixtures are definable so as to assist the surgeon in accurately guiding a tool to a target on the displayed ultrasound image.
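The slices-to-volume step can be illustrated with a minimal reconstruction sketch, assuming each 2D slice arrives with a tracked pixel-to-world pose from the probe kinematics; this averaging scheme and all names are illustrative, not the patent's algorithm.

import numpy as np

# Minimal sketch: splat tracked 2D ultrasound slices into a 3D voxel grid.
# Each pose is a 4x4 transform mapping slice-pixel coordinates (u, v, 0, 1)
# to 3D points in the volume frame.

def reconstruct_volume(slices, poses, shape=(128, 128, 128), spacing=1.0):
    """slices: list of 2D arrays; poses: list of 4x4 pixel-to-world transforms."""
    vol = np.zeros(shape, dtype=np.float32)
    count = np.zeros(shape, dtype=np.float32)
    for img, T in zip(slices, poses):
        v, u = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        pix = np.stack([u.ravel(), v.ravel(),
                        np.zeros(u.size), np.ones(u.size)])  # homogeneous
        xyz = (T @ pix)[:3] / spacing                 # world -> voxel coords
        idx = np.round(xyz).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(shape)[:, None]), axis=0)
        i, j, k = idx[:, ok]
        np.add.at(vol, (i, j, k), img.ravel()[ok])
        np.add.at(count, (i, j, k), 1.0)
    return vol / np.maximum(count, 1.0)               # average overlapping samples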
Abstract:
In one embodiment, a digital zoom and panning system for digital video is disclosed, including: an image acquisition device to capture digital video images; an image buffer to store one or more frames of the digital video images as source pixels; a display device having first pixels to display images; a user interface to accept user input, including a source rectangle to select source pixels within frames of the digital video images, a destination rectangle to select target pixels within the display device to display images, and a region of interest within the digital video images to display in the destination rectangle; and a digital mapping and filtering device to selectively map and filter source pixels in the region of interest from the image buffer into target pixels of the display device in response to the user input.
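The map-and-filter step is essentially a resampling of the source region of interest into the destination rectangle. A minimal sketch with bilinear filtering follows; grayscale frames and the (x, y, w, h) rectangle convention are assumptions of this sketch, not details from the patent.

import numpy as np

# Resample a rectangular region of interest of a source frame into a
# destination rectangle, with bilinear filtering between the four
# nearest source pixels of each target pixel.

def zoom_pan(frame, src_rect, dst_size):
    sx, sy, sw, sh = src_rect                  # region of interest (source px)
    dw, dh = dst_size                          # destination rectangle size
    ys = sy + (np.arange(dh) + 0.5) * sh / dh - 0.5   # dest row -> source row
    xs = sx + (np.arange(dw) + 0.5) * sw / dw - 0.5   # dest col -> source col
    y0 = np.clip(np.floor(ys).astype(int), 0, frame.shape[0] - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, frame.shape[1] - 2)
    fy = np.clip(ys - y0, 0.0, 1.0)[:, None]   # fractional offsets
    fx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    a, b = frame[y0][:, x0], frame[y0][:, x0 + 1]          # top neighbors
    c, d = frame[y0 + 1][:, x0], frame[y0 + 1][:, x0 + 1]  # bottom neighbors
    return (a * (1 - fx) + b * fx) * (1 - fy) + (c * (1 - fx) + d * fx) * fy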
Abstract:
An operator telerobotically controls tools to perform a procedure on an object at a work site while viewing real-time images of the work site on a display. Tool information is provided in the operator's current gaze area on the display by rendering the tool information over the tool, so as not to obscure objects being worked on by the tool at the time, and so that the user's eyes need not refocus when looking at the tool information and the image of the tool on a stereo viewer.
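The no-refocus property can be sketched with a simple pinhole stereo model (the camera parameters and names here are illustrative): the label is drawn in each eye's image at the tool tip's projection, so it carries the same disparity, and hence the same apparent depth, as the tool itself.

# Sketch: place a tool-info label at the tool tip's projection in each
# stereo eye so the label sits at the tool's apparent depth. A pinhole
# model with focal length f (pixels) and interocular baseline b (meters)
# is assumed.

def stereo_label_positions(tool_xyz, f=1000.0, b=0.065):
    x, y, z = tool_xyz                     # tool tip in camera frame, meters
    u_left = f * (x + b / 2) / z           # left-eye image column
    u_right = f * (x - b / 2) / z          # right-eye image column
    v = f * y / z                          # same row in both eyes
    return (u_left, v), (u_right, v)       # draw the label at both points

left, right = stereo_label_positions((0.01, -0.02, 0.12))
disparity = left[0] - right[0]             # equals f * b / z, matching the tool
print(left, right, disparity)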
Abstract:
A surgical site is simultaneously illuminated by an illuminator in a minimally invasive surgical system with less than all of the visible color components that make up visible white light, plus a fluorescence excitation illumination component. An image capture system acquires an image for each of the visible color components illuminating the surgical site, as well as a fluorescence image excited by the fluorescence excitation component from the illuminator. The minimally invasive surgical system uses the acquired images to generate a background black-and-white image of the surgical site. The acquired fluorescence image is superimposed on the background black-and-white image and is highlighted in a selected color, e.g., green. The background black-and-white image with the superimposed highlighted fluorescence image is displayed for a user of the system. The highlighted fluorescence image identifies tissue of clinical interest.
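The compositing step can be sketched as a per-pixel blend, assuming images normalized to [0, 1]; this scheme and the names below are illustrative, not the system's actual pipeline.

import numpy as np

# Build a black-and-white background from the captured color-component
# images, then superimpose the fluorescence image in a highlight color,
# weighted by the fluorescence intensity at each pixel.

def composite(color_components, fluorescence, highlight=(0.0, 1.0, 0.0)):
    """color_components: list of 2D arrays in [0, 1]; fluorescence: 2D in [0, 1]."""
    gray = np.mean(color_components, axis=0)       # background B&W image
    bw = np.stack([gray] * 3, axis=-1)             # promote to RGB
    alpha = fluorescence[..., None]                # per-pixel blend weight
    return bw * (1.0 - alpha) + np.asarray(highlight) * alpha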
Abstract:
A surgical instrument is disclosed that includes a housing linkable with a manipulator arm of a robotic surgical system, a shaft coupled to the housing, a force transducer on a distal end of the shaft, and a plurality of fiber optic strain gauges on the force transducer. The strain gauges are coupled to a fiber optic splitter or an arrayed waveguide grating (AWG) multiplexer, which can be coupled to a fiber optic connector. A wrist joint coupled to an end effector is coupled to a distal end of the force transducer. Also disclosed is a robotic surgical manipulator that includes a base link coupled to a distal end of a manipulator positioning system, a distal link with an instrument interface, and a fiber optic connector optically linkable to a surgical instrument. A method of passing data between an instrument and a manipulator via optical connectors is also provided.
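One common way to turn multiple strain-gauge readings into a force estimate is a precomputed linear calibration; the sketch below shows that generic least-squares scheme, which is an assumption for illustration, not the patent's disclosed method.

import numpy as np

# Fit a calibration matrix C from training data, then estimate force as
# a linear combination of the N gauge readings: force ≈ C @ strains.

def calibrate(strains_train, forces_train):
    """strains_train: (N, T) readings over T trials; forces_train: (3, T)."""
    C_T, *_ = np.linalg.lstsq(strains_train.T, forces_train.T, rcond=None)
    return C_T.T                       # shape (3, N)

def estimate_force(C, strains):
    return C @ strains                 # (Fx, Fy, Fz) from N gauge readings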
Abstract:
A medical robotic system includes a viewer, a gaze tracker, and a processor programmed to: draw an area- or volume-defining shape overlaid on an image, based on the tracked gaze point, after the user has gazed on the tracked gaze point for a programmed period of time; in response to receiving a user-selected action command, assign a fixed virtual constraint to the area- or volume-defining shape and constrain movement of a robotic tool accordingly; map points of the robotic tool in a tool reference frame to a viewer reference frame; determine, based at least in part on the mapped points, that the closest object to the tracked gaze point is the robotic tool; display an object, including text identifying the robotic tool, overlaid on the image proximate to the robotic tool based on determining that the robotic tool is the closest object; and perform an action indicated by the object.
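The frame-mapping and closest-object steps can be sketched as follows, assuming a 4x4 homogeneous transform from tool frame to viewer frame is available from the kinematics; all names are illustrative.

import numpy as np

# Map tool-frame points into the viewer frame, then pick the tool whose
# mapped points come nearest the tracked gaze point, so its identifying
# text can be displayed proximate to it.

def map_to_viewer(T_viewer_tool, points_tool):
    """points_tool: (N, 3) tool-frame points -> (N, 3) viewer-frame points."""
    homog = np.hstack([points_tool, np.ones((len(points_tool), 1))])
    return (T_viewer_tool @ homog.T).T[:, :3]

def closest_tool(gaze_point, tools):
    """tools: dict name -> (N, 3) viewer-frame points; returns nearest tool."""
    dist = {name: np.min(np.linalg.norm(pts - gaze_point, axis=1))
            for name, pts in tools.items()}
    return min(dist, key=dist.get)     # label this tool near the gaze point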