Abstract:
A system and method for image sharpening are provided that involve capturing an image and then decomposing the image into a plurality of image-representation components, such as RGB components. Each image-representation component is transformed to obtain an unsharpened multi-resolution representation for each image-representation component. A multi-resolution representation includes a plurality of transformation level representations. Sharpness information is transported from an unsharpened transformation level representation of a first one of the image-representation components to a transformation level representation of an unsharpened multi-resolution representation of a second one of the image-representation components to create a sharpened multi-resolution representation of the second one of the image-representation components. The sharpened multi-resolution representation of the second one of the image-representation components is then transformed to obtain a sharpened image. The sharpened image may then be displayed.
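The multi-resolution transfer described above can be illustrated with a short sketch: each colour channel is wavelet-decomposed, the detail ("sharpness") coefficients of one channel are copied into the corresponding transformation level of another channel, and the result is inverse transformed. PyWavelets, the channel roles, and the choice of levels are assumptions for illustration; the abstract does not specify a particular transform.

```python
# Hedged sketch: transfer wavelet detail ("sharpness") coefficients from one
# colour channel's multi-resolution representation to another's, then
# reconstruct. PyWavelets stands in for the unspecified transform; the channel
# roles and level choice below are assumptions.
import numpy as np
import pywt

def transport_sharpness(src_channel, dst_channel, wavelet="db2", levels=3,
                        transfer_levels=(1,)):
    """Copy detail coefficients of `src_channel` into `dst_channel`
    at the given transformation levels (1 = finest) and reconstruct."""
    src = pywt.wavedec2(src_channel, wavelet, level=levels)
    dst = pywt.wavedec2(dst_channel, wavelet, level=levels)
    for lvl in transfer_levels:
        # wavedec2 orders levels coarsest-first; index from the end for "finest".
        dst[-lvl] = src[-lvl]
    sharpened = pywt.waverec2(dst, wavelet)
    return sharpened[:dst_channel.shape[0], :dst_channel.shape[1]]

# Example: sharpen the red channel using detail from the green channel.
rgb = np.random.rand(256, 256, 3)            # placeholder for a captured image
sharp_red = transport_sharpness(rgb[..., 1], rgb[..., 0])
```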
Abstract:
A synthetic representation of a robot tool for display on a user interface of a robotic system. The synthetic representation may be used to show the position of a view volume of an image capture device with respect to the robot. The synthetic representation may also be used to find a tool that is outside of the field of view, to display range of motion limits for a tool, to remotely communicate information about the robot, and to detect collisions.
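One concrete piece of the behaviour described, deciding whether a kinematically known tool tip lies inside the camera's view volume so the interface can point the user toward an out-of-view instrument, might look roughly like the following. The frustum parameters and coordinate convention are illustrative assumptions, not drawn from the abstract.

```python
# Minimal sketch: test whether a tool tip (known from kinematics, expressed in
# camera coordinates) lies inside an assumed view frustum, so the UI can show
# an out-of-view cue. Frustum angles and clip distances are hypothetical.
import numpy as np

def in_view_volume(p_cam, hfov_deg=60.0, vfov_deg=45.0, near=0.01, far=0.30):
    """p_cam: tool-tip position (x, y, z) in metres, camera looking along +z."""
    x, y, z = p_cam
    if not (near <= z <= far):
        return False
    return (abs(x) <= z * np.tan(np.radians(hfov_deg / 2)) and
            abs(y) <= z * np.tan(np.radians(vfov_deg / 2)))

tool_tip_cam = np.array([0.02, -0.01, 0.12])   # hypothetical kinematic estimate
print("tool visible" if in_view_volume(tool_tip_cam) else "show out-of-view cue")
```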
Abstract:
A robotic system includes a processor that is programmed to determine work site measurements for user-specified points in the work site and to cause those measurements to be graphically displayed in order to provide geometrically appropriate tool selection assistance to the user. The processor is also programmed to determine an optimal one of a plurality of tools of varying geometries for use at the work site and to cause graphical representations of at least the optimal tool to be displayed along with the work site measurements.
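A hedged sketch of the selection logic implied by the abstract: distances between user-specified work-site points are measured, and the tool whose geometry (reduced here to reach, as a stand-in for the "varying geometries") best covers the measurement is chosen. The tool names, reach values, and safety margin are illustrative only.

```python
# Sketch of geometrically appropriate tool selection from work-site measurements.
import numpy as np

def site_measurement(points):
    """Longest pairwise distance (metres) between user-specified 3-D points."""
    pts = np.asarray(points)
    diffs = pts[:, None, :] - pts[None, :, :]
    return np.linalg.norm(diffs, axis=-1).max()

def select_tool(tools, required_reach, margin=1.2):
    """Return the shortest tool whose reach still exceeds the measurement."""
    usable = [t for t in tools if t["reach"] >= margin * required_reach]
    return min(usable, key=lambda t: t["reach"]) if usable else None

tools = [{"name": "short grasper", "reach": 0.05},   # hypothetical tool set
         {"name": "long grasper", "reach": 0.12}]
reach_needed = site_measurement([[0, 0, 0], [0.03, 0.04, 0.0]])   # 0.05 m
print(select_tool(tools, reach_needed))                           # long grasper
```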
Abstract:
An apparatus is configured to show telestration in 3-D to a surgeon in real time. A proctor is shown one side of a stereo image pair, such that the proctor can draw a telestration line on the one side with an input device. Points of interest are identified for matching to the other side of the stereo image pair. In response to the identified points of interest, regions and features are identified and used to match the points of interest to the other side. Regions can be used to match the points of interest. Features of the first image can be matched to the second image and used to match the points of interest to the second image, for example when the confidence scores for the regions are below a threshold value. Constraints can be used to evaluate the matched points of interest, for example by excluding bad points.
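The region-based matching with a confidence threshold might be sketched as follows: a patch around each telestration point in the proctor's image is located in the other image by normalised cross-correlation, and a low score flags the point for the feature-based fallback described in the abstract (not shown here). OpenCV, the patch and search sizes, and the threshold are implementation assumptions.

```python
# Sketch of region-based matching of a telestration point from one side of a
# stereo pair to the other, with a confidence score that can trigger a fallback.
import cv2
import numpy as np

def match_point(left, right, pt, patch=15, search=60, conf_thresh=0.7):
    """Return (matched point in `right`, confidence) or (None, score)."""
    x, y = pt
    tpl = left[y - patch:y + patch + 1, x - patch:x + patch + 1]
    roi = right[max(0, y - search):y + search + 1,
                max(0, x - search):x + search + 1]
    scores = cv2.matchTemplate(roi, tpl, cv2.TM_CCOEFF_NORMED)
    _, best, _, loc = cv2.minMaxLoc(scores)
    if best < conf_thresh:
        return None, best            # defer to feature matching / constraints
    mx = loc[0] + patch + max(0, x - search)
    my = loc[1] + patch + max(0, y - search)
    return (mx, my), best

left = np.random.randint(0, 255, (480, 640), np.uint8)
right = np.roll(left, 8, axis=1)               # fake 8-pixel horizontal disparity
print(match_point(left, right, (320, 240)))
```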
Abstract:
In one embodiment of the invention, a method is disclosed to locate a robotic instrument in the field of view of a camera. The method includes capturing sequential images in a field of view of a camera. The sequential images are correlated between successive views. The method further includes receiving a kinematic datum to provide an approximate location of the robotic instrument and then analyzing the sequential images in response to the approximate location of the robotic instrument. An additional method for robotic systems is disclosed. Further disclosed is a method for indicating tool entrance into the field of view of a camera.
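The way a kinematic datum can seed the image analysis might be sketched as follows: the kinematic tool-tip estimate is projected into the image through a pinhole model, and only a window around that pixel is analysed in each sequential frame. The camera intrinsics and window size are assumed values, not taken from the disclosure.

```python
# Sketch: use a kinematic estimate to bound the image search for an instrument.
import numpy as np

def project(p_cam, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3-D point (camera frame, metres) to pixels."""
    x, y, z = p_cam
    return int(round(fx * x / z + cx)), int(round(fy * y / z + cy))

def search_window(frame, kinematic_pt_cam, half=40):
    """Crop the region of the frame where the instrument is expected."""
    u, v = project(kinematic_pt_cam)
    return frame[max(0, v - half):v + half, max(0, u - half):u + half], (u, v)

frame = np.zeros((480, 640), np.uint8)                 # placeholder video frame
window, centre = search_window(frame, np.array([0.01, -0.02, 0.10]))
print(centre, window.shape)   # approximate tool location and the region to analyse
```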
Abstract:
A robotic system provides user-selectable actions associated with gaze tracking according to user interface types. User-initiated correction and/or recalibration of the gaze tracking may be performed during the processing of individual ones of the user-selectable actions.
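The user-initiated correction could be sketched, under the assumption of a simple offset model, as folding the discrepancy between the tracked gaze point and a user-confirmed target into a running correction applied to subsequent estimates. The blending factor is illustrative.

```python
# Sketch of user-initiated gaze correction with a running pixel offset.
import numpy as np

class GazeCorrector:
    def __init__(self, alpha=0.5):
        self.offset = np.zeros(2)   # current (dx, dy) correction in pixels
        self.alpha = alpha          # how aggressively new corrections are adopted

    def recalibrate(self, raw_gaze, known_target):
        error = np.asarray(known_target) - np.asarray(raw_gaze)
        self.offset = (1 - self.alpha) * self.offset + self.alpha * error

    def correct(self, raw_gaze):
        return np.asarray(raw_gaze) + self.offset

gc = GazeCorrector()
gc.recalibrate(raw_gaze=(410, 295), known_target=(400, 300))  # user-confirmed fixation
print(gc.correct((500, 120)))                                 # corrected gaze point
```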
Abstract:
A stereo imaging system comprises a stereoscopic camera having left and right image capturing elements for capturing stereo images; a stereo viewer; and a processor configured to modify the stereo images prior to being displayed on the stereo viewer so that a disparity between corresponding points of the stereo images is adjusted as a function of a depth value within a region of interest in the stereo images after the depth value reaches a target depth value.
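A rough sketch of the disparity adjustment, assuming the median depth in the region of interest as the representative value and a simple opposite horizontal shift of the two images: once that depth reaches the target, the applied shift is a function of how far past the target it is. The gain and the use of np.roll are assumptions for illustration.

```python
# Sketch: adjust stereo disparity as a function of ROI depth past a target depth.
import numpy as np

def adjust_disparity(left, right, roi_depth_map, target_depth, gain=200.0):
    depth = float(np.median(roi_depth_map))          # representative ROI depth
    if depth < target_depth:
        return left, right                           # no adjustment yet
    shift_px = int(round(gain * (depth - target_depth)))
    # Opposite horizontal shifts change the disparity of every corresponding pair.
    return np.roll(left, shift_px, axis=1), np.roll(right, -shift_px, axis=1)

left = np.zeros((480, 640), np.uint8)                # placeholder stereo images
right = np.zeros((480, 640), np.uint8)
roi_depth = np.full((50, 50), 0.15)                  # metres, hypothetical
new_left, new_right = adjust_disparity(left, right, roi_depth, target_depth=0.10)
```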
Abstract:
Methods of and a system for providing force information for a robotic surgical system. The method includes storing first kinematic position information and first actual position information for a first position of an end effector; moving the end effector via the robotic surgical system from the first position to a second position; storing second kinematic position information and second actual position information for the second position; and providing force information regarding force applied to the end effector at the second position utilizing the first actual position information, the second actual position information, the first kinematic position information, and the second kinematic position information. Visual force feedback is also provided by superimposing an estimate of the position the end effector would occupy if no force were applied over an image of the actual position of the end effector. Tissue elasticity visual displays may similarly be shown.
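Under the common assumption that the instrument behaves like a linear spring, the force estimate described above can be sketched as follows: the deflection at each position is the gap between the kinematically predicted and actually observed positions, and differencing the two positions cancels any static registration offset. The stiffness value is illustrative only.

```python
# Sketch of force estimation from kinematic vs. actual positions at two poses.
import numpy as np

def estimate_force(kin1, act1, kin2, act2, stiffness=300.0):
    """Estimated force (N) on the end effector at the second position."""
    deflection1 = np.asarray(act1) - np.asarray(kin1)   # includes fixed offset
    deflection2 = np.asarray(act2) - np.asarray(kin2)
    return stiffness * (deflection2 - deflection1)      # offset cancels out

kin1, act1 = [0.00, 0.00, 0.10], [0.001, 0.000, 0.100]   # unloaded reference
kin2, act2 = [0.02, 0.00, 0.10], [0.018, 0.000, 0.099]   # loaded configuration
print(estimate_force(kin1, act1, kin2, act2))            # estimated force vector
```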
Abstract:
Methods of and a system for providing a visual representation of force information in a robotic surgical system. A real position of a surgical end effector is determined. A projected position that the surgical end effector would occupy if no force were applied against it is also determined. Images representing the real and projected positions are superimposed on a display. The offset between the two images provides a visual indication of a force applied to the end effector or to the kinematic chain that supports the end effector. In addition, tissue deformation information is determined and displayed.
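The overlay itself might be sketched as drawing both the real and the projected (no-load) tip positions on the endoscope image, with the on-screen offset serving as the force cue. The OpenCV drawing calls and the pinhole projection parameters are implementation assumptions.

```python
# Sketch: superimpose real and projected (no-force) tip positions as a force cue.
import cv2
import numpy as np

def draw_force_overlay(image, real_cam, projected_cam, fx=800.0, cx=320.0, cy=240.0):
    def to_px(p):
        return int(round(fx * p[0] / p[2] + cx)), int(round(fx * p[1] / p[2] + cy))
    real_px, proj_px = to_px(real_cam), to_px(projected_cam)
    cv2.circle(image, real_px, 6, (0, 255, 0), 2)        # actual end-effector position
    cv2.circle(image, proj_px, 6, (0, 0, 255), 2)        # position if no force applied
    cv2.line(image, real_px, proj_px, (255, 255, 0), 1)  # offset indicates applied force
    return image

frame = np.zeros((480, 640, 3), np.uint8)                # placeholder endoscope image
overlay = draw_force_overlay(frame, real_cam=(0.010, 0.000, 0.10),
                             projected_cam=(0.013, 0.002, 0.10))
```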