Abstract:
In one embodiment of the invention, a digital zoom and panning system for digital video is disclosed. The system includes an image acquisition device to capture digital video images; an image buffer to store one or more frames of the digital video images as source pixels; a display device having first pixels to display images; a user interface to accept user input, including a source rectangle to select source pixels within frames of the digital video images, a destination rectangle to select target pixels within the display device in which to display images, and a region of interest within the digital video images to display in the destination rectangle; and a digital mapping and filtering device to selectively map and filter source pixels in the region of interest from the image buffer into target pixels of the display device in response to the user input.
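For illustration, the selective mapping and filtering step can be pictured as back-projecting each target pixel of the destination rectangle into the source rectangle and bilinearly filtering the four surrounding source pixels. The following is a minimal Python sketch under that assumption, taking H x W x C NumPy image arrays and (x, y, width, height) rectangles; the function and parameter names are hypothetical, not taken from the disclosure.

import numpy as np

def map_region(frame, src_rect, dst_size):
    """Map the source pixels in src_rect into a dst_size target image
    using bilinear filtering (a hypothetical sketch)."""
    sx, sy, sw, sh = src_rect
    dw, dh = dst_size
    out = np.empty((dh, dw, frame.shape[2]), dtype=frame.dtype)
    for j in range(dh):
        for i in range(dw):
            # Back-project the center of target pixel (i, j) into source space.
            u = sx + (i + 0.5) * sw / dw - 0.5
            v = sy + (j + 0.5) * sh / dh - 0.5
            x0 = int(np.clip(np.floor(u), 0, frame.shape[1] - 2))
            y0 = int(np.clip(np.floor(v), 0, frame.shape[0] - 2))
            fx = np.clip(u - x0, 0.0, 1.0)
            fy = np.clip(v - y0, 0.0, 1.0)
            # Bilinear blend of the four neighboring source pixels.
            out[j, i] = ((1 - fx) * (1 - fy) * frame[y0, x0]
                         + fx * (1 - fy) * frame[y0, x0 + 1]
                         + (1 - fx) * fy * frame[y0 + 1, x0]
                         + fx * fy * frame[y0 + 1, x0 + 1])
    return out

In this picture, zooming corresponds to shrinking the source rectangle relative to the destination, and panning corresponds to translating its origin.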
Abstract:
The present invention is directed to an articulated minimally invasive surgical endoscope with a flexible wrist having at least one degree of freedom. When used with a surgical robot having a plurality of robot arms, the endoscope can be used with any of the plurality of arms, thereby allowing the use of a universal arm design. The endoscope in accordance with the present invention is made more intuitive to a user by attaching the reference frame used for controlling motion in the at least one degree of freedom to the flexible wrist, so that the frame follows wrist motion associated with that degree of freedom. The endoscope in accordance with the present invention attenuates undesirable motion at its back/proximal end by acquiring the image of the object, in association with the at least one degree of freedom, based on a reference frame rotating around a point of rotation located proximal to the flexible wrist.
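The reference-frame attachment can be pictured as re-expressing operator commands in a coordinate frame that moves with the wrist, so that "left" and "up" track the acquired image rather than the fixed base. A minimal sketch under that assumption follows; the rotation source and all names are hypothetical.

import numpy as np

def rot_z(theta):
    """Rotation matrix about z by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def command_in_base_frame(v_wrist, R_wrist_to_base):
    """Re-express a motion command given in the wrist-attached
    reference frame as a vector in the robot base frame."""
    return R_wrist_to_base @ np.asarray(v_wrist, dtype=float)

# Example: wrist frame rotated 30 degrees about the base z-axis; the
# operator commands one unit along the wrist x-axis (the view axis).
print(command_in_base_frame([1.0, 0.0, 0.0], rot_z(np.radians(30.0))))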
Abstract:
A synthetic representation of a robot tool is provided for display on a user interface of a robotic system. The synthetic representation may be used to show the position of a view volume of an image capture device with respect to the robot. The synthetic representation may also be used to find a tool that is outside of the field of view, to display range-of-motion limits for a tool, to remotely communicate information about the robot, and to detect collisions.
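One way to picture the "find a tool outside the field of view" feature is a frustum test on the tool position expressed in the camera frame, with an on-screen arrow derived from the out-of-view direction. The following is a hypothetical sketch; the field-of-view numbers and names are placeholders, not values from the disclosure.

import numpy as np

def in_view_volume(p_cam, fov_deg=60.0, aspect=4.0 / 3.0,
                   near=0.01, far=0.30):
    """True if p_cam = (x, y, z), in the camera frame with z forward,
    lies inside a symmetric pinhole view frustum."""
    x, y, z = p_cam
    if not (near <= z <= far):
        return False
    half_h = z * np.tan(np.radians(fov_deg) / 2.0)
    half_w = half_h * aspect
    return abs(x) <= half_w and abs(y) <= half_h

def off_screen_direction(p_cam):
    """Unit image-plane direction toward an out-of-view tool, usable
    to draw an indicator on the synthetic display."""
    d = np.array([p_cam[0], p_cam[1]], dtype=float)
    n = np.linalg.norm(d)
    return d / n if n > 0 else d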
Abstract:
In one embodiment, a surgical instrument includes a housing linkable with a manipulator arm of a robotic surgical system, a shaft operably coupled to the housing, a force transducer on a distal end of the shaft, and a plurality of fiber optic strain gauges on the force transducer. In one example, the plurality of strain gauges are operably coupled to a fiber optic splitter or an arrayed waveguide grating (AWG) multiplexer. A fiber optic connector is operably coupled to the fiber optic splitter or the AWG multiplexer. A wrist joint is operably coupled to a distal end of the force transducer, and an end effector is operably coupled to the wrist joint. In another embodiment, a robotic surgical manipulator includes a base link operably coupled to a distal end of a manipulator positioning system, and a distal link movably coupled to the base link, wherein the distal link includes an instrument interface and a fiber optic connector optically linkable to a surgical instrument. A method of passing data between an instrument and a manipulator via optical connectors is also provided.
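Fiber optic strain gauges of this kind are commonly fiber Bragg gratings whose reflected wavelength shifts in proportion to strain. A minimal sketch of converting gauge readings into a tip-force estimate follows, assuming four gauges and a bench-derived calibration matrix; all numbers are illustrative placeholders, not values from the disclosure.

import numpy as np

# Illustrative calibration matrix mapping four gauge strains to (Fx, Fy, Fz);
# in practice this would come from bench calibration of the force transducer.
CAL = np.array([
    [ 0.8, -0.8,  0.0,  0.0],
    [ 0.0,  0.0,  0.8, -0.8],
    [ 0.5,  0.5,  0.5,  0.5],
])

def strains_from_wavelength_shifts(dlam_nm, lam0_nm=1550.0, k=0.78):
    """First-order FBG relation: dlam / lam0 = k * strain."""
    return np.asarray(dlam_nm, dtype=float) / (lam0_nm * k)

def force_from_gauges(dlam_nm):
    """Estimate the force vector at the transducer from four gauge
    readings; opposed gauge pairs cancel common-mode (thermal) shifts."""
    return CAL @ strains_from_wavelength_shifts(dlam_nm)

print(force_from_gauges([0.02, -0.02, 0.01, -0.01]))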
Abstract:
An apparatus is configured to show telestration in 3-D to a surgeon in real time. A proctor is shown one side of a stereo image pair, such that the proctor can draw a telestration line on that side with an input device. Points of interest along the line are identified for matching to the other side of the stereo image pair. In response to the identified points of interest, regions and features are identified and used to match the points of interest to the other side. Regions can be used first to match the points of interest; features of the first image can then be matched to the second image and used to match the points of interest, for example when the confidence scores for the region matches are below a threshold value. Constraints can be used to evaluate the matched points of interest, for example by excluding bad points.
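The region-based matching with a confidence score can be pictured as normalized cross-correlation of a patch around each point against the other image, with a feature-based fallback when the best score is low. The following is a hypothetical sketch for rectified grayscale stereo with the point interior to the image; patch size, search range, and threshold are placeholders.

import numpy as np

def match_point(left, right, pt, patch=7, search=40, min_score=0.8):
    """Match a telestration point from the left image to the right image
    by normalized cross-correlation along the epipolar (same) row.
    Returns ((x, y), score), or (None, score) when the best score falls
    below min_score and a feature-based fallback should be used."""
    x, y = pt
    r = patch // 2
    tmpl = left[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    tmpl -= tmpl.mean()
    tn = np.linalg.norm(tmpl)
    best_score, best_x = -1.0, None
    for dx in range(-search, 1):  # positive disparity: match lies leftward
        cx = x + dx
        if cx - r < 0 or cx + r + 1 > right.shape[1]:
            continue
        win = right[y - r:y + r + 1, cx - r:cx + r + 1].astype(float)
        win -= win.mean()
        denom = tn * np.linalg.norm(win)
        score = float((tmpl * win).sum() / denom) if denom > 0 else -1.0
        if score > best_score:
            best_score, best_x = score, cx
    if best_x is None or best_score < min_score:
        return None, best_score
    return (best_x, y), best_score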
Abstract:
In one embodiment of the invention, a method is disclosed to locate a robotic instrument in the field of view of a camera. The method includes capturing sequential images in a field of view of a camera. The sequential images are correlated between successive views. The method further includes receiving a kinematic datum to provide an approximate location of the robotic instrument and then analyzing the sequential images in response to the approximate location of the robotic instrument. An additional method for robotic systems is disclosed. Further disclosed is a method for indicating tool entrance into the field of view of a camera.
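The kinematic datum can be pictured as a prior that restricts where the image analysis must search: the instrument is looked for only in a window around the position the kinematics predicts. A minimal sketch follows, assuming a per-pixel instrument-likelihood map has already been computed from the correlated sequential images; the names are hypothetical.

import numpy as np

def locate_instrument(likelihood, kin_xy, window=50):
    """Refine the approximate kinematic location kin_xy = (x, y) by
    finding the likelihood peak within a window around it. The
    likelihood map itself (e.g. from correlating successive frames
    against an instrument appearance model) is assumed given."""
    h, w = likelihood.shape
    kx, ky = kin_xy
    x0, x1 = max(0, kx - window), min(w, kx + window)
    y0, y1 = max(0, ky - window), min(h, ky + window)
    sub = likelihood[y0:y1, x0:x1]
    iy, ix = np.unravel_index(np.argmax(sub), sub.shape)
    return x0 + ix, y0 + iy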
Abstract:
A robotic system provides user-selectable actions associated with gaze tracking according to user interface types. User-initiated correction and/or recalibration of the gaze tracking may be performed during the processing of individual ones of the user-selectable actions.
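The user-initiated correction can be pictured as folding the residual between the tracked gaze point and a known fixation target (for example, the widget the user is activating) into a running offset. A minimal, hypothetical sketch:

import numpy as np

class GazeCorrector:
    """Applies and updates a gaze-correction offset. When the user
    activates a known on-screen target, the residual between the
    tracked gaze point and that target updates a smoothed offset."""

    def __init__(self, alpha=0.3):
        self.offset = np.zeros(2)
        self.alpha = alpha  # weight given to each new correction

    def correct(self, gaze_xy):
        return np.asarray(gaze_xy, dtype=float) + self.offset

    def recalibrate(self, gaze_xy, target_xy):
        residual = np.asarray(target_xy, float) - np.asarray(gaze_xy, float)
        self.offset = (1 - self.alpha) * self.offset + self.alpha * residual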