Abstract:
An endoscope with an optical channel is held and positioned by a robotic surgical system. A capture unit captures (1) a visible first image at a first time and (2) a visible second image combined with a fluorescence image at a second time. An image processing system receives (1) the visible first image and (2) the visible second image combined with the fluorescence image and generates at least one fluorescence image. A display system outputs an output image including an artificial fluorescence image.
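As an illustrative sketch only (not the patented implementation), the fluorescence component can be estimated by differencing a registered visible-only frame and a combined frame, and the most recent estimate can be reused as an artificial fluorescence overlay on later visible frames; the Python function names and array conventions below are assumptions.

import numpy as np

# Sketch: recover a fluorescence estimate from a visible frame and a combined
# visible-plus-fluorescence frame, assuming the two frames are registered and
# stored as linear-intensity NumPy arrays.
def extract_fluorescence(visible_frame, combined_frame):
    return np.clip(combined_frame.astype(np.float32)
                   - visible_frame.astype(np.float32), 0.0, None)

# Sketch: blend the most recent fluorescence estimate over a newer visible
# frame to produce an artificial fluorescence output image.
def artificial_overlay(visible_frame, last_fluorescence, gain=1.0):
    return np.clip(visible_frame.astype(np.float32)
                   + gain * last_fluorescence, 0.0, 255.0).astype(np.uint8)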
Abstract:
Mixed mode imaging is implemented using a single-chip image capture sensor with a color filter array. The single-chip image capture sensor captures a frame including a first set of pixel data and a second set of pixel data. The first set of pixel data includes a first combined scene, and the second set of pixel data includes a second combined scene. The first combined scene is a first weighted combination of a fluorescence scene component and a visible scene component, due to leakage of the color filter array. The second combined scene is a second weighted combination of the fluorescence scene component and the visible scene component. Two display scene components are extracted from the captured pixel data in the frame and presented on a display unit.
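As an illustrative sketch, extracting the two display scene components from two weighted combinations reduces to inverting a 2x2 mixing model per pixel; the weights below are placeholder calibration values, not taken from the abstract.

import numpy as np

# Assumed mixing model: C1 = 0.8*F + 0.2*V (leakage through the color filter
# array) and C2 = 0.1*F + 0.9*V. The weights are illustrative placeholders.
W = np.array([[0.8, 0.2],
              [0.1, 0.9]], dtype=np.float32)
W_inv = np.linalg.inv(W)

def unmix(c1, c2):
    # Recover the fluorescence (F) and visible (V) scene components per pixel
    # by applying the inverse of the mixing matrix to the captured pixel data.
    f = W_inv[0, 0] * c1 + W_inv[0, 1] * c2
    v = W_inv[1, 0] * c1 + W_inv[1, 1] * c2
    return np.clip(f, 0, None), np.clip(v, 0, None)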
Abstract:
An operator telerobotically controls tools to perform a procedure on an object at a work site while viewing real-time images of the work site on a display. Tool information is provided in the operator's current gaze area on the display by rendering the tool information over the tool so that it neither obscures objects being worked on by the tool at the time nor requires the user's eyes to refocus when looking at the tool information and the image of the tool on a stereo viewer.
Abstract:
An exemplary method includes a demosaic module receiving a frame of raw pixels having values in a first color space, generating vertical and horizontal color difference signals using information from the frame of raw pixels, generating reconstructed color pixel values for raw pixels that are missing captured color pixel values of a first color component of the first color space, transforming the captured and reconstructed color pixel values of the first color component to brightness pixel values in a second color space, and transforming the vertical and horizontal color difference signals into chrominance pixel values in the second color space.
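As a simplified sketch of the final transform steps only (the directional color-difference interpolation is not reproduced), once the green plane has been reconstructed and the R-G and B-G difference planes formed, brightness and chrominance can be computed directly; the BT.601-style coefficients are an assumed example of a second color space.

import numpy as np

def to_luma_chroma(g, d_r, d_b):
    # g: captured-plus-reconstructed green plane; d_r = R - G and d_b = B - G
    # color-difference planes, all as float NumPy arrays of the same shape.
    r = g + d_r
    b = g + d_b
    y = 0.299 * r + 0.587 * g + 0.114 * b   # brightness pixel values
    cb = 0.564 * (b - y)                     # chrominance pixel values
    cr = 0.713 * (r - y)
    return y, cb, cr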
Abstract:
A medical robotic system includes a viewer, a gaze tracker, and a processor programmed to: draw an area- or volume-defining shape overlaid on an image based on the tracked gaze point after the user has gazed at the tracked gaze point for a programmed period of time; in response to receiving a user-selected action command, assign a fixed virtual constraint to the area- or volume-defining shape and constrain movement of a robotic tool; map points of the robotic tool in a tool reference frame to a viewer reference frame; determine that the closest object to the tracked gaze point is the robotic tool based at least in part on the mapped points; display an object including text identifying the robotic tool, overlaid on the image, proximate to the robotic tool based on determining that the robotic tool is the closest object; and perform an action indicated by the object.
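As an illustrative sketch of the reference-frame mapping and closest-object test (names and conventions are assumptions, not the patented implementation):

import numpy as np

def map_to_viewer(points_tool, T_viewer_tool):
    # points_tool: (N, 3) points in the tool reference frame.
    # T_viewer_tool: 4x4 homogeneous transform from tool frame to viewer frame.
    homogeneous = np.hstack([points_tool, np.ones((len(points_tool), 1))])
    return (homogeneous @ T_viewer_tool.T)[:, :3]

def closest_tool(gaze_xy, tools):
    # tools: dict mapping a tool name to its projected 2D points in the
    # viewer/display frame; returns the name of the tool nearest the gaze point.
    return min(tools, key=lambda name: np.min(
        np.linalg.norm(tools[name] - gaze_xy, axis=1)))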
Abstract:
A robotic system provides user-selectable actions associated with gaze tracking according to user interface types. User-initiated correction and/or recalibration of the gaze tracking may be performed during the processing of individual user-selectable actions.
Abstract:
A system comprises a first robotic arm adapted to support and move a tool and a second robotic arm adapted to support and move a camera configured to capture an image of a camera field of view. The system further comprises an input device, a display, and a processor. The processor is configured to display a first synthetic image including a first synthetic image of the tool. The first synthetic image of the tool includes a portion of the tool outside of the camera field of view. The processor is also configured to receive a user input at the input device and responsive to the user input, change the display of the first synthetic image to a display of a second synthetic image including a second synthetic image of the tool that is different from the first synthetic image of the tool.
Abstract:
A system may comprise a tool including at least one reference feature, a processor, and a memory having computer readable instructions stored thereon. The computer readable instructions, when executed by the processor, may cause the system to receive image data including an image of the tool and the at least one reference feature, determine a pose of the tool from the image data, and modify the image data to visually decrement a portion of the image data corresponding to the at least one reference feature.
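As an illustrative sketch, one conventional way to obtain a tool pose from detected reference features is a perspective-n-point solve; the marker layout, camera intrinsics, and the simple pixel-attenuation step below are assumptions, not the claimed method.

import numpy as np
import cv2

def estimate_tool_pose(marker_points_3d, marker_points_2d, camera_matrix):
    # Solve for the tool pose (rotation and translation in the camera frame)
    # from 3D reference-feature coordinates and their detected 2D image points.
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(marker_points_3d, dtype=np.float32),
        np.asarray(marker_points_2d, dtype=np.float32),
        camera_matrix, None)
    if not ok:
        raise RuntimeError("pose estimation failed")
    return rvec, tvec

def attenuate_feature_pixels(image, feature_mask, factor=0.4):
    # Darken the pixels corresponding to the reference features so they are
    # visually de-emphasized in the displayed image.
    out = image.astype(np.float32)
    out[feature_mask] *= factor
    return out.astype(image.dtype)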
Abstract:
A system may comprise an image capture device to capture an image of a work site. The system may also comprise a processor to determine whether a tool disposed at the work site is energized and determine a first area of the captured image of the work site, including or adjacent to an image of a portion of the tool in the captured image. The processor may also determine a second area of the captured image including a remainder of the captured image that does not include the first area. Conditioned upon determining that the tool is energized, at least one of the first area and the second area of the captured image of the work site may be processed to indicate that the tool in the first area is being energized. The image of the work site with the first area and the second area may be displayed.
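As an illustrative sketch (mask construction and tint values are assumptions), the first area can be processed with a simple tint when the tool is energized while the second area is passed through unchanged:

import numpy as np

def indicate_energized(image, first_area_mask, energized, tint=(0, 80, 0)):
    # image: (H, W, 3) uint8 captured image; first_area_mask: (H, W) boolean
    # mask covering the area including or adjacent to the tool portion.
    out = image.astype(np.int16)
    if energized:
        out[first_area_mask] = np.clip(
            out[first_area_mask] + np.array(tint, dtype=np.int16), 0, 255)
    return out.astype(np.uint8)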
Abstract:
An example method includes receiving a single frame comprising a first set of pixel data that includes a fluorescence scene component and a second set of pixel data that includes a combination of a visible color component scene and the fluorescence scene component; and generating, based on the first set of pixel data that includes the fluorescence scene component and the second set of pixel data that includes the combination of the visible color component scene and the fluorescence scene component, a display scene.
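As an illustrative sketch (array shapes and the green false-color choice are assumptions), the display scene can be generated by subtracting the fluorescence component from the combined pixel data and overlaying it as a false-color channel:

import numpy as np

def compose_display_scene(f, c):
    # f: (H, W) fluorescence pixel data; c: (H, W, 3) pixel data combining the
    # visible color component scene with the fluorescence scene component.
    v = np.clip(c.astype(np.float32) - f[..., None], 0, 255)  # estimated visible scene
    display = v.copy()
    display[..., 1] = np.clip(display[..., 1] + f, 0, 255)    # fluorescence in green
    return display.astype(np.uint8)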