Abstract:
A synthetic representation of a robot tool for display on a user interface of a robotic system. The synthetic representation may be used to show the position of a view volume of an image capture device with respect to the robot. The synthetic representation may also be used to find a tool that is outside of the field of view, to display range of motion limits for a tool, to remotely communicate information about the robot, and to detect collisions.
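The field-of-view check described above (finding a tool that is outside the view volume) can be sketched as a frustum-containment test. The symmetric square frustum, coordinate convention, and function names below are assumptions for illustration, not the patent's actual geometry:

```python
import math

def in_view_volume(point, half_angle_rad, near, far):
    """Return True if a tool position (in camera coordinates, z along
    the view axis) lies inside a symmetric square view frustum.
    This simplified frustum model is an illustrative assumption."""
    x, y, z = point
    # Outside the near/far clip planes: not visible.
    if not (near <= z <= far):
        return False
    # Lateral extent of the frustum grows linearly with depth.
    limit = z * math.tan(half_angle_rad)
    return abs(x) <= limit and abs(y) <= limit
```

A tool for which this test returns False could then be indicated on the user interface via the synthetic representation.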
Abstract:
The present disclosure relates to calibration assemblies and methods for use with an imaging system, such as an endoscopic imaging system. A calibration assembly includes: an interface for constraining engagement with an endoscopic imaging system; a target coupled with the interface so as to be within the field of view of the imaging system, the target including multiple markers having calibration features that include identification features; and a processor configured to identify, from first and second images obtained at first and second relative spatial arrangements between the imaging system and the target, respectively, at least some of the markers from the identification features, and to use the identified markers and calibration feature positions within the images to generate calibration data.
Abstract:
An imaging system comprises an image capturing device, a viewer, a control element, and a processor. The control element controls or adjusts an image characteristic of one of the image capturing device and the viewer. The processor is programmed to determine a depth value relative to the image capturing device, determine a desirable adjustment to the control element by using the determined depth value, and control adjustment of the control element to assist manual adjustment of the control element to the desirable adjustment. The processor may also be programmed to determine whether the control element is to be adjusted automatically or manually, and to adjust the control element automatically to the desirable adjustment when automatic adjustment is selected.
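As an illustration of deriving a desirable adjustment from a determined depth value, the sketch below assumes a thin-lens focus model and a proportional assist law; the model, function names, and gain are assumptions, not the patent's actual control scheme:

```python
def desirable_image_distance(depth_mm, focal_length_mm=4.0):
    """Thin-lens equation 1/f = 1/u + 1/v solved for the image
    distance v, given object depth u (both in millimetres)."""
    if depth_mm <= focal_length_mm:
        raise ValueError("object must lie beyond the focal length")
    return focal_length_mm * depth_mm / (depth_mm - focal_length_mm)

def control_step(current, desirable, gain=0.5):
    """Assist manual adjustment by nudging the control element a
    fraction of the way toward the desirable setting each cycle
    (a hypothetical proportional law)."""
    return current + gain * (desirable - current)
```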
Abstract:
A bleeding detection unit in a surgical system processes information in an acquired scene before that scene is presented on a display unit in the operating room. For example, the bleeding detection unit analyzes the pixel data in the acquired scene and determines whether there are one or more initial sites of blood in the scene. Upon detection of an initial site of blood, the region is identified by an initial site icon in the scene displayed on the display unit. In one aspect, the processing is done in real time, which means that there is no substantial delay in presenting the acquired scene to the surgeon.
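A minimal sketch of the pixel-data analysis, assuming a simple red-dominance rule; the thresholds and classifier are illustrative, not the unit's actual detector:

```python
def detect_blood_sites(rgb_rows, red_ratio=1.6, min_red=80):
    """Scan an image given as rows of (r, g, b) tuples and return the
    centroid of red-dominant pixels for placing an initial-site icon,
    or None if no candidate site is found. The red-dominance rule and
    thresholds are assumptions for illustration."""
    hits = [(y, x)
            for y, row in enumerate(rgb_rows)
            for x, (r, g, b) in enumerate(row)
            if r > min_red and r > red_ratio * g and r > red_ratio * b]
    if not hits:
        return None
    n = len(hits)
    # Integer centroid of all flagged pixels.
    return (sum(y for y, _ in hits) // n, sum(x for _, x in hits) // n)
```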
Abstract:
An imaging system processes images of a plurality of objects which have been captured by an image capture device for display. Normal processing of the images is modified as either a function of a depth corresponding to one or more of the plurality of objects appearing in the captured images relative to the image capture device or as a function of the depth and one or more image characteristics extracted from the captured images. A depth threshold may be used to avoid inadvertent modifications due to noise.
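The depth-gated modification can be sketched as a gain function that stays at zero until the depth difference exceeds a noise threshold, so sensor noise does not trigger inadvertent modifications; all parameter names and values below are illustrative:

```python
def modification_gain(depth_mm, baseline_mm, threshold_mm=2.0, scale_mm=10.0):
    """Return a [0, 1] gain for modifying normal image processing as a
    function of object depth. Below the threshold the gain is exactly
    zero, suppressing noise-driven modifications (parameters are
    assumptions for illustration)."""
    delta = baseline_mm - depth_mm
    if delta < threshold_mm:
        return 0.0
    # Ramp linearly from 0 to 1 over scale_mm beyond the threshold.
    return min(1.0, (delta - threshold_mm) / scale_mm)
```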
Abstract:
In one embodiment of the invention, an apparatus is disclosed including an image sensor, a color filter array, and an image processor. The image sensor has an active area with a matrix of camera pixels. The color filter array is in optical alignment over the matrix of the camera pixels. The color filter array assigns alternating single colors to each camera pixel. The image processor receives the pixel data captured by the camera pixels and includes a correlation detector to detect spatial correlation of color information between pairs of colors in that pixel data. The correlation detector further controls demosaicing of the camera pixels into full color pixels with improved resolution. The apparatus may further include demosaicing logic to demosaic the camera pixels into the full color pixels with improved resolution in response to the spatial correlation of the color information between pairs of colors.
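A sketch of one way such a correlation detector could gate demosaicing, using Pearson correlation between samples of two color planes; the threshold and strategy names are assumptions, not the patent's actual logic:

```python
def channel_correlation(a, b):
    """Pearson correlation between two color planes sampled from the
    color filter array, given as flat lists of intensities."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    denom = (va * vb) ** 0.5
    return cov / denom if denom else 0.0

def choose_demosaic(a, b, threshold=0.8):
    """High inter-channel correlation justifies an interpolation that
    shares detail across channels (improving resolution); otherwise
    fall back to independent bilinear interpolation. Threshold and
    strategy names are illustrative assumptions."""
    return "gradient-sharing" if channel_correlation(a, b) > threshold else "bilinear"
```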
Abstract:
Stereo gaze tracking estimates a 3-D gaze point by projecting determined right and left eye gaze points on left and right stereo images. The determined right and left eye gaze points are based on one or more tracked eye gaze points, estimates for non-tracked eye gaze points based upon the tracked gaze points and image matching in the left and right stereo images, and confidence scores indicative of the reliability of the tracked gaze points and/or the image matching.
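Projecting matched left and right gaze points to a 3-D point is standard stereo triangulation; the sketch below also shows one possible confidence-weighted fusion of a tracked gaze estimate and an image-matching estimate. The pinhole/rectified-camera model and the weighting scheme are assumptions for illustration:

```python
def gaze_point_3d(x_left, x_right, y, focal_px, baseline_mm):
    """Triangulate a 3-D gaze point from gaze x-coordinates on
    rectified left and right images (shared row y), assuming a
    pinhole stereo model with the given focal length (pixels) and
    camera baseline (mm)."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("gaze points do not triangulate in front of the cameras")
    z = focal_px * baseline_mm / disparity
    # Back-project the mid-point of the two gaze x-coordinates.
    x = (x_left + x_right) / 2.0 * z / focal_px
    return (x, y * z / focal_px, z)

def fuse_estimates(tracked, matched, conf_tracked, conf_matched):
    """Blend a tracked gaze coordinate with an image-matching estimate,
    weighted by their confidence scores (a hypothetical scheme)."""
    total = conf_tracked + conf_matched
    return (conf_tracked * tracked + conf_matched * matched) / total
```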
Abstract:
Mixed mode imaging is implemented using a single-chip image capture sensor with a color filter array. The single-chip image capture sensor captures a frame including a first set of pixel data and a second set of pixel data. The first set of pixel data includes a first combined scene, and the second set of pixel data includes a second combined scene. The first combined scene is a first weighted combination of a fluorescence scene component and a visible scene component due to the leakage of a color filter array. The second combined scene includes a second weighted combination of the fluorescence scene component and the visible scene component. Two display scene components are extracted from the captured pixel data in the frame and presented on a display unit.
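Extracting the two display scene components from the two weighted combinations amounts to solving a 2x2 linear system per pixel. A sketch with illustrative weights (the weight values would come from the color filter array's measured leakage, which is not given here):

```python
def unmix(m1, m2, a11, a12, a21, a22):
    """Recover the fluorescence (F) and visible (V) scene components
    from two measured weighted combinations:
        m1 = a11*F + a12*V
        m2 = a21*F + a22*V
    by inverting the 2x2 weight matrix."""
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("weights are not independent; cannot unmix")
    f = (a22 * m1 - a12 * m2) / det
    v = (a11 * m2 - a21 * m1) / det
    return f, v
```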