Abstract:
Mixed mode imaging is implemented using a single-chip image capture sensor with a color filter array. The single-chip image capture sensor captures a frame including a first set of pixel data and a second set of pixel data. The first set of pixel data includes a first combined scene, and the second set of pixel data includes a second combined scene. The first combined scene is a first weighted combination of a fluorescence scene component and a visible scene component due to leakage through the color filter array. The second combined scene is a second weighted combination of the fluorescence scene component and the visible scene component. Two display scene components are extracted from the captured pixel data in the frame and presented on a display unit.
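Extracting the two display scene components from the two weighted combinations can be viewed, per pixel, as inverting a 2x2 mixing matrix. The sketch below illustrates that linear-unmixing view only; the function name and the mixing weights `w` are illustrative assumptions, not values from the source.

```python
import numpy as np

def unmix_scene_components(c1, c2, w):
    """Recover fluorescence and visible scene components from two
    weighted combinations by inverting the 2x2 mixing matrix w, where
    c1 = w[0,0]*F + w[0,1]*V and c2 = w[1,0]*F + w[1,1]*V.
    The weights w are hypothetical calibration values."""
    w_inv = np.linalg.inv(w)                      # requires an invertible mixing matrix
    flat = np.stack([c1.ravel(), c2.ravel()])     # 2 x N stack of the two combined scenes
    fv = w_inv @ flat                             # unmix every pixel at once
    fluorescence = fv[0].reshape(c1.shape)
    visible = fv[1].reshape(c1.shape)
    return fluorescence, visible
```

The approach only works when the two weightings are linearly independent, i.e. the mixing matrix is well conditioned.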
Abstract:
An efficient demosaicing method includes estimating green-red and green-blue color difference signals, reconstructing missing green pixels using those signals, and then reconstructing missing red and blue pixels using the color difference signals. This method creates a full resolution frame of red, green, and blue pixels. The full resolution frame of pixels is sent to a display unit for display. In an efficient demosaicing process that includes local contrast enhancement, image contrast is enhanced by building and boosting a brightness component from the green pixels and by building chromatic components from the red, green, and blue pixels. Color difference signals are used in place of the red and blue pixels when building the chromatic components.
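The color-difference principle can be sketched in one dimension: interpolate the green-red difference at the sites where red samples exist, then recover the missing red values as green minus the interpolated difference. This is a simplified illustration of the idea, not the patented method, and it assumes green has already been reconstructed at every pixel; the function name is hypothetical.

```python
import numpy as np

def reconstruct_red_row(row_g, row_r_sparse, r_mask):
    """Color-difference interpolation on one Bayer row (sketch):
    interpolate the green-red difference measured at red sample
    sites, then recover missing red as R = G - interp(G - R)."""
    idx = np.arange(len(row_g))
    r_sites = idx[r_mask]                                # indices where red was sampled
    diff = row_g[r_sites] - row_r_sparse[r_sites]        # G - R where R is known
    diff_full = np.interp(idx, r_sites, diff)            # interpolate the difference signal
    return row_g - diff_full                             # R = G - (G - R)
```

Interpolating the smoother difference signal, rather than the red channel directly, is what suppresses color fringing at edges.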
Abstract:
An example method includes causing an illuminator to simultaneously illuminate tissue with a first visible color illumination component, a second visible color illumination component, and a fluorescence excitation illumination component; and causing a display unit to display, based on the illumination, a display scene that includes: a reduced color scene component corresponding to the first and second visible color illumination components, and a highlighted scene component corresponding to the fluorescence excitation illumination component.
Abstract:
A system comprises a first robotic arm adapted to support and move a tool and a second robotic arm adapted to support and move a camera. The system also comprises an input device, a display, and a processor. The processor is configured to, in a first mode, command the second robotic arm to move the camera in response to a first input received from the input device, capture an image of the tool, and present the image as a displayed image on the display. The processor is configured to, in a second mode, display a synthetic image of the first robotic arm in a boundary area around the captured image on the display, and, in response to a second input, change a size of the boundary area relative to a size of the displayed image.
Abstract:
A method performed by a processor comprises determining a depth value of a target area relative to the image capturing device. The method also includes determining an adjustment to the brightness control so that a brightness level of images captured by the image capturing device is lowered if the depth value of the target area relative to the image capturing device is less than a threshold depth value. The method also includes determining whether the brightness control is to be adjusted automatically or manually. If the brightness control is to be adjusted manually, the processor overrides manual control of the brightness control when the depth value of the target area relative to the image capturing device is less than the threshold depth value but the operator of the brightness control is not causing the brightness level of the images to be lowered.
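The depth-gated rule, including the override of manual control, can be summarized as capping the brightness level whenever the target is closer than the threshold depth. A minimal sketch with illustrative parameter names:

```python
def brightness_setpoint(depth, threshold, requested, lowered):
    """Depth-gated brightness rule (illustrative names): when the
    target area is closer than the threshold depth, the brightness
    level is capped at the lowered level, overriding any higher
    requested (manual) setting; otherwise the request passes through."""
    if depth < threshold:
        return min(requested, lowered)   # override if the operator did not lower it
    return requested
```

Using `min` means an operator who has already lowered brightness below the cap is left alone; only settings that would keep brightness too high are overridden.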
Abstract:
A calibration target comprises a target pattern plane having a planar surface and a plurality of markers disposed on the planar surface. An optical axis of the imaging system is at a first angle with respect to the planar surface of the target pattern plane when the calibration target is being used to calibrate the imaging system. The plurality of markers is pre-warped in size and aspect ratio using a set of trigonometric functions that use the first angle and a distance from the imaging system to the marker, so that each of the plurality of markers appears substantially the same size as all of the other markers when viewed by the imaging system at the first angle. The plurality of markers includes a plurality of localizer features that have known relative positions on the target pattern plane and are used to determine an orientation for each marker.
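The source does not specify the exact trigonometric model, but the pre-warp idea can be sketched as two compensations: markers farther from the camera are drawn proportionally larger, and the dimension foreshortened by the tilt is stretched by the reciprocal of the sine of the viewing angle. All names and the specific model below are assumptions for illustration.

```python
import math

def prewarp_marker(nominal_size, angle_rad, distance, ref_distance):
    """Illustrative pre-warp: scale a marker so it appears the same
    size as a marker at the reference distance when the target plane
    is viewed at angle_rad to the optical axis. The dimension along
    the tilt is stretched by 1/sin(angle) to cancel foreshortening.
    This is an assumed model, not the patented trigonometry."""
    scale = distance / ref_distance                       # farther markers drawn larger
    width = nominal_size * scale / math.sin(angle_rad)    # counter the foreshortening
    height = nominal_size * scale                         # unaffected dimension
    return width, height
```

At a perpendicular view (angle of 90 degrees) and the reference distance, the pre-warp reduces to the identity, as expected.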
Abstract:
An imaging system comprises an image capturing device, a viewer, a control element, and a processor. The control element controls or adjusts an image characteristic of one of the image capturing device and the viewer. The processor is programmed to determine a depth value relative to the image capturing device, determine a desirable adjustment to the control element by using the determined depth value, and control adjustment of the control element to assist manual adjustment of the control element to the desirable adjustment. The processor may also be programmed to determine whether the control element is to be adjusted automatically or manually, and to adjust the control element automatically to the desirable adjustment if the control element is to be automatically adjusted.
Abstract:
A bleeding detection unit in a surgical system processes information in an acquired scene before that scene is presented on a display unit in the operating room. For example, the bleeding detection unit analyzes the pixel data in the acquired scene and determines whether there are one or more initial sites of blood in the scene. Upon detection of an initial site of blood, the region is identified by an initial site icon in the scene displayed on the display unit. In one aspect, the processing is done in real time, which means that there is no substantial delay in presenting the acquired scene to the surgeon.
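The pixel-level analysis could take many forms; a deliberately simple illustration, not the patented method, is to flag pixels whose red channel strongly dominates green and blue and to declare an initial site once enough such pixels cluster in the frame. The function name and thresholds are hypothetical.

```python
import numpy as np

def detect_initial_blood_sites(rgb, red_ratio=1.6, min_pixels=4):
    """Illustrative red-dominance pixel test (assumed, not the
    patented algorithm): flag pixels where red exceeds both green
    and blue by the given ratio, and report whether enough flagged
    pixels exist to mark an initial site of blood."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    mask = (r > red_ratio * g) & (r > red_ratio * b)   # per-pixel blood candidate flag
    return mask, int(mask.sum()) >= min_pixels
```

A real-time implementation would additionally need temporal filtering and region grouping to place the initial site icon, which the sketch omits.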
Abstract:
The present disclosure relates to calibration assemblies and methods for use with an imaging system, such as an endoscopic imaging system. A calibration assembly includes: an interface for constraining engagement with an endoscopic imaging system; a target coupled with the interface so as to be within the field of view of the imaging system, the target including multiple markers having calibration features that include identification features; and a processor configured to identify, from first and second images obtained at first and second relative spatial arrangements between the imaging system and the target, respectively, at least some of the markers from the identification features, and to use the identified markers and calibration feature positions within the images to generate calibration data.
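The step of using identified markers across the two images amounts to building correspondences: keep only the markers identified in both images and pair up their detected feature positions for the downstream calibration solve. A minimal sketch of that correspondence step, with illustrative names (the calibration solve itself is not shown):

```python
def match_calibration_features(detections_a, detections_b):
    """Illustrative correspondence step: given per-image dicts mapping
    marker identifiers to detected feature positions, keep only the
    markers identified in both images and return matched point pairs,
    sorted by marker identifier for deterministic output."""
    common = sorted(set(detections_a) & set(detections_b))
    return [(detections_a[m], detections_b[m]) for m in common]
```

Identifying markers by their identification features, rather than by position, is what lets correspondences survive the change in relative spatial arrangement between the two images.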
Abstract:
The present disclosure relates to systems, methods, and tools for tool tracking using image-derived data from one or more tool-located reference features. In some embodiments, a medical system includes a tool having a distal end that is insertable into a patient body, an image capture device insertable into the patient body so that the image capture device captures an image of at least a portion of a two-dimensional marker at least partially surrounding a portion of the tool, and a processor coupled to the image capture device and configured to determine a pose of the tool by processing the image.