Abstract:
A method of processing images captured by an in vivo capsule camera is disclosed. Images whose overlap exceeds a threshold are stitched into larger images. If the current image has no large overlap with any of its neighboring images, it is designated as a non-stitched image. Any image that lies between two stitched images but is not included in the stitched image is also designated as a non-stitched image. The large-overlap stitching can be performed iteratively by treating the stitched and non-stitched images as the images to be processed in the next round. A second-stage stitching can then be applied to stitch small-overlap images, and this small-overlap stitching can also be applied iteratively. A third-stage stitching can further be applied to stitch the output images from the second-stage processing.
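The iterative, overlap-driven grouping described above can be outlined in a few lines of Python. This is a minimal sketch only: overlap_ratio() and stitch_pair() are hypothetical placeholders for an actual registration and compositing back end, and the thresholds and round limit are illustrative.

def overlap_ratio(img_a, img_b):
    """Hypothetical placeholder: fraction of img_a that overlaps img_b."""
    raise NotImplementedError

def stitch_pair(img_a, img_b):
    """Hypothetical placeholder: composite two overlapping images into one."""
    raise NotImplementedError

def stitch_pass(images, threshold):
    """One pass: merge runs of neighbors whose pairwise overlap exceeds threshold.
    Images that are not merged with a neighbor pass through as non-stitched images."""
    output, i = [], 0
    while i < len(images):
        current = images[i]
        # Greedily absorb following neighbors while the overlap stays large.
        while i + 1 < len(images) and overlap_ratio(current, images[i + 1]) > threshold:
            current = stitch_pair(current, images[i + 1])
            i += 1
        output.append(current)          # stitched image or non-stitched pass-through
        i += 1
    return output

def multi_stage_stitch(images, large_threshold, small_threshold, max_rounds=5):
    """Stage 1: iterate large-overlap stitching until no further merges occur.
    Stage 2: iterate small-overlap stitching on the stage-1 outputs.
    A third stage could stitch the stage-2 outputs in the same manner."""
    for threshold in (large_threshold, small_threshold):
        for _ in range(max_rounds):
            merged = stitch_pass(images, threshold)
            if len(merged) == len(images):   # nothing was stitched this round
                break
            images = merged                  # stitched + non-stitched images feed the next round
    return images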
Abstract:
A method and apparatus for imaging a body lumen are disclosed. According to the method, an imaging apparatus is introduced into the body lumen. Structured light from the imaging apparatus is projected into the body lumen, the structured light reflected from anatomical features in the body lumen is detected by the imaging apparatus, and a first structured light image is generated from the detected structured light. Non-structured light is then emitted from the imaging apparatus into the body lumen, the non-structured light reflected from the anatomical features is detected, and a non-structured light image is generated from the detected non-structured light. The frame period of the first structured light image is shorter than the frame period of the non-structured light image. In one embodiment, the imaging apparatus corresponds to a capsule endoscope.
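As a rough illustration of the alternating capture sequence, the sketch below assumes a hypothetical CaptureDevice with project_structured_light(), emit_white_light() and read_frame() methods; the frame-period values are illustrative only and are not taken from the disclosure.

STRUCTURED_FRAME_PERIOD_S = 0.005      # illustrative: shorter frame period for structured light
NON_STRUCTURED_FRAME_PERIOD_S = 0.050  # illustrative: longer frame period for the regular image

def capture_pair(device):
    """Capture one structured-light image followed by one non-structured-light image."""
    device.project_structured_light()                         # project the pattern into the lumen
    sl_image = device.read_frame(STRUCTURED_FRAME_PERIOD_S)   # short structured-light frame

    device.emit_white_light()                                 # non-structured illumination
    regular_image = device.read_frame(NON_STRUCTURED_FRAME_PERIOD_S)
    return sl_image, regular_image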
Abstract:
A method and device for improving the accuracy of depth information derived from a structured-light image for a regular image are disclosed. In one example, an additional structured-light image is captured before a first structured-light image or after a regular image. The depth information for the regular image can be derived from the first structured-light image and corrected by incorporating depth information from the additional structured-light image. A model for the depth information can be used to predict or interpolate the depth information for the regular image. In another example, two regular sub-images may be captured with a structured-light image in between. If a substantial frame difference, or a substantial global or block motion vector, is detected, the two regular sub-images will not be combined, in order to avoid possible motion smear. Instead, one of the two sub-images will be selected and scaled as the output regular image.
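Both ideas can be sketched in Python. The linear interpolation model, the frame-difference test used here as a stand-in for global/block motion estimation, and all thresholds are assumptions for illustration, not the disclosed implementation.

import numpy as np

def interpolate_depth(depth_before, depth_after, alpha=0.5):
    """Simple linear model: predict depth at the regular image's capture time
    from structured-light depth maps taken before and after it."""
    return (1.0 - alpha) * depth_before + alpha * depth_after

def combine_sub_images(sub_a, sub_b, motion_threshold=2.0):
    """Combine two regular sub-images unless substantial motion is detected."""
    a = sub_a.astype(np.float32)
    b = sub_b.astype(np.float32)
    if np.mean(np.abs(a - b)) > motion_threshold:
        # Substantial motion: avoid smear by selecting one sub-image and scaling it
        # (each sub-image is assumed to carry roughly half of the full exposure).
        return np.clip(a * 2.0, 0, 255).astype(np.uint8)
    # Small motion: sum the two sub-exposures into the output regular image.
    return np.clip(a + b, 0, 255).astype(np.uint8)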
Abstract:
An integrated image sensor for capturing a mixed structured-light image and regular image is disclosed. The integrated image sensor comprises a pixel array, one or more output circuits, one or more analog-to-digital converters, and one or more timing and control circuits. The timing and control circuits are arranged to perform a set of actions including capturing a regular image and a structured-light image. According to the present invention, the structured-light image captured before or after the regular image is used to derive depth or shape information for the regular image. An endoscope based on the above integrated image sensor is also disclosed. The endoscope may comprise a capsule housing adapted to be swallowed, in which the integrated image sensor, a structured light source and a non-structured light source are enclosed and sealed.
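For illustration only, the sketch below models the named building blocks (pixel array, output circuits, ADCs, timing and control) as a software object coordinating the capture sequence; the class, its methods and derive_depth() are hypothetical and do not correspond to a real sensor driver API.

def derive_depth(structured_light_frame):
    """Hypothetical placeholder: triangulate the projected pattern into a depth map."""
    raise NotImplementedError

class IntegratedImageSensorModel:
    def __init__(self, pixel_array, output_circuits, adcs, timing_control):
        self.pixel_array = pixel_array          # photodiode array
        self.output_circuits = output_circuits  # output amplifiers
        self.adcs = adcs                        # analog-to-digital converters
        self.timing_control = timing_control    # sequences resets, read-outs and light sources

    def capture_sequence(self):
        """Capture a structured-light frame adjacent to a regular frame so the
        structured-light frame can supply depth/shape information for the regular one."""
        sl_frame = self.timing_control.capture(light="structured")
        regular_frame = self.timing_control.capture(light="non_structured")
        return regular_frame, derive_depth(sl_frame)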
Abstract:
A method and device for capturing a mixed structured-light image and regular image using an integrated image sensor are disclosed, where the structured-light image is captured using a shorter frame period than the regular image. To achieve the shorter frame period, the structured-light image may be captured with reduced dynamic range, reduced spatial resolution, or a combination of the two. The capturing process comprises applying reset signals to a pixel array to reset rows of pixels, reading out analog signals from the rows of pixels, and converting the analog signals into digital outputs for the image using one or more analog-to-digital converters.
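A minimal sketch of this row-wise capture process follows; the sensor interface (reset_row, read_row, adc_convert) is a hypothetical stand-in, and row skipping is used here as one possible way to trade spatial resolution for a shorter frame period.

def capture_frame(sensor, num_rows, row_skip=1):
    """Row-by-row capture; row_skip > 1 reduces spatial resolution and shortens the
    frame period, e.g. for the structured-light image."""
    digital_rows = []
    for r in range(0, num_rows, row_skip):
        sensor.reset_row(r)                               # apply reset signal to this row of pixels
    # ... exposure time elapses between reset and read-out of each row ...
    for r in range(0, num_rows, row_skip):
        analog = sensor.read_row(r)                       # read out analog signals from the row
        digital_rows.append(sensor.adc_convert(analog))   # analog-to-digital conversion
    return digital_rows

# Example: a structured-light frame might use row_skip=2 (half the rows),
# while the regular image uses row_skip=1 (full resolution).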
Abstract:
A method of processing images captured using a capsule camera is disclosed. According to one embodiment, two images designated as a reference image and a float image are received, where the float image corresponds to a captured capsule image and the reference image corresponds to a previously composited image or another capsule image captured prior to the float image. Automatic segmentation is applied to the float image and the reference image to detect any non-GI (non-gastrointestinal) region. The non-GI regions are excluded from the match measure between the reference image and the deformed float image during the registration process. The two images are then stitched together by rendering them in a common coordinate system. In another embodiment, large areas of non-GI regions are removed directly from the input image, and the remaining portions are stitched together to form a new image without performing image registration.
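The masked-registration idea can be sketched with OpenCV (assuming OpenCV 4.x); ECC alignment is used here only as an example of a match measure that accepts a validity mask, and segment_non_gi() is a hypothetical placeholder for the automatic non-GI segmentation step.

import cv2
import numpy as np

def segment_non_gi(image):
    """Hypothetical placeholder: return a uint8 mask that is 0 over non-GI regions
    (e.g. bubbles or debris) and 255 over GI tissue."""
    raise NotImplementedError

def register_and_stitch(reference, float_img):
    ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    flt_gray = cv2.cvtColor(float_img, cv2.COLOR_BGR2GRAY)

    # Exclude non-GI regions from the match measure via the ECC input mask.
    mask = segment_non_gi(float_img)
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(ref_gray, flt_gray, warp, cv2.MOTION_AFFINE,
                                   criteria, mask, 5)

    # Render both images in the reference (common) coordinate system.
    h, w = ref_gray.shape
    warped = cv2.warpAffine(float_img, warp, (w, h),
                            flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    return np.where(warped > 0, warped, reference)        # naive compositing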
Abstract:
A method of automatically allocating processing power and system resources for an image viewing and processing application is disclosed. The usage of a processing unit or system resources consumed by other computing processes on the computer is determined, along with the usage required by the image viewing and processing application. Based on these two figures, the adequacy of the processing unit or system resources for executing the image viewing and processing application is assessed. If the processing unit or system resources are not adequate, the usage consumed by the other computing processes associated with other applications is displayed, along with options to select one or more of those applications for termination via a user interface. The selected applications are then terminated to reduce the usage of the processing unit or system resources.
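A minimal sketch of the adequacy check, process listing and termination steps is shown below using the psutil library; the required-usage figure and the 1% activity filter are illustrative assumptions.

import psutil

REQUIRED_CPU_PERCENT = 40.0   # assumed requirement of the viewing/processing application

def assess_and_list_processes():
    """Return [] if the CPU headroom is adequate; otherwise return (pid, name, cpu%)
    tuples for the other processes so the user can pick which ones to terminate."""
    other_usage = psutil.cpu_percent(interval=1.0)         # system-wide usage over 1 s
    if (100.0 - other_usage) >= REQUIRED_CPU_PERCENT:
        return []
    candidates = []
    for proc in psutil.process_iter(['pid', 'name', 'cpu_percent']):
        # Note: per-process cpu_percent is measured since the previous call and
        # may read 0.0 on the very first pass.
        if proc.info['cpu_percent'] and proc.info['cpu_percent'] > 1.0:
            candidates.append((proc.info['pid'], proc.info['name'], proc.info['cpu_percent']))
    return sorted(candidates, key=lambda p: p[2], reverse=True)

def terminate_selected(pids):
    """Terminate the processes the user selected via the interface."""
    for pid in pids:
        try:
            psutil.Process(pid).terminate()
        except psutil.NoSuchProcess:
            pass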