Abstract:
A method and system for scene parsing and model fusion in laparoscopic and endoscopic 2D/2.5D image data is disclosed. A current frame of an intra-operative image stream including a 2D image channel and a 2.5D depth channel is received. A 3D pre-operative model of a target organ segmented in pre-operative 3D medical image data is fused to the current frame of the intra-operative image stream. Semantic label information is propagated from the pre-operative 3D medical image data to each of a plurality of pixels in the current frame of the intra-operative image stream based on the fused pre-operative 3D model of the target organ, resulting in a rendered label map for the current frame of the intra-operative image stream. A semantic classifier is trained based on the rendered label map for the current frame of the intra-operative image stream.
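The abstract does not disclose implementation details, but the label-propagation step can be sketched as a simple pinhole projection of labeled 3D model points into the current frame with a z-buffer. The intrinsic matrix `K`, the point list, and the `-1` "no label" convention are all illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def render_label_map(points, labels, K, shape):
    """Propagate semantic labels from fused 3D model points to image pixels
    via pinhole projection with a z-buffer (nearest point wins per pixel)."""
    H, W = shape
    label_map = np.full((H, W), -1)          # -1 = no label propagated
    zbuf = np.full((H, W), np.inf)
    for p, lab in zip(points, labels):
        x, y, z = p
        if z <= 0:
            continue                         # point behind the camera
        u = int(round(K[0, 0] * x / z + K[0, 2]))
        v = int(round(K[1, 1] * y / z + K[1, 2]))
        if 0 <= v < H and 0 <= u < W and z < zbuf[v, u]:
            zbuf[v, u] = z
            label_map[v, u] = lab
    return label_map

# Toy camera and two model points on the same viewing ray; the nearer
# point's label should win the z-buffer test.
K = np.array([[10.0, 0, 4], [0, 10.0, 4], [0, 0, 1]])
pts = np.array([[0.0, 0.0, 2.0], [0.0, 0.0, 1.0]])
lmap = render_label_map(pts, labels=[1, 2], K=K, shape=(8, 8))
```

The rendered label map would then serve as per-pixel training targets for the semantic classifier described in the abstract.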
Abstract:
A method for performing cellular classification includes using a convolution sparse coding process to generate a plurality of feature maps based on a set of input images and a plurality of biologically-specific filters. A feature pooling operation is applied on each of the plurality of feature maps to yield a plurality of image representations. Each image representation is classified as one of a plurality of cell types.
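As a rough sketch of the pipeline above, assuming the biologically-specific filters have already been learned, each filter is convolved with the input to form a feature map, and max pooling collapses each map into one entry of the image representation. The filter sizes and pooling choice here are illustrative assumptions.

```python
import numpy as np

def convolve2d_valid(image, kernel):
    """Plain 'valid'-mode 2D correlation used to form one feature map."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def feature_maps(image, filters):
    """One feature map per (assumed pre-learned) biologically-specific filter."""
    return [convolve2d_valid(image, f) for f in filters]

def max_pool_representation(maps):
    """Feature pooling: collapse each map to its maximum response."""
    return np.array([m.max() for m in maps])

rng = np.random.default_rng(0)
image = rng.random((16, 16))
filters = [rng.random((3, 3)) for _ in range(4)]
rep = max_pool_representation(feature_maps(image, filters))
```

The resulting representation vector (one entry per filter) would be fed to a cell-type classifier.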
Abstract:
A method for predicting short-term cloud coverage includes a computer calculating an estimated cloud velocity field at a current time value based on sky images. The computer determines a segmented cloud model based on the sky images, a future sun location corresponding to a future time value, and sun pixel locations at the future time value based on the future sun location. Next, the computer applies a back-propagation algorithm to the sun pixel locations using the estimated cloud velocity field to yield propagated sun pixel locations corresponding to a previous time value. Then, the computer predicts cloud coverage for the future sun location based on the propagated sun pixel locations and the segmented cloud model.
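The back-propagation step above lends itself to a short sketch: each future sun pixel is moved backward along the estimated velocity field, and coverage is the fraction of propagated pixels that land on cloud in the segmented model. The grid sizes, the per-pixel velocity representation, and nearest-pixel lookup are illustrative assumptions.

```python
import numpy as np

def back_propagate(sun_pixels, velocity_field, dt):
    """Move each future sun pixel back along the estimated cloud velocity field.

    sun_pixels: (N, 2) array of (row, col) pixel locations at the future time.
    velocity_field: (H, W, 2) per-pixel (row, col) velocities in pixels/step.
    dt: number of time steps between the previous and the future time.
    """
    H, W, _ = velocity_field.shape
    propagated = []
    for r, c in sun_pixels:
        v = velocity_field[int(r) % H, int(c) % W]
        propagated.append((r - v[0] * dt, c - v[1] * dt))
    return np.array(propagated)

def predicted_coverage(propagated, cloud_mask):
    """Fraction of propagated sun pixels that fall on cloud in the segmented model."""
    H, W = cloud_mask.shape
    rows = np.clip(np.round(propagated[:, 0]).astype(int), 0, H - 1)
    cols = np.clip(np.round(propagated[:, 1]).astype(int), 0, W - 1)
    return cloud_mask[rows, cols].mean()

cloud_mask = np.zeros((10, 10))
cloud_mask[:, :5] = 1.0                       # left half of the sky is cloud
vel = np.zeros((10, 10, 2)); vel[..., 1] = 1  # clouds drift right, 1 px/step
sun = np.array([(5.0, 6.0), (5.0, 7.0)])      # sun pixels at the future time
prev = back_propagate(sun, vel, dt=3)         # lands at cols 3 and 4: both cloud
cov = predicted_coverage(prev, cloud_mask)
```

Here both propagated sun pixels fall inside the cloud region, so the predicted coverage is 1.0.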
Abstract:
Independent subspace analysis (ISA) is used to learn (42) filter kernels for CLE images in brain tumor classification. Convolution (46) and stacking are used for unsupervised learning (44, 48) with ISA to derive the filter kernels. A classifier is trained (56) to classify CLE brain images based on features extracted using the filter kernels. The resulting filter kernels and trained classifier are used (60, 64) to assist in diagnosis of brain tumors during or as part of neurosurgical resection. The classification may assist a physician in determining whether CLE-examined brain tissue is healthy and/or identifying a type of tumor.
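The abstract does not give the ISA formulation, but the characteristic ISA feature is the pooled energy of grouped linear filter responses: responses within a subspace are squared, summed, and square-rooted. A minimal sketch, assuming whitened patches and already-learned orthonormal filters (the patch and filter dimensions are illustrative):

```python
import numpy as np

def isa_activations(patches, W, group_size):
    """ISA subspace activations: pooled energies of linear filter responses.

    patches: (N, D) whitened, flattened image patches.
    W: (K, D) filter kernels (rows), assumed already learned/orthonormalized.
    group_size: number of filters per independent subspace.
    """
    responses = patches @ W.T                       # (N, K) linear responses
    N, K = responses.shape
    grouped = responses.reshape(N, K // group_size, group_size)
    return np.sqrt((grouped ** 2).sum(axis=2))      # (N, K/group_size) energies

rng = np.random.default_rng(1)
patches = rng.standard_normal((100, 64))            # stand-in whitened patches
Q, _ = np.linalg.qr(rng.standard_normal((64, 16)))  # 16 orthonormal filters
acts = isa_activations(patches, Q.T, group_size=4)
```

In the disclosed pipeline these pooled activations, computed convolutionally and stacked, would serve as the features fed to the trained classifier.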
Abstract:
A method for performing cellular classification includes extracting a plurality of local feature descriptors from a set of input images and applying a coding process to convert each of the plurality of local feature descriptors into a multi-dimensional code. A feature pooling operation is applied on each of the plurality of local feature descriptors to yield a plurality of image representations and each image representation is classified as one of a plurality of cell types.
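One common instance of the coding-plus-pooling scheme above is hard-assignment vector quantization followed by max pooling. This is only a toy sketch under that assumption; the codebook and descriptors below are synthetic.

```python
import numpy as np

def vq_codes(descriptors, codebook):
    """Hard-assignment coding: each local descriptor becomes a one-hot
    multi-dimensional code over its nearest codebook entry."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = np.zeros((descriptors.shape[0], codebook.shape[0]))
    codes[np.arange(descriptors.shape[0]), d2.argmin(axis=1)] = 1.0
    return codes

def max_pool(codes):
    """Pool all of an image's codes into a single image representation."""
    return codes.max(axis=0)

rng = np.random.default_rng(3)
codebook = rng.standard_normal((8, 5))      # stand-in learned codebook
descs = codebook[[0, 0, 3]]                 # descriptors matching entries 0 and 3
rep = max_pool(vq_codes(descs, codebook))
```

The pooled representation records which codebook entries fired anywhere in the image; a cell-type classifier would operate on this vector.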
Abstract:
Robust calcification tracking is provided in fluoroscopic imagery. A patient with an inserted catheter is scanned over time. A processor detects the catheter in the patient from the scanned image data. The processor tracks the movement of the catheter. The processor also detects a structure represented in the data, the structure being detected as a function of its movement with the catheter. The processor tracks the movement of the structure using sampling based on a previous location of the structure in the patient. The processor may output an image of the structure.
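The sampling-based tracking step can be sketched as scoring candidate locations around the previous location and keeping the best match. The template-matching score (sum of squared differences) and search radius here are illustrative assumptions, not the disclosed scoring function.

```python
import numpy as np

def track_step(frame, prev_loc, template, search_radius=2):
    """Sampling-based tracking: score candidate locations sampled around the
    previous location and keep the best template match (lowest SSD)."""
    h, w = template.shape
    best, best_score = prev_loc, np.inf
    for dr in range(-search_radius, search_radius + 1):
        for dc in range(-search_radius, search_radius + 1):
            r, c = prev_loc[0] + dr, prev_loc[1] + dc
            if r < 0 or c < 0 or r + h > frame.shape[0] or c + w > frame.shape[1]:
                continue                       # candidate falls outside frame
            patch = frame[r:r + h, c:c + w]
            score = ((patch - template) ** 2).sum()
            if score < best_score:
                best, best_score = (r, c), score
    return best

frame = np.zeros((12, 12))
template = np.ones((3, 3))
frame[6:9, 7:10] = 1.0                         # structure has moved to (6, 7)
loc = track_step(frame, prev_loc=(5, 6), template=template)
```

Restricting candidates to a neighborhood of the previous location is what makes the tracking robust to distant distractors in the fluoroscopic image.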
Abstract:
A method and system for semantic segmentation of laparoscopic and endoscopic 2D/2.5D image data is disclosed. Statistical image features that integrate a 2D image channel and a 2.5D depth channel of a 2D/2.5D laparoscopic or endoscopic image are extracted for each pixel in the image. Semantic segmentation of the laparoscopic or endoscopic image is then performed using a trained classifier to classify each pixel in the image with respect to a semantic object class of a target organ based on the extracted statistical image features. Segmented image masks resulting from the semantic segmentation of multiple frames of a laparoscopic or endoscopic image sequence can be used to guide organ-specific 3D stitching of the frames to generate a 3D model of the target organ.
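A minimal sketch of the per-pixel feature extraction and classification, assuming the statistical features are local means and variances over a small window of both the image and the depth channel (the specific statistics and the stand-in threshold "classifier" are illustrative, not the disclosed ones):

```python
import numpy as np

def pixel_features(rgb, depth, win=3):
    """Per-pixel statistical features integrating the 2D image channel and the
    2.5D depth channel: local mean and variance of intensity and of depth."""
    H, W, _ = rgb.shape
    gray = rgb.mean(axis=2)
    pad = win // 2
    g = np.pad(gray, pad, mode="edge")
    d = np.pad(depth, pad, mode="edge")
    feats = np.zeros((H, W, 4))
    for i in range(H):
        for j in range(W):
            gwin = g[i:i + win, j:j + win]
            dwin = d[i:i + win, j:j + win]
            feats[i, j] = (gwin.mean(), gwin.var(), dwin.mean(), dwin.var())
    return feats

def segment(feats, classify):
    """Apply a trained per-pixel classifier over the feature image."""
    H, W, _ = feats.shape
    return np.array([[classify(feats[i, j]) for j in range(W)] for i in range(H)])

rgb = np.zeros((8, 8, 3)); rgb[:, 4:] = 1.0    # bright right half = "organ"
depth = np.ones((8, 8))
mask = segment(pixel_features(rgb, depth),
               classify=lambda f: int(f[0] > 0.5))  # stand-in "trained" rule
```

The per-frame masks produced this way are what would guide the organ-specific 3D stitching described in the abstract.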
Abstract:
A method and system for classifying tissue endomicroscopy images are disclosed. Local feature descriptors are extracted from an endomicroscopy image. Each of the local feature descriptors is encoded using a learnt discriminative dictionary. The learnt discriminative dictionary includes class-specific sub-dictionaries and penalizes correlation between bases of sub-dictionaries associated with different classes. Tissue in the endomicroscopy image is classified using a trained machine learning based classifier based on the encoded local feature descriptors.
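The class-specific sub-dictionary structure can be illustrated with a residual-based toy example: a descriptor is encoded against each sub-dictionary and assigned to the class that reconstructs it best. The dense least-squares code is a stand-in for the sparse coding step, and the classifier here replaces the trained machine-learning classifier of the disclosure.

```python
import numpy as np

def encode(descriptor, dictionary):
    """Code a local feature descriptor as least-squares coefficients over the
    dictionary atoms (a dense stand-in for the sparse coding step)."""
    code, *_ = np.linalg.lstsq(dictionary, descriptor, rcond=None)
    return code

def classify_by_residual(descriptor, sub_dicts):
    """Assign the class whose sub-dictionary reconstructs the descriptor best."""
    residuals = []
    for D in sub_dicts:
        code = encode(descriptor, D)
        residuals.append(np.linalg.norm(descriptor - D @ code))
    return int(np.argmin(residuals))

rng = np.random.default_rng(2)
D0 = rng.standard_normal((16, 4))           # class-0 sub-dictionary (stand-in)
D1 = rng.standard_normal((16, 4))           # class-1 sub-dictionary (stand-in)
x = D1 @ np.array([1.0, -0.5, 0.2, 0.0])    # descriptor in class 1's span
label = classify_by_residual(x, [D0, D1])
```

Penalizing correlation between sub-dictionary bases during learning, as the abstract describes, is what keeps such per-class residuals discriminative.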