Abstract:
A method and system for calculating a volume of resected tissue from a stream of intraoperative images is disclosed. A stream of 2D/2.5D intraoperative images of resected tissue of a patient is received. The 2D/2.5D intraoperative images in the stream are acquired at different angles with respect to the resected tissue. A resected tissue surface is segmented in each of the 2D/2.5D intraoperative images. The segmented resected tissue surfaces are stitched to generate a 3D point cloud representation of the resected tissue surface. A 3D mesh representation of the resected tissue surface is generated from the 3D point cloud representation of the resected tissue surface. The volume of the resected tissue is calculated from the 3D mesh representation of the resected tissue surface.
Abstract:
A method and system for determining fractional flow reserve (FFR) for a coronary artery stenosis of a patient is disclosed. In one embodiment, medical image data of the patient including the stenosis is received, a set of features for the stenosis is extracted from the medical image data of the patient, and an FFR value for the stenosis is determined based on the extracted set of features using a trained machine-learning based mapping. In another embodiment, a medical image of the patient including the stenosis of interest is received, image patches corresponding to the stenosis of interest and a coronary tree of the patient are detected, and an FFR value for the stenosis of interest is determined using a trained deep neural network regressor applied directly to the detected image patches.
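The first embodiment (features in, FFR out through a trained mapping) can be illustrated with a least-squares linear regression as a stand-in for the trained model. The feature names (degree of narrowing, lesion length, reference diameter) and the synthetic target are illustrative assumptions, not the patent's actual feature set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical geometric features per stenosis:
# [degree of narrowing, lesion length (mm), reference diameter (mm)]
X = rng.uniform([0.2, 5.0, 2.0], [0.9, 30.0, 4.5], size=(200, 3))

# Synthetic "ground truth" FFR: lower for narrower, longer lesions.
y = 1.0 - 0.6 * X[:, 0] - 0.005 * X[:, 1] + 0.02 * X[:, 2]
y += rng.normal(scale=0.01, size=y.shape)

# Fit a linear mapping (stand-in for the trained regressor)
# by least squares on [features, 1] -> FFR.
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_ffr(features):
    return np.append(features, 1.0) @ w

ffr = predict_ffr([0.7, 18.0, 3.0])   # a severe-looking lesion
print(round(float(ffr), 3))
```

The second embodiment replaces the hand-extracted features and linear map with a deep network regressing FFR directly from detected image patches; the interface (inputs in, scalar FFR out) is the same.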
Abstract:
In a method for image guided prostate cancer needle biopsy, a first registration is performed to match a first image of a prostate to a second image of the prostate (210). Third images of the prostate are acquired and compounded into a three-dimensional (3D) image (220). The prostate in the compounded 3D image is segmented to show its border (230). A second registration and then a third registration different from the second registration are performed on distance maps generated from the prostate borders of the first image and the compounded 3D image, wherein the first and second registrations are based on a biomechanical property of the prostate (240). A region of interest in the first image is mapped to the compounded 3D image or a fourth image of the prostate acquired with the second modality (250).
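The distance maps driving the later registrations assign each pixel its distance to the nearest point on the segmented prostate border. A brute-force sketch (real systems would use a fast distance transform such as scipy's `distance_transform_edt`; the toy single-pixel "border" is purely illustrative):

```python
import numpy as np

def distance_map(border_mask):
    """Euclidean distance from every pixel to the nearest border pixel.

    Brute force for clarity: compare every pixel against every
    border point and keep the minimum distance.
    """
    pts = np.argwhere(border_mask)               # (K, 2) border coords
    yy, xx = np.indices(border_mask.shape)
    grid = np.stack([yy, xx], axis=-1)           # (H, W, 2)
    d = np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :], axis=-1)
    return d.min(axis=-1)

# Toy 5x5 image whose "border" is the single centre pixel.
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
dm = distance_map(mask)
print(dm[2, 2], dm[2, 4], dm[0, 0])  # 0.0, 2.0, ~2.83
```

Registering distance maps rather than raw images makes the similarity measure depend only on the segmented borders, which is why the segmentation step (230) precedes the second and third registrations.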
Abstract:
A projector in an endoscope is used to project visible light onto tissue. The projected intensity, color, and/or wavelength vary by spatial location in the field of view to provide an overlay. Rather than relying on a rendered overlay alpha-blended on a captured image, the illumination with spatial variation physically highlights one or more regions of interest or physically overlays on the tissue.
Abstract:
A computer-implemented method for analyzing digital holographic microscopy (DHM) data for hematology applications includes receiving a DHM image acquired using a digital holographic microscopy system. The DHM image comprises depictions of one or more cell objects and background. A reference image is generated based on the DHM image. This reference image may then be used to reconstruct a fringe pattern in the DHM image into an optical depth map.
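The last step (fringe pattern to optical depth) rests on the relation between phase and optical path difference: depth = phase × λ / 2π. A minimal sketch in which the reference image is reduced to a flat background estimate and the 532 nm wavelength is an assumption, not a value from the disclosure:

```python
import numpy as np

WAVELENGTH_NM = 532.0  # assumed illumination wavelength

def phase_to_depth(phase):
    """Convert a phase map (radians) to optical depth in nanometres.

    Optical path difference = phase * lambda / (2 * pi).
    """
    return phase * WAVELENGTH_NM / (2.0 * np.pi)

# Synthetic frame: a "cell" adds a phase bump over a flat background.
hologram_phase = np.full((8, 8), 0.3)
hologram_phase[3:5, 3:5] += np.pi          # cell region
reference_phase = np.full((8, 8), 0.3)     # reference image (background)

# Subtracting the reference isolates the cell-induced phase, a
# simplified stand-in for the reference-based reconstruction step.
depth = phase_to_depth(hologram_phase - reference_phase)
print(depth.max())   # ~266 nm (= 532 / 2 for a pi phase shift)
```

In practice the reference image must be estimated from the DHM image itself (the point of the disclosed method), and phase unwrapping is needed when the optical depth exceeds one wavelength.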
Abstract:
A method and system for registration of 2D/2.5D laparoscopic or endoscopic image data to 3D volumetric image data is disclosed. A plurality of 2D/2.5D intra-operative images of a target organ are received, together with corresponding relative orientation measurements for the intra-operative images. A 3D medical image volume of the target organ is registered to the plurality of 2D/2.5D intra-operative images by calculating pose parameters to match simulated projection images of the 3D medical image volume to the plurality of 2D/2.5D intra-operative images, and the registration is constrained by the relative orientation measurements for the intra-operative images.
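The structure of the constrained pose search can be sketched in miniature: an image-similarity term (here, point mismatch on a 2D toy "volume", with rotation reduced to a single angle) plus a penalty keeping the pose near the measured orientation. The sensor reading and weight `lam` are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
model = rng.normal(size=(50, 2))             # toy stand-in for the 3D volume

def rotate(pts, a):
    c, s = np.cos(a), np.sin(a)
    return pts @ np.array([[c, -s], [s, c]]).T

true_angle = 0.4
observed = rotate(model, true_angle)         # simulated intra-op view

measured_angle = 0.38                        # noisy external orientation reading
lam = 0.1                                    # strength of the orientation prior

def cost(a):
    # image-similarity term + orientation-measurement constraint
    return np.mean((rotate(model, a) - observed) ** 2) \
        + lam * (a - measured_angle) ** 2

angles = np.linspace(-np.pi, np.pi, 2001)
best = angles[np.argmin([cost(a) for a in angles])]
print(round(best, 2))   # close to the true angle, 0.4
```

The constraint term shrinks the search space and rejects pose estimates that contradict the orientation sensor, which is the role the relative orientation measurements play in the disclosed registration.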
Abstract:
Systems and methods for model augmentation include receiving intra-operative imaging data of an anatomical object of interest at a deformed state. The intraoperative imaging data is stitched into an intra-operative model of the anatomical object of interest at the deformed state. The intra-operative model of the anatomical object of interest at the deformed state is registered with a pre-operative model of the anatomical object of interest at an initial state by deforming the pre-operative model of the anatomical object of interest at the initial state based on a biomechanical model. Texture information from the intra-operative model of the anatomical object of interest at the deformed state is mapped to the deformed pre-operative model to generate a deformed, texture-mapped pre-operative model of the anatomical object of interest.
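The final texture-mapping step can be sketched as a nearest-neighbour transfer: once the pre-operative model has been deformed into agreement with the intra-operative model, each pre-op vertex takes the texture of the closest intra-op point. A toy numpy version with a flat grid of vertices and a grayscale "texture" (both illustrative assumptions):

```python
import numpy as np

# Deformed pre-op model: vertex positions on a small grid.
pre_op_vertices = np.array([[x, y, 0.0] for x in range(3) for y in range(3)])

# Intra-op model: textured points near (but not exactly at) the vertices.
rng = np.random.default_rng(2)
intra_op_points = pre_op_vertices + rng.normal(scale=0.05,
                                               size=pre_op_vertices.shape)
intra_op_colors = np.linspace(0.0, 1.0, len(intra_op_points))  # "texture"

# Map texture: each pre-op vertex takes the colour of its
# nearest intra-op point.
d = np.linalg.norm(pre_op_vertices[:, None, :] - intra_op_points[None, :, :],
                   axis=-1)
nearest = d.argmin(axis=1)
vertex_colors = intra_op_colors[nearest]
print(vertex_colors[:3])
```

The biomechanical registration step is what makes this simple lookup reasonable: after deformation, corresponding anatomy in the two models is spatially close.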
Abstract:
A method for performing cellular classification includes generating a plurality of local dense Scale Invariant Feature Transform (SIFT) features based on a set of input images and converting the plurality of local dense SIFT features into a multi-dimensional code using a feature coding process. A first classification component is used to generate first output confidence values based on the multi-dimensional code, and a plurality of global Local Binary Pattern Histogram (LBP-H) features are generated based on the set of input images. A second classification component is used to generate second output confidence values based on the plurality of LBP-H features, and the first output confidence values and the second output confidence values are merged. Each of the set of input images may then be classified as one of a plurality of cell types using the merged output confidence values.
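The merge-and-classify step can be sketched as follows, with averaging as one plausible merge rule (the abstract does not specify one) and hypothetical cell classes and confidence values:

```python
import numpy as np

# Hypothetical per-class confidences for one cell image from the two
# components (e.g. the SIFT-code classifier and the LBP-H classifier).
classes = ["lymphocyte", "monocyte", "neutrophil"]
conf_sift = np.array([0.70, 0.20, 0.10])   # first classification component
conf_lbph = np.array([0.40, 0.45, 0.15])   # second classification component

# Merge by averaging the two confidence vectors, then pick the
# highest-scoring class.
merged = 0.5 * conf_sift + 0.5 * conf_lbph
label = classes[int(np.argmax(merged))]
print(merged, label)
```

Combining a local-texture pathway (dense SIFT codes) with a global-texture pathway (LBP histograms) at the confidence level lets each classifier compensate for cases where the other is uncertain.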
Abstract:
A method and system for scene parsing and model fusion in laparoscopic and endoscopic 2D/2.5D image data is disclosed. A current frame of an intra-operative image stream including a 2D image channel and a 2.5D depth channel is received. A 3D pre-operative model of a target organ segmented in pre-operative 3D medical image data is fused to the current frame of the intra-operative image stream. Semantic label information is propagated from the pre-operative 3D medical image data to each of a plurality of pixels in the current frame of the intra-operative image stream based on the fused pre-operative 3D model of the target organ, resulting in a rendered label map for the current frame of the intra-operative image stream. A semantic classifier is trained based on the rendered label map for the current frame of the intra-operative image stream.
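The key idea — using labels rendered from the fused pre-operative model as free supervision for a per-pixel classifier — can be sketched with a nearest-centroid classifier over toy (intensity, depth) features. The feature values and class means are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Rendered label map (from the fused pre-op model), flattened:
# 1 = target organ, 0 = background.
n = 400
labels = rng.integers(0, 2, size=n)

# Per-pixel features from the current frame: (intensity, depth),
# drawn from different synthetic distributions per class.
feats = np.where(labels[:, None] == 1,
                 rng.normal([0.8, 0.3], 0.05, size=(n, 2)),   # organ pixels
                 rng.normal([0.3, 0.7], 0.05, size=(n, 2)))   # background

# "Train" a semantic classifier from the rendered labels:
# here, simply the per-class feature centroids.
centroids = np.stack([feats[labels == k].mean(axis=0) for k in (0, 1)])

def classify(f):
    return int(np.linalg.norm(centroids - f, axis=1).argmin())

print(classify([0.78, 0.32]), classify([0.28, 0.72]))  # -> 1 0
```

Because the labels come from rendering the fused pre-operative model rather than from manual annotation, every new frame of the intra-operative stream yields additional training data at no labelling cost.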
Abstract:
A method and system for semantic segmentation of laparoscopic and endoscopic 2D/2.5D image data is disclosed. Statistical image features that integrate a 2D image channel and a 2.5D depth channel of a 2D/2.5D laparoscopic or endoscopic image are extracted for each pixel in the image. Semantic segmentation of the laparoscopic or endoscopic image is then performed using a trained classifier to classify each pixel in the image with respect to a semantic object class of a target organ based on the extracted statistical image features. Segmented image masks resulting from the semantic segmentation of multiple frames of a laparoscopic or endoscopic image sequence can be used to guide organ specific 3D stitching of the frames to generate a 3D model of the target organ.
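One simple way to build per-pixel statistical features that integrate both channels is to compute local neighbourhood statistics (mean and variance) of the intensity and depth images and stack them into one feature vector per pixel. The abstract does not specify which statistics are used, so this is an illustrative choice:

```python
import numpy as np

def local_stats(channel, r=1):
    """Per-pixel mean and variance over a (2r+1)^2 window (zero-padded)."""
    p = np.pad(channel.astype(float), r)
    win = np.stack([p[dy:dy + channel.shape[0], dx:dx + channel.shape[1]]
                    for dy in range(2 * r + 1) for dx in range(2 * r + 1)])
    return win.mean(axis=0), win.var(axis=0)

rng = np.random.default_rng(4)
intensity = rng.random((6, 6))   # 2D image channel
depth = rng.random((6, 6))       # 2.5D depth channel

# Statistical features integrating both channels: one 4-vector per pixel
# (intensity mean, intensity variance, depth mean, depth variance).
features = np.stack(local_stats(intensity) + local_stats(depth), axis=-1)
print(features.shape)  # (6, 6, 4)
```

A trained classifier then maps each 4-vector to a semantic class; the resulting per-frame masks restrict the 3D stitching to pixels belonging to the target organ.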