Abstract:
A computer-implemented method for analyzing digital holographic microscopy (DHM) data for hematology applications includes receiving a DHM image acquired using a digital holographic microscopy system. The DHM image comprises depictions of one or more cell objects and background. A reference image is generated based on the DHM image. This reference image may then be used to reconstruct a fringe pattern in the DHM image into an optical depth map.
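The pipeline described above (estimate a reference image from the DHM image itself, then demodulate the fringe pattern into an optical depth map) can be sketched in miniature. This is not the patented method: the reference image is approximated here by a simple box filter, and the fringes are assumed to follow an ideal cosine model `image = reference + amplitude * cos(phase)`; all function names are hypothetical.

```python
import numpy as np

def estimate_reference(dhm_image, k=15):
    """Approximate the slowly varying background (reference image)
    by a k x k mean filter (illustrative stand-in for the
    reference-image generation step; k must be odd)."""
    h, w = dhm_image.shape
    pad = k // 2
    padded = np.pad(dhm_image.astype(float), pad, mode="edge")
    out = np.zeros((h, w), dtype=float)
    # sum of shifted copies implements the box filter
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def depth_proxy(dhm_image, reference, amplitude):
    """Recover a wrapped phase map under the assumed fringe model
    image = reference + amplitude * cos(phase)."""
    cos_phase = np.clip((dhm_image - reference) / amplitude, -1.0, 1.0)
    return np.arccos(cos_phase)  # wrapped optical-depth proxy in [0, pi]
```

In a real system the phase would be unwrapped and calibrated to physical optical depth; the sketch only shows how a reference image separates background from fringes.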
Abstract:
A system for performing adaptive focusing of a microscopy device comprises a microscopy device configured to acquire microscopy images depicting cells and one or more processors executing instructions for performing a method that includes extracting pixels from the microscopy images. Each set of pixels corresponds to an independent cell. The method further includes using a trained classifier to assign one of a plurality of image quality labels to each set of pixels indicating the degree to which the independent cell is in focus. If the image quality labels corresponding to the sets of pixels indicate that the cells are out of focus, a focal length adjustment for adjusting focus of the microscopy device is determined using a trained machine learning model. Then, executable instructions are sent to the microscopy device to perform the focal length adjustment.
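The control loop described above can be sketched as follows. The trained classifier and trained regression model are replaced here by fixed illustrative functions (a sharpness-score threshold and a hand-written label-to-adjustment mapping); the thresholds, labels, and step size are assumptions, not learned values.

```python
import numpy as np

def focus_label(cell_pixels, sharp_thresh=50.0, soft_thresh=10.0):
    """Stand-in for the trained classifier: label one cell crop by a
    simple sharpness score (variance of finite differences)."""
    score = np.var(np.diff(cell_pixels.astype(float), axis=0))
    if score >= sharp_thresh:
        return "in_focus"
    if score >= soft_thresh:
        return "slightly_blurred"
    return "out_of_focus"

def focal_adjustment(labels, step_um=5.0):
    """Stand-in for the trained machine learning model: map the label
    histogram to a focal-length correction (hypothetical units/sign)."""
    out_frac = labels.count("out_of_focus") / len(labels)
    if out_frac <= 0.5:
        return 0.0                 # mostly in focus: no correction
    return step_um * out_frac      # illustrative learned mapping

def autofocus_step(cell_crops):
    """One iteration of the adaptive-focusing loop: classify each cell,
    then derive the adjustment that would be sent to the microscope."""
    labels = [focus_label(c) for c in cell_crops]
    return {"labels": labels, "adjust_focal_length_um": focal_adjustment(labels)}
```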
Abstract:
Systems and methods are provided for classifying an abnormality in a medical image. An input medical image depicting a lesion is received. The lesion is localized in the input medical image using a trained localization network to generate a localization map. The lesion is classified based on the input medical image and the localization map using a trained classification network. The classification of the lesion is output. The trained localization network and the trained classification network are jointly trained.
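One common way a localization map can condition a classifier is as soft attention: the map gates the image before classification. The sketch below illustrates only that data flow; both "networks" are fixed toy functions here, whereas in the described system they are jointly trained, and the labels and threshold are illustrative assumptions.

```python
import numpy as np

def localize(image):
    """Stand-in for the trained localization network: a normalized
    intensity map acts as a crude lesion-probability heatmap."""
    m = image - image.min()
    return m / (m.max() + 1e-8)

def classify(image, loc_map, threshold=0.6):
    """Stand-in for the trained classification network: the localization
    map gates the image (soft attention) and a pooled statistic is
    thresholded into a class label."""
    attended = image * loc_map
    score = attended.mean() / (image.mean() + 1e-8)
    return ("malignant" if score >= threshold else "benign", score)
```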
Abstract:
The present invention relates to an improved method for marker-free detection of a cell type of at least one cell in a medium using microfluidics and digital holographic microscopy, as well as a device, in particular for carrying out the method.
Abstract:
A method and system for classification of endoscopic images is disclosed. An initial trained deep network classifier is used to classify endoscopic images and determine confidence scores for the endoscopic images. The confidence score for each endoscopic image classified by the initial trained deep network classifier is compared to a learned confidence threshold. For endoscopic images with confidence scores higher than the learned confidence threshold, the classification result from the initial trained deep network classifier is output. Endoscopic images with confidence scores lower than the learned confidence threshold are classified using a first specialized network classifier built on a feature space of the initial trained deep network classifier.
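The confidence-gated routing described above amounts to a two-stage cascade. A minimal sketch, assuming both classifiers are supplied by the caller as functions returning `(label, confidence)` over the same feature space (the models, labels, and threshold value are hypothetical):

```python
def cascade_classify(image_features, initial_model, specialized_model, tau):
    """Route low-confidence cases from the initial classifier to a
    specialized classifier built on the same feature space.
    tau is the learned confidence threshold."""
    label, confidence = initial_model(image_features)
    if confidence >= tau:
        return label, confidence, "initial"
    # fall back to the specialized classifier for hard cases
    return (*specialized_model(image_features), "specialized")
```

The design point is that the specialized classifier only pays its cost on the subset of images the initial network is unsure about.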
Abstract:
A method and system for calculating a volume of resected tissue from a stream of intraoperative images is disclosed. A stream of 2D/2.5D intraoperative images of resected tissue of a patient is received. The 2D/2.5D intraoperative images in the stream are acquired at different angles with respect to the resected tissue. A resected tissue surface is segmented in each of the 2D/2.5D intraoperative images. The segmented resected tissue surfaces are stitched to generate a 3D point cloud representation of the resected tissue surface. A 3D mesh representation of the resected tissue surface is generated from the 3D point cloud representation of the resected tissue surface. The volume of the resected tissue is calculated from the 3D mesh representation of the resected tissue surface.
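The final step above, computing a volume from a 3D mesh representation, has a standard closed-form solution via the divergence theorem: sum the signed volumes of tetrahedra formed by the origin and each triangle. A minimal sketch (assuming a closed, consistently oriented triangle mesh; the upstream segmentation and stitching steps are not shown):

```python
import numpy as np

def mesh_volume(vertices, triangles):
    """Volume enclosed by a closed, consistently oriented triangle mesh:
    sum of signed tetrahedra (origin, v0, v1, v2), divided by 6."""
    v = np.asarray(vertices, dtype=float)
    total = 0.0
    for a, b, c in triangles:
        total += np.dot(v[a], np.cross(v[b], v[c]))
    return abs(total) / 6.0
```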
Abstract:
A method and system for determining fractional flow reserve (FFR) for a coronary artery stenosis of a patient is disclosed. In one embodiment, medical image data of the patient including the stenosis is received, a set of features for the stenosis is extracted from the medical image data of the patient, and an FFR value for the stenosis is determined based on the extracted set of features using a trained machine-learning based mapping. In another embodiment, a medical image of the patient including the stenosis of interest is received, image patches corresponding to the stenosis of interest and a coronary tree of the patient are detected, and an FFR value for the stenosis of interest is determined using a trained deep neural network regressor applied directly to the detected image patches.
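The first embodiment's feature-to-FFR mapping can be illustrated with the simplest possible regressor. The linear model, feature choice, and weights below are placeholders for the trained mapping, not part of the disclosure; the 0.80 cutoff, however, is the commonly used clinical threshold for a hemodynamically significant stenosis.

```python
import numpy as np

def predict_ffr(features, weights, bias):
    """Stand-in for the trained machine-learning based mapping: a linear
    model from extracted stenosis features (e.g. minimal lumen diameter,
    lesion length) to an FFR value, clipped to the valid range [0, 1].
    In practice the weights/bias come from training on ground-truth FFR."""
    ffr = float(np.dot(features, weights) + bias)
    return max(0.0, min(1.0, ffr))

def is_hemodynamically_significant(ffr, cutoff=0.80):
    """FFR at or below ~0.80 is the commonly used clinical cutoff."""
    return ffr <= cutoff
```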
Abstract:
In a method for image guided prostate cancer needle biopsy, a first registration is performed to match a first image of a prostate to a second image of the prostate (210). Third images of the prostate are acquired and compounded into a three-dimensional (3D) image (220). The prostate in the compounded 3D image is segmented to show its border (230). A second registration and then a third registration different from the second registration are performed on distance maps generated from the prostate borders of the first image and the compounded 3D image, wherein the first and second registrations are based on a biomechanical property of the prostate (240). A region of interest in the first image is mapped to the compounded 3D image or a fourth image of the prostate acquired with the second modality (250).
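Registration on distance maps generated from segmented borders is essentially chamfer matching: a border is turned into a map of distances to the nearest border pixel, and an alignment is scored by how small the distance-map values are at the transformed border points. The sketch below is translation-only and brute-force, far simpler than the biomechanically constrained registrations described above; all names are illustrative.

```python
import numpy as np

def distance_map(border_points, shape):
    """Brute-force Euclidean distance transform of a set of border pixels
    (row, col): each grid cell gets its distance to the nearest border point."""
    ys, xs = np.indices(shape)
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1)
    pts = np.asarray(border_points, dtype=float)
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(shape)

def best_translation(dist_map, moving_border, shifts):
    """Chamfer-style registration: choose the shift minimizing the mean
    distance-map value at the shifted border points."""
    best, best_cost = None, np.inf
    for dy, dx in shifts:
        pts = np.asarray(moving_border) + (dy, dx)
        cost = dist_map[pts[:, 0], pts[:, 1]].mean()
        if cost < best_cost:
            best, best_cost = (dy, dx), cost
    return best, best_cost
```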
Abstract:
A projector in an endoscope is used to project visible light onto tissue. The projected intensity, color, and/or wavelength vary by spatial location in the field of view to provide an overlay. Rather than relying on a rendered overlay alpha-blended on a captured image, the illumination with spatial variation physically highlights one or more regions of interest or physically overlays on the tissue.
Abstract:
A method and system for registration of 2D/2.5D laparoscopic or endoscopic image data to 3D volumetric image data is disclosed. A plurality of 2D/2.5D intra-operative images of a target organ are received, together with corresponding relative orientation measurements for the intra-operative images. A 3D medical image volume of the target organ is registered to the plurality of 2D/2.5D intra-operative images by calculating pose parameters to match simulated projection images of the 3D medical image volume to the plurality of 2D/2.5D intra-operative images, and the registration is constrained by the relative orientation measurements for the intra-operative images.
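The constrained pose estimation above can be reduced to its skeleton: for each intra-operative image, search over candidate poses, scoring each by how well the simulated projection matches the image plus a penalty for deviating from the measured orientation. This grid-search sketch uses a single scalar pose per image and a caller-supplied `simulate` renderer standing in for projecting the 3D volume; the cost function and penalty weight are assumptions.

```python
import numpy as np

def register_pose(intraop_images, measured_angles, simulate, candidates, lam=1.0):
    """For each intra-operative image, pick the candidate pose whose
    simulated projection best matches it, penalized (weight lam) by
    squared deviation from the externally measured orientation."""
    poses = []
    for img, meas in zip(intraop_images, measured_angles):
        costs = [np.abs(simulate(a) - img).mean() + lam * (a - meas) ** 2
                 for a in candidates]
        poses.append(candidates[int(np.argmin(costs))])
    return poses
```

In the described system the pose has six degrees of freedom and would be found by a proper optimizer rather than exhaustive search, but the image term plus orientation-constraint term is the same structure.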