Abstract:
A system operating in a plurality of modes to provide an integrated analysis of molecular data, imaging data, and clinical data associated with a patient includes a multi-scale model, a molecular model, and a linking component. The multi-scale model is configured to generate one or more estimated multi-scale parameters based on the clinical data and the imaging data when the system operates in a first mode, and to generate a model of organ functionality based on one or more inferred multi-scale parameters when the system operates in a second mode. The molecular model is configured to generate one or more molecular findings based on a molecular network analysis of the molecular data, wherein the molecular model is constrained by the estimated multi-scale parameters when the system operates in the first mode. The linking component, which is operably coupled to the multi-scale model and the molecular model, is configured to transfer the estimated multi-scale parameters from the multi-scale model to the molecular model when the system operates in the first mode, and to generate, using a machine learning process, the inferred multi-scale parameters based on the molecular findings when the system operates in the second mode.
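The two operating modes can be pictured as a forward hand-off (mode 1) and a learned inverse mapping (mode 2). The sketch below is a minimal, hypothetical Python illustration of that dataflow; the class names, the linear regressor standing in for the machine learning process, and all numeric values are assumptions, not the disclosed implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression


class MultiScaleModel:
    def estimate_parameters(self, clinical_data, imaging_data):
        # Mode 1: estimate multi-scale parameters from clinical and imaging data.
        # Placeholder computation; a real model would be organ-specific.
        return np.atleast_1d(np.mean(np.concatenate([clinical_data, imaging_data])))

    def model_organ_function(self, inferred_parameters):
        # Mode 2: build an organ-level functional model from inferred parameters.
        return {"organ_function_index": float(np.mean(inferred_parameters))}


class MolecularModel:
    def analyze(self, molecular_data, constraints=None):
        # Molecular network analysis, optionally constrained by the transferred
        # multi-scale parameters (mode 1). Toy constraint: elementwise scaling.
        findings = np.asarray(molecular_data, dtype=float)
        return findings * constraints if constraints is not None else findings


class LinkingComponent:
    def __init__(self):
        self.regressor = LinearRegression()   # stand-in for the ML process

    def transfer(self, estimated_parameters):
        return estimated_parameters            # mode 1: hand-off to molecular model

    def fit(self, molecular_findings, multi_scale_parameters):
        self.regressor.fit(molecular_findings, multi_scale_parameters)

    def infer(self, molecular_findings):
        return self.regressor.predict(molecular_findings)   # mode 2


# Mode 1: constrain the molecular analysis with parameters from clinical/imaging data.
msm, mol, link = MultiScaleModel(), MolecularModel(), LinkingComponent()
params = msm.estimate_parameters([0.6, 1.2], [0.9, 1.1, 0.8])
findings = mol.analyze([0.2, 0.5, 0.7], constraints=link.transfer(params))

# Mode 2: learn the inverse mapping, then model organ function from molecular findings.
rng = np.random.default_rng(0)
link.fit(rng.random((20, 3)), rng.random((20, 2)))
organ = msm.model_organ_function(link.infer(findings.reshape(1, -1)))
```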
Abstract:
A method and system for integrating radiological and pathological information for cancer diagnosis, therapy selection, and monitoring is disclosed. A radiological image of a patient, such as a magnetic resonance (MR), computed tomography (CT), positron emission tomography (PET), or ultrasound image, is received. A location corresponding to each of one or more biopsy samples is determined in the radiological image. An integrated display is used to show, for each biopsy sample, the corresponding histological image, the radiological image, and the location of that sample in the radiological image. Pathological and radiological information are integrated by combining features extracted from the histological images with features extracted from the corresponding locations in the radiological image for cancer grading, prognosis prediction, and therapy selection.
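A minimal sketch of the integration step follows, assuming simple intensity statistics as stand-ins for the histological and radiological features and a scikit-learn random forest as the grading classifier; the feature extractors and the synthetic data are illustrative only, not the disclosed method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def histology_features(patch):
    # Intensity statistics standing in for nuclear/gland morphometry features.
    return np.array([patch.mean(), patch.std(), np.percentile(patch, 90)])


def radiology_features(roi):
    # Simple statistics of the MR/CT/PET/ultrasound intensities at the biopsy location.
    return np.array([roi.mean(), roi.std(), roi.max()])


def fused_feature_vector(histology_patch, radiology_roi):
    # Integration: concatenate pathological and radiological features per sample.
    return np.concatenate([histology_features(histology_patch),
                           radiology_features(radiology_roi)])


# Toy training set: one fused vector per biopsy sample, with a known grade label.
rng = np.random.default_rng(0)
X = np.stack([fused_feature_vector(rng.random((64, 64)), rng.random((16, 16, 16)))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)          # e.g. low vs. high grade
grader = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```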
Abstract:
Robotic navigation is provided for nuclear probe imaging. Using a three-dimensional scanner (19), the surface of a patient is determined (42). A calibrated robotic system positions (48) a nuclear probe about the patient based on the surface. The positioning (48) may be performed without contacting the patient, and the surface may be used in reconstruction to account for the spacing of the probe from the patient. By using the robotic system for positioning (48), the speed, resolution, and/or quality of the reconstructed image may be predetermined, user settable, and/or improved compared to manual scanning. The reconstruction (52) may be more computationally efficient by providing for regular spacing of radiation detection locations within the volume.
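The role of the scanned surface can be illustrated with a small planning sketch: probe poses are placed on a regular grid a fixed standoff above the surface, which keeps the probe off the patient and makes the detection locations evenly spaced for reconstruction. The surface model, grid spacing, and standoff below are assumptions for illustration.

```python
import numpy as np


def plan_probe_positions(surface_height, x_coords, y_coords, standoff=0.05, step=2):
    """surface_height[i, j] is the scanned patient-surface height (m) at (x_i, y_j).
    Returns probe positions hovering `standoff` metres above the surface on a
    regular grid, so radiation detection locations are evenly spaced."""
    positions = []
    for i in range(0, len(x_coords), step):
        for j in range(0, len(y_coords), step):
            positions.append((x_coords[i], y_coords[j],
                              surface_height[i, j] + standoff))
    return np.array(positions)


# Toy surface: a gentle bump standing in for the 3D-scanned patient surface.
x = np.linspace(0.0, 0.5, 50)
y = np.linspace(0.0, 0.3, 30)
surface = 0.1 * np.exp(-((x[:, None] - 0.25) ** 2 + (y[None, :] - 0.15) ** 2) / 0.01)
poses = plan_probe_positions(surface, x, y)   # fed to the calibrated robot
```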
Abstract:
A method for segmenting an image includes registering an annotated template image to an acquired reference image using only rigid transformations to define a transformation function relating the annotated template image to the acquired reference image (S101). The defined transformation function is refined by registering the annotated template image to the acquired reference image using only affine transformations (S102). The refined transformation function is further refined by registering the annotated template image to the acquired reference image using only multi-affine transformations (S103). The twice refined transformation function is further refined by registering the annotated template image to the acquired reference image using deformation transformations (S104).
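The four stages form a coarse-to-fine refinement of a single transformation function. The sketch below shows only that control flow; register_stage is a hypothetical placeholder for an actual registration routine, not an implementation of the method.

```python
def register_stage(template, reference, model, initial_transform):
    # Placeholder: a real implementation would optimize a similarity metric
    # under the given transformation model (rigid, affine, multi-affine, or
    # deformable), warm-started from the previous stage's transform.
    return {"model": model, "initialized_from": initial_transform}


def segment_by_hierarchical_registration(annotated_template, reference):
    transform = None
    for model in ("rigid", "affine", "multi_affine", "deformable"):  # S101-S104
        transform = register_stage(annotated_template, reference, model, transform)
    # The template's annotations, carried through the final transform,
    # give the segmentation of the reference image.
    return transform
```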
Abstract:
A method of identifying an optimum treatment for a patient suffering from coronary artery disease, comprising: (i) providing patient information selected from: (a) the status in the patient of one or more coronary-disease-associated biomarkers; (b) one or more items of medical history information selected from prior condition history, intervention history, and medication history; (c) one or more items of diagnostic history, if the patient has a diagnostic history; and (d) one or more items of demographic data; (ii) aggregating the patient information in: (a) a Bayesian network; (b) a machine learning and neural network system; (c) a rule-based system; and (d) a regression-based system; (iii) deriving a predicted probabilistic adverse event outcome for each candidate intervention, namely percutaneous coronary intervention by placement of a bare metal stent or a drug-coated stent, or coronary artery bypass grafting; and (iv) determining the intervention having the lowest predicted probabilistic adverse outcome.
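Steps (iii) and (iv) can be read as collecting one predicted adverse-event probability per model per intervention and selecting the intervention with the lowest aggregate. A minimal sketch follows; the probability values and the simple averaging rule are illustrative assumptions only.

```python
INTERVENTIONS = ("bare_metal_stent", "drug_coated_stent", "cabg")

# One predicted adverse-event probability per (model, intervention). In the
# method these would come from the Bayesian network, the machine learning /
# neural network system, the rule-based system, and the regression-based
# system; the numbers here are placeholders.
predicted_risk = {
    "bayesian_network": {"bare_metal_stent": 0.14, "drug_coated_stent": 0.09, "cabg": 0.11},
    "neural_network":   {"bare_metal_stent": 0.16, "drug_coated_stent": 0.08, "cabg": 0.12},
    "rule_based":       {"bare_metal_stent": 0.15, "drug_coated_stent": 0.10, "cabg": 0.10},
    "regression":       {"bare_metal_stent": 0.13, "drug_coated_stent": 0.07, "cabg": 0.12},
}


def aggregate_risk(intervention):
    # Simple mean across the four models; other aggregation rules are possible.
    scores = [model[intervention] for model in predicted_risk.values()]
    return sum(scores) / len(scores)


recommended = min(INTERVENTIONS, key=aggregate_risk)   # lowest predicted risk
```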
Abstract:
A method and system for multi-scale anatomical and functional modeling of coronary circulation is disclosed. A patient-specific anatomical model of the coronary arteries and the heart is generated from medical image data of a patient. A multi-scale functional model of coronary circulation is generated based on the patient-specific anatomical model. Blood flow is simulated in at least one stenosis region of at least one coronary artery using the multi-scale functional model of coronary circulation. Hemodynamic quantities, such as fractional flow reserve (FFR), are computed to determine a functional assessment of the stenosis, and virtual intervention simulations are performed using the multi-scale functional model of coronary circulation for decision support and intervention planning.
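As a worked illustration of the hemodynamic step, FFR is the ratio of the mean pressure distal to the stenosis to the mean aortic pressure under (simulated) hyperemia. The pressure traces below are synthetic placeholders, not output of the multi-scale model.

```python
import numpy as np


def fractional_flow_reserve(p_distal, p_aortic):
    """FFR = mean pressure distal to the stenosis / mean aortic pressure.
    Values below about 0.80 are commonly read as a functionally significant
    stenosis."""
    return float(np.mean(p_distal) / np.mean(p_aortic))


t = np.linspace(0.0, 1.0, 500)                      # one cardiac cycle (s)
p_aortic = 90.0 + 15.0 * np.sin(2 * np.pi * t)      # mmHg, synthetic trace
p_distal = 0.78 * p_aortic                          # synthetic post-stenotic drop
ffr = fractional_flow_reserve(p_distal, p_aortic)   # 0.78 for these traces
```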
Abstract:
A method for distinguishing between different tissue types imaged in a virtual slide includes receiving an image of a tissue (200), wherein the tissue has been treated with a first stain and a second stain, dividing the image into a plurality of image patches (201), accentuating a difference between portions of the tissue stained by the first stain and portions of the tissue stained by the second stain to generate a plurality of preprocessed image patches (202), extracting a plurality of feature descriptors from each of the preprocessed image patches (203) according to a distribution of the portions of the tissue stained by the first stain and the portions of the tissue stained by the second stain, and classifying each of the image patches according to the respective feature descriptors (204).
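A minimal sketch of the patch pipeline, assuming color-deconvolution-based stain separation (scikit-image) to accentuate the two stains, per-channel histograms as the feature descriptors, and an SVM classifier; these are illustrative substitutions, not the specific descriptors of the method.

```python
import numpy as np
from skimage.color import separate_stains, hed_from_rgb
from sklearn.svm import SVC


def patches(image, size=128):
    # Step 201: divide the virtual slide into non-overlapping patches.
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield image[y:y + size, x:x + size]


def stain_histogram(patch_rgb, bins=16):
    # Steps 202-203: unmix RGB into stain channels to accentuate the two
    # stains, then describe the patch by the distribution of each stain.
    stains = separate_stains(patch_rgb, hed_from_rgb)
    h1, _ = np.histogram(stains[..., 0], bins=bins)
    h2, _ = np.histogram(stains[..., 1], bins=bins)
    return np.concatenate([h1, h2]).astype(float)


rng = np.random.default_rng(1)
slide = rng.random((512, 512, 3))                    # stand-in for a virtual slide
X = np.stack([stain_histogram(p) for p in patches(slide)])
y = rng.integers(0, 2, size=len(X))                  # toy tissue-type labels
tissue_classifier = SVC().fit(X, y)                  # step 204
```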
Abstract:
A method for automatically classifying tissue includes obtaining training data including a plurality of microscope images that have been manually classified. A plurality of features is calculated from the training data, each of which is a texture feature, a network feature, or a morphometric feature. A subset of features is selected from the calculated features based on both maximum relevance and minimum redundancy. A classifier is trained based on the selected subset of features and the manual classifications. A diagnostic microscope image is classified in a computer-aided diagnostic system using the trained classifier.
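The selection step can be sketched as a greedy max-relevance/min-redundancy (mRMR) loop: relevance is the mutual information between a feature and the class label, redundancy the mean mutual information with features already selected. The data and classifier below are synthetic stand-ins; the actual texture, network, and morphometric features are not reproduced.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression
from sklearn.ensemble import RandomForestClassifier


def mrmr_select(X, y, k):
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]           # start with the most relevant feature
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([
                mutual_info_regression(X[:, [j]], X[:, s], random_state=0)[0]
                for s in selected])
            score = relevance[j] - redundancy        # maximize relevance, minimize redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected


rng = np.random.default_rng(2)
X = rng.random((80, 12))                 # 12 candidate texture/network/morphometric features
y = rng.integers(0, 3, size=80)          # manual classifications from training slides
features = mrmr_select(X, y, k=5)
classifier = RandomForestClassifier(random_state=0).fit(X[:, features], y)
```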
Abstract:
In a method for image guided prostate cancer needle biopsy, a first registration is performed to match a first image of a prostate to a second image of the prostate (210). Third images of the prostate are acquired and compounded into a three-dimensional (3D) image (220). The prostate in the compounded 3D image is segmented to show its border (230). A second registration and then a third registration different from the second registration are performed on distance maps generated from the prostate borders of the first image and the compounded 3D image, wherein the first and second registrations are based on a biomechanical property of the prostate (240). A region of interest in the first image is mapped to the compounded 3D image or a fourth image of the prostate acquired with the second modality (250).
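The final mapping step (250) amounts to carrying a region of interest outlined in the first image through the composed registration into the compounded 3D image. The sketch below assumes the registration results are available as 4x4 homogeneous matrices; the matrices and points are hypothetical placeholders, not results of the method.

```python
import numpy as np


def apply_transform(points, matrix):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points (mm)."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ matrix.T)[:, :3]


# Placeholder registration results: a translation followed by a small scaling.
registration_a = np.eye(4)
registration_a[:3, 3] = [2.0, -1.5, 0.5]
registration_b = np.eye(4)
registration_b[0, 0] = registration_b[1, 1] = 1.02

# Hypothetical lesion points outlined in the first image (mm).
roi_in_first_image = np.array([[10.0, 22.0, 35.0],
                               [11.0, 23.0, 35.5],
                               [12.0, 21.5, 36.0]])

composed = registration_b @ registration_a
roi_in_compounded_3d = apply_transform(roi_in_first_image, composed)
```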