-
Publication No.: US20210150710A1
Publication Date: 2021-05-20
Application No.: US17098422
Filing Date: 2020-11-15
Inventor: Mohammad Reza Hosseinzadeh Taher , Fatemeh Haghighi , Jianming Liang
Abstract: Not only is annotating medical images tedious and time-consuming, but it also demands costly, specialty-oriented expertise, which is not easily accessible. To address this challenge, a new self-supervised framework is introduced: TransVW (transferable visual words), exploiting the prowess of transfer learning with convolutional neural networks and the unsupervised nature of visual word extraction with bags of visual words, resulting in an annotation-efficient solution to medical image analysis. TransVW was evaluated using NIH ChestX-ray14 to demonstrate its annotation efficiency. When compared with training from scratch and ImageNet-based transfer learning, TransVW reduces the annotation efforts by 75% and 12%, respectively, in addition to significantly accelerating the convergence speed. More importantly, TransVW sets new records: achieving the best average AUC on all 14 diseases, the best individual AUC scores on 10 diseases, and the second-best individual AUC scores on 3 diseases. This performance is unprecedented, because heretofore no self-supervised learning method has outperformed ImageNet-based transfer learning and no annotation reduction has been reported for self-supervised learning. These achievements are attributable to a simple yet powerful observation: the complex and recurring anatomical structures in medical images are natural visual words, which can be automatically extracted, serving as strong yet free supervision signals for CNNs to learn generalizable and transferable image representations via self-supervision.
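The core observation above — recurring anatomy yields free labels — can be sketched as follows. Because scans of similar anatomy are roughly aligned, cropping the same coordinates from each scan yields many instances of one "visual word" class without any manual annotation. The function name, interface, and alignment assumption here are illustrative, not taken from the patent.

```python
import numpy as np

def extract_visual_words(scans, coords, patch_size):
    """Crop the same coordinates from roughly aligned scans; each coordinate
    defines one visual-word class (an assumed, simplified interface)."""
    ph, pw = patch_size
    words = {}
    for label, (i, j) in enumerate(coords):
        # every scan contributes one instance of visual word `label`
        words[label] = [s[i:i + ph, j:j + pw] for s in scans]
    return words  # label -> list of instances of that visual word
```

Each label/instance pair can then supervise a classification or restoration pretext task, which is what makes the supervision "strong yet free".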
-
Publication No.: US11763952B2
Publication Date: 2023-09-19
Application No.: US17180575
Filing Date: 2021-02-19
Inventor: Fatemeh Haghighi , Mohammad Reza Hosseinzadeh Taher , Zongwei Zhou , Jianming Liang
IPC: G06K9/00 , G16H50/70 , G16H30/40 , G16H30/20 , G06F16/55 , G06N3/08 , G06F16/583 , G06F18/28 , G06F18/214 , G06V10/772 , G06V10/82
CPC classification number: G16H50/70 , G06F16/55 , G06F16/583 , G06F18/214 , G06F18/28 , G06N3/08 , G06V10/772 , G06V10/82 , G16H30/20 , G16H30/40
Abstract: Described herein are means for learning semantics-enriched representations via self-discovery, self-classification, and self-restoration in the context of medical imaging. Embodiments include the training of deep models to learn semantically enriched visual representation by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a collection of semantics-enriched pre-trained models, called Semantic Genesis. Other related embodiments are disclosed.
-
Publication No.: US11436725B2
Publication Date: 2022-09-06
Application No.: US17098422
Filing Date: 2020-11-15
Inventor: Mohammad Reza Hosseinzadeh Taher , Fatemeh Haghighi , Jianming Liang
Abstract: Not only is annotating medical images tedious and time-consuming, but it also demands costly, specialty-oriented expertise, which is not easily accessible. To address this challenge, a new self-supervised framework is introduced: TransVW (transferable visual words), exploiting the prowess of transfer learning with convolutional neural networks and the unsupervised nature of visual word extraction with bags of visual words, resulting in an annotation-efficient solution to medical image analysis. TransVW was evaluated using NIH ChestX-ray14 to demonstrate its annotation efficiency. When compared with training from scratch and ImageNet-based transfer learning, TransVW reduces the annotation efforts by 75% and 12%, respectively, in addition to significantly accelerating the convergence speed. More importantly, TransVW sets new records: achieving the best average AUC on all 14 diseases, the best individual AUC scores on 10 diseases, and the second-best individual AUC scores on 3 diseases. This performance is unprecedented, because heretofore no self-supervised learning method has outperformed ImageNet-based transfer learning and no annotation reduction has been reported for self-supervised learning. These achievements are attributable to a simple yet powerful observation: the complex and recurring anatomical structures in medical images are natural visual words, which can be automatically extracted, serving as strong yet free supervision signals for CNNs to learn generalizable and transferable image representations via self-supervision.
-
Publication No.: US20240078666A1
Publication Date: 2024-03-07
Application No.: US18241809
Filing Date: 2023-09-01
Inventor: Jiaxuan Pang , Fatemeh Haghighi , DongAo Ma , Nahid Ul Islam , Mohammad Reza Hosseinzadeh Taher , Jianming Liang
CPC classification number: G06T7/0012 , G06T7/11 , G06V10/54 , G16H30/40 , G06T2207/20081 , G06V2201/03
Abstract: A self-supervised machine learning method and system for learning visual representations in medical images. The system receives a plurality of medical images of similar anatomy, divides each of the plurality of medical images into its own sequence of non-overlapping patches, wherein a unique portion of each medical image appears in each patch in the sequence of non-overlapping patches. The system then randomizes the sequence of non-overlapping patches for each of the plurality of medical images, and randomly distorts the unique portion of each medical image that appears in each patch in the sequence of non-overlapping patches for each of the plurality of medical images. Thereafter, the system learns, via a vision transformer network, patch-wise high-level contextual features in the plurality of medical images, and simultaneously, learns, via the vision transformer network, fine-grained features embedded in the plurality of medical images.
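The patch-randomization step described above can be sketched as follows: split an image into non-overlapping patches, then shuffle their order. The returned permutation is the free supervisory signal the vision transformer is trained to undo. The interface and patch handling are assumptions for illustration, not the patent's exact design.

```python
import numpy as np

def shuffle_patches(image, patch_size, rng):
    """Split a 2D image into non-overlapping patches and shuffle their order.
    Returns the shuffled patches and the permutation (the pretext label)."""
    h, w = image.shape
    ph, pw = patch_size
    patches = [image[i:i + ph, j:j + pw]
               for i in range(0, h, ph)
               for j in range(0, w, pw)]
    order = rng.permutation(len(patches))       # the label to be recovered
    return [patches[k] for k in order], order
```

Random distortion of each patch (the abstract's second transformation) would be applied to the shuffled patches before they are fed to the transformer.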
-
Publication No.: US20230281805A1
Publication Date: 2023-09-07
Application No.: US18111136
Filing Date: 2023-02-17
Inventor: Fatemeh Haghighi , Mohammad Reza Hosseinzadeh Taher , Jianming Liang
CPC classification number: G06T7/0012 , G06T5/002 , G06V10/761 , G06V10/762 , G16H30/40 , G16H50/20 , G06T2207/20081 , G06T2207/20132 , G06T2207/30096
Abstract: A Discriminative, Restorative, and Adversarial (DiRA) learning framework for self-supervised medical image analysis is described. For instance, a pre-trained DiRA framework may be applied to diagnosis and detection of new medical images which form no part of the training data. The exemplary DiRA framework includes means for receiving training data having medical images therein and applying discriminative learning, restorative learning, and adversarial learning via the DiRA framework by cropping patches from the medical images; inputting the cropped patches to the discriminative and restorative learning branches to generate discriminative latent features and synthesized images from each; and applying adversarial learning by executing an adversarial discriminator to perform a min-max function for distinguishing the synthesized restorative image from real medical images. The pre-trained model of the DiRA framework is then provided as output for use in generating predictions of disease within medical images.
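How the three DiRA branches might combine into a single training objective can be sketched schematically. The weights, the log-similarity discriminative term, the L2 restorative term, and the standard GAN-style adversarial term below are all assumptions for illustration; the patent does not commit to these exact losses.

```python
import numpy as np

def dira_loss(disc_sim, restored, original, d_real, d_fake,
              w_dis=1.0, w_res=1.0, w_adv=1.0):
    """Combine discriminative, restorative, and adversarial terms (a sketch)."""
    l_dis = -np.log(disc_sim)                    # discriminative branch
    l_res = np.mean((restored - original) ** 2)  # restorative branch (L2)
    # adversarial min-max: discriminator scores on real vs. synthesized images
    l_adv = -np.mean(np.log(d_real) + np.log(1.0 - d_fake))
    return w_dis * l_dis + w_res * l_res + w_adv * l_adv
```

During training, the encoder minimizes this combined loss while the adversarial discriminator maximizes its own classification accuracy — the min-max game the abstract refers to.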
-
Publication No.: US20230196642A1
Publication Date: 2023-06-22
Application No.: US18085145
Filing Date: 2022-12-20
Inventor: Mohammad Reza Hosseinzadeh Taher , Fatemeh Haghighi , Jianming Liang
CPC classification number: G06T11/008 , G06T7/0012 , G06T2210/22 , G06T2210/41 , G06T2207/20081 , G06T2207/20084 , G06T2207/10116
Abstract: A self-supervised learning framework for empowering instance discrimination in medical imaging using Context-Aware instance Discrimination (CAiD), in which the trained deep models are then utilized for the processing of medical imaging. An exemplary system receives a plurality of medical images; trains a self-supervised learning framework to increase instance discrimination for medical imaging using a Context-Aware instance Discrimination (CAiD) model with the received plurality of medical images; generates multiple cropped image samples and augments the samples using image distortion; applies instance discrimination learning to map each sample back to its corresponding original image; reconstructs the cropped image samples and applies an auxiliary context-aware learning loss operation; and generates as output a pre-trained CAiD model based on the application of both (i) the instance discrimination learning and (ii) the auxiliary context-aware learning loss operation.
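The sample-generation step described above can be sketched as follows: random crops are taken from each image and distorted, and every crop keeps its source image's index as the instance-discrimination label it must map back to. Additive-noise distortion and this interface are illustrative assumptions, not the patent's specific augmentations.

```python
import numpy as np

def make_instances(images, crop_size, n_crops, rng, noise_std=0.1):
    """Generate distorted crops; each crop's label is its source image index."""
    ch, cw = crop_size
    samples, labels = [], []
    for idx, img in enumerate(images):
        h, w = img.shape
        for _ in range(n_crops):
            i = int(rng.integers(0, h - ch + 1))
            j = int(rng.integers(0, w - cw + 1))
            crop = img[i:i + ch, j:j + cw].astype(float)
            crop = crop + rng.normal(0.0, noise_std, crop.shape)  # distortion
            samples.append(crop)
            labels.append(idx)  # maps the sample back to its original image
    return samples, labels
```

The auxiliary context-aware loss would additionally require the model to reconstruct each crop, so the embedding retains contextual detail beyond what instance discrimination alone demands.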
-
Publication No.: US20210343014A1
Publication Date: 2021-11-04
Application No.: US17246032
Filing Date: 2021-04-30
Inventor: Fatemeh Haghighi , Mohammad Reza Hosseinzadeh Taher , Zongwei Zhou , Jianming Liang
Abstract: Described herein are means for the generation of semantic genesis models through self-supervised learning in the absence of manual labeling, in which the trained semantic genesis models are then utilized for the processing of medical imaging. For instance, an exemplary system is specially configured with means for performing a self-discovery operation which crops 2D patches or crops 3D cubes from similar patient scans received at the system as input; means for transforming each anatomical pattern represented within the cropped 2D patches or the cropped 3D cubes to generate transformed 2D anatomical patterns or transformed 3D anatomical patterns; means for performing a self-classification operation of the transformed anatomical patterns by formulating a C-way multi-class classification task for representation learning; means for performing a self-restoration operation by recovering original anatomical patterns from the transformed 2D patches or transformed 3D cubes having transformed anatomical patterns embedded therein to learn different sets of visual representation; and means for providing a semantics-enriched pre-trained AI model having a trained encoder-decoder structure with skip connections in between based on the performance of the self-discovery operation, the self-classification operation, and the self-restoration operation. Other related embodiments are disclosed.
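The self-restoration operation described above can be sketched as a training pair: an anatomical pattern is transformed, and the encoder-decoder must recover the original. The simple intensity inversion below stands in for the patent's transformations, and the interface is an assumption for illustration.

```python
import numpy as np

def restoration_pair(patch):
    """Build a (transformed, target) pair for the self-restoration task.
    Intensity inversion is an illustrative stand-in transformation."""
    transformed = patch.max() - patch   # the network's input
    target = patch                      # the original pattern to restore
    return transformed, target

def restoration_loss(predicted, target):
    """L2 restoration loss (an assumed choice)."""
    return float(np.mean((predicted - target) ** 2))
```

In the full Semantic Genesis pipeline, the same transformed patches also carry a C-way pseudo-class label from self-discovery, so classification and restoration are learned jointly.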
-
Publication No.: US20230306723A1
Publication Date: 2023-09-28
Application No.: US18126318
Filing Date: 2023-03-24
Inventor: DongAo Ma , Jiaxuan Pang , Nahid Ul Islam , Mohammad Reza Hosseinzadeh Taher , Fatemeh Haghighi , Jianming Liang
IPC: G06T7/00 , G06V10/776 , G06V10/774 , G06V10/764 , G06N3/0895
CPC classification number: G06V10/774 , G06N3/0895 , G06T7/0012 , G06V10/764 , G06V10/776 , G06T2207/10116 , G06T2207/20081 , G06T2207/20092 , G06T2207/30004 , G06V2201/03
Abstract: Described herein are systems, methods, and apparatuses for implementing self-supervised domain-adaptive pre-training via a transformer for use with medical image classification in the context of medical image analysis. An exemplary system includes means for receiving a first set of training data having non-medical photographic images; receiving a second set of training data with medical images; pre-training an AI model on the first set of training data with the non-medical photographic images; performing domain-adaptive pre-training of the AI model via self-supervised learning operations using the second set of training data having the medical images; generating a trained domain-adapted AI model by fine-tuning the AI model against a targeted medical diagnosis task using the second set of training data having the medical images; outputting the trained domain-adapted AI model; and executing the trained domain-adapted AI model to generate a predicted medical diagnosis from an input image not present within the training data.
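The three-stage pipeline above can be sketched as a simple training schedule: generic pre-training on photographic images, domain-adaptive self-supervised pre-training on medical images, then supervised fine-tuning on the target diagnosis task. The callback interface is an illustrative assumption, not the patent's API.

```python
def domain_adaptive_pipeline(model, photo_batches, medical_batches,
                             pretrain_step, ssl_step, finetune_step):
    """Run the three pre-training/fine-tuning stages in sequence (a sketch)."""
    for batch in photo_batches:     # stage 1: non-medical pre-training
        model = pretrain_step(model, batch)
    for batch in medical_batches:   # stage 2: domain-adaptive self-supervision
        model = ssl_step(model, batch)
    for batch in medical_batches:   # stage 3: fine-tune for diagnosis
        model = finetune_step(model, batch)
    return model
```

The key design choice is the middle stage: self-supervision on in-domain medical images bridges the gap between photographic pre-training and the diagnosis task before any labels are consumed.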
-
Publication No.: US20230116897A1
Publication Date: 2023-04-13
Application No.: US17961896
Filing Date: 2022-10-07
Inventor: Mohammad Reza Hosseinzadeh Taher , Fatemeh Haghighi , Ruibin Feng , Jianming Liang
IPC: G16H50/20 , G06T7/00 , G06V10/774 , G06V10/82
Abstract: Described herein are means for implementing systematic benchmarking analysis to improve transfer learning for medical image analysis. An exemplary system is configured with specialized instructions to cause the system to perform operations including: receiving training data having a plurality of medical images therein; iteratively transforming each medical image from the training data into a transformed image by executing instructions for resizing and cropping each respective medical image from the training data to form a plurality of transformed images; applying data augmentation operations to the transformed images; applying segmentation operations to the augmented images; pre-training an AI model on different input images which are not included in the training data by executing self-supervised learning for the AI model; fine-tuning the pre-trained AI model to generate a pre-trained diagnosis and detection AI model; applying the pre-trained diagnosis and detection AI model to a new medical image to render a prediction as to the presence or absence of a disease within the new medical image; and outputting the prediction as a predictive medical diagnosis for a medical patient.
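The resize-crop-augment chain described above can be sketched as follows. The concrete operations (nearest-neighbour subsampling as the resize, a random horizontal flip as the augmentation) are stand-ins; the patent does not fix specific resize or augmentation methods.

```python
import numpy as np

def prepare_image(image, out_size, crop_size, rng):
    """Resize, randomly crop, then augment one image (an illustrative chain)."""
    h, w = image.shape
    oh, ow = out_size
    # nearest-neighbour "resize" via index sampling
    rows = np.arange(oh) * h // oh
    cols = np.arange(ow) * w // ow
    resized = image[np.ix_(rows, cols)]
    # random crop to the training input size
    ch, cw = crop_size
    i = int(rng.integers(0, oh - ch + 1))
    j = int(rng.integers(0, ow - cw + 1))
    cropped = resized[i:i + ch, j:j + cw].astype(float)
    # augmentation: random horizontal flip
    if rng.random() < 0.5:
        cropped = cropped[:, ::-1]
    return cropped
```

Benchmarking then amounts to holding this preparation fixed while swapping pre-training strategies, so that differences in downstream AUC reflect the representation rather than the pipeline.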
-
Publication No.: US20220309811A1
Publication Date: 2022-09-29
Application No.: US17676134
Filing Date: 2022-02-19
Inventor: Fatemeh Haghighi , Mohammad Reza Hosseinzadeh Taher , Zongwei Zhou , Jianming Liang
IPC: G06V20/70 , G06V10/764 , G06V10/82 , G06V10/774 , G06V10/26 , G06V10/74
Abstract: Described herein are means for the generation of Transferable Visual Word (TransVW) models through self-supervised learning in the absence of manual labeling, in which the trained TransVW models are then utilized for the processing of medical imaging. For instance, an exemplary system is specially configured to perform self-supervised learning for an AI model in the absence of manually labeled input, by performing the following operations: receiving medical images as input; performing a self-discovery operation of anatomical patterns by building a set of the anatomical patterns from the medical images received at the system, performing a self-classification operation of the anatomical patterns; performing a self-restoration operation of the anatomical patterns within cropped and transformed 2D patches or 3D cubes derived from the medical images received at the system by recovering original anatomical patterns to learn different sets of visual representation; and providing a semantics-enriched pre-trained AI model having a trained encoder-decoder structure with skip connections in between based on the performance of the self-discovery operation, the self-classification operation, and the self-restoration operation. Other related embodiments are disclosed.