SYSTEMS, METHODS, AND APPARATUSES FOR IMPLEMENTING A SELF-SUPERVISED CHEST X-RAY IMAGE ANALYSIS MACHINE-LEARNING MODEL UTILIZING TRANSFERABLE VISUAL WORDS

    Publication Number: US20210150710A1

    Publication Date: 2021-05-20

    Application Number: US17098422

    Application Date: 2020-11-15

    Abstract: Not only is annotating medical images tedious and time-consuming, but it also demands costly, specialty-oriented expertise, which is not easily accessible. To address this challenge, a new self-supervised framework is introduced: TransVW (transferable visual words), which exploits the prowess of transfer learning with convolutional neural networks and the unsupervised nature of visual word extraction with bags of visual words, resulting in an annotation-efficient solution to medical image analysis. TransVW was evaluated on NIH ChestX-ray14 to demonstrate its annotation efficiency. Compared with training from scratch and ImageNet-based transfer learning, TransVW reduces annotation effort by 75% and 12%, respectively, in addition to significantly accelerating convergence. More importantly, TransVW sets new records: the best average AUC across all 14 diseases, the best individual AUC scores on 10 diseases, and the second-best individual AUC scores on 3 diseases. This performance is unprecedented, because heretofore no self-supervised learning method has outperformed ImageNet-based transfer learning and no annotation reduction has been reported for self-supervised learning. These achievements are attributable to a simple yet powerful observation: the complex and recurring anatomical structures in medical images are natural visual words, which can be automatically extracted and serve as strong yet free supervision signals for CNNs to learn generalizable and transferable image representations via self-supervision.
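    The core claim, that recurring anatomical patches can be harvested automatically and used as free labels, can be pictured with a short sketch. Below is a minimal, hypothetical Python illustration (NumPy and scikit-learn): patches are cropped at shared coordinates across roughly aligned chest X-rays and clustered, and each cluster id then serves as a "visual word" pseudo-label. The coordinates, patch size, and k-means clustering are assumptions made for illustration, not the extraction procedure claimed in the patent.

```python
# Hypothetical sketch of visual-word extraction (not the patented procedure):
# crop patches at shared coordinates across roughly aligned chest X-rays and
# cluster them, so each cluster id becomes a free pseudo-label.
import numpy as np
from sklearn.cluster import KMeans

def extract_visual_words(scans, coords, patch_size=64, n_words=100, seed=0):
    """scans: (N, H, W) array of aligned X-rays; coords: list of (y, x) corners."""
    patches = []
    for y, x in coords:                       # same anatomical locations in every scan
        for img in scans:
            patches.append(img[y:y + patch_size, x:x + patch_size].ravel())
    patches = np.asarray(patches, dtype=np.float32)
    normed = patches - patches.mean(axis=1, keepdims=True)   # crude intensity normalization
    word_ids = KMeans(n_clusters=n_words, n_init=10,
                      random_state=seed).fit_predict(normed)
    # Each (patch, word_id) pair is a training example for self-supervision.
    return patches.reshape(-1, patch_size, patch_size), word_ids
```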

    Systems, methods, and apparatuses for implementing a self-supervised chest x-ray image analysis machine-learning model utilizing transferable visual words

    Publication Number: US11436725B2

    Publication Date: 2022-09-06

    Application Number: US17098422

    Application Date: 2020-11-15

    Abstract: Not only is annotating medical images tedious and time-consuming, but it also demands costly, specialty-oriented expertise, which is not easily accessible. To address this challenge, a new self-supervised framework is introduced: TransVW (transferable visual words), which exploits the prowess of transfer learning with convolutional neural networks and the unsupervised nature of visual word extraction with bags of visual words, resulting in an annotation-efficient solution to medical image analysis. TransVW was evaluated on NIH ChestX-ray14 to demonstrate its annotation efficiency. Compared with training from scratch and ImageNet-based transfer learning, TransVW reduces annotation effort by 75% and 12%, respectively, in addition to significantly accelerating convergence. More importantly, TransVW sets new records: the best average AUC across all 14 diseases, the best individual AUC scores on 10 diseases, and the second-best individual AUC scores on 3 diseases. This performance is unprecedented, because heretofore no self-supervised learning method has outperformed ImageNet-based transfer learning and no annotation reduction has been reported for self-supervised learning. These achievements are attributable to a simple yet powerful observation: the complex and recurring anatomical structures in medical images are natural visual words, which can be automatically extracted and serve as strong yet free supervision signals for CNNs to learn generalizable and transferable image representations via self-supervision.
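    Given such pseudo-labels, the pre-training step can be viewed as an ordinary classification pretext task whose labels cost nothing to produce. The PyTorch sketch below shows the shape of that training loop; the small encoder, the Adam optimizer, and all hyperparameters are illustrative assumptions rather than the architecture or schedule used in the patent.

```python
# Hedged sketch of visual-word pre-training as a classification pretext task.
# The encoder and hyperparameters are assumptions, not the patented model.
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Small CNN that predicts which visual word a patch belongs to."""
    def __init__(self, n_words=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_words)

    def forward(self, x):                     # x: (B, 1, H, W) batch of patches
        return self.head(self.features(x).flatten(1))

def pretrain(model, loader, epochs=5, lr=1e-3):
    """Pretext task: classify each patch into its visual word (no manual labels)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for patches, word_ids in loader:      # word_ids come from the clustering step
            opt.zero_grad()
            loss_fn(model(patches), word_ids).backward()
            opt.step()
    return model            # the learned encoder then transfers to downstream fine-tuning
```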

    SYSTEMS, METHODS, AND APPARATUSES FOR THE USE OF TRANSFERABLE VISUAL WORDS FOR AI MODELS THROUGH SELF-SUPERVISED LEARNING IN THE ABSENCE OF MANUAL LABELING FOR THE PROCESSING OF MEDICAL IMAGING

    Publication Number: US20210343014A1

    Publication Date: 2021-11-04

    Application Number: US17246032

    Application Date: 2021-04-30

    Abstract: Described herein are means for the generation of semantic genesis models through self-supervised learning in the absence of manual labeling, in which the trained semantic genesis models are then utilized for the processing of medical imaging. For instance, an exemplary system is specially configured with means for performing a self-discovery operation which crops 2D patches or 3D cubes from similar patient scans received at the system as input; means for transforming each anatomical pattern represented within the cropped 2D patches or 3D cubes to generate transformed 2D or 3D anatomical patterns; means for performing a self-classification operation on the transformed anatomical patterns by formulating a C-way multi-class classification task for representation learning; means for performing a self-restoration operation which recovers the original anatomical patterns from the transformed 2D patches or 3D cubes having the transformed anatomical patterns embedded therein, so as to learn different sets of visual representations; and means for providing a semantics-enriched pre-trained AI model having a trained encoder-decoder structure with skip connections, based on the performance of the self-discovery, self-classification, and self-restoration operations. Other related embodiments are disclosed.
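    To make the flow of these operations easier to follow, the sketch below wires a deliberately small PyTorch encoder-decoder with one skip connection to two heads: a classification head for the C-way pattern task and a restoration head that reconstructs the original patch, trained with a joint loss. The layer sizes, the single skip connection, and the loss weighting are assumptions for illustration, not the claimed model.

```python
# Hedged sketch: encoder-decoder with a skip connection, a self-classification
# head, and a self-restoration head. Sizes and loss weights are assumptions.
import torch
import torch.nn as nn

class SemanticsEnrichedNet(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.cls = nn.Linear(32, n_classes)                 # C-way self-classification
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Conv2d(32, 1, 3, padding=1)           # self-restoration output

    def forward(self, x):                                   # x: transformed patch batch
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        logits = self.cls(e2.mean(dim=(2, 3)))              # which anatomical pattern?
        d = torch.cat([self.up(e2), e1], dim=1)             # skip connection
        restored = self.dec(d)                              # recover the original patch
        return logits, restored

def pretext_loss(logits, restored, pattern_ids, originals, w=1.0):
    """Joint objective: classify the pattern and restore the untransformed patch."""
    return (nn.functional.cross_entropy(logits, pattern_ids)
            + w * nn.functional.mse_loss(restored, originals))
```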

    SYSTEMS, METHODS, AND APPARATUSES FOR IMPLEMENTING SYSTEMATIC BENCHMARKING ANALYSIS TO IMPROVE TRANSFER LEARNING FOR MEDICAL IMAGE ANALYSIS

    Publication Number: US20230116897A1

    Publication Date: 2023-04-13

    Application Number: US17961896

    Application Date: 2022-10-07

    Abstract: Described herein are means for implementing systematic benchmarking analysis to improve transfer learning for medical image analysis. An exemplary system is configured with specialized instructions to cause the system to perform operations including: receiving training data having a plurality of medical images therein; iteratively transforming each medical image from the training data into a transformed image by executing instructions for resizing and cropping each respective medical image to form a plurality of transformed images; applying data augmentation operations to the transformed images; applying segmentation operations to the augmented images; pre-training an AI model on different input images which are not included in the training data by executing self-supervised learning for the AI model; fine-tuning the pre-trained AI model to generate a pre-trained diagnosis and detection AI model; applying the pre-trained diagnosis and detection AI model to a new medical image to render a prediction as to the presence or absence of a disease within the new medical image; and outputting the prediction as a predictive medical diagnosis for a medical patient.
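    The listed operations map onto a conventional pre-train-then-fine-tune workflow, sketched below with PyTorch and torchvision. The ResNet-18 backbone, the 256/224 resize-and-crop sizes, the grayscale first layer, and the 0.5 decision threshold are illustrative assumptions rather than the benchmarked configuration.

```python
# Hedged sketch of the resize/crop, augmentation, fine-tuning, and prediction
# steps. Backbone, sizes, and threshold are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Resize, crop, and augmentation applied to each training image (used by the DataLoader).
train_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

def build_finetune_model(pretrained_weights=None, n_findings=1):
    """Load a pre-trained backbone and attach a fresh diagnosis head."""
    model = models.resnet18(weights=None)
    model.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)   # grayscale input
    if pretrained_weights is not None:
        model.load_state_dict(pretrained_weights, strict=False)          # from self-supervised pre-training
    model.fc = nn.Linear(model.fc.in_features, n_findings)               # disease logit(s)
    return model

def predict_disease(model, image, threshold=0.5):
    """Apply the fine-tuned model to one (1, H, W) image tensor and report presence/absence."""
    model.eval()
    with torch.no_grad():
        prob = torch.sigmoid(model(image.unsqueeze(0))).item()
    return {"probability": prob, "disease_present": prob >= threshold}
```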
