METHODS, SYSTEMS, AND MEDIA FOR SEGMENTING IMAGES

    Publication Number: US20200380695A1

    Publication Date: 2020-12-03

    Application Number: US16885579

    Application Date: 2020-05-28

    Abstract: Methods, systems, and media for segmenting images are provided. In some embodiments, the method comprises: generating an aggregate U-Net comprised of a plurality of U-Nets, wherein each U-Net in the plurality of U-Nets has a different depth, wherein each U-Net is comprised of a plurality of nodes Xi,j, wherein i indicates a down-sampling layer of the U-Net, and wherein j indicates a convolution layer of the U-Net; training the aggregate U-Net by: for each training sample in a group of training samples, calculating, for each node in the plurality of nodes Xi,j, a feature map xi,j, wherein xi,j is based on a convolution operation performed on a down-sampling of an output from Xi−1,j when j=0, and wherein xi,j is based on a convolution operation performed on an up-sampling operation of an output from Xi+1,j−1 when j>0; and predicting a segmentation of a test image using the trained aggregate U-Net.
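
    The following is a minimal sketch, assuming a PyTorch implementation (the abstract does not specify one), of the nested node computation described above: each node X[i][j] produces a feature map from a down-sampled output of node X[i−1][0] when j = 0, and from an up-sampled output of node X[i+1][j−1] when j > 0. The full grid of nodes implicitly contains the U-Nets of different depths that make up the aggregate; all class and function names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, a common choice for U-Net style nodes.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class NestedUNetSketch(nn.Module):
    def __init__(self, in_ch=1, base_ch=32, depth=4):
        super().__init__()
        self.depth = depth
        chs = [base_ch * (2 ** i) for i in range(depth)]
        self.nodes = nn.ModuleDict()
        for i in range(depth):
            for j in range(depth - i):
                if j == 0:
                    src = in_ch if i == 0 else chs[i - 1]   # down-sampled X[i-1][0]
                else:
                    src = chs[i + 1]                        # up-sampled X[i+1][j-1]
                self.nodes[f"{i}_{j}"] = conv_block(src, chs[i])

    def forward(self, x):
        feats = {}
        # Backbone column (j = 0): convolve a down-sampling of the node above.
        for i in range(self.depth):
            inp = x if i == 0 else F.max_pool2d(feats[(i - 1, 0)], 2)
            feats[(i, 0)] = self.nodes[f"{i}_0"](inp)
        # Nested columns (j > 0): convolve an up-sampling of X[i+1][j-1].
        for j in range(1, self.depth):
            for i in range(self.depth - j):
                up = F.interpolate(feats[(i + 1, j - 1)], scale_factor=2,
                                   mode="bilinear", align_corners=False)
                feats[(i, j)] = self.nodes[f"{i}_{j}"](up)
        # A 1x1 convolution head would map this top-level map to segmentation logits.
        return feats[(0, self.depth - 1)]
```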

    Systems, methods, and apparatuses for actively and continually fine-tuning convolutional neural networks to reduce annotation requirements

    Publication Number: US12216737B2

    Publication Date: 2025-02-04

    Application Number: US17698805

    Application Date: 2022-03-18

    Abstract: Described herein are systems, methods, and apparatuses for actively and continually fine-tuning convolutional neural networks to reduce annotation requirements, in which the trained networks are then utilized in the context of medical imaging. The success of convolutional neural networks (CNNs) in computer vision is largely attributable to the availability of massive annotated datasets, such as ImageNet and Places. However, creating large annotated datasets is tedious, laborious, and time-consuming, and demands costly, specialty-oriented skills. A novel method to naturally integrate active learning and transfer learning (fine-tuning) into a single framework is presented to dramatically reduce annotation cost, starting with a pre-trained CNN to seek “worthy” samples for annotation and gradually enhancing the (fine-tuned) CNN via continual fine-tuning. The described method was evaluated using three distinct medical imaging applications, demonstrating that it can reduce annotation efforts by at least half compared with random selection.
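
    The following is a minimal sketch, under assumed PyTorch conventions, of the active and continual fine-tuning loop described above: a pre-trained classifier scores an unlabeled pool, the most uncertain (“worthy”) samples are sent to an annotator, and the same model is continually fine-tuned on the growing labeled set. The entropy-based worthiness criterion, the annotate callback, and all other names are illustrative assumptions, not the patent's reference implementation.

```python
import torch
import torch.nn.functional as F

def entropy(logits):
    # Predictive entropy as a simple stand-in for a "worthiness" criterion.
    p = F.softmax(logits, dim=1)
    return -(p * p.log().clamp(min=-30)).sum(dim=1)

def active_finetune(model, unlabeled, annotate, rounds=5, query_size=16,
                    epochs_per_round=2, lr=1e-4, device="cpu"):
    # `unlabeled` is a list of image tensors; `annotate(x)` returns an int label.
    model.to(device)
    labeled_x, labeled_y = [], []
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(rounds):
        if not unlabeled:
            break
        # 1. Score the remaining unlabeled pool with the current (fine-tuned) model.
        model.eval()
        with torch.no_grad():
            scores = torch.cat([entropy(model(x.unsqueeze(0).to(device)))
                                for x in unlabeled])
        # 2. Query the most uncertain samples and obtain their labels.
        idx = scores.topk(min(query_size, len(unlabeled))).indices.tolist()
        for k in sorted(idx, reverse=True):
            x = unlabeled.pop(k)
            labeled_x.append(x)
            labeled_y.append(annotate(x))
        # 3. Continually fine-tune the same model on all labels gathered so far.
        model.train()
        xs = torch.stack(labeled_x).to(device)
        ys = torch.tensor(labeled_y, device=device)
        for _ in range(epochs_per_round):
            optimizer.zero_grad()
            loss = F.cross_entropy(model(xs), ys)
            loss.backward()
            optimizer.step()
    return model
```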

    SYSTEMS, METHODS, AND APPARATUSES FOR ACCRUING AND REUSING KNOWLEDGE (ARK) FOR SUPERIOR AND ROBUST PERFORMANCE BY A TRAINED AI MODEL FOR USE WITH MEDICAL IMAGE CLASSIFICATION

    Publication Number: US20240339200A1

    Publication Date: 2024-10-10

    Application Number: US18627831

    Application Date: 2024-04-05

    CPC classification number: G16H30/40 G06V10/764 G16H30/20

    Abstract: Exemplary systems include means for receiving medical image data at the system from a plurality of datasets provided via publicly available sources; evaluating the medical image data for the presence of expert notations embedded within the medical image data; determining that the expert notations embedded within the medical image data are formatted using inconsistent and heterogeneous labeling across the plurality of datasets; generating an interim AI model by applying a task head classifier to learn the annotations of the expert notations embedded within the medical image data; scaling the interim AI model having the learned annotations of the expert notations embedded therein to additional tasks by applying multi-task heads using cyclical pre-training of the previously trained interim AI model to generate task-specific AI models, with each respective task-specific AI model having differently configured task-specific learning objectives; and training a pre-trained AI model specially configured for an application-specific target task by applying task re-visitation training, which forces the model to re-visit all tasks in each round of training and to re-use all accrued knowledge to improve learning against the current application-specific target task.
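
    The following is a minimal sketch, assuming a PyTorch-style design with hypothetical names, of the cyclical, task-revisiting pre-training outlined above: a shared backbone accrues knowledge across datasets with heterogeneous labels, each dataset gets its own task head, and every training round re-visits every task rather than training the tasks once in sequence.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskModel(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes_per_task):
        super().__init__()
        self.backbone = backbone                      # shared, knowledge-accruing encoder
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, n) for n in num_classes_per_task]  # one head per task
        )

    def forward(self, x, task_id):
        return self.heads[task_id](self.backbone(x))

def cyclical_pretrain(model, task_loaders, rounds=3, lr=1e-4, device="cpu"):
    # `task_loaders` is a list of DataLoaders, one per dataset/task.
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(rounds):
        # Task re-visitation: every round touches every task, so knowledge accrued
        # from earlier tasks is reused instead of being overwritten.
        for task_id, loader in enumerate(task_loaders):
            model.train()
            for images, labels in loader:
                optimizer.zero_grad()
                logits = model(images.to(device), task_id)
                loss = F.cross_entropy(logits, labels.to(device))
                loss.backward()
                optimizer.step()
    return model
```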

    SYSTEMS, METHODS, AND APPARATUSES FOR SYSTEMATICALLY DETERMINING AN OPTIMAL APPROACH FOR THE COMPUTER-AIDED DIAGNOSIS OF A PULMONARY EMBOLISM

    Publication Number: US20230081305A1

    Publication Date: 2023-03-16

    Application Number: US17944881

    Application Date: 2022-09-14

    Abstract: Described herein are means for systematically determining an optimal approach for the computer-aided diagnosis of a pulmonary embolism, in the context of processing medical imaging. According to a particular embodiment, there is a system specially configured for diagnosing a Pulmonary Embolism (PE) within new medical images which form no part of the dataset upon which the AI model was trained. Such a system executes operations for receiving a plurality of medical images and processing the plurality of medical images by executing an image-level classification algorithm to determine the presence or absence of a PE within each image via operations including: pre-training an AI model through supervised learning to identify ground truth; fine-tuning the pre-trained AI model specifically for PE diagnosis to generate a pre-trained PE diagnosis and detection AI model; wherein the pre-trained AI model is based on a modified CNN architecture having introduced therein a squeeze and excitation (SE) block enabling the CNN architecture to extract informative features from the plurality of medical images by fusing spatial and channel-wise information; applying the pre-trained PE diagnosis and detection AI model to new medical images to render a prediction as to the presence or absence of the Pulmonary Embolism within the new medical images; and outputting the prediction as a PE diagnosis for a medical patient.
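
    The squeeze and excitation (SE) block referenced above is a standard building block; the sketch below shows one common PyTorch formulation (an assumption about layer sizes and placement, not the patent's exact architecture): global average pooling “squeezes” spatial information into a per-channel descriptor, and a small gating network re-weights the channels, fusing spatial and channel-wise information.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)           # spatial squeeze
        self.excite = nn.Sequential(                     # channel-wise excitation
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                     # recalibrated feature map

# Typical usage: insert after a convolutional stage of the backbone CNN, e.g.
#   se = SEBlock(channels=256)
#   features = se(conv_features)
```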

    Systems, methods, and apparatuses for implementing a self-supervised chest x-ray image analysis machine-learning model utilizing transferable visual words

    Publication Number: US11436725B2

    Publication Date: 2022-09-06

    Application Number: US17098422

    Application Date: 2020-11-15

    Abstract: Not only is annotating medical images tedious and time consuming, but it also demands costly, specialty-oriented expertise, which is not easily accessible. To address this challenge, a new self-supervised framework is introduced: TransVW (transferable visual words), exploiting the prowess of transfer learning with convolutional neural networks and the unsupervised nature of visual word extraction with bags of visual words, resulting in an annotation-efficient solution to medical image analysis. TransVW was evaluated using NIH ChestX-ray14 to demonstrate its annotation efficiency. When compared with training from scratch and ImageNet-based transfer learning, TransVW reduces the annotation efforts by 75% and 12%, respectively, in addition to significantly accelerating the convergence speed. More importantly, TransVW sets new records: achieving the best average AUC on all 14 diseases, the best individual AUC scores on 10 diseases, and the second-best individual AUC scores on 3 diseases. This performance is unprecedented, because heretofore no self-supervised learning method has outperformed ImageNet-based transfer learning and no annotation reduction has been reported for self-supervised learning. These achievements are attributable to a simple yet powerful observation: The complex and recurring anatomical structures in medical images are natural visual words, which can be automatically extracted, serving as strong yet free supervision signals for CNNs to learn generalizable and transferable image representation via self-supervision.
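
    As a rough illustration of the “visual words” idea described above, the following sketch (a hypothetical helper built on NumPy and scikit-learn, not the TransVW reference code) crops patches at fixed coordinates across roughly aligned chest X-rays and clusters them, so that the cluster index of each patch serves as a free pseudo-label for self-supervised pre-training.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_visual_words(images, coords, patch=32, n_words=50, seed=0):
    """Crop a patch at each fixed coordinate from every image and cluster the
    patches; each patch's cluster index is its pseudo-label (visual word).
    Assumes `images` are 2-D arrays of equal size and `coords` fit within them."""
    patches = []
    for img in images:                       # img: 2-D NumPy array (H, W)
        for y, x in coords:                  # coords shared across all images
            patches.append(img[y:y + patch, x:x + patch].ravel())
    patches = np.stack(patches)
    words = KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(patches)
    return patches.reshape(len(patches), patch, patch), words.labels_

# The (patch, pseudo-label) pairs can then pre-train a CNN classifier with a
# standard cross-entropy loss before fine-tuning on the ChestX-ray14 labels.
```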
