-
Publication No.: US12260622B2
Publication Date: 2025-03-25
Application No.: US17625313
Filing Date: 2020-07-17
Inventor: Zongwei Zhou , Vatsal Sodha , Md Mahfuzur Rahman Siddiquee , Ruibin Feng , Nima Tajbakhsh , Jianming Liang
IPC: G06V10/82 , G06V10/774 , G06V10/776 , G06V10/98
Abstract: Described herein are means for generating source models for transfer learning to application specific models used in the processing of medical imaging. In some embodiments, the method comprises: identifying a group of training samples, wherein each training sample in the group of training samples includes an image; for each training sample in the group of training samples: identifying an original patch of the image corresponding to the training sample; identifying one or more transformations to be applied to the original patch; generating a transformed patch by applying the one or more transformations to the identified patch; and training an encoder-decoder network using a group of transformed patches corresponding to the group of training samples, wherein the encoder-decoder network is trained to generate an approximation of the original patch from a corresponding transformed patch, and wherein the encoder-decoder network is trained to minimize a loss function that indicates a difference between the generated approximation of the original patch and the original patch. The source models significantly enhance the transfer learning performance for many medical imaging tasks including, but not limited to, disease/organ detection, classification, and segmentation. Other related embodiments are disclosed.
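The abstract's pipeline (identify an original patch, apply one or more transformations, pair the transformed patch with the original for restoration training) can be sketched as below. This is a minimal illustrative sketch, not the patented implementation: the specific transformations here (a monotonic gamma-style intensity remapping and local pixel shuffling) and all function names are assumptions chosen to match the general description, and the "image" is random synthetic data.

```python
import numpy as np

rng = np.random.default_rng(42)

def crop_patch(image, size=16):
    """Identify an 'original patch' by random cropping (hypothetical helper)."""
    y = rng.integers(0, image.shape[0] - size + 1)
    x = rng.integers(0, image.shape[1] - size + 1)
    return image[y:y + size, x:x + size].copy()

def nonlinear_intensity(patch, gamma=None):
    """Monotonic intensity remapping; pixel values assumed to lie in [0, 1]."""
    if gamma is None:
        gamma = rng.uniform(0.5, 2.0)
    return np.clip(patch, 0.0, 1.0) ** gamma

def local_pixel_shuffle(patch, window=4, n_windows=10):
    """Shuffle pixels inside small windows, distorting local texture
    while leaving the patch's global layout recoverable."""
    out = patch.copy()
    h, w = out.shape
    for _ in range(n_windows):
        y = rng.integers(0, h - window + 1)
        x = rng.integers(0, w - window + 1)
        block = out[y:y + window, x:x + window].ravel()  # ravel copies here
        rng.shuffle(block)
        out[y:y + window, x:x + window] = block.reshape(window, window)
    return out

image = rng.random((64, 64))        # stand-in for one training image
original = crop_patch(image)        # the "original patch"
transformed = local_pixel_shuffle(nonlinear_intensity(original))
print(original.shape, transformed.shape)
```

Each `(transformed, original)` pair then serves as one training example for the encoder-decoder, which learns to undo the transformations.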
-
Publication No.: US11922628B2
Publication Date: 2024-03-05
Application No.: US17224886
Filing Date: 2021-04-07
Inventor: Zongwei Zhou , Vatsal Sodha , Jiaxuan Pang , Jianming Liang
IPC: G06V10/00 , G06F18/21 , G06F18/214 , G06N3/045 , G06N3/088 , G06T3/00 , G06T7/11 , G06V10/26 , G06V10/77 , G16H30/40
CPC classification number: G06T7/11 , G06F18/2155 , G06F18/2163 , G06N3/045 , G06N3/088 , G06T3/00 , G06V10/26 , G06V10/7715 , G16H30/40 , G06V2201/03
Abstract: Described herein are means for generation of self-taught generic models, named Models Genesis, without requiring any manual labeling, in which the Models Genesis are then utilized for the processing of medical imaging. For instance, an exemplary system is specially configured for learning general-purpose image representations by recovering original sub-volumes of 3D input images from transformed 3D images. Such a system operates by cropping a sub-volume from each 3D input image; performing image transformations upon each of the sub-volumes cropped from the 3D input images to generate transformed sub-volumes; and training an encoder-decoder architecture with skip connections to learn a common image representation by restoring the original sub-volumes cropped from the 3D input images from the transformed sub-volumes generated via the image transformations. A pre-trained 3D generic model is thus provided, based on the trained encoder-decoder architecture having learned the common image representation, which is capable of identifying anatomical patterns in never-before-seen 3D medical images having no labeling and no annotation. More importantly, the pre-trained generic models lead to improved performance in multiple target tasks, effective across diseases, organs, datasets, and modalities.
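The restoration objective described above (train a network to recover the original sub-volume from its transformed version, minimizing a loss measuring the difference between the restoration and the original) can be illustrated with a toy example. This is a deliberately simplified sketch under stated assumptions: a single linear map stands in for the paper's skip-connected encoder-decoder, additive noise stands in for the image transformations, and all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: "original" sub-volumes flattened to vectors, plus "transformed"
# versions produced by additive noise (a stand-in for the real transformations).
d = 64                                   # flattened sub-volume size (illustrative)
originals = rng.random((200, d))
transformed = originals + 0.1 * rng.normal(size=(200, d))

# Toy restoration model: one linear map W, trained by gradient descent to
# minimize the mean-squared restoration loss ||transformed @ W - originals||^2.
W = np.eye(d) + 0.01 * rng.normal(size=(d, d))

def mse(a, b):
    return float(np.mean((a - b) ** 2))

initial_loss = mse(transformed @ W, originals)

lr = 0.01
for _ in range(200):
    pred = transformed @ W
    grad = 2.0 * transformed.T @ (pred - originals) / len(originals)
    W -= lr * grad

final_loss = mse(transformed @ W, originals)
print(f"restoration MSE: {initial_loss:.4f} -> {final_loss:.4f}")
```

In the actual system, the same minimization drives a 3D encoder-decoder; the representation its encoder learns while undoing the transformations is what transfers to downstream detection, classification, and segmentation tasks.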
-
Publication No.: US20220262105A1
Publication Date: 2022-08-18
Application No.: US17625313
Filing Date: 2020-07-17
Applicant: Zongwei ZHOU , Vatsal SODHA , Md Mahfuzur RAHMAN SIDDIQUEE , Ruibin FENG , Nima TAJBAKHSH , Jianming LIANG , Arizona Board of Regents on behalf of Arizona State University
Inventor: Zongwei Zhou , Vatsal Sodha , Md Mahfuzur Rahman Siddiquee , Ruibin Feng , Nima Tajbakhsh , Jianming Liang
IPC: G06V10/774 , G06V10/82 , G06V10/98 , G06V10/776
Abstract: Described herein are means for generating source models for transfer learning to application specific models used in the processing of medical imaging. In some embodiments, the method comprises: identifying a group of training samples, wherein each training sample in the group of training samples includes an image; for each training sample in the group of training samples: identifying an original patch of the image corresponding to the training sample; identifying one or more transformations to be applied to the original patch; generating a transformed patch by applying the one or more transformations to the identified patch; and training an encoder-decoder network using a group of transformed patches corresponding to the group of training samples, wherein the encoder-decoder network is trained to generate an approximation of the original patch from a corresponding transformed patch, and wherein the encoder-decoder network is trained to minimize a loss function that indicates a difference between the generated approximation of the original patch and the original patch. The source models significantly enhance the transfer learning performance for many medical imaging tasks including, but not limited to, disease/organ detection, classification, and segmentation. Other related embodiments are disclosed.
-
Publication No.: US20210326653A1
Publication Date: 2021-10-21
Application No.: US17224886
Filing Date: 2021-04-07
Inventor: Zongwei Zhou , Vatsal Sodha , Jiaxuan Pang , Jianming Liang
Abstract: Described herein are means for generation of self-taught generic models, named Models Genesis, without requiring any manual labeling, in which the Models Genesis are then utilized for the processing of medical imaging. For instance, an exemplary system is specially configured for learning general-purpose image representations by recovering original sub-volumes of 3D input images from transformed 3D images. Such a system operates by cropping a sub-volume from each 3D input image; performing image transformations upon each of the sub-volumes cropped from the 3D input images to generate transformed sub-volumes; and training an encoder-decoder architecture with skip connections to learn a common image representation by restoring the original sub-volumes cropped from the 3D input images from the transformed sub-volumes generated via the image transformations. A pre-trained 3D generic model is thus provided, based on the trained encoder-decoder architecture having learned the common image representation, which is capable of identifying anatomical patterns in never-before-seen 3D medical images having no labeling and no annotation. More importantly, the pre-trained generic models lead to improved performance in multiple target tasks, effective across diseases, organs, datasets, and modalities.
-