-
Publication No.: US20250046069A1
Publication Date: 2025-02-06
Application No.: US18715333
Application Date: 2022-11-30
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Bijie Bai , Hongda Wang
IPC: G06V10/82 , G06V10/143 , G06V20/69
Abstract: A deep learning-based virtual HER2 IHC staining method uses a conditional generative adversarial network trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this staining framework was demonstrated by quantitative analysis of blindly graded HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs). A second quantitative blinded study revealed that the virtually stained HER2 images exhibit staining quality comparable to their immunohistochemically stained counterparts in terms of nuclear detail, membrane clearness, and absence of staining artifacts. This virtual staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in the laboratory, and can be extended to other types of biomarkers to accelerate IHC tissue staining and the biomedical workflow.
-
Publication No.: US12190478B2
Publication Date: 2025-01-07
Application No.: US17530471
Application Date: 2021-11-19
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Hongda Wang , Harun Gunaydin , Kevin de Haan
IPC: G06T5/50 , G06N3/08 , G06T3/4046 , G06T3/4053 , G06T3/4076 , G06T5/70 , G06T5/73 , G06T5/92 , G06T7/00
Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and the corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample having one or more of improved spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
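The co-registered training pairs described above can be illustrated with a minimal numpy sketch. The `make_training_pair` helper and its block-averaging degradation are assumptions for illustration only; in the patented method the low-resolution counterpart comes from a real, separately acquired microscope scan that is co-registered to the high-resolution image, not from synthetic downsampling.

```python
import numpy as np

def make_training_pair(hr_patch, factor=4):
    """Create a co-registered (low-res, high-res) patch pair by block-averaging
    the high-resolution patch. Illustrative stand-in for pairing a real
    low-resolution scan with its co-registered high-resolution counterpart."""
    h, w = hr_patch.shape
    assert h % factor == 0 and w % factor == 0
    lr = hr_patch.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return lr, hr_patch

# Build one synthetic pair; a training set would contain many such patches.
rng = np.random.default_rng(1)
hr = rng.random((256, 256))
lr, hr_out = make_training_pair(hr)  # lr is 64x64, hr_out is 256x256
```

Each `(lr, hr_out)` pair would then serve as one (input, target) example for supervised training of the network.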
-
13.
Publication No.: US20240310782A1
Publication Date: 2024-09-19
Application No.: US18546095
Application Date: 2022-02-09
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Luzhe Huang , Tairan Liu
CPC classification number: G03H1/0866 , G03H1/0005 , G03H1/0443 , G06T5/50 , G06T5/60 , G03H2001/005 , G03H2001/0458 , G03H2001/0883 , G03H2210/55 , G06T2207/10056 , G06T2207/20084 , G06T2207/30024
Abstract: Digital holography is one of the most widely used label-free microscopy techniques in biomedical imaging. Recovery of the missing phase information of a hologram is an important step in holographic image reconstruction. A convolutional recurrent neural network (RNN)-based phase recovery approach is employed that uses multiple holograms, captured at different sample-to-sensor distances, to rapidly reconstruct the phase and amplitude information of a sample, while also performing autofocusing through the same trained neural network. The success of this deep learning-enabled holography method is demonstrated by imaging microscopic features of human tissue samples and Papanicolaou (Pap) smears. These results constitute the first demonstration of the use of recurrent neural networks for holographic imaging and phase recovery, and compared with existing methods, the presented approach improves the reconstructed image quality while also increasing the depth-of-field and inference speed.
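The multi-height hologram acquisition that feeds such a network can be sketched with a standard angular-spectrum simulation. The function name, wavelength, pixel pitch, and sample-to-sensor distances below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a complex optical field by distance dz using the angular
    spectrum method; dx is the pixel pitch (all lengths in meters).
    Evanescent spatial frequencies are suppressed."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Simulate in-line holograms of one object at two sample-to-sensor distances;
# such a multi-height stack is the kind of input described in the abstract.
rng = np.random.default_rng(0)
obj = np.exp(1j * 0.5 * rng.random((256, 256)))  # weak phase-only object
holo_1 = np.abs(angular_spectrum_propagate(obj, 300e-6, 532e-9, 1.12e-6)) ** 2
holo_2 = np.abs(angular_spectrum_propagate(obj, 400e-6, 532e-9, 1.12e-6)) ** 2
```

Note that the detector records only intensity (`np.abs(...)**2`), which is exactly the missing-phase problem the trained recurrent network addresses.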
-
14.
Publication No.: US20240135544A1
Publication Date: 2024-04-25
Application No.: US18543168
Application Date: 2023-12-18
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Hongda Wang , Zhensong Wei
IPC: G06T7/11 , G06F18/214 , G06N3/08 , G06V10/764 , G06V10/82 , G16H30/20 , G16H30/40 , G16H70/60
CPC classification number: G06T7/11 , G06F18/2155 , G06N3/08 , G06V10/764 , G06V10/82 , G16H30/20 , G16H30/40 , G16H70/60
Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The method bypasses the standard histochemical staining process, saving time and cost. This method is based on deep learning and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the bright-field image of the chemically stained version of the same sample. This label-free digital staining method eliminates cumbersome and costly histochemical staining procedures and significantly simplifies tissue preparation in the pathology and histology fields.
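The generative-adversarial training mentioned above typically combines an adversarial term with a pixel-wise fidelity term. The sketch below shows one common (pix2pix-style, least-squares) formulation of those two losses; it is an illustrative assumption, not the patent's exact objective, and the weight `l1_weight` is a placeholder:

```python
import numpy as np

def generator_loss(d_fake, fake_img, target_img, l1_weight=100.0):
    """Least-squares adversarial term (generator wants D(fake) -> 1) plus an
    L1 fidelity term pulling the virtual stain toward the chemically
    stained target image."""
    adv = np.mean((d_fake - 1.0) ** 2)
    l1 = np.mean(np.abs(fake_img - target_img))
    return adv + l1_weight * l1

def discriminator_loss(d_real, d_fake):
    """Discriminator wants D(real) -> 1 and D(fake) -> 0."""
    return 0.5 * (np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2))

# Sanity check: a generator that fools D and matches the target has zero loss.
perfect = generator_loss(np.ones((4,)), np.zeros((8, 8)), np.zeros((8, 8)))
```

During training these two losses would be minimized alternately, with the autofluorescence image as the generator's input and the bright-field image of the histochemically stained tissue as the target.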
-
15.
Publication No.: US20230401447A1
Publication Date: 2023-12-14
Application No.: US18316474
Application Date: 2023-05-12
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Xing Lin , Deniz Mengu , Yi Luo
IPC: G06N3/082 , G02B5/18 , G02B27/42 , G06N3/04 , G06N3/08 , G06V10/94 , G06F18/214 , G06F18/2431
CPC classification number: G06N3/082 , G02B5/1866 , G02B27/4205 , G02B27/4277 , G06N3/04 , G06N3/08 , G06V10/95 , G06F18/214 , G06F18/2431
Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was experimentally confirmed by creating 3D-printed D2NNs that learned to implement handwritten digit classification and the function of an imaging lens in the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection, and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
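A D2NN forward pass can be sketched numerically as a cascade of phase masks with free-space propagation between them. This is a minimal sketch assuming phase-only layers and angular-spectrum propagation; the layer count, spacing, wavelength, neuron pitch, and the random (untrained) phase values are illustrative placeholders, not the learned design or the device parameters from the patent:

```python
import numpy as np

def propagate(field, dz, wavelength, dx):
    """Angular-spectrum free-space propagation between diffractive layers."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0)) * dz),
                 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def d2nn_forward(field, phase_masks, dz, wavelength, dx):
    """Pass a coherent input field through a stack of passive phase-only
    diffractive layers: each layer multiplies the field by exp(i*phase),
    then the field propagates a distance dz to the next layer."""
    for phase in phase_masks:
        field = propagate(field, dz, wavelength, dx) * np.exp(1j * phase)
    return propagate(field, dz, wavelength, dx)  # final hop to the detector

# Five layers with random phases (training would optimize these values).
rng = np.random.default_rng(2)
masks = [rng.uniform(0, 2 * np.pi, (128, 128)) for _ in range(5)]
out = d2nn_forward(np.ones((128, 128), dtype=complex), masks, 3e-3, 0.75e-3, 0.6e-3)
intensity = np.abs(out) ** 2  # a detector array would measure this pattern
```

In the learned design, the phase values of each layer are optimized by error backpropagation through this differentiable forward model, then fabricated (e.g., 3D-printed) as fixed passive layers.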
-
Publication No.: US20230030424A1
Publication Date: 2023-02-02
Application No.: US17783260
Application Date: 2020-12-22
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Hongda Wang , Yilin Luo , Kevin de Haan , Yijie Zhang , Bijie Bai
Abstract: A deep learning-based digital/virtual staining method and system enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples. In one embodiment, the method generates digitally/virtually-stained microscope images of label-free or unstained samples from fluorescence lifetime imaging (FLIM) images of the sample acquired with a fluorescence microscope. In another embodiment, a digital/virtual autofocusing method is provided that uses machine learning to generate a microscope image with improved focus using a trained, deep neural network. In another embodiment, a trained deep neural network generates digitally/virtually stained microscopic images, having multiple different stains, of a label-free or unstained sample obtained with a microscope. The multiple stains in the output image, or sub-regions thereof, are substantially equivalent to the corresponding microscopic images or image sub-regions of the same sample after it has been histochemically stained.
-
Publication No.: US20190333199A1
Publication Date: 2019-10-31
Application No.: US16395674
Application Date: 2019-04-26
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Hongda Wang , Harun Gunaydin , Kevin de Haan
Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and the corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample having one or more of improved spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
-
Publication No.: US12300006B2
Publication Date: 2025-05-13
Application No.: US17783260
Application Date: 2020-12-22
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Hongda Wang , Yilin Luo , Kevin de Haan , Yijie Zhang , Bijie Bai
Abstract: A deep learning-based digital/virtual staining method and system enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples. In one embodiment, the method generates digitally/virtually-stained microscope images of label-free or unstained samples from fluorescence lifetime imaging (FLIM) images of the sample acquired with a fluorescence microscope. In another embodiment, a digital/virtual autofocusing method is provided that uses machine learning to generate a microscope image with improved focus using a trained, deep neural network. In another embodiment, a trained deep neural network generates digitally/virtually stained microscopic images, having multiple different stains, of a label-free or unstained sample obtained with a microscope. The multiple stains in the output image, or sub-regions thereof, are substantially equivalent to the corresponding microscopic images or image sub-regions of the same sample after it has been histochemically stained.
-
19.
Publication No.: US12020165B2
Publication Date: 2024-06-25
Application No.: US17294384
Application Date: 2019-11-14
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Yichen Wu
IPC: G06N3/084 , G02B21/00 , G02B21/36 , G03H1/00 , G03H1/26 , G06T5/70 , G06T7/50 , G06V10/764 , G06V10/82 , G06V20/69
CPC classification number: G06N3/084 , G02B21/0008 , G02B21/365 , G03H1/0005 , G03H1/268 , G06T5/70 , G06T7/50 , G06V10/764 , G06V10/82 , G06V20/69 , G03H2001/005 , G06T2207/10056 , G06T2207/10064 , G06T2207/20081 , G06T2207/20084
Abstract: A trained deep neural network transforms an image of a sample obtained with a holographic microscope to an image that substantially resembles a microscopy image obtained with a microscope having a different microscopy image modality. Examples of different imaging modalities include bright-field, fluorescence, and dark-field. For bright-field applications, deep learning brings bright-field microscopy contrast to holographic images of a sample, bridging the volumetric imaging capability of holography with the speckle-free and artifact-free image contrast of bright-field microscopy. Holographic microscopy images obtained with a holographic microscope are input into a trained deep neural network to perform cross-modality image transformation from a digitally back-propagated hologram corresponding to a particular depth within a sample volume into an image that substantially resembles a microscopy image of the sample obtained at the same particular depth with a microscope having the different microscopy image modality.
-
Publication No.: US11915360B2
Publication Date: 2024-02-27
Application No.: US17505553
Application Date: 2021-10-19
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Luzhe Huang
IPC: G06T15/08 , G06F18/214 , G06N3/08
CPC classification number: G06T15/08 , G06F18/214 , G06N3/08
Abstract: A deep learning-based volumetric image inference system and method are disclosed that use 2D images sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network (RNN) (referred to herein as Recurrent-MZ), 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. In experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to increase the depth-of-field of a 63×/1.4 NA objective lens by approximately 50-fold, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. The generalization of this recurrent network for 3D imaging is further demonstrated by showing its resilience to varying imaging conditions, including, e.g., different sequences of input images covering various axial permutations and unknown axial positioning errors.
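The recurrent fusion of a variable-length, arbitrarily ordered stack of axial planes can be sketched with a GRU-like elementwise update. This is a toy stand-in for the convolutional recurrent units in Recurrent-MZ: `recurrent_fuse` and its scalar weights are hypothetical simplifications (the actual network uses learned convolutional gates), shown only to illustrate how a sequence of 2D planes folds into one state regardless of its length or ordering:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recurrent_fuse(planes, w_z=0.5, w_h=1.0):
    """Fold a sequence of 2D input planes into a single hidden state with a
    GRU-like elementwise update; scalar weights stand in for learned
    convolutional gates."""
    h = np.zeros_like(planes[0])
    for x in planes:
        z = sigmoid(w_z * (x + h))       # update gate
        h_cand = np.tanh(w_h * (x + h))  # candidate state
        h = (1 - z) * h + z * h_cand     # convex blend of old and new state
    return h

# Three sparsely sampled axial planes; the recurrence accepts any count.
rng = np.random.default_rng(3)
stack = [rng.random((64, 64)) for _ in range(3)]
fused = recurrent_fuse(stack)
```

Because the same update is applied at every step, the recurrence naturally handles different numbers and orderings of input planes, which is the property the abstract's resilience experiments probe.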
-