-
31.
Publication No.: US11946854B2
Publication Date: 2024-04-02
Application No.: US17418782
Filing Date: 2019-12-23
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Yichen Wu
IPC: G01N15/14 , G01N21/64 , G06F18/214 , G06N3/08 , G06T3/40 , G06T3/4046 , G06T3/4053 , G06T5/00 , G06T5/50 , G06V10/44 , G06V10/77 , G06V10/82 , G06V20/69
CPC classification number: G01N15/1475 , G01N21/6458 , G06F18/214 , G06N3/08 , G06T3/4046 , G06T3/4053 , G06T5/003 , G06T5/50 , G06V10/454 , G06V10/7715 , G06V10/82 , G06V20/69 , G06T2207/10016 , G06T2207/10056 , G06T2207/10064 , G06T2207/20081 , G06T2207/20084 , G06T2207/20221
Abstract: A fluorescence microscopy method uses a trained deep neural network. At least one 2D fluorescence microscopy image of a sample is input to the trained deep neural network, wherein the input image(s) are appended with a digital propagation matrix (DPM) that represents, pixel-by-pixel, the axial distance of a user-defined or automatically generated surface within the sample from the plane of the input image. The trained deep neural network outputs fluorescence image(s) of the sample digitally propagated or refocused to the user-defined or automatically generated surface. The method and system cross-connect different imaging modalities, permitting 3D propagation of wide-field fluorescence image(s) to match confocal microscopy images at different sample planes. The method may be used to output a time sequence of images (e.g., time-lapse video) of a 2D or 3D surface within a sample.
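The DPM described in this abstract is a per-pixel map of target axial distances stacked onto the input image as an extra channel. A minimal sketch of that input layout (the function name, channel ordering, and micrometer units are illustrative assumptions, not specified by the patent):

```python
import numpy as np

def append_dpm(image, surface_z):
    """Stack a digital propagation matrix (DPM) onto a 2D wide-field
    fluorescence image as a second channel.

    image: fluorescence image, shape (H, W).
    surface_z: per-pixel axial distance (shape (H, W)) of the target
        surface from the input image plane, e.g. in micrometers.
    Returns an (H, W, 2) array fed to the trained network.
    """
    if image.shape != surface_z.shape:
        raise ValueError("DPM must match the image pixel grid")
    return np.stack([image, surface_z], axis=-1)

# A uniform DPM refocuses to a single plane; a spatially varying DPM
# targets a tilted or curved 3D surface within the sample.
img = np.random.rand(64, 64)
dpm = np.full((64, 64), 5.0)  # refocus 5 µm above the input plane
net_input = append_dpm(img, dpm)
```

Repeating inference with a sequence of DPMs would yield the refocused image stack or time-lapse output mentioned in the abstract.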
-
32.
Publication No.: US11893739B2
Publication Date: 2024-02-06
Application No.: US17041447
Filing Date: 2019-03-29
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Hongda Wang , Zhensong Wei
IPC: G06T7/11 , G16H70/60 , G16H30/20 , G16H30/40 , G06N3/08 , G06F18/214 , G06V10/764 , G06V10/82
CPC classification number: G06T7/11 , G06F18/2155 , G06N3/08 , G06V10/764 , G06V10/82 , G16H30/20 , G16H30/40 , G16H70/60
Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. The method is based on deep learning and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network (GAN) model to transform fluorescence images of an unlabeled sample into an image equivalent to the brightfield image of the chemically stained version of the same sample. This label-free digital staining method eliminates cumbersome and costly histochemical staining procedures and significantly simplifies tissue preparation in pathology and histology.
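The abstract specifies a CNN trained using a GAN model; a pix2pix-style combined generator objective is one common way to set this up (the loss form and weighting below are assumptions for illustration, not taken from the patent):

```python
import numpy as np

def generator_loss(fake_img, target_img, d_fake, l1_weight=100.0):
    """Combined generator objective for virtual staining.

    fake_img: generator output (virtually stained image).
    target_img: brightfield image of the histochemically stained slide.
    d_fake: discriminator scores in (0, 1) for the generated image.
    """
    adv = -np.mean(np.log(d_fake + 1e-12))             # fool the discriminator
    fidelity = np.mean(np.abs(fake_img - target_img))  # match the real stain
    return adv + l1_weight * fidelity
```

The discriminator is trained in alternation to distinguish real stained images from generated ones; at convergence the generator output becomes statistically indistinguishable from the chemically stained brightfield image.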
-
33.
Publication No.: US20230401436A1
Publication Date: 2023-12-14
Application No.: US18249726
Filing Date: 2021-10-22
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Deniz Mengu , Yair Rivenson
Abstract: A method of forming an optical neural network for processing an input object image or optical signal that is invariant to object transformations includes training a software-based neural network model to perform one or more specific optical functions for a multi-layer optical network having physical features located in each layer of the optical neural network. The training includes feeding in different input object images or optical signals that have random transformations or shifts, computing at least one optical output of optical transmission and/or reflection through the optical neural network using an optical wave propagation model, and iteratively adjusting the transmission/reflection coefficients of each layer until optimized coefficients are obtained. A physical embodiment of the optical neural network is then made that has a plurality of substrate layers with physical features matching the optimized transmission/reflection coefficients obtained by the trained neural network model.
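The shift/transformation augmentation inside this training loop can be sketched as follows (the helper name and shift range are hypothetical; in the full method the coefficients are then updated by backpropagating through the optical wave-propagation model):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def random_shift(field, max_shift=4):
    """Randomly translate an input object field before it enters the
    wave-propagation model, so the learned transmission/reflection
    coefficients become invariant to such object shifts."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(field, (int(dy), int(dx)), axis=(0, 1))

# Each training example is shifted afresh on every pass:
obj = np.zeros((16, 16))
obj[8, 8] = 1.0
shifted = random_shift(obj)
```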
-
34.
Publication No.: US20230153600A1
Publication Date: 2023-05-18
Application No.: US17920774
Filing Date: 2021-05-04
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Jingxi Li , Deniz Mengu , Yair Rivenson
CPC classification number: G06N3/067 , G02B27/4272
Abstract: A machine vision task, machine learning task, and/or classification of objects is performed using a diffractive optical neural network device. Light from objects passes through or reflects off the diffractive optical neural network device, which is formed by multiple substrate layers. Through optical diffraction and/or reflection at these layers, the device implements a trained function between the input optical signal from the object, illuminated at a plurality or a continuum of wavelengths, and an output optical signal concentrated at one or more unique wavelengths (or sets of wavelengths) assigned to represent distinct data classes or object types/classes. Output light is captured with detector(s) that generate a signal or data comprising those assigned wavelengths, which are used to perform the task or classification.
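Reading the class out of the detector signal then reduces to finding which assigned wavelength carries the most power. A sketch under assumed names and a toy sampled spectrum:

```python
import numpy as np

def classify_by_wavelength(power_spectrum, wavelengths, class_wavelengths):
    """Return the index of the data class whose assigned wavelength
    (nearest sampled bin) carries the highest detected power."""
    bins = [int(np.argmin(np.abs(wavelengths - w))) for w in class_wavelengths]
    return int(np.argmax(power_spectrum[bins]))

wavelengths = np.linspace(0.5e-3, 1.5e-3, 101)  # THz-band wavelengths, meters
class_wavelengths = [0.6e-3, 1.0e-3, 1.4e-3]    # one wavelength per class
# Toy detector spectrum with its power concentrated near 1.0 mm:
power_spectrum = np.exp(-((wavelengths - 1.0e-3) / 5e-5) ** 2)
```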
-
35.
Publication No.: US11514325B2
Publication Date: 2022-11-29
Application No.: US16359609
Filing Date: 2019-03-20
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Yichen Wu , Yibo Zhang , Harun Gunaydin
Abstract: A method of performing phase retrieval and holographic image reconstruction of an imaged sample includes obtaining a single hologram intensity image of the sample using an imaging device. The single hologram intensity image is back-propagated with image processing software to generate a real input image and an imaginary input image of the sample, wherein the real and imaginary input images contain twin-image and/or interference-related artifacts. A trained deep neural network, executed by the image processing software using one or more processors, receives the real input image and the imaginary input image of the sample and generates an output real image and an output imaginary image in which the twin-image and/or interference-related artifacts are substantially suppressed or eliminated. In some embodiments, the trained deep neural network simultaneously achieves phase recovery and auto-focusing, significantly extending the depth-of-field (DOF) of holographic image reconstruction.
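Back-propagating the single measured hologram is commonly done with the free-space angular-spectrum method; a minimal sketch (the function and its parameters are illustrative, not the patent's prescribed implementation):

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, dx):
    """Propagate a complex field a distance z in free space; z < 0
    back-propagates a hologram to the sample plane. The resulting
    real and imaginary images (still carrying twin-image artifacts)
    are what the trained network receives."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent waves
    transfer = np.exp(1j * kz * z)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

hologram = np.random.rand(32, 32).astype(complex)
# Back-propagate 100 µm to the sample plane (0.5 µm light, 1 µm pixels):
sample_field = angular_spectrum_propagate(hologram, -100e-6, 0.5e-6, 1e-6)
real_in, imag_in = sample_field.real, sample_field.imag
```

Within the propagating band the transfer function is unitary, so propagating by z and then −z recovers the original field.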
-
36.
Publication No.: US20220366253A1
Publication Date: 2022-11-17
Application No.: US17843720
Filing Date: 2022-06-17
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Xing Lin , Deniz Mengu , Yi Luo
Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten digit classification and the function of a lens at terahertz wavelengths. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection, and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
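The forward pass through such a device amounts to: multiply by a learned phase at each substrate layer, diffract to the next layer, repeat, then measure intensity. A schematic sketch (phase-only transmission and a pluggable `propagate` hook are simplifying assumptions):

```python
import numpy as np

def d2nn_forward(field, phase_masks, propagate):
    """Forward pass of a diffractive deep neural network: each passive
    layer applies its learned transmission phase, then the field
    diffracts (via `propagate`) to the next layer; the detector plane
    records intensity."""
    for phi in phase_masks:
        field = field * np.exp(1j * phi)  # passive, phase-only layer
        field = propagate(field)
    return np.abs(field) ** 2

# With zero phases and an identity propagator the stack is transparent;
# a physical device would use a free-space diffraction model instead.
inp = np.ones((8, 8), dtype=complex)
masks = [np.zeros((8, 8))] * 3
intensity = d2nn_forward(inp, masks, propagate=lambda f: f)
```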
-
37.
Publication No.: US20220253685A1
Publication Date: 2022-08-11
Application No.: US17629346
Filing Date: 2020-09-09
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yi Luo , Deniz Mengu , Yair Rivenson
Abstract: A broadband diffractive optical neural network simultaneously processes a continuum of wavelengths generated by a temporally-incoherent broadband source to all-optically perform a specific task learned using network learning. The optical neural network design was verified by designing, fabricating and testing seven different multi-layer, diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tunable, single passband as well as dual passband spectral filters, and (2) spatially-controlled wavelength de-multiplexing. Merging the native or engineered dispersion of various material systems with a deep learning-based design, broadband diffractive optical neural networks help engineer light-matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning. The optical neural network may be implemented as a reflective optical neural network.
-
38.
Publication No.: US20220058776A1
Publication Date: 2022-02-24
Application No.: US17418782
Filing Date: 2019-12-23
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Yichen Wu
Abstract: A fluorescence microscopy method uses a trained deep neural network. At least one 2D fluorescence microscopy image of a sample is input to the trained deep neural network, wherein the input image(s) are appended with a digital propagation matrix (DPM) that represents, pixel-by-pixel, the axial distance of a user-defined or automatically generated surface within the sample from the plane of the input image. The trained deep neural network outputs fluorescence image(s) of the sample digitally propagated or refocused to the user-defined or automatically generated surface. The method and system cross-connect different imaging modalities, permitting 3D propagation of wide-field fluorescence image(s) to match confocal microscopy images at different sample planes. The method may be used to output a time sequence of images (e.g., time-lapse video) of a 2D or 3D surface within a sample.
-
39.
Publication No.: US20210043331A1
Publication Date: 2021-02-11
Application No.: US17041447
Filing Date: 2019-03-29
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Hongda Wang , Zhensong Wei
Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The methods bypass the standard histochemical staining process, saving time and cost. The method is based on deep learning and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network (GAN) model to transform fluorescence images of an unlabeled sample into an image equivalent to the brightfield image of the chemically stained version of the same sample. This label-free digital staining method eliminates cumbersome and costly histochemical staining procedures and significantly simplifies tissue preparation in pathology and histology.
-