-
Publication Number: US20230251189A1
Publication Date: 2023-08-10
Application Number: US18010207
Application Date: 2021-06-28
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Deniz Mengu , Yair Rivenson , Muhammed Veli
IPC: G01N21/3586 , G02B5/18
CPC classification number: G01N21/3586 , G02B5/1847
Abstract: A diffractive network is disclosed that, in some embodiments, uses diffractive elements to shape an arbitrary broadband pulse into a desired optical waveform, forming a compact and passive pulse engineering system. The diffractive network was experimentally shown to generate various pulses by designing passive diffractive layers that collectively engineer the temporal waveform of an input terahertz pulse. The results constitute the first demonstration of direct pulse shaping in the terahertz spectrum, where the amplitude and phase of the input wavelengths are independently controlled through a passive diffractive device, without the need for an external pump. Furthermore, a modular physical transfer learning approach is presented to illustrate pulse-width tunability by replacing part of an existing diffractive network with newly trained diffractive layers. This learning-based diffractive pulse engineering framework can find broad applications in, e.g., communications, ultra-fast imaging, and spectroscopy.
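The abstract does not include an implementation, but the forward model it alludes to, each spectral component of the broadband pulse propagating through a stack of passive, thickness-modulated diffractive layers, can be sketched briefly. The NumPy sketch below is purely illustrative: the angular-spectrum propagator is a standard formulation, and the layer count, thickness maps, wavelengths, spacings, and index contrast (`delta_n`) are placeholder assumptions rather than values from the patent.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, distance, pixel_pitch):
    """Propagate a complex field by `distance` using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(1j * 2 * np.pi * distance * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_network_output(input_field, layer_thickness_maps, wavelength,
                               layer_spacing, pixel_pitch, delta_n=0.6):
    """Pass one spectral component through a stack of phase-only diffractive layers."""
    field = input_field
    for thickness in layer_thickness_maps:
        field = angular_spectrum_propagate(field, wavelength, layer_spacing, pixel_pitch)
        # Each layer adds a wavelength-dependent phase set by its local thickness.
        field = field * np.exp(1j * 2 * np.pi * delta_n * thickness / wavelength)
    return angular_spectrum_propagate(field, wavelength, layer_spacing, pixel_pitch)

# Illustrative use: collect the per-wavelength output at the detector aperture to form
# a complex spectrum, whose inverse Fourier transform is the shaped temporal pulse.
layers = [np.random.rand(200, 200) * 1e-3 for _ in range(3)]   # hypothetical thickness maps (m)
spectrum_out = []
for wl in np.linspace(0.4e-3, 1.0e-3, 64):                      # placeholder THz-band wavelengths (m)
    out = diffractive_network_output(np.ones((200, 200), complex), layers, wl, 0.03, 0.5e-3)
    spectrum_out.append(out[100, 100])                           # field at the output aperture
pulse_waveform = np.fft.ifft(np.fft.ifftshift(spectrum_out))
```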
-
Publication Number: US20220206434A1
Publication Date: 2022-06-30
Application Number: US17604416
Application Date: 2020-04-21
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Tairan Liu , Yibo Zhang , Zhensong Wei
Abstract: A method for performing color image reconstruction of a single super-resolved holographic sample image includes obtaining a plurality of sub-pixel-shifted, lower-resolution hologram images of the sample using an image sensor with simultaneous illumination at multiple color channels. Super-resolved hologram intensity images for each color channel are digitally generated based on the lower-resolution hologram images. The super-resolved hologram intensity images for each color channel are back-propagated to an object plane with image processing software to generate real and imaginary input images of the sample for each color channel. A trained deep neural network, executed by the image processing software using one or more processors of a computing device, is configured to receive the real input image and the imaginary input image of the sample for each color channel and to generate a color output image of the sample.
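As a rough, hedged illustration of the sub-pixel shift-and-add idea behind generating a super-resolved hologram from laterally shifted low-resolution frames, a minimal NumPy sketch follows; the function name, shift convention, and upsampling factor are assumptions, and the patent's actual super-resolution, back-propagation, and network stages are not reproduced here.

```python
import numpy as np

def shift_and_add_superresolution(low_res_frames, shifts, factor=4):
    """Place sub-pixel-shifted low-resolution holograms onto a finer grid and average.

    low_res_frames: list of (H, W) intensity images from the sensor
    shifts: list of (dy, dx) sub-pixel shifts in low-resolution pixel units
    factor: super-resolution upsampling factor
    """
    h, w = low_res_frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(low_res_frames, shifts):
        # Nearest high-resolution grid positions for this frame's shifted samples.
        ys = (np.arange(h) * factor + int(round(dy * factor))) % (h * factor)
        xs = (np.arange(w) * factor + int(round(dx * factor))) % (w * factor)
        acc[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1.0
    weight[weight == 0] = 1.0   # leave unobserved high-resolution pixels untouched
    return acc / weight
```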
-
Publication Number: US20220121940A1
Publication Date: 2022-04-21
Application Number: US17503312
Application Date: 2021-10-17
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Calvin Brown , Artem Goncharov , Zachary Ballard , Yair Rivenson
Abstract: A deep learning-based spectral analysis device and method are disclosed that employ a spectral encoder chip containing a plurality of nanohole array tiles, each with a unique geometry and, thus, a unique optical transmission spectrum. Illumination impinges upon the encoder chip, and a CMOS image sensor captures the transmitted light without any lenses, gratings, or other optical components. A spectral reconstruction neural network uses the transmitted intensities from the captured image to faithfully reconstruct the input spectrum. In one embodiment that used a spectral encoder chip with 252 nanohole array tiles, the network was trained on 50,352 spectra randomly generated by a supercontinuum laser and blindly tested on 14,648 unseen spectra. The system identified 96.86% of spectral peaks, with a peak localization error of 0.19 nm, a peak height error of 7.60%, and a peak bandwidth error of 0.18 nm.
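The abstract specifies 252 encoder tiles but not the network architecture, so the following PyTorch sketch of a small fully connected reconstruction network is only illustrative; the layer widths, output wavelength grid, learning rate, and placeholder tensors are assumptions rather than the patented design.

```python
import torch
import torch.nn as nn

class SpectralReconstructionNet(nn.Module):
    """Illustrative fully connected network: 252 encoder intensities -> spectrum."""
    def __init__(self, n_tiles=252, n_wavelengths=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_tiles, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, n_wavelengths),
        )

    def forward(self, intensities):
        return self.net(intensities)

# Hypothetical training step against ground-truth spectra from a reference spectrometer.
model = SpectralReconstructionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

intensities = torch.rand(32, 252)    # batch of encoder-chip readouts (placeholder data)
true_spectra = torch.rand(32, 512)   # corresponding ground-truth spectra (placeholder data)
pred = model(intensities)
loss = loss_fn(pred, true_spectra)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```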
-
Publication Number: US20220114711A1
Publication Date: 2022-04-14
Application Number: US17530471
Application Date: 2021-11-19
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Hongda Wang , Harun Gunaydin , Kevin de Haan
Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the network having been trained with a set of co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is fed to the trained deep neural network, which rapidly outputs an image of the sample with improved spatial resolution, depth of field, signal-to-noise ratio, and/or image contrast.
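As a hedged illustration of how such a trained network would be applied at inference time, the PyTorch sketch below runs a toy residual CNN on a placeholder low-resolution micrograph; the architecture and weights are stand-ins, not the network described in the application.

```python
import torch
import torch.nn as nn

class EnhancementCNN(nn.Module):
    """Toy stand-in for a trained image-enhancement network."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        # Residual formulation: the network learns the detail missing from the input.
        return x + self.body(x)

# Hypothetical inference on one low-resolution field of view. In practice the weights
# would come from training on co-registered low/high-resolution image pairs; here they
# are random, so the output is meaningless and only the call pattern is illustrated.
model = EnhancementCNN().eval()
low_res = torch.rand(1, 1, 512, 512)   # placeholder grayscale micrograph
with torch.no_grad():
    enhanced = model(low_res)
```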
-
Publication Number: US20220012850A1
Publication Date: 2022-01-13
Application Number: US17294384
Application Date: 2019-11-14
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Yichen Wu
Abstract: A trained deep neural network transforms an image of a sample obtained with a holographic microscope into an image that substantially resembles a microscopy image obtained with a microscope having a different image modality. Examples of different imaging modalities include bright-field, fluorescence, and dark-field. For bright-field applications, deep learning brings bright-field microscopy contrast to holographic images of a sample, bridging the volumetric imaging capability of holography with the speckle-free and artifact-free image contrast of bright-field microscopy. Holographic microscopy images obtained with a holographic microscope are input into a trained deep neural network to perform cross-modality image transformation, from a digitally back-propagated hologram corresponding to a particular depth within a sample volume into an image that substantially resembles a microscopy image of the sample obtained at the same depth with a microscope having the different image modality.
-
Publication Number: US20210264214A1
Publication Date: 2021-08-26
Application Number: US17261542
Application Date: 2019-03-29
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Zhensong Wei
Abstract: A deep learning-based digital staining method and system are disclosed that provide a label-free approach to creating virtually-stained microscopic images from quantitative phase images (QPI) of label-free samples. The method bypasses the standard histochemical staining process, saving time and cost. It is based on deep learning and uses a convolutional neural network, trained using a generative adversarial network model, to transform quantitative phase images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample. This label-free digital staining method eliminates cumbersome and costly histochemical staining procedures and would significantly simplify tissue preparation in the pathology and histology fields.
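A minimal sketch of the kind of generative adversarial training objective the abstract describes (a generator mapping quantitative phase images to stained-looking images, a discriminator comparing them against brightfield images of chemically stained tissue) is given below in PyTorch; the toy architectures, loss weighting, and placeholder tensors are assumptions, not the patented implementation.

```python
import torch
import torch.nn as nn

# Placeholder generator (QPI -> RGB "stained" image) and discriminator.
generator = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(64, 3, 3, padding=1))
discriminator = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                              nn.Conv2d(64, 1, 3, stride=2, padding=1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

phase_image = torch.rand(4, 1, 256, 256)     # label-free QPI input (placeholder)
stained_truth = torch.rand(4, 3, 256, 256)   # registered brightfield of stained tissue (placeholder)

# Discriminator step: real stained images vs. generated ones.
fake = generator(phase_image).detach()
real_logits = discriminator(stained_truth)
fake_logits = discriminator(fake)
d_loss = bce(real_logits, torch.ones_like(real_logits)) + bce(fake_logits, torch.zeros_like(fake_logits))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: fool the discriminator while staying close to the ground truth.
fake = generator(phase_image)
fake_logits = discriminator(fake)
g_loss = bce(fake_logits, torch.ones_like(fake_logits)) + 100.0 * l1(fake, stained_truth)
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```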
-
Publication Number: US20190294108A1
Publication Date: 2019-09-26
Application Number: US16359609
Application Date: 2019-03-20
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Yichen Wu , Yibo Zhang , Harun Gunaydin
Abstract: A method of performing phase retrieval and holographic image reconstruction of an imaged sample includes obtaining a single hologram intensity image of the sample using an imaging device. The single hologram intensity image is back-propagated with image processing software to generate a real input image and an imaginary input image of the sample, wherein the real and imaginary input images contain twin-image and/or interference-related artifacts. A trained deep neural network, executed by the image processing software using one or more processors, is configured to receive the real input image and the imaginary input image of the sample and to generate an output real image and an output imaginary image in which the twin-image and/or interference-related artifacts are substantially suppressed or eliminated. In some embodiments, the trained deep neural network simultaneously achieves phase recovery and auto-focusing, significantly extending the depth of field (DOF) of holographic image reconstruction.
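A brief NumPy sketch of the free-space back-propagation step described above, turning a single in-line hologram intensity into the real and imaginary object-plane channels that such a network would receive, is shown below; the wavelength, propagation distance, and pixel size are placeholder values, and the trained network itself is not reproduced.

```python
import numpy as np

def backpropagate_hologram(hologram_intensity, z, wavelength, pixel_size):
    """Numerically back-propagate an in-line hologram intensity to the object plane.

    Returns the complex object-plane field, whose real and imaginary parts (still
    containing twin-image artifacts) would form the two-channel network input.
    """
    field = np.sqrt(hologram_intensity.astype(np.float64))   # amplitude, zero initial phase
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    # Propagate backwards by -z using the angular spectrum transfer function.
    H = np.where(arg > 0, np.exp(-1j * 2 * np.pi * z * np.sqrt(np.maximum(arg, 0.0))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical usage with placeholder parameters.
hologram = np.random.rand(1024, 1024)   # raw sensor intensity (placeholder)
obj = backpropagate_hologram(hologram, z=500e-6, wavelength=530e-9, pixel_size=1.12e-6)
network_input = np.stack([obj.real, obj.imag], axis=0)   # 2-channel input to the trained DNN
```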
-
Publication Number: US12270068B2
Publication Date: 2025-04-08
Application Number: US17793926
Application Date: 2021-01-27
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Hongda Wang , Hatice Ceylan Koydemir , Yunzhe Qiu
IPC: C12Q1/04 , G02B21/26 , G02B21/36 , G03H1/00 , G06V10/10 , G06V10/82 , G06V20/69 , H04N23/56 , H04N23/698
Abstract: A system for the detection and classification of live microorganisms in a sample includes a light source, an image sensor, and an incubator holding one or more sample-containing growth plates. A translation stage moves the image sensor and/or the growth plate(s) along one or more dimensions to capture time-lapse holographic images of the microorganisms or clusters of microorganisms on the one or more growth plates. Image processing software executed by a computing device is configured to detect candidate microorganism colonies in the reconstructed, time-lapse holographic images based on differential image analysis. The image processing software includes one or more trained deep neural networks that process the time-lapse image(s) of candidate microorganism colonies to detect true microorganism colonies and/or output a species associated with each true colony.
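A minimal sketch of differential image analysis for flagging candidate colonies between two reconstructed time-lapse frames is given below, using NumPy and SciPy; the threshold, minimum area, and function names are illustrative assumptions, and the trained classification networks are not reproduced.

```python
import numpy as np
from scipy import ndimage

def detect_candidate_colonies(frame_t0, frame_t1, growth_threshold=0.05, min_area=20):
    """Flag regions that changed between two co-registered reconstructed frames.

    frame_t0, frame_t1: reconstructed images (e.g., amplitude) of the same growth
    plate taken some time apart; growing colonies appear as localized changes.
    Returns the labeled change mask and the centroids of plausible candidates.
    """
    difference = np.abs(frame_t1.astype(np.float64) - frame_t0.astype(np.float64))
    candidate_mask = difference > growth_threshold
    candidate_mask = ndimage.binary_opening(candidate_mask, iterations=2)  # remove speckle
    labels, n = ndimage.label(candidate_mask)
    centroids = ndimage.center_of_mass(candidate_mask, labels, range(1, n + 1))
    sizes = ndimage.sum(candidate_mask, labels, range(1, n + 1))
    # Keep only blobs large enough to be plausible early colonies; each surviving
    # candidate's image sequence would then go to the trained classification network.
    keep = [c for c, s in zip(centroids, sizes) if s >= min_area]
    return labels, keep
```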
-
Publication Number: US20240290473A1
Publication Date: 2024-08-29
Application Number: US18572113
Application Date: 2022-06-29
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA , UNITED STATES GOVERNMENT AS REPRESENTED BY THE DEPARTMENT OF VETERANS AFFAIRS
Inventor: Aydogan Ozcan , Jingxi Li , Yair Rivenson , Xiaoran Zhang , Philip O. Scumpia , Jason Garfinkel , Gennady Rubinstein
CPC classification number: G16H30/40 , A61B5/0068 , A61B5/0071 , G06T7/0012 , G06T15/08 , G06T2207/20084 , G06T2207/30088
Abstract: A deep learning-based system and method are provided that use a convolutional neural network to rapidly transform in vivo reflectance confocal microscopy (RCM) images of unstained skin into virtually-stained, hematoxylin-and-eosin-like images with microscopic resolution, enabling visualization of the epidermis, dermal-epidermal junction, and superficial dermis layers. The network is trained using ex vivo RCM images of excised unstained tissue and microscopic images of the same tissue labeled with acetic acid nuclear contrast staining as the ground truth. The trained neural network can be used to rapidly perform virtual histology of in vivo, label-free RCM images of normal skin, basal cell carcinoma, and melanocytic nevi with pigmented melanocytes, demonstrating histological features similar to those of traditional histology of the same excised tissue. The system and method enable more rapid diagnosis of malignant skin neoplasms and reduce the need for invasive skin biopsies.
-
Publication Number: US20240288701A1
Publication Date: 2024-08-29
Application Number: US18571653
Application Date: 2022-06-29
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yi Luo , Ege Cetintas , Yair Rivenson
CPC classification number: G02B27/0944 , G02B5/1866 , G02B6/4206
Abstract: A computer-free system and method are disclosed that use an all-optical image reconstruction approach to see through random diffusers at the speed of light. Using deep learning, a set of transmissive layers is trained to all-optically reconstruct images of arbitrary objects that are distorted by random phase diffusers. After the training stage, the resulting diffractive layers are fabricated and form a diffractive optical network that is physically positioned between the unknown object and the image plane to all-optically reconstruct the object pattern through an unknown, new phase diffuser. Unlike digital methods, all-optical diffractive reconstruction does not require power except for the illumination light. This diffractive approach to seeing through diffusive and/or scattering media can be extended to other wavelengths and can fuel applications in biomedical imaging, astronomy, atmospheric sciences, oceanography, security, robotics, and autonomous vehicles, among many others.
-