DIFFRACTIVE DEEP NEURAL NETWORKS WITH DIFFERENTIAL AND CLASS-SPECIFIC DETECTION

    Publication No.: US20220327371A1

    Publication Date: 2022-10-13

    Application No.: US17616983

    Filing Date: 2020-06-05

    Abstract: A diffractive optical neural network device includes a plurality of diffractive substrate layers arranged in an optical path. The substrate layers are formed with physical features across surfaces thereof that collectively define a trained mapping function between an optical input and an optical output. A plurality of groups of optical sensors are configured to sense and detect the optical output, wherein each group of optical sensors has at least one optical sensor configured to capture a positive signal from the optical output and at least one optical sensor configured to capture a negative signal from the optical output. Circuitry and/or computer software receives signals or data from the optical sensors and identifies a group of optical sensors in which a normalized differential signal, calculated from the positive and negative optical sensors within each group, is the largest or the smallest among all of the groups.
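
    The differential read-out described above lends itself to a simple decision rule. The sketch below (plain NumPy; the function and variable names are illustrative and not taken from the patent) shows one plausible way to compute a normalized differential signal for each class group and select the winning group:

```python
import numpy as np

def classify_differential(pos_signals, neg_signals):
    """Return the index of the group with the largest normalized differential signal.

    pos_signals, neg_signals: 1-D arrays of optical power measured by the
    positive and negative detector of each class group.
    """
    pos = np.asarray(pos_signals, dtype=float)
    neg = np.asarray(neg_signals, dtype=float)
    # Normalized differential signal per group: (I+ - I-) / (I+ + I-)
    diff = (pos - neg) / (pos + neg + 1e-12)   # small epsilon avoids divide-by-zero
    return int(np.argmax(diff))

# Example: three class groups; group 1 has the strongest differential response.
print(classify_differential([0.2, 0.9, 0.3], [0.5, 0.1, 0.4]))  # -> 1
```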

    Systems and methods for deep learning microscopy

    Publication No.: US11222415B2

    Publication Date: 2022-01-11

    Application No.: US16395674

    Filing Date: 2019-04-26

    Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network having been trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample, the output image having one or more of improved spatial resolution, depth-of-field, signal-to-noise ratio, and/or image contrast.
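
    As a rough illustration of the training setup described above, the sketch below (PyTorch, with a toy CNN and random tensors standing in for real co-registered image patches; this is not the patented network) trains an image-to-image model on low-/high-resolution pairs and then applies it to a low-resolution input:

```python
import torch
import torch.nn as nn

# Toy image-to-image network; layer sizes are placeholders.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

low_res = torch.rand(8, 1, 64, 64)    # stand-in for low-resolution input patches
high_res = torch.rand(8, 1, 64, 64)   # co-registered high-resolution targets

for _ in range(10):                   # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(low_res), high_res)
    loss.backward()
    opt.step()

enhanced = model(low_res)             # inference: improved-resolution estimate
```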

    DEVICES AND METHODS EMPLOYING OPTICAL-BASED MACHINE LEARNING USING DIFFRACTIVE DEEP NEURAL NETWORKS

    Publication No.: US20210142170A1

    Publication Date: 2021-05-13

    Application No.: US17046293

    Filing Date: 2019-04-12

    Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was experimentally validated by creating 3D-printed D2NNs that learned to perform handwritten digit classification and the function of an imaging lens at terahertz wavelengths. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection, and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
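
    A minimal numerical sketch of the forward model behind such a D2NN is shown below: a complex field is alternately propagated through free space (angular spectrum method) and modulated by phase-only layers, and the detector-plane intensity is read out. The wavelength, pixel pitch, layer spacing, and random phase masks are illustrative placeholders, not trained designs from the patent:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex 2-D field over distance z with the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    arg = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg >= 0)      # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy forward pass through a stack of phase-only diffractive layers.
wavelength, dx, z = 0.75e-3, 0.4e-3, 30e-3    # THz-scale numbers, illustrative only
layers = [np.exp(1j * 2 * np.pi * np.random.rand(128, 128)) for _ in range(5)]

field = np.ones((128, 128), dtype=complex)    # stand-in for the input object field
for phase_mask in layers:
    field = angular_spectrum_propagate(field, wavelength, dx, z)
    field = field * phase_mask                # modulation by one diffractive layer
field = angular_spectrum_propagate(field, wavelength, dx, z)
intensity = np.abs(field)**2                  # detector-plane intensity
```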

    MISALIGNMENT-RESILIENT DIFFRACTIVE OPTICAL NEURAL NETWORKS

    Publication No.: US20230162016A1

    Publication Date: 2023-05-25

    Application No.: US17920778

    Filing Date: 2021-05-21

    CPC classification number: G06N3/067

    Abstract: A diffractive optical neural network includes one or more layers that are resilient to misalignments, fabrication-related errors, detector noise, and/or other sources of error. A diffractive optical neural network model is first trained with a computing device to perform a statistical inference task such as image classification (e.g., object classification). The model is trained using images or training optical signals along with random misalignments of the plurality of layers, fabrication-related errors, input plane or output plane misalignments, and/or detector noise, followed by computing an optical output of the diffractive optical neural network model through optical transmission and/or reflection resulting from the diffractive optical neural network and iteratively adjusting complex-valued transmission and/or reflection coefficients for each layer until optimized transmission/reflection coefficients are obtained. Once the model is optimized, the physical embodiment of the diffractive optical neural network is manufactured.
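
    One way to picture the training strategy described above is the PyTorch sketch below, in which each phase layer is randomly displaced at every training step so that the optimized phase profiles become tolerant to lateral misalignment. The propagation step is abstracted to a placeholder, and the layer shapes, shift range, and loss are assumptions for illustration only:

```python
import torch

n, num_layers = 64, 3
phases = [torch.zeros(n, n, requires_grad=True) for _ in range(num_layers)]
opt = torch.optim.Adam(phases, lr=0.01)

def propagate(field):          # placeholder for free-space diffraction between layers
    return field

def forward(field, max_shift=2):
    for phi in phases:
        # Random lateral misalignment (in pixels) injected at every training step.
        dx, dy = torch.randint(-max_shift, max_shift + 1, (2,))
        mask = torch.roll(torch.exp(1j * phi), shifts=(int(dx), int(dy)), dims=(0, 1))
        field = propagate(field) * mask
    return propagate(field)

x = torch.ones(n, n, dtype=torch.cfloat)       # stand-in input field
target = torch.rand(n, n)                      # stand-in desired intensity pattern

for _ in range(20):
    opt.zero_grad()
    out_intensity = forward(x).abs() ** 2
    loss = torch.mean((out_intensity - target) ** 2)
    loss.backward()
    opt.step()
```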

    SINGLE-SHOT AUTOFOCUSING OF MICROSCOPY IMAGES USING DEEP LEARNING

    Publication No.: US20230085827A1

    Publication Date: 2023-03-23

    Application No.: US17908864

    Filing Date: 2021-03-18

    Abstract: A deep learning-based offline autofocusing method and system, termed Deep-R, is disclosed herein: a trained neural network that rapidly and blindly autofocuses a single-shot microscopy image of a sample or specimen acquired at an arbitrary out-of-focus plane. The efficacy of Deep-R is illustrated using various tissue sections imaged with fluorescence and brightfield microscopy modalities, demonstrating single-snapshot autofocusing under different scenarios, such as a uniform axial defocus as well as a sample tilt within the field-of-view. Deep-R is significantly faster than standard online algorithmic autofocusing methods. This deep learning-based blind autofocusing framework opens up new opportunities for rapid microscopic imaging of large sample areas, while also reducing the photon dose on the sample.
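
    The sketch below illustrates, under stated assumptions, how training pairs for such a blind-autofocus network could be assembled: randomly defocused inputs paired with their in-focus targets. A Gaussian blur is used here only as a crude stand-in for real optical defocus; the actual Deep-R training data are experimentally acquired defocused/in-focus image pairs:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
in_focus = rng.random((16, 128, 128))                  # stand-in in-focus images

pairs = []
for img in in_focus:
    defocus_um = rng.uniform(-10, 10)                  # arbitrary axial offset
    blurred = gaussian_filter(img, sigma=abs(defocus_um) * 0.3)  # crude defocus proxy
    pairs.append((blurred, img))                       # (input, target) for training
```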

    VOLUMETRIC MICROSCOPY METHODS AND SYSTEMS USING RECURRENT NEURAL NETWORKS

    Publication No.: US20220122313A1

    Publication Date: 2022-04-21

    Application No.: US17505553

    Filing Date: 2021-10-19

    Abstract: A deep learning-based volumetric image inference system and method are disclosed that use 2D images sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network (referred to herein as Recurrent-MZ), 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to increase the depth-of-field of a 63×/1.4 NA objective lens by approximately 50-fold, while also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. The generalization of this recurrent network for 3D imaging is further demonstrated by showing its resilience to varying imaging conditions, including, e.g., different sequences of input images covering various axial permutations and unknown axial positioning errors.
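
    A minimal sketch of the recurrent fusion idea is given below: a few 2D planes acquired at arbitrary axial positions are folded, one at a time, into a hidden state that is then decoded into an extended-depth output stack. The tiny convolutional layers, shapes, and axial-position encoding are placeholders and do not reproduce the Recurrent-MZ architecture:

```python
import torch
import torch.nn as nn

class RecurrentFusion(nn.Module):
    def __init__(self, ch=16, out_planes=32):
        super().__init__()
        self.encode = nn.Conv2d(2, ch, 3, padding=1)           # image + axial-position map
        self.update = nn.Conv2d(2 * ch, ch, 3, padding=1)      # recurrent state update
        self.decode = nn.Conv2d(ch, out_planes, 3, padding=1)  # hidden state -> output z-stack

    def forward(self, images, z_positions):
        # images: (N, H, W) sparse axial scans; z_positions: their axial locations
        h = torch.zeros(1, self.encode.out_channels, *images.shape[1:])
        for img, z in zip(images, z_positions):
            zmap = torch.full_like(img, float(z))               # encode where this plane sits
            x = torch.stack([img, zmap]).unsqueeze(0)           # (1, 2, H, W)
            h = torch.tanh(self.update(torch.cat([self.encode(x), h], dim=1)))
        return self.decode(h)                                   # (1, out_planes, H, W) volume

model = RecurrentFusion()
planes = torch.rand(3, 64, 64)          # e.g. three input planes at arbitrary depths
volume = model(planes, z_positions=[-2.0, 0.5, 3.0])
```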
