Abstract:
An artificial neural network system for image classification, including multiple independent individual convolutional neural networks (CNNs) connected in multiple stages, each CNN configured to process an input image to calculate a pixelwise classification. The output of an earlier stage CNN, which is a class score image having the same height and width as its input image and a depth of N representing the probability of each pixel of the input image belonging to each of N classes, is input into the next stage CNN as its input image. When training the network system, the first stage CNN is trained using first training images and corresponding label data; then second training images are forward propagated through the trained first stage CNN to generate corresponding class score images, which are used, along with the label data corresponding to the second training images, to train the second stage CNN.
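For illustration only, a minimal PyTorch sketch of such a two-stage arrangement is given below; the `StageCNN` module, its layer sizes, the number of classes, and the training loop are assumptions, not the claimed implementation.

```python
# Hypothetical sketch of a two-stage cascaded CNN for pixelwise classification.
# Layer sizes, module names, and the training loop are illustrative assumptions.
import torch
import torch.nn as nn

N_CLASSES = 3  # the value of N is an assumption

class StageCNN(nn.Module):
    """A CNN whose output has the same height/width as its input and depth N."""
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, N_CLASSES, kernel_size=1),  # per-pixel class scores
        )
    def forward(self, x):
        return self.features(x)

def train_stage(model, images, labels, epochs=10):
    """Train one stage with cross-entropy over per-pixel labels."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()

# Stage 1 is trained on raw images; stage 2 is trained on the class-score
# images produced by the already-trained stage 1.
stage1 = StageCNN(in_channels=1)          # grayscale input assumed
stage2 = StageCNN(in_channels=N_CLASSES)  # consumes stage-1 class scores

first_images = torch.rand(4, 1, 64, 64)
first_labels = torch.randint(0, N_CLASSES, (4, 64, 64))
train_stage(stage1, first_images, first_labels)

second_images = torch.rand(4, 1, 64, 64)
second_labels = torch.randint(0, N_CLASSES, (4, 64, 64))
with torch.no_grad():
    score_images = torch.softmax(stage1(second_images), dim=1)  # per-pixel probabilities
train_stage(stage2, score_images, second_labels)
```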
Abstract:
A method and system are disclosed for recognizing an object, the method including emitting one or more arranged patterns of infrared (IR) rays from an infrared emitter towards a projection region, the one or more arranged patterns of infrared rays forming unique dot patterns; mapping the one or more arranged patterns of infrared rays on the projection region to generate a reference image; capturing an IR image and an RGB image of an object with a wearable device, the wearable device including an IR camera and an RGB camera; extracting IR dots from the IR image and determining a match between the extracted IR dots and the reference image; determining a position of the RGB image on the reference image; and mapping the position of the RGB image to a coordinate on the projection region.
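For illustration only, the dot-matching and coordinate-mapping steps might be sketched as follows; the thresholds, the brute-force correlation search, and the proportional coordinate mapping are assumptions rather than the claimed method.

```python
# Hypothetical sketch: extract IR dots, locate them on a reference dot image,
# and map the found position to a coordinate on the projection region.
import numpy as np

def extract_ir_dots(ir_image, threshold=0.8):
    """Return a binary map of bright IR dots in the captured IR image."""
    return (ir_image > threshold).astype(np.float32)

def locate_on_reference(dot_map, reference):
    """Slide the dot map over the reference image and return the offset
    (row, col) where the dot patterns agree best."""
    h, w = dot_map.shape
    best, best_pos = -1.0, (0, 0)
    for r in range(reference.shape[0] - h + 1):
        for c in range(reference.shape[1] - w + 1):
            score = np.sum(dot_map * reference[r:r + h, c:c + w])
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

def to_projection_coordinate(ref_pos, ref_shape, proj_shape):
    """Map a reference-image position to a projection-region coordinate by
    simple proportional scaling (a stand-in for a full calibration)."""
    return (ref_pos[0] * proj_shape[0] / ref_shape[0],
            ref_pos[1] * proj_shape[1] / ref_shape[1])

reference = (np.random.rand(120, 160) > 0.97).astype(np.float32)   # unique dot pattern
ir_patch = reference[30:62, 50:82] + 0.05 * np.random.rand(32, 32)  # captured IR view

dots = extract_ir_dots(ir_patch)
pos = locate_on_reference(dots, reference)
print(to_projection_coordinate(pos, reference.shape, proj_shape=(480, 640)))
```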
Abstract:
Pathological analysis requires instance-level labeling of a histologic image with highly accurate boundaries. To this end, embodiments of the present invention provide a deep model that combines the DeepLab basis and the multi-layer deconvolution network basis in a unified model. The model is a deeply supervised network that can represent multi-scale and multi-level features. It achieved segmentation on the benchmark dataset at a level of accuracy significantly beyond all top-ranking methods in the 2015 MICCAI Gland Segmentation Challenge. Moreover, the overall performance of the model surpasses the most recently published state-of-the-art Deep Multi-channel Neural Networks, while the model is structurally much simpler, more computationally efficient, and has fewer weights to learn.
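As a rough sketch of how such a unified, deeply supervised model might be wired together (the channel counts, depths, and loss weighting are assumptions, not the disclosed architecture):

```python
# Hypothetical sketch combining an atrous-convolution (DeepLab-style) branch
# with a multi-layer deconvolution decoder under deep supervision.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnifiedGlandSegNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))
        # Atrous (dilated) convolutions enlarge the receptive field
        # without further downsampling.
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=2, dilation=2), nn.ReLU(),
                                  nn.Conv2d(64, 64, 3, padding=4, dilation=4), nn.ReLU())
        # Deconvolution decoder restores full resolution.
        self.dec = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.out = nn.Conv2d(32, n_classes, 1)
        # Side-output head used for deep supervision at the encoder level.
        self.side = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(f1)
        main = self.out(F.relu(self.dec(f2)))
        side = F.interpolate(self.side(f2), size=x.shape[2:], mode='bilinear',
                             align_corners=False)
        return main, side

def deeply_supervised_loss(main, side, target, side_weight=0.4):
    """Sum of the main loss and a down-weighted side-output loss."""
    ce = nn.CrossEntropyLoss()
    return ce(main, target) + side_weight * ce(side, target)

model = UnifiedGlandSegNet()
x = torch.rand(1, 3, 64, 64)
y = torch.randint(0, 2, (1, 64, 64))
main, side = model(x)
print(deeply_supervised_loss(main, side, y).item())
```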
Abstract:
A method, a computer readable medium, and a system are disclosed for cell segmentation. The method includes generating a binary mask from an input image of a plurality of cells, wherein the binary mask separates foreground cells from the background; classifying each of the cell regions of the binary mask into single cell regions, small cluster regions, and large cluster regions; performing, on each of the small cluster regions, a segmentation based on a contour shape of the small cluster region; performing, on each of the large cluster regions, a segmentation based on a texture in the large cluster region; and outputting an image with cell boundaries.
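A schematic outline of this pipeline, with placeholder split routines and assumed area thresholds (not the disclosed algorithm), might look like the following.

```python
# Hypothetical outline: binary mask -> classify regions by size ->
# shape-based split for small clusters, texture-based split for large clusters.
import numpy as np
from scipy import ndimage

def binary_mask(image, threshold=0.5):
    """Separate foreground cells from the background."""
    return image > threshold

def classify_regions(mask, single_max_area=200, small_max_area=800):
    """Label connected regions and bin them by area."""
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    single, small, large = [], [], []
    for idx, area in enumerate(areas, start=1):
        region = labels == idx
        if area <= single_max_area:
            single.append(region)
        elif area <= small_max_area:
            small.append(region)       # split later using contour shape
        else:
            large.append(region)       # split later using texture
    return single, small, large

def split_by_contour_shape(region):
    """Placeholder: would split a small cluster at contour concavities."""
    return [region]

def split_by_texture(image, region):
    """Placeholder: would split a large cluster using texture cues."""
    return [region]

image = np.random.rand(128, 128)
mask = binary_mask(image, threshold=0.9)
single, small, large = classify_regions(mask)
cells = single
for r in small:
    cells += split_by_contour_shape(r)
for r in large:
    cells += split_by_texture(image, r)
print(f"{len(cells)} cell regions")
```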
Abstract:
An artificial neural network system for image classification, formed of multiple independent individual convolutional neural networks (CNNs), each CNN being configured to process an input image patch to calculate a classification for the center pixel of the patch. The multiple CNNs have different receptive fields of view for processing image patches of different sizes centered at the same pixel. A final classification for the center pixel is calculated by combining the classification results from the multiple CNNs. An image patch generator is provided to generate the multiple input image patches of different sizes by cropping them from the original input image. The multiple CNNs have similar configurations; when training the artificial neural network system, one CNN is trained first, and the learned parameters are transferred to another CNN as initial parameters before the other CNN is further trained. The classification includes three classes, namely background, foreground, and edge.
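For illustration, a minimal sketch of two identically configured patch CNNs with parameter transfer and a combined center-pixel classification follows; the patch sizes, layer choices, and combination by averaging are assumptions, not the claimed system.

```python
# Hypothetical sketch: CNNs with the same configuration classify the center
# pixel of patches cropped at different sizes, and their probabilities are averaged.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Classifies the center pixel of a patch into background/foreground/edge."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # makes the head size-agnostic
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, patch):
        return self.head(self.body(patch).flatten(1))

def crop_centered(image, center, size):
    """Crop a size x size patch centered at (row, col) from a 1xHxW image."""
    r, c = center
    half = size // 2
    return image[:, r - half:r + half, c - half:c + half].unsqueeze(0)

cnn_small, cnn_large = PatchCNN(), PatchCNN()
# Transfer the learned parameters of one CNN to the other as its initialization.
cnn_large.load_state_dict(cnn_small.state_dict())

image = torch.rand(1, 128, 128)
center = (64, 64)
p_small = torch.softmax(cnn_small(crop_centered(image, center, 32)), dim=1)
p_large = torch.softmax(cnn_large(crop_centered(image, center, 64)), dim=1)
final_class = ((p_small + p_large) / 2).argmax(dim=1)  # combined classification
print(final_class)
```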
Abstract:
A method, computer readable medium, and system are disclosed for enhancing cell images for analysis. The method includes performing a multi-thresholding process on a cell image to generate a plurality of images from the cell image; smoothing each component within each of the plurality of images; merging the smoothed components into a merged layer; classifying each of the components of the merged layer into convex cell regions and concave cell regions; combining the concave cell regions with a cell boundary for each of the corresponding concave cell regions to generate a smoothed shape profile for each of the concave cell regions; and generating an output image by combining the convex cell regions with the concave cell regions having smoothed shape profiles.
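A rough sketch of this enhancement flow follows, with assumed threshold levels, a binary-opening smoother, and a convex-hull solidity test standing in for the disclosed operations.

```python
# Hypothetical sketch: multi-thresholding, per-component smoothing, merging,
# convex/concave classification, and recombination into an output image.
import numpy as np
from scipy import ndimage
from scipy.spatial import ConvexHull

def multi_threshold(image, levels=(0.3, 0.5, 0.7)):
    """Generate one binary image per threshold level."""
    return [image > t for t in levels]

def smooth_components(binary):
    """Smooth each connected component with a binary opening."""
    labels, n = ndimage.label(binary)
    out = np.zeros_like(binary)
    for i in range(1, n + 1):
        out |= ndimage.binary_opening(labels == i, iterations=1)
    return out

def is_convex(region, solidity_threshold=0.9):
    """Treat a component as convex when its area nearly fills its convex hull."""
    pts = np.argwhere(region)
    if len(pts) < 3:
        return True
    hull_area = ConvexHull(pts).volume  # the 2-D hull "volume" is its area
    return region.sum() / hull_area >= solidity_threshold

image = np.random.rand(64, 64)
merged = np.zeros((64, 64), dtype=bool)
for binary in multi_threshold(image):
    merged |= smooth_components(binary)          # merged layer

labels, n = ndimage.label(merged)
convex = [labels == i for i in range(1, n + 1) if is_convex(labels == i)]
concave = [labels == i for i in range(1, n + 1) if not is_convex(labels == i)]
# Concave regions would be combined with their cell boundaries to produce
# smoothed shape profiles before recombining with the convex regions.
output = np.zeros_like(merged)
for region in convex + concave:
    output |= region
print(output.sum())
```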