Abstract:
A three-dimensional (3D) scene is computationally reconstructed using a combination of plural modeling techniques. Point clouds representing an object in the 3D scene are generated by different modeling techniques, and each point is encoded with a confidence value that reflects how accurately it describes the surface of the object in the 3D scene, based on the strengths and weaknesses of each modeling technique. The point clouds are merged such that, for each location on the object, the point is selected from the modeling technique that provides the highest confidence.
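A minimal sketch of the merging step, assuming each point cloud is an (N, 4) array of x, y, z coordinates plus a per-point confidence, and approximating "each location on the object" by a voxel grid (the abstract does not specify how locations are matched, so the voxel size is a hypothetical parameter):

```python
import numpy as np

def merge_point_clouds(clouds, voxel_size=0.01):
    """Merge point clouds given as (N, 4) arrays of x, y, z, confidence.

    For each voxel (a stand-in for a 'location on the object'), keep the
    single point whose modeling technique reported the highest confidence.
    """
    points = np.vstack(clouds)                      # all candidate points
    keys = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    best = {}                                       # voxel key -> best point
    for key, point in zip(map(tuple, keys), points):
        if key not in best or point[3] > best[key][3]:
            best[key] = point
    return np.array(list(best.values()))

# Example: two techniques describe the same two regions with different
# confidence; the merge keeps the more confident point per region.
cloud_a = np.array([[0.0, 0.0, 0.0, 0.9], [1.0, 0.0, 0.0, 0.2]])
cloud_b = np.array([[0.001, 0.0, 0.0, 0.4], [1.001, 0.0, 0.0, 0.8]])
merged = merge_point_clouds([cloud_a, cloud_b], voxel_size=0.5)
print(merged)  # keeps the 0.9- and 0.8-confidence points
```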
Abstract:
There is provided a computerized system and method of generating a unique identification associated with a gemstone, usable for unique identification of the gemstone. The method comprises: obtaining one or more images of the gemstone, the one or more images captured at one or more viewing angles relative to the gemstone and to a light pattern, thus giving rise to a representative group of images; processing the representative group of images to generate a set of rotation-invariant values informative of the rotational cross-correlation relationship characterizing the images in the representative group; and using the generated set of rotation-invariant values to generate a unique identification associated with the gemstone. The unique identification associated with the gemstone can be further compared with an independently generated unique identification associated with the gemstone in question, or with a class-indicative unique identification.
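The abstract does not name the transform behind the rotation-invariant values, so the following is an illustrative construction of one standard choice: resample each image onto a polar grid, where an in-plane rotation becomes a circular shift along the angle axis, and take FFT magnitudes along that axis, which such shifts cannot change (only the phase carries the rotation):

```python
import numpy as np

def polar_resample(image, n_radii=32, n_angles=64):
    """Sample a square grayscale image on a polar grid around its center."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(1, min(cy, cx) - 1, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    ys = cy + radii[:, None] * np.sin(angles)[None, :]
    xs = cx + radii[:, None] * np.cos(angles)[None, :]
    return image[ys.round().astype(int), xs.round().astype(int)]

def rotation_invariant_signature(image):
    """FFT magnitudes along the angle axis; a rotation only circularly
    shifts that axis, changing phase but not magnitude."""
    return np.abs(np.fft.fft(polar_resample(image), axis=1))

# Invariance check: a circular shift of the polar samples (i.e., an
# in-plane rotation) leaves the signature unchanged.
img = np.random.default_rng(0).random((65, 65))
polar = polar_resample(img)
shifted = np.roll(polar, 7, axis=1)
assert np.allclose(np.abs(np.fft.fft(polar, axis=1)),
                   np.abs(np.fft.fft(shifted, axis=1)))
```

Comparing two such signatures, e.g. by a normalized distance, would then support both the one-to-one comparison and the class-indicative comparison mentioned above.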
Abstract:
Barcode tag conditions on sample tubes are detected using side-view images of the tubes, to streamline handling in clinical laboratory automation systems. The condition of a tag may be classified into classes, each subdivided into additional subcategories that capture individual characteristics of tag quality. According to an embodiment, a tube characterization station (TCS) is utilized to obtain the side-view images. The TCS enables the simultaneous or near-simultaneous collection of three images for each tube, resulting in a 360-degree side view for each tube. The method is based on a supervised scene-understanding concept that assigns each pixel a semantic meaning. Two parallel low-level cues for condition recognition, in combination with a tube-model extraction cue, may be utilized. The semantic scene information is then integrated into a mid-level representation for a final decision assigning the tube to one of the condition classes.
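A loose sketch of the mid-level integration step, assuming a per-pixel semantic label map is already available from the supervised segmentation; the label set, features, and thresholds below are illustrative placeholders rather than the patented cues:

```python
import numpy as np

# Illustrative label ids for the per-pixel semantic map.
BACKGROUND, TUBE, TAG, BARCODE = 0, 1, 2, 3

def mid_level_features(label_map):
    """Integrate pixel-wise semantics into a mid-level feature vector:
    area fractions plus a crude vertical tag-extent measure."""
    total = label_map.size
    tag = (label_map == TAG)
    barcode = (label_map == BARCODE)
    tube = (label_map == TUBE) | tag | barcode
    tag_rows = np.flatnonzero(tag.any(axis=1))
    tag_extent = ((tag_rows[-1] - tag_rows[0] + 1) / label_map.shape[0]
                  if tag_rows.size else 0.0)
    return np.array([tube.sum() / total, tag.sum() / total,
                     barcode.sum() / total, tag_extent])

def classify_condition(views):
    """Combine features from the three side views into one decision.
    The hand-set thresholds stand in for a trained classifier."""
    feats = np.mean([mid_level_features(v) for v in views], axis=0)
    if feats[2] > 0.05:
        return "barcode readable"
    if feats[1] > 0.05:
        return "tag present, barcode obscured"
    return "no tag detected"

# Toy view: a tube with a tag and a visible barcode region.
view = np.zeros((100, 40), dtype=int)
view[10:90, 5:35] = TUBE
view[30:60, 8:32] = TAG
view[35:55, 10:30] = BARCODE
print(classify_condition([view, view, view]))  # -> "barcode readable"
```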
Abstract:
A system and method for applying an ensemble of segmentations to a tissue sample at a blob level and at an image level to determine whether the tissue sample is representative of cancerous tissue. The ensemble of segmentations at the image level is used to accept or reject images based upon their segmentation quality, and both the blob-level and image-level segmentations are used to calculate a mean nuclear volume that discriminates between cancerous and normal classes of tissue samples.
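A minimal sketch of the two decision levels, under stated assumptions: image-level acceptance is approximated by pixel agreement among the ensemble's masks, and mean nuclear volume is estimated from 2D blob areas assuming roughly spherical nuclei (a stand-in for whatever estimator the method actually uses):

```python
import numpy as np

def accept_image(masks, agreement_threshold=0.8):
    """Image-level gate: accept an image only if the ensemble's binary
    segmentation masks agree on enough pixels (placeholder criterion)."""
    stack = np.stack(masks)
    agreement = np.mean(np.all(stack == stack[0], axis=0))
    return agreement >= agreement_threshold

def mean_nuclear_volume(blob_areas):
    """Estimate mean nuclear volume from 2D blob areas, assuming roughly
    spherical nuclei: r = sqrt(area / pi), V = (4/3) * pi * r**3."""
    radii = np.sqrt(np.asarray(blob_areas) / np.pi)
    return np.mean(4.0 / 3.0 * np.pi * radii ** 3)

def is_cancerous(blob_areas, volume_cutoff):
    """Blob-level decision: elevated mean nuclear volume suggests cancer.
    The cutoff would be learned from labeled samples."""
    return mean_nuclear_volume(blob_areas) > volume_cutoff

# Usage: gate the image first, then score its accepted blobs.
masks = [np.ones((4, 4), int), np.ones((4, 4), int)]
if accept_image(masks):
    # Below the (hypothetical) cutoff, so classified normal -> False.
    print(is_cancerous(blob_areas=[40.0, 55.0, 61.0], volume_cutoff=300.0))
```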
Abstract:
Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated, processing, with little interaction between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.
Abstract:
Provided are a method and apparatus for processing a to-be-processed block of a urine sediment image. The method comprises: dividing the to-be-processed block into a plurality of grids; calculating an n-dimensional local feature vector for each grid of the plurality of grids, where n is a positive integer; merging, within the to-be-processed block, at least two adjacent grids of the plurality of grids into an intermediate block; calculating an intermediate-block merging feature vector for the intermediate block; combining, according to a predetermined combination rule, the intermediate-block merging feature vectors obtained for different intermediate blocks into a general combination feature vector of the to-be-processed block; and processing the to-be-processed block by taking the general combination feature vector as a feature in a feature set for block processing.
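A runnable sketch of this pipeline, with an intensity histogram standing in for the unspecified n-dimensional local feature, element-wise mean pooling as the merging operation, and concatenation as the predetermined combination rule (all three are assumptions):

```python
import numpy as np

def grid_features(block, grid_rows=4, grid_cols=4, n=8):
    """Divide the to-be-processed block into grids and compute an
    n-dimensional local feature per grid (here a normalized intensity
    histogram, standing in for the method's local descriptor)."""
    gh, gw = block.shape[0] // grid_rows, block.shape[1] // grid_cols
    feats = {}
    for r in range(grid_rows):
        for c in range(grid_cols):
            cell = block[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            hist, _ = np.histogram(cell, bins=n, range=(0, 256))
            feats[(r, c)] = hist / max(cell.size, 1)
    return feats

def merge_adjacent(feats, pairs):
    """Merge at least two adjacent grids into an intermediate block by
    pooling their local feature vectors (element-wise mean here)."""
    return [np.mean([feats[g] for g in pair], axis=0) for pair in pairs]

def combine(intermediate_vectors):
    """Predetermined combination rule (assumed): concatenate the
    intermediate-block vectors into one general combination vector."""
    return np.concatenate(intermediate_vectors)

block = np.random.default_rng(1).integers(0, 256, size=(64, 64))
feats = grid_features(block)
inter = merge_adjacent(feats, [((0, 0), (0, 1)), ((2, 2), (3, 2))])
vector = combine(inter)   # feature for downstream block processing
```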
Abstract:
Disclosed are a method for verifying facial data and a system thereof. The method comprises: a step of retrieving a plurality of source-domain datasets from a first database and a target-domain dataset from a second database different from the first database; a step of determining the latent subspace that best matches the target-domain dataset, and a posterior distribution P for the determined latent subspace, from the target-domain dataset and the source-domain datasets; a step of determining the information M shared between the target-domain dataset and the source-domain datasets; and a step of establishing a multi-task learning model from the posterior distribution P and the shared information M on the target-domain dataset and the source-domain datasets.
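The abstract leaves the model unspecified, so the following is only a loose structural sketch of the data flow: PCA stands in for the latent subspace fit to the target domain, a Gaussian fit in that subspace stands in for the posterior distribution P, and the average density of a source domain under that posterior is a crude proxy for the shared information M:

```python
import numpy as np

def fit_latent_subspace(X, k=2):
    """Stand-in for 'the latent subspace best matching the target domain':
    the top-k principal directions of the target data."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:k]

def gaussian_posterior(Z):
    """Gaussian fit (mean, covariance) to latent codes: a crude stand-in
    for the posterior distribution P over the latent subspace."""
    return Z.mean(axis=0), np.cov(Z.T) + 1e-6 * np.eye(Z.shape[1])

def domain_weight(Zs, mu, cov):
    """Average density of a source domain under the target posterior:
    a proxy for the information it shares with the target (the M above)."""
    d = Zs - mu
    mahal = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)
    return float(np.exp(-0.5 * mahal).mean())

rng = np.random.default_rng(0)
target = rng.normal(size=(100, 5))
sources = [rng.normal(size=(80, 5)), rng.normal(loc=3.0, size=(80, 5))]

W = fit_latent_subspace(target)
mu, cov = gaussian_posterior(target @ W.T)
weights = [domain_weight(S @ W.T, mu, cov) for S in sources]
# Source domains closer to the target posterior get more weight in the
# multi-task model; the second, shifted source is down-weighted here.
```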
Abstract:
The invention relates to a device for identifying a person, comprising a real-image camera (10) for capturing a reflection image of the person's face (14) in the visible spectral range, a thermal imaging camera (12) for capturing an infrared emission image of the person's face, a real-image analysis unit (16), and an infrared-image analysis unit (18). The real-image camera is designed to transmit the reflection image in the form of reflection image data (20) to the real-image analysis unit. The thermal imaging camera is designed to transmit the infrared emission image in the form of infrared image data (22) to the infrared-image analysis unit. The real-image analysis unit is designed to determine first biometric features (24) of the face from the reflection image data, to receive a reference image (30) comprising a personal identification (36), to determine second biometric features (32) from said reference image and, if the first and second biometric features coincide to a defined degree, to make the personal identification available to an output control (42). The infrared-image analysis unit is designed to detect a temperature distribution (34) from the infrared image data, to determine whether said temperature distribution at least partially corresponds to the temperature distribution in the face of a living person and, if this is the case, to instruct the output control to output the personal identification.
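A minimal sketch of the output-control logic, assuming the two analyses reduce to a biometric feature comparison and a temperature plausibility check; the similarity measure, temperature range, and thresholds are hypothetical:

```python
import numpy as np

def features_match(first, second, threshold=0.9):
    """Do the first (reflection-image) and second (reference-image)
    biometric features coincide to the defined degree? Cosine
    similarity serves as a placeholder comparison."""
    a, b = np.asarray(first, float), np.asarray(second, float)
    sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return sim >= threshold

def is_live(face_temperatures, low=30.0, high=38.0, min_fraction=0.5):
    """Does the measured temperature distribution at least partially
    match that of a living face? Here: enough pixels fall within a
    plausible skin-temperature range (degrees Celsius)."""
    t = np.asarray(face_temperatures, float)
    return np.mean((t >= low) & (t <= high)) >= min_fraction

def output_control(first_feats, second_feats, person_id, temperatures):
    """Release the personal identification only if both the biometric
    match and the liveness check succeed."""
    if features_match(first_feats, second_feats) and is_live(temperatures):
        return person_id
    return None
```

Requiring both analyses to agree mirrors the device's split between the real-image path (identity) and the infrared path (liveness), so a photograph of the person fails the thermal check even if the biometric features match.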