Abstract:
A person's region is detected from the input video of a surveillance camera; the person's direction within that region is determined; the separability of the person's clothes is determined to generate clothing-segment separation information; and clothing features representing the visual appearance of the person's clothes in the region are extracted in consideration of the person's direction and the clothing-segment separation information. The person's direction is determined from the person's face direction, the person's motion, and clothing symmetry. The clothing-segment separation information is generated from analysis information regarding the geometrical shape of the person's region and from visible-segment information, which represents the clothing segments that are visible, derived from the person's region and background prior information. A person is then retrieved based on the result of matching between a clothing query text, representing the type and color of the person's clothes, and the extracted clothing features.
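The final matching step described above can be sketched as follows. This is a minimal illustration only: the color table, feature representation (per-segment mean RGB), and similarity measure are all assumptions, not the patented method.

```python
import numpy as np

# Hypothetical reference colors for query terms (illustrative values).
QUERY_COLORS = {
    "red":   np.array([255.0, 0.0, 0.0]),
    "blue":  np.array([0.0, 0.0, 255.0]),
    "white": np.array([255.0, 255.0, 255.0]),
}

def match_score(query_color, query_type, person_features):
    """Score a clothing query against one person's extracted features.

    person_features: {clothing segment type -> mean RGB of that segment}.
    Returns a similarity in (0, 1], or 0.0 if the segment type is absent.
    """
    if query_type not in person_features:
        return 0.0
    ref = QUERY_COLORS[query_color]
    feat = person_features[query_type]
    # Similarity decreases with Euclidean distance in RGB space.
    return 1.0 / (1.0 + np.linalg.norm(ref - feat))

# Toy gallery of two detected people with extracted shirt colors.
people = {
    "p1": {"shirt": np.array([250.0, 10.0, 5.0])},   # reddish shirt
    "p2": {"shirt": np.array([10.0, 20.0, 240.0])},  # bluish shirt
}
ranked = sorted(people, key=lambda p: match_score("red", "shirt", people[p]),
                reverse=True)
```

Ranking the gallery by this score surfaces the person whose extracted clothing color best matches the query.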
Abstract:
A method is provided for classifying the pixels of an image into superpixels, executable on one or more electronic devices. Seed pixels are selected from the image. Color distances between the seed pixels and proximal pixels (pixels located near the corresponding seed pixels) are calculated in a color space that approximates human visual perception. Geographic (spatial) distances between pixel pairs are also computed, and these are combined with the color distances as the reference for classifying pixels into superpixels.
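The combined color-plus-spatial assignment can be sketched as below. This is a SLIC-style illustration under assumed details: the weighting scheme, function names, and the simple additive combination are not taken from the abstract.

```python
import numpy as np

def assign_superpixels(lab_image, seeds, weight=10.0):
    """Assign each pixel to the seed minimizing a combined distance.

    lab_image: H x W x 3 array in a perceptual color space (e.g. CIELAB).
    seeds: list of (row, col) seed positions.
    weight: trade-off between spatial and color distance (assumed form).
    """
    h, w, _ = lab_image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    labels = np.full((h, w), -1)
    best_d = np.full((h, w), np.inf)
    for k, (sy, sx) in enumerate(seeds):
        # Color distance to this seed (Euclidean in the color space).
        d_color = np.linalg.norm(lab_image - lab_image[sy, sx], axis=2)
        # Geographic (spatial) distance to this seed.
        d_space = np.hypot(ys - sy, xs - sx)
        # Combined reference distance for classification.
        d = d_color + weight * d_space
        mask = d < best_d
        labels[mask] = k
        best_d[mask] = d[mask]
    return labels

# Toy image: left half one color, right half another; two seeds.
img = np.zeros((4, 4, 3))
img[:, 2:, 0] = 100.0
labels = assign_superpixels(img, [(1, 0), (1, 3)], weight=1.0)
```

With a low spatial weight, the color term dominates and the two flat regions fall cleanly into the two superpixels.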
Abstract:
An image processing apparatus comprises an edge detector configured to create first image data containing information about the edge parts of an image of an object captured by an image pickup device, a frequency analyzer configured to create second image data by dividing the image into frequency bands, and an output unit configured to output, based on the first image data and the second image data, distance information from the image pickup device to the object in the image.
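The two analysis branches can be sketched as a crude focus/distance cue: an edge-energy map stands in for the first image data, and per-band spectral energy stands in for the second. The band split, the combination rule, and all names are illustrative assumptions; the abstract does not specify how the two are fused.

```python
import numpy as np

def sharpness_score(image):
    """Combine an edge cue and a frequency-band cue into one score.

    Sharper (in-focus) regions produce stronger edges and relatively
    more high-frequency content; a real apparatus would map such cues
    to distance via defocus calibration.
    """
    # Branch 1: edge detector (gradient magnitude energy).
    gy, gx = np.gradient(image.astype(float))
    edge_energy = np.mean(gx ** 2 + gy ** 2)
    # Branch 2: frequency analysis (split spectrum into low/high bands).
    spectrum = np.abs(np.fft.fft2(image))
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    high_band = np.hypot(fy, fx) > 0.25   # assumed band boundary
    hf_ratio = spectrum[high_band].sum() / (spectrum.sum() + 1e-12)
    # Assumed fusion: product of the two cues.
    return edge_energy * hf_ratio

step = np.zeros((16, 16)); step[:, 8:] = 1.0          # sharp edge
ramp = np.tile(np.linspace(0, 1, 16), (16, 1))        # smooth gradient
```

A hard step edge scores higher than a smooth ramp of the same brightness range, matching the intuition behind using both edge and frequency data.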
Abstract:
A method is provided for use with a stream of images defining a video. The method includes the step of periodically conducting a face-finding operation on an image in the stream. With respect to the last image in the stream preceding the current image in which one or more faces were found, a tracker based upon wavelet decomposition is used to find a counterpart for each face found in that last image for which no counterpart was found in the current image.
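The control flow (periodic detection with a tracker fallback) can be sketched as follows. Only the scheduling logic is shown; `detect_faces` and `wavelet_track` are stand-ins for a real face detector and the wavelet-decomposition tracker, and the period value is an arbitrary choice.

```python
def process_stream(frames, detect_faces, wavelet_track, period=5):
    """Run the detector periodically; track faces in between, and use
    the tracker to recover faces the detector misses."""
    tracked = []       # faces carried over from the previous frame
    history = []
    for i, frame in enumerate(frames):
        if i % period == 0:
            found = detect_faces(frame)   # periodic face-finding pass
            # For each previously known face with no counterpart in
            # this frame's detections, fall back to the tracker.
            missing = [f for f in tracked if f not in found]
            found = found + [wavelet_track(frame, f) for f in missing]
        else:
            # Between detection passes, track every known face.
            found = [wavelet_track(frame, f) for f in tracked]
        tracked = found
        history.append(list(found))
    return history

# Toy run: the detector only fires on frame 0; the tracker (here a
# trivial identity stub) keeps the face alive afterwards.
detect = lambda frame: ["face_A"] if frame == 0 else []
track = lambda frame, face: face
history = process_stream(list(range(6)), detect, track, period=5)
```

Even though the detector finds nothing after frame 0, the tracker carries the face through every subsequent frame, including the next detection pass at frame 5.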
Abstract:
A method is provided for recognizing a ceiling portion, a vertical-object portion and a ground portion in an image of an indoor scene, executed in an electronic system. The image is divided into a plurality of pixel sets. For each pixel set, expected values under a ceiling distribution function, a vertical-object distribution function and a ground distribution function are calculated. These expected values are compared to determine whether each pixel set belongs to a ceiling object, a vertical object or a ground object.
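The compare-and-pick-the-maximum step can be sketched with assumed distribution functions. The abstract does not give their form; here each is a 1-D Gaussian over normalized image-row position (ceilings near the top, ground near the bottom), which is purely an illustrative choice.

```python
import numpy as np

# Assumed (mean, std) of each class over normalized row position,
# where 0 = top of the image and 1 = bottom. Not from the abstract.
DISTRIBUTIONS = {
    "ceiling":  (0.1, 0.20),
    "vertical": (0.5, 0.25),
    "ground":   (0.9, 0.20),
}

def classify_region(mean_row, img_height):
    """Classify a pixel set by its mean row position: evaluate each
    class's distribution function there and take the largest value."""
    y = mean_row / img_height
    def gauss(y, mu, s):
        return np.exp(-0.5 * ((y - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    scores = {name: gauss(y, mu, s) for name, (mu, s) in DISTRIBUTIONS.items()}
    return max(scores, key=scores.get)
```

A pixel set averaging near the top of a 100-row image classifies as ceiling, one near the middle as a vertical object, and one near the bottom as ground.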
Abstract:
A process and system to provide identification and assessment of damage to a geographic area may include acquiring imagery data of the geographic area, processing the imagery data using wavelet transformation to identify damage, and outputting a map showing the damage condition of the area. Processing the imagery data may use a wavelet transformation that outputs wavelet-transformation images. Damage categories for at least one location in the imagery data may be provided using discriminant analysis applied to the wavelet-transformation images. The outputted maps and damage categories may be used to assess damage to areas affected by catastrophic events such as, e.g., hurricanes, floods, earthquakes, tornadoes and the like. This process is faster and may be more accurate than current assessment techniques, thereby permitting quick responses to such events.
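The wavelet-feature step can be sketched with a one-level 2-D Haar decomposition, whose detail sub-bands capture the texture changes damage tends to introduce. The Haar choice, the energy features, and the threshold stand-in for discriminant analysis are all assumptions for illustration.

```python
import numpy as np

def haar_features(tile):
    """One-level 2-D Haar decomposition of an even-sided image tile;
    returns the mean energies of the three detail sub-bands."""
    a = tile[0::2, 0::2].astype(float)
    b = tile[0::2, 1::2].astype(float)
    c = tile[1::2, 0::2].astype(float)
    d = tile[1::2, 1::2].astype(float)
    lh = (a - b + c - d) / 4   # horizontal detail
    hl = (a + b - c - d) / 4   # vertical detail
    hh = (a - b - c + d) / 4   # diagonal detail
    return np.array([np.mean(lh ** 2), np.mean(hl ** 2), np.mean(hh ** 2)])

def is_damaged(features, threshold=0.1):
    """Crude stand-in for the discriminant-analysis step: flag tiles
    whose detail energy exceeds a (hypothetical) learned threshold."""
    return features.sum() > threshold

smooth = np.ones((8, 8))                                   # intact surface
textured = (np.indices((8, 8)).sum(0) % 2).astype(float)   # debris-like texture
```

A flat tile yields zero detail energy, while a highly textured tile yields a large diagonal-detail energy, so the two separate cleanly.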
Abstract:
Technologies are generally presented for employing enhanced expectation maximization (EEM) in image retrieval and authentication. Using a uniform distribution as the initial condition, the EEM may converge iteratively to a global optimum. If a fixed realization of the uniform distribution is used as the initial condition, the process is also repeatable. In some examples, a positive perturbation scheme may be used to avoid the boundary overflow that often occurs with conventional EM algorithms. To reduce computation time and resource consumption, a one-dimensional two-component Gaussian Mixture Model (GMM) of the image histogram and a wavelet decomposition of the image may be employed.
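A plain 1-D EM loop for a two-component GMM, with a deterministic (repeatable) initialization and a small positive floor on the responsibilities, loosely mirrors the ideas in the abstract. This is ordinary EM with assumed details, not the patented EEM algorithm itself.

```python
import numpy as np

def em_gmm2(x, n_iter=50, eps=1e-6):
    """Fit a two-component 1-D Gaussian mixture with EM.

    Initialization is deterministic (data-range endpoints), so repeated
    runs on the same data give the same result; `eps` keeps the
    responsibilities strictly positive, a crude nod to the positive
    perturbation scheme mentioned in the abstract.
    """
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])        # deterministic start
    var = np.full(2, x.var() + eps)
    w = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: posterior responsibilities, floored at eps.
        pdf = (w / np.sqrt(2 * np.pi * var)) * \
              np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        r = pdf / (pdf.sum(axis=1, keepdims=True) + eps)
        r = np.clip(r, eps, None)
        # M-step: re-estimate weights, means, variances.
        nk = r.sum(axis=0)
        w = nk / nk.sum()
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + eps
    return w, mu, var

# Two well-separated clusters of samples (e.g. a bimodal histogram).
x = np.r_[np.linspace(-1, 1, 20), np.linspace(9, 11, 20)]
w, mu, var = em_gmm2(x)
```

On this bimodal input the fitted means land near the two cluster centers, and the mixture weights stay normalized.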
Abstract:
Accurate localization of isolated particles is important in single-particle-based super-resolution microscopy. It allows the imaging of biological samples with nanometer-scale resolution using a simple fluorescence microscopy setup. Nevertheless, conventional techniques for localizing single particles can take minutes to hours of computation time because they require up to a million localizations to form an image. In contrast, the present particle localization techniques use wavelet-based image decomposition and image segmentation to achieve nanometer-scale resolution in two dimensions within seconds to minutes. This two-dimensional localization can be augmented with localization in a third dimension based on a fit to the imaging system's point-spread function (PSF), which may be asymmetric along the optical axis. For an astigmatic imaging system, the PSF is an ellipse whose eccentricity and orientation vary along the optical axis. When implemented with a mix of CPU/GPU processing, the present techniques are fast enough to localize single particles while imaging (in real-time).
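The two localization steps can be sketched separately: a segmentation-plus-centroid step for the 2-D position (the wavelet denoising that would precede thresholding is omitted), and a calibration lookup for the axial position from astigmatic spot widths. All names and the calibration table are illustrative assumptions.

```python
import numpy as np

def centroid_localization(img, threshold):
    """Segment pixels above a (wavelet-denoised) threshold and return
    the intensity-weighted centroid as a sub-pixel 2-D position."""
    mask = img > threshold
    ys, xs = np.nonzero(mask)
    w = img[mask]
    return (np.sum(ys * w) / w.sum(), np.sum(xs * w) / w.sum())

def z_from_widths(sx, sy, calib):
    """Look up axial position from fitted astigmatic spot widths.

    calib: list of (z, sigma_x, sigma_y) samples from an assumed
    calibration of the astigmatic PSF along the optical axis.
    """
    errs = [(z, (sx - cx) ** 2 + (sy - cy) ** 2) for z, cx, cy in calib]
    return min(errs, key=lambda t: t[1])[0]

# Synthetic isotropic spot centered at row 10, column 12.
ys, xs = np.mgrid[0:21, 0:25]
img = np.exp(-((ys - 10) ** 2 + (xs - 12) ** 2) / 4.0)
cy, cx = centroid_localization(img, 0.01)

# Toy astigmatism calibration: sigma_x shrinks and sigma_y grows with z.
calib = [(-100, 2.0, 1.0), (0, 1.5, 1.5), (100, 1.0, 2.0)]
```

The centroid recovers the synthetic spot's center, and an elongated-in-y spot maps to the positive-z calibration sample.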
Abstract:
A digital filter bank having a number J≧1 of stages is disclosed. For each integer j such that 1≦j≦J, the j-th stage includes a plurality of filtering units (20, 21) each receiving an input signal of the j-th stage. These filtering units include a low-pass filtering unit (20) using real filtering coefficients and at least one band-pass filtering unit (21) using complex filtering coefficients. Following each band-pass filtering unit of the j-th stage, a respective modulus processing unit (25) generates a processed real signal as a function of squared moduli of complex output values of the band-pass filtering unit. The input signal of the first stage is a digital signal supplied to the digital filter bank, while for 1
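One stage of the described structure can be sketched as below: a low-pass branch with real coefficients (whose output would feed the next stage) and a band-pass branch with complex coefficients followed by the squared-modulus processing unit. The filter coefficients here are simple illustrative choices, not the patent's.

```python
import numpy as np

def filter_bank_stage(x, h_low, g_band):
    """One stage: real low-pass branch plus complex band-pass branch
    with squared-modulus post-processing yielding a real signal."""
    low = np.convolve(x, h_low, mode="same")    # real coefficients
    band = np.convolve(x, g_band, mode="same")  # complex coefficients
    envelope = np.abs(band) ** 2                # modulus unit: real output
    return low, envelope

n = np.arange(64)
h_low = np.ones(4) / 4                                    # real low-pass
g_band = np.exp(2j * np.pi * 0.25 * np.arange(8)) / 8     # complex band-pass at f=0.25

# A tone at the band-pass center vs. a DC (constant) signal.
_, env_tone = filter_bank_stage(np.cos(2 * np.pi * 0.25 * n), h_low, g_band)
_, env_dc = filter_bank_stage(np.ones(64), h_low, g_band)
```

The modulus output is large for a tone inside the band and near zero for a DC input, illustrating how each band-pass branch produces a real sub-band energy signal; in a multi-stage bank, the low-pass output would be the input to the following stage.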