Abstract:
The present disclosure relates to methods and devices for estimating the accuracy and robustness of a model. According to an embodiment of the present disclosure, the method comprises: calculating a parameter representing the likelihood that a sample in a first dataset appears in a second dataset; calculating an accuracy score of the model with respect to the sample in the first dataset; calculating a weighted accuracy score of the model with respect to the sample in the first dataset, based on the accuracy score, by taking the parameter as a weight; and calculating, as the estimated accuracy of the model with respect to the second dataset, an adjusted accuracy of the model with respect to the first dataset according to the weighted accuracy score.
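A minimal sketch of this idea, assuming the per-sample parameter is an importance weight estimated with a simple domain classifier and the model exposes a scikit-learn-style `predict`; all function and variable names are illustrative rather than taken from the disclosure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_first, X_second):
    # Train a classifier to separate first-dataset (label 0) from second-dataset
    # (label 1) samples, then turn its probabilities into density-ratio weights,
    # i.e. the parameter describing how likely each first-dataset sample is to
    # appear in the second dataset.
    X = np.vstack([X_first, X_second])
    y = np.concatenate([np.zeros(len(X_first)), np.ones(len(X_second))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p = clf.predict_proba(X_first)[:, 1]
    return p / (1.0 - p + 1e-12)

def estimated_accuracy(model, X_first, y_first, X_second):
    w = importance_weights(X_first, X_second)        # per-sample weight (the parameter)
    correct = (model.predict(X_first) == y_first)    # per-sample accuracy score
    # Adjusted (weighted) accuracy on the first dataset, used as the estimate
    # of the model's accuracy on the second dataset.
    return float(np.sum(w * correct) / np.sum(w))
```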
Abstract:
A method and apparatus for training a face recognition model, the method comprising: removing black eyepits and sunglasses from first actual-scenario data, composed of images containing faces acquired from an actual scenario, to obtain second actual-scenario data; counting the proportion of glasses-wearing faces in the second actual-scenario data; dividing original training data, composed of images containing faces, into glasses-wearing first training data and non-glasses-wearing second training data, where the proportion of glasses-wearing faces in the original training data is lower than that in the second actual-scenario data; generating glasses-wearing third training data based on glasses data and the second training data; generating, based on the third training data and the original training data, fourth training data in which the proportion of glasses-wearing faces is equal to that in the second actual-scenario data; and training the face recognition model based on the fourth training data.
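A minimal sketch of the rebalancing arithmetic, assuming each training sample is an (image, has_glasses) pair and that `synthesize_glasses` stands in for the step that combines the glasses data with a non-glasses face; the names are illustrative:

```python
import random

def rebalance_glasses(original_data, target_ratio, synthesize_glasses):
    first = [s for s in original_data if s[1]]        # wearing-glasses training data
    second = [s for s in original_data if not s[1]]   # not-wearing-glasses training data
    n_g, n_n = len(first), len(second)
    # Number k of synthetic glasses samples needed so that
    # (n_g + k) / (n_g + n_n + k) == target_ratio.
    k = max(0, round((target_ratio * (n_g + n_n) - n_g) / (1.0 - target_ratio)))
    donors = random.choices(second, k=k)
    third = [(synthesize_glasses(img), True) for img, _ in donors]  # generated glasses data
    return original_data + third                      # fourth (rebalanced) training data
```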
Abstract:
A method for removing a mark in a document image includes: extracting connected components from a binary image corresponding to the document image; clustering the connected components based on grayscale features of the connected components to obtain a single clustering center; searching, within numerical ranges of a clustering radius R and a grayscale threshold T, for a combination (R, T) which causes an evaluation value based on the grayscale features of the connected components to be higher than a first evaluation threshold; and removing the mark in the document image based on the grayscale threshold in the combination. The method and apparatus according to the invention can remove a mark in a document image effectively and accurately.
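A rough sketch of this pipeline, assuming OpenCV connected components, a mean-intensity grayscale feature per component, and an evaluation value that simply measures how consistently a threshold T separates components from the single clustering center within radius R; the abstract does not specify the real evaluation criterion, so that part is purely illustrative:

```python
import cv2
import numpy as np

def remove_mark(gray, first_eval_threshold=0.9):
    # Binary image corresponding to the document image (dark ink -> foreground).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, labels = cv2.connectedComponents(binary)
    if n <= 1:
        return gray
    # Grayscale feature of each connected component: mean intensity of its pixels.
    means = np.array([gray[labels == i].mean() for i in range(1, n)])
    center = means.mean()                       # single clustering center
    best = None
    for R in range(10, 90, 10):                 # candidate clustering radii
        for T in range(60, 220, 10):            # candidate grayscale thresholds
            text_like = np.abs(means - center) <= R
            mark_like = means > T               # lighter components treated as the mark
            # Illustrative evaluation: fraction of components assigned consistently
            # (inside the cluster and not a mark, or outside the cluster and a mark).
            score = np.mean(text_like ^ mark_like)
            if score > first_eval_threshold and (best is None or score > best[0]):
                best = (score, R, T)
    if best is None:
        return gray
    T = best[2]
    cleaned = gray.copy()
    for i in range(1, n):
        if means[i - 1] > T:                    # remove components above the grayscale threshold
            cleaned[labels == i] = 255
    return cleaned
```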
Abstract:
An image processing device includes: an inputting unit for receiving a click performed on an object image contained in an image, to obtain a clicked point; a calculating unit for calculating an edge map of the image; an estimating unit for estimating a color model of the object image based on the clicked point and the edge map; an object classifying unit for classifying each pixel in the image, based on the edge map and the color model, so as to obtain a binary image of the image; and a detecting unit for detecting a region containing the object image based on the binary image. The image processing device and method according to the present disclosure can improve the accuracy of detecting the boundary of an object image such as a finger image, thus facilitating removal of the object image from the image and making the processed image more visually pleasing.
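A compact sketch of the classify-then-detect flow, assuming a Canny edge map, a single-Gaussian color model fitted to a small patch around the clicked point, and the connected component containing the click as the detected region; the parameter values and names are illustrative:

```python
import cv2
import numpy as np

def detect_object_region(image_bgr, click_xy, patch=15, max_dist=3.0):
    x, y = click_xy
    h, w = image_bgr.shape[:2]
    # Edge map of the image.
    edges = cv2.Canny(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY), 50, 150)
    # Color model estimated from the patch around the clicked point.
    p = image_bgr[max(0, y - patch):y + patch,
                  max(0, x - patch):x + patch].reshape(-1, 3).astype(float)
    mean, cov = p.mean(axis=0), np.cov(p.T) + 1e-3 * np.eye(3)
    inv = np.linalg.inv(cov)
    diff = image_bgr.reshape(-1, 3).astype(float) - mean
    dist = np.sqrt(np.einsum('ij,jk,ik->i', diff, inv, diff)).reshape(h, w)
    # Pixel classification: close to the color model and not on an edge.
    binary = ((dist < max_dist) & (edges == 0)).astype(np.uint8)
    # Detect the region containing the object: the component holding the click.
    _, labels = cv2.connectedComponents(binary)
    mask = (labels == labels[y, x]).astype(np.uint8) * 255
    return cv2.boundingRect(mask), mask
```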
Abstract:
An apparatus and a method for extracting a background luminance map of an image, and a de-shading apparatus and method. The apparatus includes: a luminance extracting unit configured to extract luminance values throughout the image to obtain a luminance map; a separating unit configured to separate the background from the foreground of the image based on the luminance map, to obtain an initial background luminance map; a top and bottom luminance obtaining unit configured to extract the top and bottom luminance of the initial background luminance map and, in the case where a part of the top and/or bottom luminance is missing, to supplement the missing part using existing data of the top and/or bottom luminance to obtain complete top and bottom luminance; and an interpolation unit configured to perform interpolation over the whole image based on the complete top and bottom luminance, to obtain the background luminance map of the image.
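One way to sketch this, assuming luminance is the grayscale channel, background/foreground separation is done with a large morphological closing, and the top/bottom luminance are the first and last rows of the initial estimate with gaps filled horizontally from existing data; the kernel size and names are illustrative:

```python
import cv2
import numpy as np

def background_luminance_map(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)            # luminance map
    # Separate background from foreground: a large closing suppresses dark text.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (31, 31))
    closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel).astype(float)
    gray = gray.astype(float)
    # Initial background luminance map: keep pixels the closing barely changed,
    # mark foreground pixels as missing (NaN).
    init_bg = np.where(np.abs(gray - closed) < 10, gray, np.nan)
    h, _ = init_bg.shape
    top, bottom = init_bg[0].copy(), init_bg[-1].copy()
    # Supplement missing parts of the top/bottom luminance from existing data.
    for row in (top, bottom):
        valid = ~np.isnan(row)
        if valid.any() and not valid.all():
            row[~valid] = np.interp(np.flatnonzero(~valid),
                                    np.flatnonzero(valid), row[valid])
    # Interpolate over the whole image between the complete top and bottom rows.
    t = np.linspace(0.0, 1.0, h)[:, None]
    return (1.0 - t) * top[None, :] + t * bottom[None, :]
```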
Abstract:
The present invention relates to a method and apparatus for processing a scanned image. The method for processing a scanned image comprises: a shaded region extracting step of extracting, as a shaded region, a region which is shaded by a shading object and lies in a margin in the vicinity of an edge of the scanned image; and a pixel value repairing step of repairing the values of pixels which lie both in a line segment and in the shaded region, by using a linear model according to the known values of pixels which lie both in the line segment and in the margin, the line segment passing through the shaded region and being parallel to the edge.
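A minimal sketch of the repair step, assuming the shaded region has already been extracted as a column range near the top edge, each image row inside the margin is one line segment parallel to that edge, and a first-order (linear) fit to the known margin pixels repairs the shaded pixels; the names are illustrative:

```python
import numpy as np

def repair_shaded_region(gray, margin_rows, shaded_cols):
    """margin_rows = (r0, r1) rows inside the margin; shaded_cols = (c0, c1)."""
    repaired = gray.astype(float).copy()
    c0, c1 = shaded_cols
    cols = np.arange(gray.shape[1])
    known = (cols < c0) | (cols >= c1)             # margin pixels outside the shade
    for r in range(*margin_rows):                  # each line segment parallel to the edge
        a, b = np.polyfit(cols[known], repaired[r, known], 1)   # linear model
        repaired[r, c0:c1] = a * cols[c0:c1] + b   # repair pixel values inside the shade
    return repaired.astype(gray.dtype)
```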
Abstract:
Provided are a device and a method for determining a Convolutional Neural Network (CNN) model. The device for determining the CNN model includes: a first determination unit configured to determine the complexity of a database including multiple samples; a second determination unit configured to determine, based on the complexity of the database, the classification capability of a CNN model applicable to the database; a third determination unit configured to acquire the classification capability of each candidate CNN model; and a matching unit configured to determine the CNN model applicable to the database based on the classification capability of each candidate CNN model. With the device and method for determining the CNN model, the process of designing a CNN model can be simplified.
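A minimal sketch of the matching logic, assuming database complexity is proxied by the number of classes and samples, each candidate CNN carries a precomputed capability score, and matching picks the smallest candidate whose capability covers the required one; all metrics and names are illustrative, not those of the disclosure:

```python
def required_capability(num_classes, num_samples):
    # Illustrative complexity heuristic: more classes and samples demand more capacity.
    return num_classes * (1 + num_samples ** 0.5 / 100.0)

def select_cnn_model(candidates, num_classes, num_samples):
    # `candidates` maps model name -> (capability_score, parameter_count).
    need = required_capability(num_classes, num_samples)
    suitable = [(params, name) for name, (cap, params) in candidates.items() if cap >= need]
    # Prefer the cheapest sufficient model; otherwise fall back to the most capable one.
    return min(suitable)[1] if suitable else max(candidates, key=lambda n: candidates[n][0])

# Example usage with made-up capability scores:
models = {"cnn_small": (50.0, 1e6), "cnn_medium": (200.0, 5e6), "cnn_large": (800.0, 2e7)}
print(select_cnn_model(models, num_classes=100, num_samples=50000))  # -> "cnn_large"
```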
Abstract:
Disclosed are a method and apparatus for training a classification model and a method and apparatus for classifying. The method for classifying comprises: extracting a feature from to-be-tested information inputted to a trained classification model; compressing the extracted feature into a low-dimensional hidden feature capable of representing the to-be-tested information; decompressing the hidden feature to obtain a decompressed feature; reconstructing the to-be-tested information based on the decompressed feature, to obtain reconstructed to-be-tested information; judging, based on a reconstruction loss between the to-be-tested information and the reconstructed to-be-tested information, whether the to-be-tested information belongs to a known class or an unknown class; and classifying the to-be-tested information, via the trained classification model, in a case where it is determined that the to-be-tested information belongs to a known class.
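A minimal sketch of this compress/decompress/reconstruct flow, assuming simple fully connected PyTorch modules and a mean-squared reconstruction loss compared against a fixed threshold; the layer sizes and names are illustrative:

```python
import torch
import torch.nn as nn

class OpenSetClassifier(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_known_classes):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())  # feature extraction
        self.encoder = nn.Linear(128, hidden_dim)    # compress to low-dimensional hidden feature
        self.decoder = nn.Linear(hidden_dim, 128)    # decompress the hidden feature
        self.rebuild = nn.Linear(128, in_dim)        # reconstruct the to-be-tested information
        self.classifier = nn.Linear(hidden_dim, num_known_classes)

    def forward(self, x):
        f = self.feature(x)
        h = self.encoder(f)
        recon = self.rebuild(torch.relu(self.decoder(h)))
        return self.classifier(h), recon

def classify(model, x, recon_threshold):
    logits, recon = model(x)
    loss = torch.mean((recon - x) ** 2, dim=1)       # reconstruction loss per sample
    known = loss < recon_threshold                   # small loss -> known class
    # Label -1 marks samples judged to belong to an unknown class.
    return torch.where(known, logits.argmax(dim=1),
                       torch.full_like(known, -1, dtype=torch.long))
```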
Abstract:
A method and apparatus of open set recognition, and a computer-readable storage medium, are disclosed. The method comprises: acquiring auxiliary data and training data of known categories for open set recognition; training a neural network alternately using the auxiliary data and the training data, until convergence; extracting a feature of data to be recognized for open set recognition, using the trained neural network; and recognizing a category of the data to be recognized, based on the extracted feature.
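A minimal sketch of the alternating training loop, assuming both loaders yield (input, label) batches, "alternately" means interleaving one training batch with one auxiliary batch per step, and the auxiliary loss pushes the network toward a uniform output, which is one common choice but is not specified by the abstract; the feature-extraction hook `net.features` is likewise a hypothetical attribute of the user-defined network:

```python
import torch
import torch.nn.functional as F

def train_alternately(net, optimizer, train_loader, aux_loader, epochs=10):
    for _ in range(epochs):
        for (x_tr, y_tr), (x_aux, _) in zip(train_loader, aux_loader):
            # Step on known-category training data: standard cross-entropy.
            optimizer.zero_grad()
            F.cross_entropy(net(x_tr), y_tr).backward()
            optimizer.step()
            # Step on auxiliary data: discourage confident predictions
            # (cross-entropy against a uniform target, up to a constant).
            optimizer.zero_grad()
            (-F.log_softmax(net(x_aux), dim=1).mean()).backward()
            optimizer.step()

def extract_feature(net, x):
    # Use the trained network's intermediate activations as the feature
    # (assumed exposed here as the hypothetical attribute `net.features`).
    return net.features(x)
```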
Abstract:
A recognition apparatus based on a deep neural network, a training apparatus, and methods thereof are provided. The deep neural network is obtained by inputting training samples comprising positive samples and negative samples into an input layer of the deep neural network and training the network. The apparatus includes: a judging unit configured to judge that a sample to be recognized is a suspected abnormal sample when the confidences of the positive sample classes in a classification result outputted by an output layer of the deep neural network are all less than a predefined threshold value. Hence, the reliability of the confidence of a classification result outputted by the deep neural network may be effectively improved.
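A minimal sketch of the judging rule, assuming the network outputs one confidence per class and a single predefined threshold; the names are illustrative:

```python
import numpy as np

def is_suspected_abnormal(confidences, positive_class_ids, threshold=0.5):
    # The sample is judged a suspected abnormal sample when every
    # positive-class confidence falls below the predefined threshold.
    return bool(np.all(np.asarray(confidences)[positive_class_ids] < threshold))

# Example: three positive classes plus one negative class, none confident enough.
print(is_suspected_abnormal([0.2, 0.3, 0.1, 0.4], positive_class_ids=[0, 1, 2]))  # True
```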