Abstract:
This application relates to image recognition technology in the computer vision field of artificial intelligence, and provides an image classification method and apparatus. An example method includes obtaining an input feature map of a to-be-processed image and performing feature extraction on the input feature map based on a feature extraction kernel of a neural network to obtain an output feature map. Each of a plurality of output sub-feature maps is determined based on the corresponding input sub-feature map and the feature extraction kernel; at least one of the output sub-feature maps is determined based on a target matrix obtained after an absolute value is taken, where the difference between the target matrix and the input sub-feature map corresponding to the target matrix is the feature extraction kernel. The to-be-processed image is then classified based on the output feature map to obtain a classification result of the to-be-processed image.
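The addition-based feature extraction can be pictured with a minimal sketch: instead of multiplying each input sub-feature map (patch) by the kernel, the response is the negated sum of the entries of the target matrix, i.e., the element-wise absolute difference between the patch and the kernel. The sketch below assumes a single-channel input and a single kernel; the function name adder_conv2d and the negation convention are illustrative choices, not the application's exact formulation.

```python
import numpy as np

def adder_conv2d(x, kernel):
    """Addition-based feature extraction (illustrative sketch)."""
    kh, kw = kernel.shape
    oh = x.shape[0] - kh + 1
    ow = x.shape[1] - kw + 1
    out = np.empty((oh, ow), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            patch = x[i:i + kh, j:j + kw]     # input sub-feature map
            target = np.abs(patch - kernel)   # target matrix after taking absolute values
            out[i, j] = -target.sum()         # larger (less negative) = closer match
    return out

# Toy usage: a 6x6 input feature map and a 3x3 feature extraction kernel.
x = np.random.rand(6, 6).astype(np.float32)
k = np.random.rand(3, 3).astype(np.float32)
print(adder_conv2d(x, k).shape)  # (4, 4)
```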
Abstract:
A neural network training method includes performing, in the forward propagation process, binarization processing on a target weight by using a binarization function and using the data obtained through the binarization processing as a weight of a first neural network layer in the neural network; and, in the backward propagation process, calculating a gradient of the loss function with respect to the target weight by using the gradient of a fitting function as the gradient of the binarization function, whose own gradient is zero almost everywhere and therefore cannot be used directly.
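A minimal sketch of the two passes, assuming an element-wise layer, sign() as the binarization function, and tanh as the fitting function whose gradient stands in for the zero-almost-everywhere gradient of sign(); the function names and the particular fitting function are illustrative assumptions.

```python
import numpy as np

def forward(w_target, x):
    """Forward pass: binarize the latent target weight and apply the layer."""
    w_bin = np.sign(w_target)   # binarization function
    y = x * w_bin               # binarized data acts as the layer weight
    return y, w_bin

def backward(w_target, x, grad_y):
    """Backward pass: propagate through the gradient of the fitting
    function f(w) = tanh(w) instead of the gradient of sign(w)."""
    fit_grad = 1.0 - np.tanh(w_target) ** 2   # d/dw tanh(w)
    return grad_y * x * fit_grad              # chain rule through the surrogate

w = np.array([0.3, -1.2, 0.05])
x = np.array([1.0, 2.0, -1.0])
y, w_bin = forward(w, x)
grad_w = backward(w, x, grad_y=np.ones_like(y))
print(w_bin, grad_w)
```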
Abstract:
A data feature extraction method and apparatus in the field of artificial intelligence are provided. An addition convolution operation is performed to extract a target feature from quantized data based on quantized feature extraction parameters; that is, a sum of absolute values of differences between the quantized feature extraction parameters and the quantized data is calculated, and the target feature is obtained based on the sum. In addition, the feature extraction parameters and the data are quantized by using the same quantization parameter. According to this application, storage and computing resources are saved, thereby reducing the limitations on applying artificial intelligence to resource-limited devices. Further, when the extracted feature data is dequantized, the feature data may be dequantized based on the quantization parameters.
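A minimal sketch of the quantized addition convolution, assuming a single scale value as the shared quantization parameter and a single output feature; the helper names (quantize, extract_feature, dequantize_feature) and the example scale are assumptions for illustration only.

```python
import numpy as np

def quantize(v, scale):
    """Quantize with a shared scale: the same quantization parameter is
    used for both the data and the feature extraction parameters."""
    return np.round(v / scale).astype(np.int32)

def extract_feature(q_data, q_params):
    """Addition convolution: sum of absolute differences between the
    quantized feature extraction parameters and the quantized data."""
    return int(np.abs(q_data - q_params).sum())

def dequantize_feature(q_feature, scale):
    """Dequantize the extracted feature based on the quantization parameter.
    Since |a*s - b*s| = s * |a - b|, one multiplication approximately
    recovers the real-valued feature."""
    return q_feature * scale

scale = 0.05                 # shared quantization parameter (example value)
data = np.random.rand(9)     # data patch
params = np.random.rand(9)   # feature extraction parameters
q_feat = extract_feature(quantize(data, scale), quantize(params, scale))
print(dequantize_feature(q_feat, scale))
```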
Abstract:
A video classification method and apparatus relate to the field of electronic and information technologies and can improve the precision of video classification. The method includes: segmenting a video in a sample video library according to a time sequence to obtain a segmentation result, and generating a motion atom set; generating, by using the motion atom set and the segmentation result, a motion phrase set that can indicate a complex motion pattern, and generating, based on the motion phrase set, a descriptive vector of the video in the sample video library; and determining, by using the descriptive vector, a to-be-detected video whose category is the same as that of the video in the sample video library. The method is applicable to video classification scenarios.
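As a rough illustration of the final matching step only, the sketch below compares descriptive vectors to decide whether a to-be-detected video shares the category of a sample video. How the per-motion-phrase responses are computed from the motion atoms and the segmentation result is outside this sketch, and the cosine-similarity measure and threshold are assumptions rather than the application's actual criterion.

```python
import numpy as np

def descriptive_vector(phrase_responses):
    """Stack per-motion-phrase responses into a descriptive vector
    (the responses are assumed to come from the motion phrase set)."""
    return np.asarray(phrase_responses, dtype=np.float32)

def same_category(sample_vec, query_vec, threshold=0.8):
    """Decide whether a to-be-detected video belongs to the same category
    as a sample video by comparing descriptive vectors."""
    cos = sample_vec @ query_vec / (
        np.linalg.norm(sample_vec) * np.linalg.norm(query_vec) + 1e-12)
    return cos >= threshold

sample = descriptive_vector([0.9, 0.1, 0.7, 0.0])
query = descriptive_vector([0.8, 0.2, 0.6, 0.1])
print(same_category(sample, query))
```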
Abstract:
The present invention provides a method and an apparatus for determining an identity identifier of a face in a face image, and a terminal. The method includes: obtaining an original feature vector of a face image; selecting k candidate vectors from a face image database; selecting a matching vector of the original feature vector from the k candidate vectors; and determining an identity identifier of the matching vector. In embodiments of the present invention, the face image database stores medium-level feature vectors formed by means of mutual interaction between a low-level face feature vector and the autocorrelation and cross-correlation submatrices in a joint Bayesian probability matrix. Because the medium-level feature vector already carries the information about this mutual interaction, the efficiency and accuracy of facial recognition can be improved.
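The matching step can be sketched with the standard joint Bayesian log-likelihood ratio r(x, y) = xᵀAx + yᵀAy − 2xᵀGy, where A and G are the autocorrelation and cross-correlation submatrices. In the sketch below, the tuple returned by medium_level is only a stand-in for the stored medium-level feature vector, and the matrices and database are illustrative placeholders, not the application's actual construction.

```python
import numpy as np

def medium_level(y, A, G):
    """Precompute the interaction between a low-level feature vector y and
    the autocorrelation (A) and cross-correlation (G) submatrices; this
    tuple plays the role of the stored medium-level feature vector."""
    return y @ A @ y, G @ y

def joint_bayesian_score(x, x_Ax, entry):
    """r(x, y) = x^T A x + y^T A y - 2 x^T G y, using the precomputed
    medium-level entry for the database vector y."""
    y_Ay, Gy = entry
    return x_Ax + y_Ay - 2.0 * (x @ Gy)

d = 8
A = np.eye(d) * 0.5   # illustrative autocorrelation submatrix
G = np.eye(d) * 0.3   # illustrative cross-correlation submatrix
database = [medium_level(np.random.rand(d), A, G) for _ in range(5)]  # k candidates
x = np.random.rand(d)          # original feature vector of the query face image
x_Ax = x @ A @ x               # query-side term, computed once
scores = [joint_bayesian_score(x, x_Ax, e) for e in database]
best = int(np.argmax(scores))  # index of the matching vector
print(best)
```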
Abstract:
An image processing method and apparatus are disclosed. The method includes: obtaining a two-dimensional target face image; receiving an identification curve marked by a user in the target face image; locating a facial contour curve of a face in the target face image according to the identification curve and by using an image segmentation technology; determining a three-dimensional posture and feature point positions of the face in the target face image; and constructing a three-dimensional shape of the face according to the facial contour curve, the three-dimensional posture, and the feature point positions by using a preset empirical model of a three-dimensional face shape and a target function matching that empirical model. Using the method and apparatus, the complexity of three-dimensional face shape construction can be reduced.
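One way to picture the construction step is a regularized least-squares fit of empirical-model coefficients to the detected feature point positions under the estimated posture. The sketch below covers only that landmark term; the contour-curve term and the application's actual target function are omitted, and the function name, regularization, and toy data are all assumptions.

```python
import numpy as np

def fit_shape(mean_shape, basis, landmarks_2d, R, t, lam=1.0):
    """Fit empirical-model coefficients c so that the projected 3D feature
    points match the 2D feature point positions.
    mean_shape: (3N,) mean face, basis: (3N, K) shape basis,
    R: 2x3 projection of the estimated 3D posture, t: 2D translation.
    Minimizes ||P(mean + basis@c) + t - landmarks||^2 + lam*||c||^2."""
    N = mean_shape.size // 3
    P = np.kron(np.eye(N), R)              # block-diagonal projection (2N x 3N)
    A = P @ basis                           # (2N, K)
    b = landmarks_2d.ravel() - (P @ mean_shape + np.tile(t, N))
    K = basis.shape[1]
    c = np.linalg.solve(A.T @ A + lam * np.eye(K), A.T @ b)
    return mean_shape + basis @ c           # reconstructed 3D face shape (flattened)

# Toy usage with random data, only to show the shapes involved.
N, K = 10, 4
mean = np.random.rand(3 * N)
basis = np.random.rand(3 * N, K)
R = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
t = np.zeros(2)
lm = np.random.rand(N, 2)
print(fit_shape(mean, basis, lm, R, t).shape)  # (30,)
```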
Abstract:
A method and an apparatus for generating a facial feature verification model are provided. The method includes: acquiring N input facial images; performing feature extraction on the N input facial images to obtain an original feature representation of each facial image and forming a face sample library; for the samples of each person with an independent identity, obtaining an intrinsic representation of each group of face samples in at least two groups of face samples; training on a training sample set of the intrinsic representations to obtain a Bayesian model of the intrinsic representation; and obtaining a facial feature verification model according to a preset model mapping relationship and the Bayesian model of the intrinsic representation. The method and apparatus for generating a facial feature verification model in the embodiments of the present disclosure have low complexity and a small calculation amount.
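A toy sketch of the training flow, with deliberately simple stand-ins: the intrinsic representation of a group is taken to be its mean feature, the Bayesian model is reduced to within-person and between-person Gaussian covariances, and verification uses a Mahalanobis-style score. None of these choices are claimed to be the application's actual definitions.

```python
import numpy as np

def intrinsic_representation(group):
    """Stand-in for the intrinsic representation of one group of face
    samples: here simply the mean of the original feature representations."""
    return np.mean(group, axis=0)

def train_bayesian_model(per_person_groups):
    """Estimate within-person (Sw) and between-person (Sb) covariances of
    the intrinsic representations; this Gaussian pair plays the role of
    the Bayesian model of the intrinsic representation."""
    reps = [[intrinsic_representation(g) for g in groups] for groups in per_person_groups]
    person_means = [np.mean(r, axis=0) for r in reps]
    within = np.vstack([np.array(r) - m for r, m in zip(reps, person_means)])
    between = np.array(person_means) - np.mean(person_means, axis=0)
    dim = within.shape[1]
    Sw = np.cov(within.T) + 1e-6 * np.eye(dim)
    Sb = np.cov(between.T) + 1e-6 * np.eye(dim)
    return Sw, Sb

def verify(r1, r2, model):
    """Toy verification score: negated Mahalanobis distance between two
    intrinsic representations under Sw (a simple stand-in for the full
    Bayesian likelihood-ratio test)."""
    Sw, _ = model
    d = r1 - r2
    return -float(d @ np.linalg.solve(Sw, d))

# Toy usage: 3 identities, 2 groups of 4 samples each, 5-D features.
rng = np.random.default_rng(0)
people = [[rng.normal(i, 1.0, size=(4, 5)) for _ in range(2)] for i in range(3)]
model = train_bayesian_model(people)
print(verify(intrinsic_representation(people[0][0]),
             intrinsic_representation(people[0][1]), model))
```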