Abstract:
Computer-readable storage media, computing devices and methods are discussed herein. In embodiments, a computing device may be configured to perform facial recognition based on gradient-based feature extraction of images of faces. In embodiments, the computing device may be configured to determine directional matching patterns of the images from the gradient-based feature extraction and may utilize these directional matching patterns in performing a facial recognition analysis of the images of faces. Other embodiments may be described and/or claimed.
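The abstract does not pin down a concrete extraction, but one common way to realize gradient-based directional patterns is to bin gradient orientations per image cell and compare the resulting descriptors. The sketch below illustrates that idea with NumPy/SciPy; the bin count, cell size, and cosine-similarity matching are assumptions for illustration, not details from the disclosure.

```python
import numpy as np
from scipy import ndimage

def directional_pattern(gray, n_bins=8, cell=16):
    """Quantize gradient orientations into per-cell directional histograms.

    Hypothetical illustration: the bin count, cell size, and weighting
    are not specified by the abstract.
    """
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)                 # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)

    h, w = gray.shape
    pattern = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            hist = np.bincount(bins[y:y + cell, x:x + cell].ravel(),
                               weights=mag[y:y + cell, x:x + cell].ravel(),
                               minlength=n_bins)
            pattern.append(hist / (hist.sum() + 1e-8))      # normalize each cell
    return np.concatenate(pattern)

def match_score(face_a, face_b):
    """Cosine similarity between directional patterns (assumed metric)."""
    pa, pb = directional_pattern(face_a), directional_pattern(face_b)
    return float(pa @ pb / (np.linalg.norm(pa) * np.linalg.norm(pb) + 1e-8))
```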
Abstract:
Technologies for multi-scale object detection include a computing device including a multi-layer convolution network and a multi-scale region proposal network (RPN). The multi-layer convolution network generates a convolution map based on an input image. The multi-scale RPN includes multiple RPN layers, each with a different receptive field size. Each RPN layer generates region proposals based on the convolution map. The computing device may include a multi-scale object classifier that includes multiple region of interest (ROI) pooling layers and multiple associated fully connected (FC) layers. Each ROI pooling layer has a different output size, and each FC layer may be trained for an object scale based on the output size of the associated ROI pooling layer. Each ROI pooling layer may generate pooled ROIs based on the region proposals and each FC layer may generate object classification vectors based on the pooled ROIs. Other embodiments are described and claimed.
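As a rough illustration of the architecture described above, the PyTorch sketch below wires a shared convolution map to several RPN heads with different kernel sizes (standing in for different receptive-field sizes) and to several ROI-pooling/FC pairs with different output sizes. Channel widths, anchor counts, the specific sizes, and the decoding of RPN outputs into proposal boxes (omitted here) are assumptions, not values from the disclosure.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_pool

class RPNLayer(nn.Module):
    """One RPN head; kernel_size controls its receptive field on the map."""
    def __init__(self, channels, kernel_size, num_anchors=3):
        super().__init__()
        self.conv = nn.Conv2d(channels, 256, kernel_size, padding=kernel_size // 2)
        self.cls = nn.Conv2d(256, num_anchors * 2, 1)    # object / not object
        self.reg = nn.Conv2d(256, num_anchors * 4, 1)    # box refinements

    def forward(self, feat):
        x = torch.relu(self.conv(feat))
        return self.cls(x), self.reg(x)

class MultiScaleDetector(nn.Module):
    def __init__(self, channels=256, num_classes=21):
        super().__init__()
        # Multi-layer convolution network producing a shared convolution map.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Multi-scale RPN: one layer per receptive-field size (sizes assumed).
        self.rpn_layers = nn.ModuleList([RPNLayer(channels, k) for k in (3, 5, 7)])
        # Multi-scale classifier: ROI pooling output sizes paired with FC heads.
        self.roi_sizes = (3, 5, 7)
        self.fc_heads = nn.ModuleList(
            [nn.Linear(channels * s * s, num_classes) for s in self.roi_sizes])

    def forward(self, image, proposals):
        # proposals: list of (N_i, 4) boxes per image, in input-image coordinates
        # (decoding the RPN outputs into boxes is omitted for brevity).
        feat = self.backbone(image)
        rpn_out = [rpn(feat) for rpn in self.rpn_layers]
        # Each ROI pooling layer pools at its own output size, and its
        # associated FC layer scores objects of the matching scale.
        cls_vectors = [fc(roi_pool(feat, proposals, output_size=s,
                                   spatial_scale=0.5).flatten(1))
                       for s, fc in zip(self.roi_sizes, self.fc_heads)]
        return rpn_out, cls_vectors
```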
Abstract:
A mechanism is described for facilitating efficient free in-plane rotation landmark tracking of images on computing devices according to one embodiment. A method of embodiments, as described herein, includes detecting a first frame having a first image and a second frame having a second image, where the second image is rotated to a position away from the first image. The method may further include assigning a first parameter line and a second parameter line to the second image based on landmark positions associated with the first and second images, detecting a rotation angle between the first parameter line and the second parameter line, and rotating the second image back and forth within a distance associated with the rotation angle to verify positions of the first and second images.
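The abstract leaves "parameter line" abstract; one plausible reading, sketched below, treats it as the line through two stable landmarks (e.g., the eye centers) in each frame, measures the angle between the two lines, and sweeps a small range around that angle to verify the alignment. The landmark indices and sweep range are illustrative assumptions.

```python
import numpy as np

def line_angle(p0, p1):
    """Orientation of the line through two landmark points, in radians."""
    d = np.asarray(p1, float) - np.asarray(p0, float)
    return np.arctan2(d[1], d[0])

def rotate_points(points, angle, center):
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return (np.asarray(points, float) - center) @ R.T + center

def verify_rotation(landmarks1, landmarks2, eye_idx=(0, 1), steps=5):
    """Rotate the frame-2 landmarks back and forth around the detected angle
    and keep the rotation that best re-aligns them with frame 1."""
    landmarks1 = np.asarray(landmarks1, float)
    landmarks2 = np.asarray(landmarks2, float)
    a1 = line_angle(landmarks1[eye_idx[0]], landmarks1[eye_idx[1]])
    a2 = line_angle(landmarks2[eye_idx[0]], landmarks2[eye_idx[1]])
    rotation = a2 - a1                            # angle between the parameter lines
    center = landmarks2.mean(axis=0)
    best = None
    for delta in np.linspace(-0.1, 0.1, steps):   # small back-and-forth sweep
        cand = rotate_points(landmarks2, -(rotation + delta), center)
        err = np.linalg.norm(cand - landmarks1)
        if best is None or err < best[0]:
            best = (err, rotation + delta, cand)
    return best[1], best[2]                       # verified angle, re-aligned points
```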
Abstract:
Techniques are provided for facial recognition using decoy-based matching of facial image features. An example method may include comparing extracted facial features of an input image, provided for recognition, to facial features of each of one or more images in a gallery of known faces, to select a closest gallery image. The method may also include calculating a first distance between the input image and the selected gallery image. The method may further include comparing the facial features of the input image to facial features of each of one or more images in a set of decoy faces, to select a closest decoy image, and calculating a second distance between the input image and the selected decoy image. The method may further include recognizing a match between the input image and the selected gallery image based on a comparison of the first distance and the second distance.
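The core decision rule is easy to state concretely: accept the closest gallery match only if the input is closer to it than to any decoy. The NumPy sketch below assumes Euclidean distances over precomputed feature vectors and an optional margin; both are assumptions, since the abstract does not fix the metric or the acceptance rule.

```python
import numpy as np

def recognize(input_feat, gallery_feats, gallery_ids, decoy_feats, margin=0.0):
    """Accept the closest gallery match only if it beats the closest decoy.

    Feature extraction, Euclidean distance, and the margin are assumptions.
    """
    d_gallery = np.linalg.norm(gallery_feats - input_feat, axis=1)
    d_decoy = np.linalg.norm(decoy_feats - input_feat, axis=1)
    g = int(np.argmin(d_gallery))         # closest gallery image
    first_distance = d_gallery[g]         # distance to selected gallery image
    second_distance = d_decoy.min()       # distance to closest decoy image
    if first_distance + margin < second_distance:
        return gallery_ids[g]             # recognized identity
    return None                           # closer to a decoy: reject
```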
Abstract:
Apparatuses, methods and storage medium associated with 3D face model reconstruction are disclosed herein. In embodiments, an apparatus may include a facial landmark detector, a model fitter and a model tracker. The facial landmark detector may be configured to detect a plurality of landmarks of a face and their locations within each of a plurality of image frames. The model fitter may be configured to generate a 3D model of the face from a 3D model of a neutral face, in view of detected landmarks of the face and their locations within a first one of the plurality of image frames. The model tracker may be configured to maintain the 3D model to track the face in subsequent image frames, successively updating the 3D model in view of detected landmarks of the face and their locations within each of successive ones of the plurality of image frames. In embodiments, the facial landmark detector may include a face detector, an initial facial landmark detector, and one or more facial landmark detection linear regressors. Other embodiments may be described and/or claimed.
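Structurally, the pipeline fits the 3D model once from the first frame and then updates it on every subsequent frame. The Python skeleton below mirrors that division into detector, fitter, and tracker; the method names and interfaces are hypothetical, and the regression and fitting internals are deliberately left unimplemented.

```python
class FacialLandmarkDetector:
    """Face detector + initial facial landmark detector + linear regressors."""
    def detect_landmarks(self, frame):
        # 1) locate the face, 2) place initial landmarks,
        # 3) refine them with one or more linear regressors
        raise NotImplementedError

class ModelFitter:
    def fit(self, neutral_model, landmarks):
        """Generate a 3D face model from the neutral model and first-frame landmarks."""
        raise NotImplementedError

class ModelTracker:
    def update(self, model, landmarks):
        """Successively update the 3D model as landmarks arrive in later frames."""
        raise NotImplementedError

def reconstruct(frames, neutral_model, detector, fitter, tracker):
    model = None
    for i, frame in enumerate(frames):
        landmarks = detector.detect_landmarks(frame)
        if i == 0:
            model = fitter.fit(neutral_model, landmarks)   # fit on the first frame
        else:
            model = tracker.update(model, landmarks)       # track afterwards
        yield model
```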
Abstract:
The present disclosure relates to detecting the location of a face feature point using an Adaboost learning algorithm. According to some embodiments, a method for detecting a location of a face feature point comprises: (a) a step of classifying a sub-window image into a first recommended feature point candidate image and a first non-recommended feature point candidate image using first feature patterns selected by an Adaboost learning algorithm, and generating first feature point candidate location information on the first recommended feature point candidate image; and (b) a step of re-classifying said sub-window image classified into said first non-recommended feature point candidate image, into a second recommended feature point candidate image and a second non-recommended feature point candidate image using second feature patterns selected by the Adaboost learning algorithm, and generating second feature point candidate location information on the second recommended feature point candidate image.
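Operationally, the second stage only sees the sub-windows that the first stage rejected. A minimal Python sketch of that cascade control flow follows; the scoring callables stand in for the Adaboost-selected feature patterns, and the zero decision thresholds are assumptions.

```python
def classify_sub_windows(sub_windows, stage1, stage2):
    """Two-stage cascade over (window, location) pairs.

    stage1 / stage2: callables scoring a sub-window with Adaboost-selected
    first / second feature patterns (hypothetical interface).
    """
    first_candidates, second_candidates = [], []
    for window, location in sub_windows:
        if stage1(window) > 0:                    # first recommended candidate
            first_candidates.append(location)     # first candidate location info
        elif stage2(window) > 0:                  # re-classify first-stage rejects
            second_candidates.append(location)    # second candidate location info
    return first_candidates, second_candidates
```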