Similarity analysis is a central problem in computer vision. Similarity is typically expressed as a scalar distance measure that quantifies the resemblance between two objects. Such distances are used in many areas, including image compression, image matching, biometrics, shape recognition, object recognition, manufacturing, and data analysis. Several studies have shown that the choice of similarity measure depends on the type of data. This paper evaluates several similarity measures from the literature and proposes a similarity function that takes image features into account. The features considered are textures and key-points. The data used in this study come from multispectral imaging, using visible and thermal infrared images.
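The proposed similarity function itself is not reproduced here; as a minimal sketch of the kind of measures typically compared in such an evaluation, assuming feature vectors such as normalized texture histograms, one might contrast a distance with two similarity scores as follows.

```python
import numpy as np

def euclidean(a, b):
    """L2 distance between two feature vectors (smaller = more similar)."""
    return float(np.linalg.norm(a - b))

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def histogram_intersection(h1, h2):
    """Intersection of two normalized histograms (1 = identical)."""
    return float(np.minimum(h1, h2).sum())

# Example: compare two placeholder texture histograms.
h1 = np.random.rand(64); h1 /= h1.sum()
h2 = np.random.rand(64); h2 /= h2.sum()
print(euclidean(h1, h2), cosine_similarity(h1, h2), histogram_intersection(h1, h2))
```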
Over the past few decades, face recognition technology has evolved enormously, from the first pioneering works to today's highly accurate state-of-the-art systems. The ability to resist spoofing attacks, however, has only recently been addressed. While a number of researchers have taken on the challenging task of developing effective liveness detection methods against this kind of threat, existing algorithms are usually limited by lighting conditions, response speed, and the need for user interaction. In this paper, a novel approach is introduced based on the joint analysis of visible and near-infrared face images: three features (bright pupil, HOG in the nose area, and reflectance ratio) are extracted and combined into the final BPNGR feature vector. An SVM classifier with an RBF kernel is trained to distinguish between genuine (live) and spoofed faces. Experimental results on a self-collected database of 605 samples demonstrate the superiority of our method over previous systems in terms of both speed and accuracy.
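The extraction of the three BPNGR cues is not reproduced here; as a minimal sketch of the classification stage only, assuming the concatenated feature vectors are already available (random placeholders below), an RBF-kernel SVM can be trained along these lines.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# One BPNGR-style row per sample, built by concatenating the three cues
# (bright-pupil score, nose-area HOG, reflectance ratio); placeholders here.
rng = np.random.default_rng(0)
X = rng.normal(size=(605, 130))      # placeholder feature vectors
y = rng.integers(0, 2, size=605)     # 1 = live, 0 = spoof (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # RBF-kernel SVM classifier
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```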
Face recognition in real-world environments is mainly affected by critical factors such as illumination variation, occlusion, and small sample size. This paper proposes a robust preprocessing chain and robust feature extraction to handle these issues simultaneously. The preprocessing chain applies Difference of Gaussians (DoG) filtering as a bandpass filter to reduce the effects of aliasing, noise, and shadows, and then works in the gradient domain as an illumination-insensitive representation. Linear Discriminant Analysis (LDA) is one of the most successful facial feature extraction techniques, but its recognition performance degrades dramatically in the presence of occlusion and the small sample size (SSS) problem, so a more robust LDA algorithm is needed. To this end, we propose to combine Robust Sparse Principal Component Analysis (RSPCA) with LDA (RSPCA+LDA). RSPCA is performed first to reduce the dimension and to deal with outliers, which typically affect sample images through pixels corrupted by noise or occlusion. LDA then operates more effectively in the resulting low-dimensional subspace. Experimental results on three standard databases, namely Extended Yale-B, AR, and JAFFE, confirm the effectiveness of the proposed method, with results superior to well-known methods in the literature.
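A minimal sketch of this pipeline is given below, with assumed filter sigmas and toy data; standard PCA is used here as a stand-in for RSPCA, which is not available off the shelf, so the outlier-robust and sparse aspects of the authors' method are not reflected.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def preprocess(img, sigma_low=1.0, sigma_high=2.0):
    """DoG bandpass followed by a gradient-magnitude (illumination-insensitive) map."""
    img = img.astype(np.float64)
    dog = gaussian_filter(img, sigma_low) - gaussian_filter(img, sigma_high)
    gy, gx = np.gradient(dog)
    return np.sqrt(gx ** 2 + gy ** 2)

# Toy data: 100 placeholder face images of 32x32 pixels from 10 subjects.
rng = np.random.default_rng(0)
images = rng.random((100, 32, 32))
labels = np.repeat(np.arange(10), 10)

# Flatten preprocessed images, reduce dimension, then discriminate.
X = np.stack([preprocess(im).ravel() for im in images])
X_low = PCA(n_components=30).fit_transform(X)        # stand-in for RSPCA
features = LinearDiscriminantAnalysis(n_components=9).fit_transform(X_low, labels)
```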
This paper presents multispectral imaging as an alternative to conventional color imaging, which has shown deficiencies. Thermal infrared images provide useful signatures that are insensitive to changes in illumination and viewing direction. Multispectral imaging, through the fusion of visible and thermal infrared images, provides rich information that can be used for face recognition. Compared with traditional face recognition, multispectral imaging can separate the illumination and reflectance components of facial images. Fusing visible and thermal images for face recognition performs better than traditional imagery.
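The specific fusion rule is not detailed in this abstract; as a minimal sketch, assuming co-registered grayscale visible and thermal images scaled to [0, 1], one of the simplest pixel-level schemes is a weighted average.

```python
import numpy as np

def weighted_fusion(visible, thermal, alpha=0.6):
    """Pixel-level weighted average of registered visible and thermal images.

    alpha controls how much weight the visible band receives; 1 - alpha goes
    to the thermal band. This is an illustrative scheme, not the paper's method.
    """
    visible = visible.astype(np.float64)
    thermal = thermal.astype(np.float64)
    return alpha * visible + (1.0 - alpha) * thermal

# Example with placeholder images.
vis = np.random.rand(128, 128)
ir = np.random.rand(128, 128)
fused = weighted_fusion(vis, ir)
```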
This paper presents an illumination-invariant face recognition technique that represents face images by combining local edge gradient information from two different neighboring pixel configurations. The proposed Local Boosted Features (LBF) descriptor is an oriented local descriptor able to encode various patterns of face images under different lighting conditions. It uses local edge response values in different directions and multi-region histograms for each neighborhood size, and then concatenates these histograms into one long LBF feature vector per image. Finally, a support vector machine classifier (LIBSVM) is used to measure the similarity between a test feature vector and all candidate feature vectors. The proposed LBF algorithm is evaluated on several publicly available face databases and shows improvements in recognition accuracy.
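The exact LBF neighborhood configurations are not given in this abstract; as a minimal sketch of the general idea, the example below labels each pixel with its strongest directional edge response (using four assumed Sobel-like kernels), histograms the labels over a grid of regions, and concatenates the histograms into one descriptor.

```python
import numpy as np
from scipy.ndimage import convolve

# Four simple directional edge kernels (0, 45, 90, 135 degrees); the authors'
# exact pixel configurations are not reproduced here.
KERNELS = [
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),   # horizontal gradient
    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float),   # diagonal
    np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], float),   # vertical gradient
    np.array([[2, 1, 0], [1, 0, -1], [0, -1, -2]], float),   # anti-diagonal
]

def directional_code(img):
    """Label each pixel with the direction of its strongest edge response."""
    responses = np.stack([np.abs(convolve(img.astype(float), k)) for k in KERNELS])
    return responses.argmax(axis=0)

def region_histograms(codes, grid=(4, 4), n_bins=4):
    """Histogram the direction codes in each grid cell, then concatenate."""
    h, w = codes.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            cell = codes[i * h // gh:(i + 1) * h // gh, j * w // gw:(j + 1) * w // gw]
            hist, _ = np.histogram(cell, bins=np.arange(n_bins + 1))
            feats.append(hist / max(cell.size, 1))
    return np.concatenate(feats)

# Example: one descriptor per (placeholder) face image.
face = np.random.rand(64, 64)
descriptor = region_histograms(directional_code(face))
```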