Face recognition systems are used in high-security applications for identification, authentication, and authorization. They therefore need to be robust not only to people wearing face accessories and masks, as during the COVID-19 pandemic, but also to adversarial attacks. We have identified three inconspicuous facial areas in which adversarial examples can be worn to attack face recognition: the mouth-nose section, the forehead, and the eye area. In this paper, we address the questions of how much of a face needs to be visible for successful identification and whether removing these critical regions is a viable countermeasure against adversarial examples.
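As a rough illustration of the region-removal countermeasure discussed above, the sketch below blanks out one of the three facial areas before the image reaches a recognizer. The bounding boxes and the identify() call are hypothetical placeholders, not the paper's actual setup.

```python
import numpy as np

# Hypothetical bounding boxes (x, y, w, h) for the three regions named in the
# abstract; in practice they would come from a facial-landmark detector.
REGIONS = {
    "mouth_nose": (60, 120, 80, 60),
    "forehead":   (50, 10, 100, 50),
    "eyes":       (50, 70, 100, 40),
}

def remove_region(face: np.ndarray, region: str) -> np.ndarray:
    """Return a copy of the face image with one critical region blanked out."""
    x, y, w, h = REGIONS[region]
    occluded = face.copy()
    occluded[y:y + h, x:x + w] = 0  # zero out the pixels a wearable patch would cover
    return occluded

# Example (identify() is a placeholder for any face-identification model):
# scores = [identify(remove_region(img, "eyes")) for img in probe_images]
```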
Traditional image signal processors (ISPs) are primarily designed and optimized to improve the image quality perceived by humans. However, optimal perceptual image quality does not always translate into optimal performance for computer vision applications. In [1], Wu et al. proposed a set of methods, termed VisionISP, to enhance and optimize the ISP for computer vision purposes. The blocks in VisionISP are simple, content-aware, and trainable using existing machine learning methods. VisionISP significantly reduces the data transmission and power consumption requirements by reducing image bit-depth and resolution, while mitigating the loss of relevant information. In this paper, we show that VisionISP boosts the performance of subsequent computer vision algorithms in the context of multiple tasks, including object detection, face recognition, and stereo disparity estimation. The results demonstrate the benefits of VisionISP for a variety of computer vision applications, CNN model sizes, and benchmark datasets.
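As a loose sketch of the kind of data reduction described for VisionISP, the snippet below quantizes an 8-bit image to fewer bits and downsamples it with fixed operations; the real VisionISP blocks are content-aware and trainable, which this example does not attempt to reproduce.

```python
import numpy as np

def reduce_precision(img: np.ndarray, bits: int = 4, scale: int = 2) -> np.ndarray:
    """Quantize an 8-bit image to fewer bits and downsample it.

    A crude stand-in for bit-depth and resolution reduction; assumes an
    8-bit unsigned integer input image.
    """
    # Downsample spatially by an integer factor (simple striding; a learned
    # resampler would be used in a trainable pipeline).
    small = img[::scale, ::scale]
    # Requantize: keep only the top `bits` bits of each 8-bit value.
    shift = 8 - bits
    return (small >> shift) << shift

# Example: a 4-bit, half-resolution frame carries roughly 8x less data than
# the original 8-bit frame before it reaches the vision model.
```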
Biometric face recognition technology has received substantial attention in recent years due to its potential for a wide variety of applications in both law enforcement and non-law enforcement fields. However, most current face recognition systems are designed for indoor, cooperative-user applications. Ambient lighting fluctuates greatly between days and between indoor and outdoor environments, and illumination is the most significant factor affecting the appearance of faces. Most existing systems, academic and commercial, lose accuracy under changes in environmental illumination, and state-of-the-art techniques designed to combat this issue achieve only low accuracy. This paper addresses the issue by proposing an illumination-invariant near-infrared face recognition architecture that consists of (1) generating a sequence of directional visibility images using quadrant and circular filters, (2) extracting Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG) features, and (3) performing SVM-based classification. The technique a) improves the accuracy of the face recognition system, b) works under illumination variations, and c) does not require registration of face information. Extensive computer simulations on the TUFTS NIR database and the IIT Delhi NIR Face Database show that the proposed technique achieves 94.52% and 80.41% accuracy, respectively.
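A minimal sketch of steps (2) and (3) of the pipeline, assuming scikit-image and scikit-learn and omitting the directional visibility filtering stage; the parameter choices are illustrative, not those of the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.svm import SVC

def lbp_hog_features(gray: np.ndarray) -> np.ndarray:
    """Concatenate an LBP histogram with HOG features for one NIR face image."""
    # Uniform LBP with 8 neighbors yields integer codes in [0, 9].
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec])

# Train an SVM classifier on the combined descriptors.
# X_train / y_train are assumed to be grayscale NIR face crops and identity labels.
# clf = SVC(kernel="linear").fit([lbp_hog_features(x) for x in X_train], y_train)
```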
This paper addresses the problem of face recognition using a graphical representation to identify structure that is common to pairs of images. Matching graphs are constructed in which nodes correspond to local brightness gradient directions and edges depend on the relative orientation of the nodes. Similarity is determined from the size of maximal matching cliques in pattern pairs. The method uses a single reference face image and requires no training stage. Results on samples from MegaFace achieve 100% correct recognition.
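The sketch below, assuming NetworkX and generic compatible/consistent predicates, shows the maximal-clique matching idea in its standard association-graph form; the paper's specific node attributes (gradient directions) and edge constraints (relative orientation) are represented only by placeholder callables.

```python
import networkx as nx

def association_graph(nodes_a, nodes_b, compatible, consistent):
    """Build an association graph whose cliques correspond to consistent
    matchings between two attributed graphs.

    `compatible(a, b)` and `consistent(pair1, pair2)` are placeholder
    predicates standing in for the node and edge attributes used in the paper.
    """
    G = nx.Graph()
    pairs = [(a, b) for a in nodes_a for b in nodes_b if compatible(a, b)]
    G.add_nodes_from(pairs)
    for i, p in enumerate(pairs):
        for q in pairs[i + 1:]:
            # One-to-one matching: never reuse a node from either graph.
            if p[0] != q[0] and p[1] != q[1] and consistent(p, q):
                G.add_edge(p, q)
    return G

def similarity(G: nx.Graph) -> int:
    """Similarity score = size of the largest maximal matching clique."""
    return max((len(c) for c in nx.find_cliques(G)), default=0)
```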
Color provides important information and features for face recognition. Different color spaces possess different characteristics and are suitable for different applications. In this paper, we investigate how different color space components influence the performance of degraded face recognition. Towards this goal, nine color space components are selected for evaluation. In addition, four different types of image-based degradations are applied to the face image samples in order to discover their impact on face recognition performance. The experimental results show that all of the selected color components have a similar influence on the performance of the face recognition system, depending on the acquisition devices and the experimental setups.
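A brief sketch, assuming OpenCV and an illustrative subset of color spaces (the abstract does not list the nine components evaluated), of splitting a face image into color-space components and applying two example image-based degradations.

```python
import cv2
import numpy as np

# Illustrative subset of color spaces; not the paper's exact selection.
COLOR_SPACES = {
    "RGB":   cv2.COLOR_BGR2RGB,
    "HSV":   cv2.COLOR_BGR2HSV,
    "YCrCb": cv2.COLOR_BGR2YCrCb,
}

def extract_components(bgr: np.ndarray) -> dict:
    """Split a face image into individual color-space components."""
    components = {}
    for name, code in COLOR_SPACES.items():
        converted = cv2.cvtColor(bgr, code)
        for idx, channel in enumerate(cv2.split(converted)):
            components[f"{name}[{idx}]"] = channel
    return components

def degrade(img: np.ndarray, sigma: float = 2.0, noise_std: float = 10.0) -> np.ndarray:
    """Apply two example degradations: Gaussian blur and additive noise."""
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    noisy = blurred.astype(np.float32) + np.random.normal(0, noise_std, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```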