Accurate detection and tracking of a driver's pupil positions is a prerequisite for augmented reality 3-dimensional head-up display (AR 3D HUD) systems in vehicles. This paper develops a robust, automated algorithm and system for detecting and tracking the pupil centers of a user with a single visual camera and near-infrared (NIR) LEDs under dynamic driving conditions. Our proposed pupil tracker consists of eye-nose detection, keypoint alignment, and tracking, with NIR LED on/off control depending on the illumination conditions. Eye-nose detection, which utilizes an error reinforcement learning method to select the best training database (DB), generates facial sub-region boxes containing the eyes and nose. The error reinforcement learning method uses only a small fraction (less than 5%) of the training DB while improving the detection rate.
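The abstract does not spell out the error reinforcement learning procedure; one plausible reading, sketched below under that assumption, is an iterative hard-example selection loop that adds only samples the current detector misclassifies until the sub-5% budget is reached. The `train_detector` and `detect` callbacks are hypothetical placeholders for the actual eye-nose detector, not the authors' code.

```python
import random

def error_reinforced_selection(pool, train_detector, detect,
                               budget_ratio=0.05, seed_size=500, rounds=10):
    """Grow a small training subset by repeatedly adding misdetected samples.

    `pool` holds (image, label) pairs; `train_detector` and `detect` are
    hypothetical callbacks standing in for the real eye-nose detector.
    """
    budget = int(budget_ratio * len(pool))
    random.shuffle(pool)
    selected, remaining = pool[:seed_size], pool[seed_size:]

    for _ in range(rounds):
        detector = train_detector(selected)
        # "Error reinforcement": collect samples the current detector misses.
        errors = [s for s in remaining if detect(detector, s[0]) != s[1]]
        room = budget - len(selected)
        if not errors or room <= 0:
            break
        hard = errors[:room]
        selected += hard
        remaining = [s for s in remaining if s not in hard]

    return train_detector(selected), selected
```

Under this reading, the final detector is trained on well under 5% of the pool, yet that subset is concentrated on exactly the cases earlier detectors failed on.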
Based on the detected region, eye-nose alignment, including the pupil centers, is then performed by the Supervised Descent Method (SDM) with Scale-Invariant Feature Transform (SIFT) features.
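SDM is a published cascaded regression method, so this step can be illustrated with its standard update, x_{k+1} = x_k + R_k * phi(x_k) + b_k, where phi stacks SIFT descriptors extracted at the current landmark estimates. The sketch below assumes OpenCV's SIFT implementation and omits the offline least-squares training of the regressors (R_k, b_k); the patch size is an illustrative value.

```python
import cv2
import numpy as np

def sift_at_landmarks(gray, landmarks, patch_size=32):
    """Concatenate 128-D SIFT descriptors computed at each landmark."""
    sift = cv2.SIFT_create()
    kps = [cv2.KeyPoint(float(x), float(y), patch_size) for x, y in landmarks]
    _, desc = sift.compute(gray, kps)
    return desc.reshape(-1)  # feature vector phi(x)

def sdm_align(gray, x0, regressors):
    """Cascaded SDM update: x_{k+1} = x_k + R_k * phi(x_k) + b_k.

    `x0` is an (n_landmarks, 2) float array (mean shape placed in the
    detected eye-nose box); `regressors` is a list of (R_k, b_k) pairs
    learned offline by linear least squares, which is omitted here.
    """
    x = x0.astype(np.float64).copy()
    for R, b in regressors:
        phi = sift_at_landmarks(gray, x)
        dx = (R @ phi + b).reshape(-1, 2)  # predicted landmark offsets
        x += dx
    return x
```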
The pupil centers are then tracked with a Support Vector Machine (SVM) classifier and a SIFT-feature-based tracking checker, which guarantees that the aligned results contain the pupil centers. This checker operates in a feature space and with a strong classifier different from those used in the eye-nose detection stage, namely Local Binary Pattern (LBP) features and a boosted set of weak classifiers (AdaBoost).
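A minimal sketch of such a verification step, assuming scikit-learn's SVM and OpenCV's SIFT, classifies the descriptor at each tracked pupil center as pupil or non-pupil; the training data, kernel, and patch size here are illustrative, not the authors' values.

```python
import cv2
from sklearn.svm import SVC

def make_checker(train_desc, train_labels):
    """Train an SVM that decides whether a SIFT descriptor is pupil-like."""
    svm = SVC(kernel="rbf")
    svm.fit(train_desc, train_labels)  # labels: 1 = pupil, 0 = non-pupil
    return svm

def check_pupil(gray, center, svm, patch_size=32):
    """Return True if the tracked point still sits on a pupil."""
    kp = [cv2.KeyPoint(float(center[0]), float(center[1]), patch_size)]
    _, desc = cv2.SIFT_create().compute(gray, kp)
    if desc is None:                   # no descriptor: verification fails
        return False
    return bool(svm.predict(desc)[0] == 1)
```

In a tracking loop, a failed check for either eye would plausibly trigger a fall-back to eye-nose re-detection rather than trusting the drifted track.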
However, the proposed pupil tracker cannot by itself operate under low-light conditions. To handle such conditions, we add NIR LEDs and software functions that control the NIR intensity. Once the content of the captured eye image is recognized, such as low light, eyeglasses, or sunglasses, the correspondingly designated aligner is applied.
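The abstract does not describe the intensity control law; a minimal sketch, assuming a simple proportional controller driven by mean frame brightness and a hypothetical PWM hook `set_led_duty`, could look as follows. The target gray level and gain are illustrative values only.

```python
import numpy as np

def update_nir_duty(gray, duty, set_led_duty, target=110, gain=0.15):
    """Proportional control of NIR LED intensity from frame brightness.

    `set_led_duty` is a hypothetical hardware hook (e.g. a PWM driver);
    `duty` is the current duty cycle in [0, 1].
    """
    brightness = float(np.mean(gray))
    # Frame darker than target -> raise duty; brighter -> lower it.
    duty = float(np.clip(duty + gain * (target - brightness) / 255.0, 0.0, 1.0))
    set_led_duty(duty)
    return duty
```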
We achieve a fairly high detection rate (98%) and precise eye alignment (average error of 2 mm) even under challenging conditions, such as daytime and nighttime driving.