Illuminant estimation is critically important for computational color constancy, which has attracted great attention and motivated the development of various statistical- and learning-based methods. Past studies, however, seldom investigated the performance of these methods on pure color images (i.e., images dominated by a single pure color), which are actually very common in daily life. In this paper, we develop a lightweight feature-based Deep Neural Network (DNN) model—Pure Color Constancy (PCC). The model uses four color features (i.e., the chromaticities of the maximum, mean, brightest, and darkest pixels) as inputs and contains fewer than 0.5k parameters. It takes only 0.25 ms to process an image and shows good cross-sensor performance. The angular errors on three standard datasets are generally comparable to those of state-of-the-art methods. More importantly, the model yields significantly smaller angular errors on the pure color images in the PolyU Pure Color dataset, which we recently collected.
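The four color features named in the abstract can be sketched as a small feature extractor. This is only an illustrative sketch: the exact definitions of "maximum", "brightest", and "darkest" pixels, and the use of rgb chromaticity (each channel divided by the channel sum), are assumptions, not the paper's specification.

```python
import numpy as np

def extract_color_features(img):
    """Sketch of a four-feature input vector for a tiny color-constancy DNN.

    Assumed features (not the paper's exact definitions):
    chromaticities of the per-channel maximum, the mean pixel,
    the brightest pixel, and the darkest pixel of an RGB image (H, W, 3).
    """
    pixels = img.reshape(-1, 3).astype(np.float64) + 1e-9  # avoid divide-by-zero
    brightness = pixels.sum(axis=1)  # simple brightness proxy: R + G + B

    def chroma(rgb):
        # rgb chromaticity: normalize so the three components sum to 1
        return rgb / rgb.sum()

    feats = np.concatenate([
        chroma(pixels.max(axis=0)),           # per-channel maximum
        chroma(pixels.mean(axis=0)),          # mean pixel
        chroma(pixels[brightness.argmax()]),  # brightest pixel
        chroma(pixels[brightness.argmin()]),  # darkest pixel
    ])
    return feats  # 12-dimensional input vector
```

A network taking this 12-dimensional vector through one or two small dense layers would stay well under 0.5k parameters, consistent with the model size the abstract reports.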
Without sunlight, imaging devices typically depend on various artificial light sources. However, scenes captured under artificial lighting often violate the assumptions employed by color constancy algorithms; such violations, e.g., non-uniform illumination or multiple light sources, can disturb computer vision algorithms. In this paper, the complex illumination produced by multiple artificial light sources is decomposed into its individual illuminants by considering the sensor responses and the spectra of the artificial light sources, and fundamental color constancy algorithms (e.g., gray-world, gray-edge) are improved by employing the estimated illumination energy. The proposed method effectively improves these conventional methods, and its results are demonstrated on images captured under laboratory settings to measure the accuracy of color representation.
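For reference, the gray-world baseline mentioned above can be sketched in a few lines: it assumes the average scene reflectance is achromatic, so the mean RGB of the image is proportional to the illuminant color, which is then removed by a von Kries-style diagonal correction. This is a generic sketch of the classical algorithm, not the paper's improved multi-illuminant method.

```python
import numpy as np

def gray_world(img):
    """Gray-world illuminant estimate for an RGB image (H, W, 3).

    Under the gray-world assumption the mean RGB is proportional
    to the illuminant; return it as a unit vector.
    """
    est = img.reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)

def correct(img, illum):
    """Von Kries-style diagonal correction: divide each channel by the
    illuminant estimate (scaled so a neutral illuminant is unchanged)."""
    return img / (illum * np.sqrt(3.0))
```

Applied to an image with a uniform reddish cast, the estimate recovers the cast direction and the corrected image becomes achromatic; the paper's contribution is to apply such corrections per decomposed illuminant rather than once globally.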