We investigate the hypothesis, recently published in Nature, that the human visual system may use the correlation between luminance and redness within a scene, in addition to the scene average, for illuminant estimation. We find this idea interesting; however, it has not been thoroughly tested. In particular, tests on real images were limited to scenes constructed synthetically from hyperspectral data, the spectral power distributions of various daylight illuminants, and the human cone sensitivity functions. The Ruderman database of hyperspectral images is also rather atypical in that it consists of a small number of images, mostly of foliage. Our experiments show that for scenes composed from a more diverse hyperspectral database combined with real illuminant spectra, the predicted correlation turns out to be very weak. For actual digital camera images, the luminance-redness correlation fails completely.
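For concreteness, the sketch below shows one way such a per-scene correlation can be computed: render the hyperspectral reflectances under an illuminant, integrate against the cone sensitivities to obtain LMS responses, and then correlate per-pixel luminance (L + M) with a redness measure such as the MacLeod-Boynton chromaticity r = L/(L + M). The function name, array shapes, and choice of redness measure are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def luminance_redness_correlation(reflectance, illuminant, cones):
    """Per-scene correlation between luminance and redness.

    reflectance : (H, W, B) hyperspectral reflectance cube
    illuminant  : (B,) illuminant spectral power distribution
    cones       : (3, B) L, M, S cone sensitivity functions

    All three are assumed to be sampled at the same B wavelengths;
    this is a hypothetical interface, not the authors' data format.
    """
    # Render the scene under the illuminant: radiance = reflectance * SPD
    radiance = reflectance * illuminant                        # (H, W, B)

    # Integrate against the cone sensitivities to get LMS responses
    lms = radiance.reshape(-1, radiance.shape[-1]) @ cones.T   # (H*W, 3)
    L, M = lms[:, 0], lms[:, 1]

    # Luminance and MacLeod-Boynton redness r = L / (L + M)
    luminance = L + M
    redness = L / np.maximum(luminance, 1e-12)

    # Pearson correlation of luminance and redness across pixels
    return np.corrcoef(luminance, redness)[0, 1]
```

Under the hypothesis, this coefficient would vary systematically with the illuminant; the experiments reported here probe how reliably it does so across scene databases.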