In this article we trace the paradigm shift that has occurred in color constancy algorithms: from color constancy as a pre-processing step for image understanding, to algorithms that themselves exploit results and techniques from image understanding and computer vision. Since color constancy is an ill-posed problem, we first give an overview of the assumptions on which classical color constancy algorithms rely in order to solve it. We then chronologically review the color constancy algorithms that borrow results and techniques from the image understanding research field so as to rely on assumptions satisfied by a larger number of images.
Color Constancy has two hypothetical mechanisms: Chromatic Adaptation and Spatial Comparisons. These mechanisms have fundamentally different properties. Adaptation models operate on small individual scene segments: they combine radiance measurements of each segment with modeler-selected parameters that scale the receptors' cone quanta catches. Spatial models, by contrast, use the radiance map of the entire field of view to compute the appearances of all image segments simultaneously. They achieve independence from spectral shifts in illumination by making spatial comparisons within each of the L, M, and S color channels. These spatial comparisons are affected by color crosstalk, caused by the overlap of the spectral sensitivities of the cone visual pigments: L, M, and S cones respond to every visible wavelength, so crosstalk causes the spatial comparisons of cone responses to vary with changes in the spectral content of the illumination. Color Constancy works best under spatially uniform illumination with variable spectral content. Measurements of Color Constancy show systematic departures from perfect constancy. These limits of Color Constancy are predicted by spatial comparisons with cone crosstalk; they do not correlate with Chromatic Adaptation models. This paper describes cone crosstalk and reviews a series of measurements of the limits of Color Constancy under spectrally variable, spatially variable, and real-life illuminations.
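To make the crosstalk argument concrete, the following sketch uses entirely illustrative Gaussian spectra (all sensitivities, reflectances, and illuminants below are assumptions, not measured data) to show that within-channel spatial comparisons stay nearly illuminant-invariant for narrowband sensors but shift with the illuminant spectrum when the sensitivities overlap:

```python
# Minimal sketch with assumed (Gaussian) spectra showing how cone crosstalk
# makes within-channel spatial comparisons depend on the illuminant spectrum.
import numpy as np

wl = np.arange(400, 701, 5)  # wavelength samples in nm

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Broad, overlapping "cone" sensitivities (crosstalk) vs. narrowband sensors.
cones_broad  = {"L": gauss(565, 50), "M": gauss(535, 45), "S": gauss(445, 30)}
cones_narrow = {"L": gauss(565, 5),  "M": gauss(535, 5),  "S": gauss(445, 5)}

# Two hypothetical scene segments and two illuminant spectra.
refl_a = 0.2 + 0.6 * gauss(600, 60)           # reddish surface
refl_b = 0.2 + 0.6 * gauss(480, 60)           # bluish surface
illum_1 = np.ones_like(wl, dtype=float)       # flat "white" illuminant
illum_2 = 0.3 + 0.7 * (wl - 400) / 300        # tilted, red-rich illuminant

def catch(cone, refl, illum):
    """Quanta catch for one segment: integral of illuminant * reflectance * sensitivity."""
    return np.trapz(illum * refl * cone, wl)

for name, cones in [("broad (crosstalk)", cones_broad),
                    ("narrowband", cones_narrow)]:
    print(name)
    for ch, cone in cones.items():
        # Spatial comparison: ratio of the two segments within one channel.
        r1 = catch(cone, refl_a, illum_1) / catch(cone, refl_b, illum_1)
        r2 = catch(cone, refl_a, illum_2) / catch(cone, refl_b, illum_2)
        print(f"  {ch}: ratio under illum_1 = {r1:.3f}, under illum_2 = {r2:.3f}")
```

Running this, the narrowband ratios barely move between the two illuminants, while the overlapping sensitivities yield ratios that change with the illuminant spectrum, which is the crosstalk-driven departure from perfect constancy described above.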
Achieving color constancy is an important step in supporting visual tasks. In general, a linear transformation based on a 3 × 3 illuminant modeling matrix is applied in the camera's RGB color space to achieve color balance. Most studies of color constancy adopt this linear model, but the relationship between the illumination and the camera spectral sensitivity (CSS) is only partially understood. In this paper, we therefore analyze the linear combination of the illumination spectrum and the CSS using hyperspectral data, which carry much more information than RGB. After estimating the illumination correction matrix, we elucidate how its accuracy depends on the illumination spectrum and the camera sensor response, findings that can be applied to the CSS.
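A minimal sketch of the linear model just described: a 3 × 3 correction matrix is fit by least squares so that RGB responses under a test illuminant map to those under a canonical one. The CSS curves, reflectances, and illuminants below are synthetic placeholders standing in for measured hyperspectral data; they are assumptions, not the paper's actual dataset.

```python
# Sketch: least-squares estimation of a 3x3 illuminant correction matrix
# from synthetic spectral data (hypothetical CSS and illuminant spectra).
import numpy as np

rng = np.random.default_rng(0)
wl = np.arange(400, 701, 10).astype(float)   # wavelength samples in nm

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical camera spectral sensitivities (rows: R, G, B).
css = np.stack([gauss(610, 40), gauss(540, 40), gauss(460, 40)])

# Synthetic stand-in for hyperspectral reflectances: one row per patch.
reflectances = rng.uniform(0.05, 0.95, size=(50, wl.size))

illum_ref  = np.ones_like(wl)                 # canonical "white" illuminant
illum_test = 0.4 + 0.6 * (wl - 400) / 300     # spectrally tilted illuminant

# Camera RGB responses: integrate illuminant * reflectance against the CSS.
rgb_ref  = (reflectances * illum_ref)  @ css.T    # shape (patches, 3)
rgb_test = (reflectances * illum_test) @ css.T

# Fit the 3x3 correction matrix M so that rgb_test @ M ~ rgb_ref.
M, *_ = np.linalg.lstsq(rgb_test, rgb_ref, rcond=None)
corrected = rgb_test @ M

err = np.mean(np.abs(corrected - rgb_ref) / np.abs(rgb_ref))
print("3x3 correction matrix (applied to RGB row vectors):\n", M)
print(f"mean relative error after correction: {err:.4f}")
```

Repeating the fit with different illuminant spectra and CSS shapes is one way to probe the accuracy dependence the abstract refers to: the residual error of the best 3 × 3 fit varies with how the illumination spectrum interacts with the sensor's spectral sensitivities.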