Previously, improved color accuracy for a given digital camera was achieved by carefully designing the spectral transmittance of a color filter placed in front of the camera. Specifically, the filter is designed so that the camera's filtered spectral sensitivities are approximately linearly related to the color matching functions (or tristimulus values) of the human visual system. To avoid filters that absorb too much light, the optimization could incorporate a minimum per-wavelength transmittance constraint. In this paper, we change the optimization so that the overall filter transmittance is bounded, i.e. we solve for the filter that (for a uniform white light) transmits, say, 50% of the light. Experiments demonstrate that these filters continue to solve the color correction problem (they make cameras much more colorimetric). Significantly, constraining the average transmittance delivers a further 10% improvement in color accuracy compared to the prior art of bounding the minimum transmittance.
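The optimization described above can be sketched as an alternating least-squares loop: fix the filter and solve for the 3×3 color correction, then fix the correction and solve for the per-wavelength filter gains, projecting back onto the 50% average-transmittance constraint. The sketch below uses synthetic stand-ins for the camera sensitivities and color matching functions; the variable names, the random data, and the simple clip-and-rescale projection are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 31  # wavelength samples (e.g. 400-700 nm in 10 nm steps)

# Synthetic stand-ins: camera sensitivities Q and target CMFs T (both 3 x n).
Q = np.abs(rng.normal(size=(3, n)))
T = np.abs(rng.normal(size=(3, n)))

f = np.full(n, 0.5)  # start from a flat 50% filter
for _ in range(50):
    Qf = Q * f  # filtered sensitivities: one transmittance gain per wavelength
    # Fix f, solve for the 3x3 correction M by least squares: M Qf ~ T
    M = T @ Qf.T @ np.linalg.pinv(Qf @ Qf.T)
    # Fix M, solve each wavelength's gain: f_i minimizes ||f_i (M q_i) - t_i||
    A = M @ Q
    f = np.sum(A * T, axis=0) / np.sum(A * A, axis=0)
    # Crude projection onto a physical filter with 50% mean transmittance
    f = np.clip(f, 1e-6, 1.0)
    f = np.clip(f * (0.5 / f.mean()), 0.0, 1.0)

err = np.linalg.norm(M @ (Q * f) - T)
```

In practice the projection step would be handled properly inside a constrained solver (e.g. a quadratic program over `f`), but the alternation above conveys the structure of the problem.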
Achieving color constancy is an important step in supporting visual tasks. Typically, a linear transformation, a 3 × 3 illuminant-modeling matrix, is applied in the camera's RGB color space to achieve color balance. Most studies of color constancy adopt this linear model, but the relationship between the illumination and the camera spectral sensitivity (CSS) is only partially understood. Therefore, in this paper, we analyze the linear combination of the illumination spectrum and the CSS using hyperspectral data, which carry much more information than RGB. After estimating the illumination correction matrix, we elucidate how its accuracy depends on the illumination spectrum and the camera sensor response, an analysis that can be applied to the CSS.
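The illumination-correction step described above can be sketched as follows: render camera RGBs from hyperspectral reflectances under a test and a reference illuminant, then estimate the 3×3 matrix mapping one set to the other by least squares. All spectra below are random synthetic stand-ins for real hyperspectral measurements, and the flat/tilted illuminants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_wav, n_surf = 31, 100

# Synthetic stand-ins: surface reflectances R, camera spectral
# sensitivities S, and two illuminant spectra sampled at n_wav wavelengths.
R = rng.uniform(0.0, 1.0, size=(n_surf, n_wav))
S = np.abs(rng.normal(size=(3, n_wav)))
E_ref = np.ones(n_wav)                 # flat reference illuminant
E_test = np.linspace(0.5, 1.5, n_wav)  # spectrally tilted test illuminant

# Camera responses: rho = S diag(E) r for each reflectance r
rgb_ref = (S * E_ref) @ R.T    # 3 x n_surf under the reference light
rgb_test = (S * E_test) @ R.T  # 3 x n_surf under the test light

# Least-squares 3x3 illumination correction: M rgb_test ~ rgb_ref
M, *_ = np.linalg.lstsq(rgb_test.T, rgb_ref.T, rcond=None)
M = M.T
corrected = M @ rgb_test

err_uncorrected = np.linalg.norm(rgb_test - rgb_ref)
err_corrected = np.linalg.norm(corrected - rgb_ref)
```

Since the identity matrix is itself a feasible correction, the least-squares `M` can never do worse than leaving the RGBs uncorrected; how much better it does depends on the illuminant spectra and sensor responses, which is the dependence the abstract sets out to analyze.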