We suggest a framework for predicting color appearance of image data. The image data are converted by a series of standardizing transformations into a simpler stimulus with the same color appearance. By working through these standardizing transformations, we can build color appearance models for images without abandoning—or repeating—much of the work that has gone into defining the visual properties of the CIE standard observer.
The RLAB color space has been tested for a variety of viewing conditions and stimulus types. These tests have shown that RLAB performs well for complex stimuli and not-so-well for simple stimuli. This paper reviews the various psychophysical results, interprets their differences, and describes evolutionary enhancements to the RLAB model that simplify it while improving its performance.
It is concluded from the experimental and theoretical considerations presented that the notion of a “light source” is inadequate for describing color perception, and an alternative way to define and describe illumination is suggested. Color constancy, as the phenomenon of perceiving the same color on curved, homogeneously painted surfaces, is considered, and it is shown that surface colors can be adequately represented by 3×3 matrices. A hypothesis is discussed that the dimension of the color space (the colorimetric space) is determined by the dimension of the ordinary geometric space of the visual scene, and that the number of cone types should be considered only an implementation parameter. A straightforward solution of the image irradiance equation, based on the concept of surface color and the above description of illumination, is suggested and discussed. Why is color an epiphenomenon? This question is answered in connection with the main question of what kinds of vital tasks require explicitly perceived color.
As computers and color monitors become cheaper and more ubiquitous, the need to understand color perception on soft-display, and how it relates to color perception on hardcopy, becomes crucial. While the demand for “what you see in soft-display is what you get in hardcopy” is mounting, many hurdles are yet to be overcome. The difficulties result from differences in color gamut, viewing conditions, nature of illumination (emissive vs. reflective), color reproduction method (additive vs. subtractive), and so on. Because of these intrinsic differences, the problem of softcopy/hardcopy matching in a general sense cannot be adequately tackled without answering some of the more fundamental questions of color perception in the two media of interest. On the other hand, even if the general problem of softcopy/hardcopy matching is not completely resolved in the near future, a partial solution will still be very valuable if it can be established that the results of certain psychophysical experiments done in soft-display under controlled viewing conditions can be translated to similar experiments done in hardcopy under its normal viewing conditions. Two advantages are the time and cost savings in setting up the experiments. Psychophysical experiments involving images or color patches in hardcopy often take months to set up because of the difficulty of generating the desired samples. In comparison, the soft-display environment, once calibrated, is more stable and allows faster and cheaper setup of similar experiments. Another advantage of conducting experiments in soft-display is that the subjects need only use the keyboard and/or mouse instead of physically manipulating the test objects. This again results in noticeable time savings. As a by-product, data entry is done by the subjects during the experiments, eliminating the key-punching process and the associated transcription errors.
Furthermore, other interesting statistics, such as keying sequence and timing data, can be collected for later analysis if necessary. With all the above-mentioned incentives, an experiment was conducted to address one of these cross-media questions, namely, color-difference perception in hardcopy vs. soft-display. While similar experiments have been done before, this one concentrates on color differences with a ΔE of around five to ten. More specifically, this color-difference experiment is done on soft-display, mimicking the one done by Sayer and Skipper on photographic reproductions, so that the results of the two methods can be compared.
The early beginnings of color in displays gave birth to a new industry, an industry dedicated to the accurate characterization of the photometric and colorimetric properties of these displays. Today this industry is international in scope totaling millions of dollars in sales. Coupled with this instrumentation are measurement techniques, guides and standards for their use. Both national and international organizations are preparing these standards backed by the respective national standards institutes. The development of both the instrumentation and the measurement techniques will be traced from their early beginnings up to the present fully automated systems. A brief review of sources for the guides, test methods and standards in use and in preparation will also be presented.
This paper reviews the sampling of color spectra and its effect on the accuracy of derived properties such as CIE tristimulus values and color rendering indices. The details of numerical computation are considered; the errors and their sources are discussed.
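The weighted-sum quadrature that such accuracy studies analyze can be sketched in a few lines. The Gaussian color-matching functions below are illustrative stand-ins chosen only to show how the sampling step enters the computation; real colorimetry uses the tabulated CIE 1931 standard observer.

```python
import numpy as np

# Illustrative sketch of tristimulus computation from sampled spectra.
# The Gaussians stand in for color-matching functions; real work uses
# the tabulated CIE 1931 standard observer.
def cmf_bar(wl, center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

def tristimulus(wl, spd, reflectance):
    """Weighted-sum quadrature for X, Y, Z at a uniform sampling step."""
    dw = wl[1] - wl[0]
    x = cmf_bar(wl, 600, 40) + 0.35 * cmf_bar(wl, 450, 20)
    y = cmf_bar(wl, 555, 45)
    z = 1.8 * cmf_bar(wl, 445, 25)
    k = 100.0 / (np.sum(spd * y) * dw)   # Y of the perfect reflector = 100
    X = k * np.sum(spd * reflectance * x) * dw
    Y = k * np.sum(spd * reflectance * y) * dw
    Z = k * np.sum(spd * reflectance * z) * dw
    return X, Y, Z

# Same smooth spectra sampled at 5 nm and 20 nm: for smooth data the
# coarser sampling changes the tristimulus values only slightly.
results = {}
for step in (5.0, 20.0):
    wl = np.arange(380.0, 781.0, step)
    spd = np.ones_like(wl)                     # equal-energy illuminant
    refl = 0.2 + 0.6 * cmf_bar(wl, 520, 60)   # smooth greenish reflectance
    results[step] = tristimulus(wl, spd, refl)
```

The error budget discussed in the paper concerns exactly this gap between the coarse and fine quadratures, which grows when the spectra are spiky rather than smooth.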
Color correction of images for a non-linear device uses its characterization function, often evaluated by rectilinear interpolation of a table of measurements. Sequential linear interpolation (SLI) instead allows more freely distributed grid points, but usually requires remeasurement to place them optimally. We smooth the measured data with a tensor-product spline before building a fast SLI look-up table: noise is reduced, and the curvature of the spline reveals good SLI grid locations without remeasurement.
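A one-dimensional sketch of this idea, under stated assumptions: a polynomial fit stands in for the paper's tensor-product spline, and a made-up cubic tone curve stands in for the device response. Curvature of the smooth fit then dictates where the piecewise-linear look-up table needs its grid points.

```python
import numpy as np

# 1-D sketch: smooth noisy device measurements, then let the curvature
# of the smooth fit place the grid points of a piecewise-linear LUT.
# The polynomial fit stands in for a tensor-product spline; the cubic
# "true" curve is a hypothetical device tone response.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 33)
true = x ** 3                               # hypothetical device response
noisy = true + rng.normal(0.0, 0.01, x.size)

f = np.poly1d(np.polyfit(x, noisy, deg=6))  # smoothing fit
curv = np.abs(np.polyder(f, 2)(x))          # |f''| drives interp. error

# Equidistribute grid points over cumulative sqrt-curvature, the classic
# knot-placement rule for piecewise-linear interpolation.
cdf = np.cumsum(np.sqrt(curv) + 1e-9)
cdf /= cdf[-1]
grid = np.interp(np.linspace(0.0, 1.0, 8), cdf, x)

lut = f(grid)                        # table built from the smooth fit
approx = np.interp(x, grid, lut)     # fast linear-interpolation lookup
```

The grid points crowd where the fitted curve bends most, which is the behavior the abstract attributes to curvature-guided SLI grid selection.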
With the advent of vector-space approaches, linear estimation techniques can be used in various ways for different imaging scenarios. We compare two methods for constructing linear models of surface reflectance spectra, Principal-Components Analysis (PCA) and One-Mode Analysis (OMA), applied in simulations of image capture under a number of realistic lighting conditions. We demonstrate that the success of such methods depends on the exact problem at hand.
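A minimal PCA sketch of the linear-model construction (not the paper's full PCA/OMA comparison): the synthetic spectra below, random mixtures of three smooth bumps, stand in for measured surface reflectances.

```python
import numpy as np

# Build a low-dimensional linear model for reflectance spectra by PCA,
# then reconstruct a spectrum from its three basis weights. The spectra
# are synthetic stand-ins for measured reflectance data.
rng = np.random.default_rng(1)
wl = np.linspace(400.0, 700.0, 31)

def bump(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

A = np.stack([bump(450, 40), bump(550, 50), bump(650, 40)])
spectra = rng.uniform(0.0, 1.0, (200, 3)) @ A   # 200 synthetic spectra

mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
basis = Vt[:3]                  # first three principal components

r = spectra[0]
w = basis @ (r - mean)          # model weights for one spectrum
r_hat = mean + w @ basis        # reconstruction from the linear model
```

Because these synthetic spectra truly live in a three-dimensional subspace, three components reconstruct them exactly; for real reflectances the residual after a few components is what the comparison of basis-construction methods is about.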
A method is proposed for recovering both the surface-spectral reflectance function and the illuminant spectral-power distribution, based on the dichromatic reflection model. A multi-band imaging device is realized by combining a monochrome CCD camera with several color filters to separate the incident light into six wavelength bands throughout the visible spectrum. We describe a measurement method for estimating the spectral sensitivity functions of the imaging system. The dichromatic reflection of inhomogeneous objects is described using finite-dimensional models. Estimation algorithms are presented that recover the scene parameters from the image data in two steps. The illuminant spectrum of a projector lamp is recovered fairly well, and the Macbeth color patches are estimated with an average accuracy of about ΔE*ab = 3.5.
A desktop scanner was colorimetrically characterized to an average CIELAB error of less than unity for Kodak Ektachrome transparencies and Ektacolor paper, and Fuji Photo Film Fujichrome transparencies and Fujicolor paper. Independent verification on spectrally similar materials yielded an average ΔE*ab error of less than 2.1. The technique first modeled the image formation of each medium using either Beer-Bouguer or Kubelka-Munk theory. Scanner digital values were then empirically related to dye concentrations. From these estimated dye concentrations, either spectral transmittance or spectral reflectance factor was calculated from an a priori spectral analysis of each medium. The spectral estimates can be used to calculate tristimulus values for any illuminant and observer of interest.
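The Beer-Bouguer step for transparencies can be sketched as follows. The unit dye-density curves here are illustrative Gaussians, not measured Ektachrome data, and `conc` plays the role of the dye concentrations estimated from scanner digital values.

```python
import numpy as np

# Sketch of the Beer-Bouguer model for a transparency: the spectral
# densities of the three dyes add linearly with concentration, and
# transmittance is 10 ** (-total density). The unit density curves
# are illustrative Gaussians, not measured dye data.
wl = np.linspace(400.0, 700.0, 31)

def dye_density(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

D = np.stack([dye_density(650, 50),   # cyan dye (absorbs red)
              dye_density(540, 45),   # magenta dye (absorbs green)
              dye_density(440, 40)])  # yellow dye (absorbs blue)

def transmittance(conc):
    """Beer-Bouguer: T(lambda) = 10 ** -(c_c*D_c + c_m*D_m + c_y*D_y)."""
    return 10.0 ** -(conc @ D)

t = transmittance(np.array([0.8, 0.5, 0.3]))  # example concentrations
```

The resulting spectral transmittance is exactly the quantity from which the abstract's tristimulus values for any illuminant and observer would then be computed.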