A continuing goal of color science since the establishment of the trichromatic theory of color perception [e.g., 5, 6, 7] has been the accurate determination of the spectral sensitivities of the long-, middle- and short-wavelength-sensitive (L, M and S) cones—also known as the fundamental color matching functions (or CMFs): <overline>l</overline>(λ), <overline>m</overline>(λ) and <overline>s</overline>(λ). These CMFs are the physiological bases of all other CMFs. The cone fundamentals of Stockman and Sharpe [2], which are to be recommended by CIE Technical Committee 1-36 as an international standard for colorimetry [12], rely on measurements made in both normal trichromats and color-deficient observers. These measurements are used to guide the linear combinations of the Stiles & Burch [1] CMFs that define the cone fundamentals.
In this paper we describe a non-parametric probabilistic model that can be used to encode relationships in color naming datasets. This model can be used with datasets with any number of color terms and expressions, as well as terms from multiple languages. Because the model is based on probability theory, we can use classic statistics to compute features of interest to color scientists. In particular, we show that the uniqueness of a color name (color saliency) can be captured using the entropy of the probability distribution. We demonstrate this approach by applying this model to two different datasets: the multi-lingual World Color Survey (WCS), and a database collected via the web by Dolores Labs. We demonstrate how saliency clusters similarly named colors for both datasets, and compare our WCS results to those of Kay and his colleagues. We compare the two datasets to each other by converting them to a common colorspace (IPT).
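As a concrete illustration of the entropy-based saliency measure described above, the sketch below computes the Shannon entropy of the naming distribution for a single color chip. The term counts here are hypothetical, not taken from the WCS or Dolores Labs data; a near-unanimously named chip yields low entropy (high saliency), while a chip on a category boundary yields high entropy.

```python
import math
from collections import Counter

def naming_entropy(term_counts):
    """Shannon entropy (bits) of a color-naming distribution.

    Low entropy -> observers agree on one name (a salient color);
    high entropy -> naming is spread over many terms.
    """
    total = sum(term_counts.values())
    entropy = 0.0
    for count in term_counts.values():
        p = count / total
        if p > 0:
            entropy -= p * math.log2(p)
    return entropy

# Hypothetical naming data for two chips (term -> vote count)
focal_red = Counter({"red": 28, "orange": 2})
boundary = Counter({"green": 11, "blue": 10, "teal": 9})

print(naming_entropy(focal_red))  # low entropy: near-unanimous naming
print(naming_entropy(boundary))   # high entropy: contested boundary chip
```

The same computation applies unchanged to any number of terms or languages, which is what makes an entropy-based measure convenient for comparing datasets as different as the WCS and a web-collected survey.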
During the last 20 years, there has been discussion within the color research community regarding the accuracy of the <overline>x</overline>(λ), <overline>y</overline>(λ), <overline>z</overline>(λ) human observer functions and the variations among observers. Some studies have indicated disagreement between numeric vs. visual metameric matches when comparing white light sources or comparing hard copy vs. soft proofs. This paper describes a method for systematically improving the human observer functions in response to new data from proposed experiments that can be reproduced at multiple locations.
Starting from measured scene luminances, we calculated the retinal luminance images of High Dynamic Range (HDR) test targets. These test displays contain 40 gray squares in a 50% average surround. To approximate a natural scene, the surround was made up of half-white and half-black squares of different sizes, so that the energy distribution approximates a 1/f function of spatial frequency. We compared images with 2.7 and 5.4 optical density ranges. Although the target luminances are very different, after computing the retinal image according to the CIE scatter-glare formula, we found that the retinal luminance ranges are very similar: intraocular glare strongly restricts the range of the retinal image. Further, uniform, equiluminant target patches are spatially transformed into gradients with unequal retinal luminances. The usable dynamic range of the display correlates with the range of retinal luminances. Observers report that the appearances of the white and black squares are constant and uniform, despite the fact that the retinal stimuli are variable and non-uniform. Human vision uses complex spatial processing to calculate appearance from retinal luminance arrays; this spatial processing increases apparent contrast with increased white area in the surround. Spatial vision counteracts glare, and the spatial contrast mechanism is much more powerful when compared with retinal, rather than target, luminances. This study adds evidence that human vision uses spatial image processing to synthesize appearance, rather than using the array of independent retinal responses.
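The compressive effect of intraocular glare on the retinal image can be illustrated with a back-of-the-envelope sketch. The code below uses the classic Stiles-Holladay approximation (L_eq = 10·E_gl/θ²) as a simplified stand-in for the full CIE glare formula used in the study, and shows how adding a uniform veiling luminance collapses a 5.4 log-unit target range to roughly 3 log units. The luminance values are illustrative, not the authors' data.

```python
import math

def veiling_luminance(glare_illuminance_lux, angle_deg):
    """Stiles-Holladay approximation to equivalent veiling luminance,
    L_eq = 10 * E_gl / theta^2 (cd/m^2). A simplification of the full
    CIE disability-glare equation, valid roughly for 1-30 degrees."""
    return 10.0 * glare_illuminance_lux / (angle_deg ** 2)

def retinal_log_range(target_max, target_min, veil):
    """Log10 luminance range after a uniform veil is added to every
    target luminance: the veil dominates the darkest patches."""
    return math.log10((target_max + veil) / (target_min + veil))

# Illustrative 5.4-OD target: white at 1000 cd/m^2
white = 1000.0
black = white * 10 ** -5.4
print(retinal_log_range(white, black, 0.0))  # no glare: the full 5.4 range
print(retinal_log_range(white, black, 1.0))  # 1 cd/m^2 veil: about 3 log units
```

Even a modest veil sets a floor under the darkest squares, which is why targets with very different optical density ranges can produce similar retinal luminance ranges.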
In response to strong industrial demand for a unified method of accurately computing CIE tristimulus values, and for the best agreement among laboratories, the CIE formed a new technical committee, TC 1-71 on tristimulus integration, during the 26th Session of the CIE in Beijing last year. This paper reports the current progress of the TC.
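In its simplest form, the tristimulus integration that TC 1-71 addresses is a weighted rectangular-rule summation of the spectral power distribution against the color matching functions. The sketch below shows that basic summation with toy five-sample tables (not real CIE values); an actual recommendation would go beyond this, e.g. treating bandpass correction and abridged measurement intervals.

```python
def tristimulus(spd, xbar, ybar, zbar, dlam):
    """Rectangular-rule tristimulus integration:
    X = k * sum(S * xbar) * dlam, with k chosen so that Y = 100
    for the illuminant itself (k = 100 / (sum(S * ybar) * dlam))."""
    k = 100.0 / (sum(s * y for s, y in zip(spd, ybar)) * dlam)
    X = k * sum(s * x for s, x in zip(spd, xbar)) * dlam
    Y = k * sum(s * y for s, y in zip(spd, ybar)) * dlam
    Z = k * sum(s * z for s, z in zip(spd, zbar)) * dlam
    return X, Y, Z

# Illustrative toy tables (NOT real CIE CMF values), 5 samples at dlam = 10 nm
spd = [1.0, 1.0, 1.0, 1.0, 1.0]  # equal-energy illuminant
xbar = [0.1, 0.3, 0.3, 0.2, 0.1]
ybar = [0.0, 0.2, 0.5, 0.2, 0.1]
zbar = [0.4, 0.3, 0.1, 0.0, 0.0]
X, Y, Z = tristimulus(spd, xbar, ybar, zbar, 10.0)
```

Laboratory disagreement typically enters through the choices this sketch glosses over: the wavelength interval, the interpolation or weighting tables, and how instrument bandpass is corrected.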
One of the goals of color appearance models is perceptual uniformity. In a perceptually uniform space, a difference of 1 ΔE between two colors would appear equal in magnitude regardless of the hue, chroma, or lightness of the colors being compared. In reality, there can be significant differences in perceived magnitude, which have motivated modified ΔE metrics such as ΔE94 and ΔE2000. This paper proposes a method for improving existing expressions for CIELAB by modifying the matrix converting LMS->XYZ and the coefficients in CIELAB.
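For reference, the baseline that such modifications start from is the standard CIELAB construction and the Euclidean ΔE*ab derived from it. A minimal sketch (the proposed coefficient changes are not reproduced here) is:

```python
def f(t):
    """CIELAB compressive nonlinearity with the standard linear toe."""
    delta = 6.0 / 29.0
    return t ** (1.0 / 3.0) if t > delta ** 3 else t / (3.0 * delta ** 2) + 4.0 / 29.0

def xyz_to_lab(X, Y, Z, Xn, Yn, Zn):
    """Standard CIELAB from tristimulus values and a reference white (Xn, Yn, Zn)."""
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

def delta_e_ab(lab1, lab2):
    """Euclidean color difference in CIELAB (the original Delta E*ab)."""
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5
```

Because the 116/500/200 coefficients and the conversion matrix are exposed as plain constants, the tuning described in the paper amounts to re-fitting these numbers against visual difference data rather than wrapping ΔE in further correction terms.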
CIE 170-1:2006 provides a framework for considering variation between average viewers in cone fundamentals and the corresponding color matching functions, modeled as a function of age and angular field size. In many situations the angular field size and average viewing age are known and can therefore be utilized. Further, individual color-matching information is not difficult to obtain; it is likely to vary from the average, sometimes significantly, and even between the left and right eyes. There are various additional ways to minimize color variation, including using one or more broad-spectrum emitters, such as broad-spectrum white. If the use of such broad-spectrum emitters is maximized, viewer variation is minimized. Numerous other methods utilizing more than three primaries can also reduce color variation, while sometimes offering an expanded gamut. Using a variety of such readily accessible techniques, color variation can be minimized, or at least reduced.
We show that conjoint analysis, a popular multi-attribute preference assessment technique used in market research, is a well-suited tool for simultaneously evaluating a multitude of gamut mapping algorithms with a psycho-visual testing load not much higher than that of conventional psycho-visual tests of gamut mapping algorithms. The gamut mapping algorithms that we test using conjoint analysis are derived from a master algorithm by choosing different parameter settings. At the same time, we can also test the influence of additional parameters, such as gamut size, on the perceived quality of a mapping. Conjoint analysis allows us to quantify the contribution of every single parameter value to the perceived quality.
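The core of conjoint analysis is estimating a part-worth utility for each attribute level from ratings of whole profiles. The sketch below shows the textbook mean-based estimator for a balanced full-factorial design; the attribute names, levels, and ratings are hypothetical, and the paper's actual design and estimation procedure may differ.

```python
def part_worths(profiles, ratings):
    """Part-worth utilities for a full-factorial conjoint design: the
    part-worth of an attribute level is the mean rating of all profiles
    containing that level, minus the grand mean. (This equals OLS with
    effects coding when the design is balanced.)"""
    grand = sum(ratings) / len(ratings)
    worths = {}
    for attr in profiles[0].keys():
        for lev in {p[attr] for p in profiles}:
            rs = [r for p, r in zip(profiles, ratings) if p[attr] == lev]
            worths[(attr, lev)] = sum(rs) / len(rs) - grand
    return worths

# Hypothetical 2x2 design: mapping strategy x gamut size
profiles = [
    {"mapping": "clip", "gamut": "small"},
    {"mapping": "clip", "gamut": "large"},
    {"mapping": "compress", "gamut": "small"},
    {"mapping": "compress", "gamut": "large"},
]
ratings = [3.0, 5.0, 4.0, 6.0]  # illustrative quality judgments
w = part_worths(profiles, ratings)
```

Because each observer rates complete profiles rather than every pairwise comparison, the number of judgments grows with the number of profiles, not with the number of algorithm pairs, which is what keeps the testing load manageable.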
Image compression schemes, such as JPEG and JPEG2000, degrade the quality of a reconstructed image due to their lossy characteristics. Among such degradation factors, color bleeding is particularly visible around boundaries between highly contrasting chrominance areas. This phenomenon results from the abrupt truncation of high-frequency components due to coarse quantization and subsampling of the chrominance channel, and appears as color smearing caused by spurious colored oscillations in the reconstructed image. Consequently, changes in color information, such as loss of the chrominance component and the corresponding color smearing, affect the gamut characteristics of the reconstructed image. Accordingly, this paper investigates the relationship between the compression ratio and the gamut area of a reconstructed image for JPEG and JPEG2000. Eighteen color samples from the Macbeth ColorChecker are first used to analyze the relationship between the compression ratio and the color bleeding phenomenon, i.e., the hue and chroma shifts in the a*b* color plane. As the compression increases, color bleeding becomes apparent between adjacent color samples, resulting in a loss of chroma relative to the original color. However, some original colors exhibit a chroma increase due to spurious colored oscillations along the color sample boundaries. In addition, a hue shift appears along the direction connecting adjacent color samples. Twelve natural color images, divided into two groups according to four color attributes, are also used to investigate the relationship between the compression ratio and the variation in gamut area. For each image group, the gamut area of the reconstructed image shows an overall tendency to increase with the compression ratio, similar to the experimental results with the Macbeth ColorChecker samples.
However, at high compression ratios, the gamut area decreases due to the mixing of adjacent colors, which makes the image greyer.
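One common way to quantify a "gamut area" for a set of a*b* samples is the area of their convex hull. The paper does not state which boundary estimate it uses, so the sketch below is just one plausible implementation, in pure Python via Andrew's monotone chain hull plus the shoelace formula.

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull; returns vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def gamut_area(ab_points):
    """Shoelace area of the convex hull of (a*, b*) samples."""
    hull = convex_hull(ab_points)
    area = 0.0
    for i in range(len(hull)):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % len(hull)]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0
```

Tracking this area across compression ratios reproduces the qualitative behavior described above: spurious chroma oscillations can push boundary samples outward (area grows), while heavy color mixing pulls everything toward grey (area shrinks).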
As digital still cameras become popular and their image quality becomes more of a concern, there is increasing research into reducing the gap between human-observed scenes and the images captured by digital still cameras. The dynamic range of a digital camera is narrower than that of the scene, so it is hard to recognize objects in the shadow regions of a captured image. The Retinex algorithm is generally used to improve detail and local contrast in the shadow regions of an image by dividing the image by a local average image, regarded as a local illuminant, obtained with a Gaussian filter. The result of the Retinex algorithm depends on the scale of the Gaussian filter: the smaller the filter, the more the local contrast is improved, but at the cost of graying-out and halo artifacts. To reduce these artifacts, the multi-scale Retinex algorithm was developed, based on a weighted sum of several Retinex results computed with Gaussian filters of various scales. However, if the chromatic distribution of the original image is not uniform but dominated by a certain chromaticity, the chromaticity of the local average image follows the dominant chromaticity of the original image, and the colors of the resulting image, divided by the local average image, are shifted toward the complement of that dominant chromaticity. In this paper, a modified multi-scale Retinex method that reduces the influence of the dominant chromaticity in the image is proposed. First, the local average images are divided by the average chromaticity values of the original image. The local average images are then multiplied by the chromaticity of the global illuminant, estimated from the average chromaticity within the highlight region, to account for its influence on the local illuminant.
In addition, to compensate for the graying-out effect, the chroma of the output image is enhanced based on that of the original image in CIELAB space. Experimental results show that the proposed method improves local contrast and detail without color distortion, thereby improving color rendition.
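The core multi-scale Retinex operation that the proposed method modifies can be sketched in one dimension: each scale contributes log(I) minus log of a Gaussian local average, and the scales are combined by weighted sum. This is a generic illustrative sketch (pure Python, clamped borders); the paper's chromaticity corrections and chroma enhancement are not included.

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of half-width `radius` taps."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def local_average(signal, sigma):
    """Gaussian-blurred signal, standing in for the local illuminant."""
    radius = int(3 * sigma)
    kern = gaussian_kernel(sigma, radius)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kern):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp at the borders
            acc += w * signal[idx]
        out.append(acc)
    return out

def multi_scale_retinex(signal, sigmas, weights):
    """Weighted sum of log(I) - log(Gaussian * I) over several scales."""
    eps = 1e-6  # avoid log(0)
    out = [0.0] * len(signal)
    for sigma, w in zip(sigmas, weights):
        blur = local_average(signal, sigma)
        for i in range(len(signal)):
            out[i] += w * (math.log(signal[i] + eps) - math.log(blur[i] + eps))
    return out

# Illustrative use: a shadow-to-highlight step edge at two scales
enhanced = multi_scale_retinex([1.0] * 8 + [50.0] * 8, [1.0, 2.0], [0.5, 0.5])
```

On a uniform signal the output is zero everywhere, which makes the failure mode discussed above easy to see: whatever chromatic bias survives in the local average is subtracted from every pixel, shifting the result toward the complement of the dominant chromaticity.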