I will first discuss the importance of an inner product on the space of spectra. Since Cohen's "Matrix R" depends on this inner product for its definition, the choice of inner product affects the structure of the fundamental space (and thus of color space) in a nontrivial way. The choice has to be made on the basis of physical and physiological considerations. Usually this goes unnoticed and a choice is made implicitly. With the metric of the inner product one proceeds to construct a distortion-free color space that reflects the metric of the space of spectra veridically.

Next I will discuss the structure of the Schroedinger object color solid. Although Ostwald's original constructions are attractive because they are firmly founded on colorimetric principles (whereas, e.g., Munsell's atlas is based not on colorimetry but on "eye measure"), they are marred by a number of unfortunate flaws: not all object colors can be represented, the structure depends on the spectrum (not just the color) of the illuminant, and the "Principle of Internal Symmetry" used to mensurate the color circle is flawed because the locus of "full colors" is not a planar, but a twisted space curve. I show how to amend these flaws in a principled manner.

When Ostwald's mensuration of the color circle (which results from purely colorimetric calculations) is compared with eye-measure results (e.g., Munsell's), I find that, perhaps surprisingly (because colorimetry concerns only judgments of equality), they correspond closely.
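The dependence of Matrix R on the inner product can be made concrete with a small sketch. Cohen's Matrix R is the projector onto the fundamental space spanned by the color matching functions, R = A (AᵀA)⁻¹ Aᵀ under the usual Euclidean inner product; replacing that inner product by ⟨f, g⟩ = fᵀWg with a symmetric positive-definite W yields a different projector. The Gaussian "sensors" and the weight matrix below are invented for illustration, not data from the paper.

```python
import numpy as np

def matrix_R(A, W=None):
    """Projector onto span(A), orthogonal w.r.t. the metric <f,g> = f^T W g."""
    n = A.shape[0]
    if W is None:
        W = np.eye(n)                  # the usual (implicitly chosen) metric
    AtW = A.T @ W
    return A @ np.linalg.solve(AtW @ A, AtW)

# Toy 'color matching functions': three Gaussian sensors on 31 bands.
wl = np.linspace(400, 700, 31)
A = np.column_stack([np.exp(-0.5 * ((wl - c) / 40.0) ** 2)
                     for c in (450.0, 540.0, 610.0)])

R = matrix_R(A)
assert np.allclose(R @ R, R)           # R is idempotent (a projector)
assert np.allclose(R @ A, A)           # it fixes the fundamental space

# A non-uniform metric gives a *different* projector, hence a different
# fundamental component for every spectrum:
W = np.diag(1.0 + 0.5 * np.sin(wl / 50.0))
R_w = matrix_R(A, W)
print(np.allclose(R, R_w))             # the choice of metric matters
```

Both projectors fix the fundamental space, but they split a given spectrum into fundamental and black components differently, which is exactly why the choice of inner product cannot be left implicit.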
Using the World Wide Web to order goods and services is a rapidly increasing activity. Experience with mail order catalogues has shown that failure of goods to match the catalogue color is a major category of complaint. Ordering colored goods on the web poses even greater challenges. Useful devices for standardising cathode-ray tubes and other self-luminous displays are available, but these only address the problem of display set-up. Other relevant problems include the effects of changes in the level of luminance, lack of color constancy of goods with changes in illuminant color, departures of the spectral sensitivities of acquisition devices from a set of color-matching functions, observer metamerism, the effect of the surround on the appearance of the display, and limitations in the gamut and resolution of typical displays. The extent to which these factors are likely to be important is discussed, and ways in which some of their effects might be mitigated are considered.
An isolated light has a color appearance specified reasonably well by its wavelength, but the same light within a complex image can appear quite a different hue. How does the context of an image affect the appearance of an embedded light? A classical approach is to aggregate light from throughout the image to determine an equivalent uniform background that has the same effect as the complex stimulus. Several models have been proposed to determine this 'equivalent background', including simple averaging of light, spatial weighting, and nonlinear neural responses. The main point of this paper is that none of these models can succeed, because variation within a complex image is a fundamental property of the image. Studies show that color perception depends on a chromatic-contrast gain-control mechanism, which is regulated by chromatic variation over a large area. Any uniform field has no variation, of course, so it cannot mimic the color shifts caused by a complex image.
For a fixed illuminant and observer there is a whole set of reflectances resulting in an identical response; these reflectances are called metamers. It can be shown analytically that all reflectances in each such set must intersect at least three times.

There is a large body of literature arguing about the properties of these sets, in particular about the position and number of nodes of intersection. The results in the literature, based on relatively small data sets, vary, in particular as a consequence of the different methods used for generating metamers.

Using a new method based on statistical information from measured sets, metamer sets are generated. These infinite metamer sets are then studied for their inner structure in terms of cross-over behavior. The results presented here confirm the finding that there are three major wavelengths of intersection: around 450 nm, 540 nm, and 610 nm.
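The crossing behavior can be illustrated with a standard construction (a sketch, not the paper's statistical method): two reflectances are metameric under a sensor matrix A exactly when their difference is a "metameric black" b with Aᵀb = 0, so projecting any spectrum onto the null space of Aᵀ produces such a black, and its sign changes mark the wavelengths where the two metamers cross. The Gaussian sensors below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 31)
# Toy sensors peaking near the three reported crossing wavelengths.
A = np.column_stack([np.exp(-0.5 * ((wl - c) / 35.0) ** 2)
                     for c in (450.0, 540.0, 610.0)])

# Orthogonal projector onto the black space (complement of span(A)).
R = A @ np.linalg.solve(A.T @ A, A.T)
P_black = np.eye(len(wl)) - R

x = rng.normal(size=len(wl))
b = P_black @ x                       # a metameric black
assert np.allclose(A.T @ b, 0.0)      # invisible to all three sensors

# The metamers r and r + b cross wherever b changes sign; the classical
# result says any nonzero black must change sign at least three times.
crossings = np.count_nonzero(np.diff(np.sign(b)) != 0)
print(crossings)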
In simplest terms, brightness is the appearance of luminance and lightness is the appearance of objects. The experiments in this paper measure the appearance of three visible faces of a real cube in real-life illumination. Three faces of the cube are painted white and the other three are painted different shades of gray. When the observer sees three white faces, the experiment measures the appearance of illumination. When the experimenter rotates the cube to make visible a face with a different reflectance in the same illumination, the experiment measures the appearance of objects.

The results of matching experiments show that humans make the same match for luminance changes caused by illumination as for those caused by reflectance. Humans can successfully recognize changes in whites due to illumination, but they mistakenly interpret reflectance changes as changes in illuminant position. Moreover, in the same image they make the same matches for dark areas whether those were caused by illumination, reflectance, or both.
A large-scale psychophysical experiment was performed examining the effects of various simultaneous variations of image parameters on perceived image sharpness. The goal of this experiment was to unlock some of the rules of image sharpness perception. A paired-comparison paradigm was used to compare images of different resolution, contrast, noise, and sharpening. In total, 50 people performed over 140,000 observations. The results indicate that there are several very interesting trade-offs between the parameters of contrast, noise, resolution, and spatial sharpening. An interval scale of image sharpness was created. This scale was then used to test the results of several existing models of color and spatial vision. The ultimate goal of this experiment, along with the visual modeling, is to obtain a mathematical model of perceived image quality.
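One standard way to turn paired-comparison data into an interval scale is Thurstone's law of comparative judgment (Case V); the sketch below assumes that technique rather than reproducing the paper's exact analysis, and the proportion matrix P is invented for illustration (P[i, j] is the fraction of trials in which image i was judged sharper than image j).

```python
import numpy as np
from statistics import NormalDist

# Invented win proportions for three hypothetical image treatments.
P = np.array([[0.50, 0.70, 0.85],
              [0.30, 0.50, 0.65],
              [0.15, 0.35, 0.50]])

# Case V: convert proportions to z-scores, then average each row.
inv_cdf = np.vectorize(NormalDist().inv_cdf)
Z = inv_cdf(np.clip(P, 0.01, 0.99))   # clip avoids infinite z-scores
scale = Z.mean(axis=1)

# Only differences are meaningful on an interval scale; anchor at zero.
scale -= scale.min()
print(np.round(scale, 2))
```

With ~140,000 observations the unanimous-judgment cells (p = 0 or 1) still need handling, which is why the clip (or a more principled correction) matters in practice.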
We report preliminary results of an experiment measuring contrast sensitivity as a function of spatial frequency, location in color space, and direction of variation. Observers viewed bipartite fields containing sinusoidal gratings on one side and a uniform field of the same mean color on the other. The experiment used a forced-choice paradigm to measure thresholds for a large number of observers and a large number of mean color/spatial frequency/direction-of-variation combinations, although no one observer saw every combination. While the early data are noisy, we have obtained average contrast sensitivity as a function of spatial frequency when any of L*, a*, b*, C*, or hab is varied. We have not yet found any meaningful dependence on an independent variable other than spatial frequency, such as C*, as would be expected from color-difference metrics such as CIE ΔE94 or ΔECMC.
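For reference, the C* dependence that these metrics would predict enters through the CIE94 weighting functions S_C = 1 + 0.045 C* and S_H = 1 + 0.015 C*. The sketch below implements the published CIE94 formula with the graphic-arts weights kL = kC = kH = 1 and C* taken from the first (reference) sample; the Lab values are invented examples, not data from this experiment.

```python
import math

def delta_e94(lab1, lab2):
    """CIE94 colour difference; lab1 is treated as the reference sample."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = math.hypot(a1, b1)
    C2 = math.hypot(a2, b2)
    dC = C1 - C2
    # dH^2 follows from dE_ab^2 = dL^2 + dC^2 + dH^2 (clamped for rounding).
    dH2 = max((a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2, 0.0)
    SL, SC, SH = 1.0, 1.0 + 0.045 * C1, 1.0 + 0.015 * C1
    return math.sqrt((dL / SL) ** 2 + (dC / SC) ** 2 + dH2 / SH ** 2)

print(round(delta_e94((50.0, 20.0, 10.0), (52.0, 22.0, 8.0)), 3))
```

Because S_C and S_H grow with chroma, equal ΔE94 steps correspond to larger a*b* differences at high C*, which is precisely the chroma dependence the thresholds have not (yet) shown.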
An experiment was carried out to evaluate the perceptibility and the acceptability of colour differences between pairs of CRT images. Four images were used. For each image, a series of images was systematically rendered along four directions (lightness, chroma, mixed lightness and chroma, and hue) using different functions. The effects of rendering were assessed by a panel of observers. The results were used to test the performance of different colour-difference equations. In addition, the perceptibility threshold and the acceptability tolerance for each formula were determined.
The Bradford chromatic adaptation transform, empirically derived by Lam, models illumination change. Specifically, it provides a means of mapping XYZs under a reference light source to XYZs under a target light source such that the corresponding XYZs produce the same perceived color.

One implication of the Bradford chromatic adaptation transform is that color correction for illumination takes place not in cone space but rather in a 'narrowed' cone space: the Bradford sensors have their sensitivity more narrowly concentrated than the cones. However, the Bradford sensors are not optimally narrow. Indeed, recent work has shown that it is possible to sharpen sensors to a much greater extent than Bradford does.

The focus of this paper is comparing the perceptual error between the actual appearance and the predicted appearance of a color under different illuminants, since it is perceptual error that the Bradford transform minimizes. Lam's original experiments are revisited, and the perceptual performance of the Bradford transform and the linearized Bradford transform is compared with that of a new adaptation transform based on sharp sensors. Perceptual errors in CIELAB ΔE, ΔECIE94, and ΔECMC(1:1) are calculated for several corresponding-color data sets and analyzed for their statistical significance. The results are found to be similar for the two transforms, with Bradford performing slightly better on some data sets depending on the color-difference metric used. Overall, the sharp transform performs as well as the linearized Bradford transform: there is no statistically significant difference in performance for most data sets.
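The linearized Bradford transform has a compact closed form, sketched below: map XYZ into the 'narrowed' sensor space with the standard Bradford matrix, apply a von Kries diagonal scaling by the ratio of the white-point responses, and map back. The white points (D65 and illuminant A) are textbook values; the sample XYZ is an invented example.

```python
import numpy as np

# Standard Bradford matrix (Lam), mapping XYZ to 'narrowed' cone responses.
M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])

def bradford_adapt(xyz, xyz_src_white, xyz_dst_white):
    """Linearized Bradford: von Kries scaling in the narrowed sensor space."""
    rgb_s = M_BFD @ xyz_src_white
    rgb_d = M_BFD @ xyz_dst_white
    D = np.diag(rgb_d / rgb_s)            # per-channel adaptation gains
    return np.linalg.inv(M_BFD) @ D @ M_BFD @ np.asarray(xyz)

d65 = np.array([95.047, 100.0, 108.883])     # source white (D65)
ill_a = np.array([109.850, 100.0, 35.585])   # target white (illuminant A)

# A white sample must map exactly onto the target white point.
out = bradford_adapt(d65, d65, ill_a)
assert np.allclose(out, ill_a)
print(np.round(bradford_adapt([50.0, 40.0, 30.0], d65, ill_a), 2))
```

A sharpened transform has exactly the same structure; only the matrix replacing M_BFD changes, which is why the two can be compared on equal footing against corresponding-color data.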