We studied color constancy using a pair of 3-D Color Mondrian displays made of two identical sets of painted wooden shapes. Only 6 chromatic and 5 achromatic paints were applied to nearly 100 block facets. The three-dimensional nature of these test targets adds shadows and multiple reflections not found in flat Mondrians. Observers viewed one set in uniform illumination (Low Dynamic Range, LDR) and the other in highly directional, non-uniform illumination (High Dynamic Range, HDR). Both 3-D Mondrians were side by side, in the same room, at the same time. We used two measurement techniques to evaluate how well appearances correlated with the objects' reflectances. First, we asked observers to compare the appearances of individual three-dimensional surfaces having identical reflectances, and recorded these changes in appearance using magnitude estimation. Second, an author painted a reproduction of the pair of Mondrians using watercolors. We measured the watercolor reflectances of the corresponding areas to quantify the changes in appearance. Both measurements give us important data on how reflectance, illumination and image structure affect color constancy. A constant paint does not exhibit perfect color constancy, but rather shows significant shifts in lightness, hue and chroma in response to non-uniform illumination.
In a classic 1931 paper, Jones and Condit measured the luminance range in 130 natural scenes “… to determine the perfection with which the tonal characteristics of a given scene can be reproduced by the photographic process”. While their data served photography well for over 70 years, their results in no way represent what we see every day in either dynamic range or color. Today's media are approaching such a standard, and it seems as important as it was then to revisit the Jones and Condit study in this larger context through Fairchild's HDR photographic survey, which captured and documented over a hundred natural scenes in their fullest range of luminance and color. Analysis of these scenes found contrast ratios, or within-scene dynamic ranges, averaging 3 orders of magnitude and approaching 6 orders at the 3-sigma limits of their distribution. By contrast, Jones and Condit found an average of 160:1 with a maximum value of 750:1, certainly less than 3 orders of magnitude in total. Perhaps not surprisingly, the distribution of color in these scenes was largely confined to the region in and around the neutral axis. However, a small but significant portion almost fills the gamut of all possible colors in CIE chromaticity space, certainly well outside the current digital cinema and video standards for color.
The dependence of an object's colour on the illuminant chromaticity makes it difficult to use colour as a reliable cue in machine vision applications, particularly in naturally illuminated, high dynamic range scenes. To solve this problem, the outputs from four logarithmic sensors with different spectral responses can be used to obtain a two-dimensional description of an object's chromaticity that is independent of the illuminant. The spectral responses of these four sensors have been optimised. A simple test of colour separability suggests that, using the data from these sensors, it is possible to match the ability of the human visual system to separate similar colours. A comparison of the performance of the proposed system when the reflectance and illumination data are both changed suggests that readily available data (Munsell reflectance spectra and CIE standard daylight spectra) can be used to design a generic system to separate colours that are described as matching each other. However, for applications that require discrimination between very closely matched colours, it may be necessary to use an application-specific system designed using data relevant to the application.
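The abstract does not give the sensor construction, but the invariance it relies on can be sketched: for idealized narrow-band sensors, differences of log responses cancel a multiplicative illuminant change. A minimal sketch, assuming idealized sensors; the function names and the choice of which two differences form the 2-D descriptor are illustrative assumptions, and the paper's optimised spectral responses are not reproduced here:

```python
import numpy as np

def log_sensor_responses(reflectance, illuminant):
    # idealized narrow-band sensors: each response is the product of surface
    # reflectance and illuminant power at the sensor's wavelength
    return np.log(reflectance * illuminant)

def invariant_chromaticity(log_resp):
    # differences of log responses cancel a uniform illuminant scaling
    # (log k is constant across channels, so np.diff removes it);
    # four sensors give three differences, two of which form a 2-D descriptor
    d = np.diff(log_resp)
    return d[:2]
```

This toy version is only invariant to a uniform intensity change; handling realistic spectral illuminant changes is what the paper's optimised sensor responses are for.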
In this paper we compare different image quality measures for the gamut mapping problem and validate them using psycho-visual data from four recent gamut mapping studies. The psycho-visual data are choice data of the form: given an original image and two images obtained by applying different gamut mapping algorithms, an observer chooses the one that, in his or her opinion, reproduces the original image better. The scoring function used to validate the quality measures is the hit rate, i.e., the percentage of correct choice predictions on data from the psycho-visual tests. We also propose a new image quality measure based on the difference in color and local contrast, which compares well to the measures from the literature on our psycho-visual data. Some of these measures predict the observers' preferences as well as scaling methods such as Thurstone's method or conjoint analysis, which are used to evaluate the psycho-visual tests. This is remarkable in the sense that the scaling methods are fitted to the experimental data, whereas the quality measures are independent of these data.
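The hit-rate scoring function described above is simple to state in code. A minimal sketch with hypothetical names, assuming a higher score means a better reproduction of the original:

```python
def hit_rate(scores_a, scores_b, observer_choices):
    """Fraction of psycho-visual choices the quality measure predicts correctly.

    scores_a, scores_b: quality-measure scores for the two gamut-mapped images
    observer_choices: 'a' or 'b' per trial, the image the observer preferred
    """
    hits = sum(1 for qa, qb, choice in zip(scores_a, scores_b, observer_choices)
               if ('a' if qa > qb else 'b') == choice)
    return hits / len(observer_choices)
```

For example, if a measure agrees with the observer on two of three trials, the hit rate is 2/3.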
In this paper, we propose and validate SV-CIELAB, a video quality assessment (VQA) method based on a spatio-velocity contrast sensitivity function (SV-CSF). The SV-CSF describes the relationship among contrast sensitivity, the spatial frequency of stimuli and their velocity. We use the SV-CSF to filter the original and distorted videos, and obtain quality scores by calculating CIELAB color differences between the filtered videos. The validation experiments show that SV-CIELAB is a more effective VQA method than conventional methods such as the CIELAB color difference and Spatial-CIELAB.
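The pipeline described above, filtering both videos and then taking CIELAB differences, can be sketched per frame as follows. This is a simplified illustration: `csf_filter` stands in for the SV-CSF filtering (which is not reproduced here), and the CIE76 color difference is used for brevity:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    # CIE76 color difference per pixel, for H x W x 3 L*a*b* arrays
    return np.sqrt(((lab1 - lab2) ** 2).sum(axis=-1))

def sv_cielab_score(ref_lab, dist_lab, csf_filter):
    # filter reference and distorted frames with a (stand-in) SV-CSF kernel,
    # then average the per-pixel CIELAB difference over the frame
    f_ref = csf_filter(ref_lab)
    f_dist = csf_filter(dist_lab)
    return delta_e_ab(f_ref, f_dist).mean()
```

A video-level score would average this over frames, with the filter chosen per the local stimulus velocity.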
An anaglyph is a 3D stereo image that uses color to present both left- and right-eye views of a scene in a single image; color filter glasses direct the views to each eye. The “anaglyph problem” is, given the characteristics of the display and filters, to find the color for each pixel in the anaglyph that best delivers the stereo pair. “Best” in this context means avoiding undesirable visual artifacts such as retinal rivalry and stereo crosstalk while maximizing perceived color fidelity. A vector formulation of the anaglyph problem for additive displays is presented, and solutions that minimize rivalry and minimize crosstalk are identified. Factors such as adaptation and display range clipping are included in the solutions. Example anaglyph images produced with the described methods are presented.
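The flavor of such a vector formulation can be illustrated with a least-squares sketch for a single pixel. The 3×3 matrices `A_left`, `A_right` and `D` are hypothetical placeholders for measured display/filter characteristics, not the paper's actual data:

```python
import numpy as np

def anaglyph_pixel(left_rgb, right_rgb, A_left, A_right, D=np.eye(3)):
    """Least-squares anaglyph color for one pixel (illustrative sketch).

    A_left, A_right: assumed 3x3 matrices mapping anaglyph display RGB to the
    tristimulus values reaching each eye through its filter.
    D: assumed matrix mapping display RGB to tristimulus for direct viewing.
    """
    C = np.vstack([A_left, A_right])                 # stacked 6x3 system
    t = np.concatenate([D @ np.asarray(left_rgb),    # what the left eye should see
                        D @ np.asarray(right_rgb)])  # what the right eye should see
    a, *_ = np.linalg.lstsq(C, t, rcond=None)
    return np.clip(a, 0.0, 1.0)                      # display range clipping
```

With idealized red-pass and cyan-pass filter matrices, this reduces to the classic red-from-left, green/blue-from-right anaglyph; realistic filter matrices yield the compromise colors the least-squares solution trades off.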
In the context of anti-piracy / anti-camcorder measures in digital cinemas, a major requirement is the invisibility of the inserted pattern to the legal viewer watching the main cinema screen. On the basis of a multispectral projection system developed to create color metamers, we faced the need to explore and better understand the variability of color vision. This paper first presents the application context and the problem to solve. The CIE-2006 model is then introduced, complemented by a brief overview of the genetic aspects of color vision, as these were the basis for the study. The variability of color vision is then evaluated, first on a set of Stiles and Burch observers and then on observers in our digital cinema context. Two experiments are presented as incremental steps towards improving the system. The two experiments reveal coherent disparities across observers, modeled as shifts of the L- and M-cone fundamentals. The improvement between the first and the second experiment is shown in terms of closeness to a standard observer. This gain is a clear result of integrating color perception knowledge into the system design, and a definite step towards acceptance of the metamer images by a wide and varied population of cinema observers.
Digital cinema is a very challenging field that promises to deliver cinema material at a very high quality level. In recent years, the key role of the Human Visual System (HVS) in the final perceived quality of compressed images has become an undeniable reality. It is therefore natural to take advantage of recent knowledge of human visual perception and its models in an image compression system. In this paper we propose a reproducible technique for improving perceptual JPEG 2000 image compression quality with a digital cinema profile. This technique consists of two main parts: a laboratory evaluation of an HVS model through the Contrast Sensitivity Function (CSF), and the implementation of visual weightings for the JPEG 2000 scheme, using the evaluated HVS model, in the spatial Fourier domain of the wavelet decomposition sub-bands.
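As an illustration of CSF-based subband weighting (not the paper's laboratory-evaluated model), a common approach assigns each wavelet decomposition level a weight taken from an analytic CSF, such as the Mannos–Sakrison function, at the level's center frequency:

```python
import numpy as np

def csf_mannos_sakrison(f):
    # Mannos-Sakrison contrast sensitivity function, f in cycles/degree
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def subband_weights(levels, f_max):
    """One visual weight per wavelet level (illustrative sketch).

    Level l is assumed to span frequencies around f_max / 2**(l + 1);
    its weight is the CSF value there, normalized to the largest weight.
    """
    centers = [f_max / 2 ** (l + 1) for l in range(levels)]
    w = np.array([csf_mannos_sakrison(f) for f in centers])
    return w / w.max()
```

In a JPEG 2000 encoder, such weights would scale the quantization step sizes per sub-band, so that frequencies the eye is less sensitive to are quantized more coarsely.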
A research project is underway to create a physical library of artist material samples and corresponding BRDF measurements that characterize their optical properties. This collection will provide the data needed to develop a gonio-imaging system for use in museums to document their collections more accurately. A sample set has been produced consisting of 26 panels containing over 600 unique samples. The selected materials are representative of those commonly used by artists both past and present, and take into account the variability in visual appearance resulting from the materials and application techniques used. Five attributes of variability were identified: medium, color, substrate, application technique and overcoat. Combinations of these attributes were selected based on those commonly observed in museum collections and suggested by surveying experts in the field. For each sample material, RGB and spectral image data will be collected and used to measure an average bi-directional reflectance distribution function (BRDF), and a pixel-wise BRDF will be fit using parameterized models. These results will be made available online as a searchable database that will also include specifications for each sample and other information useful for computer graphics rendering, such as the raw image data, normal maps and height maps.
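As an illustration of fitting a parameterized BRDF model to measurements (the abstract does not specify which models the project uses), a Phong-style model with a fixed specular exponent is linear in its diffuse and specular weights, so those weights can be recovered by least squares:

```python
import numpy as np

def fit_phong_brdf(cos_spec, measured, n=20):
    """Fit diffuse/specular weights of a simple Phong-style BRDF (sketch).

    Model: brdf = kd / pi + ks * cos_spec**n, linear in (kd, ks) for fixed n.
    cos_spec: cosine of the angle between mirror and view directions per sample
    measured: measured BRDF values at those geometries
    """
    A = np.column_stack([np.full_like(measured, 1 / np.pi), cos_spec ** n])
    (kd, ks), *_ = np.linalg.lstsq(A, measured, rcond=None)
    return kd, ks
```

A pixel-wise fit, as described above, would repeat this per pixel of the registered image stack; fitting the exponent as well would require a nonlinear solver.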
CIECAM02 has been used to predict colour appearance under a wide range of viewing conditions, to quantify colour differences, to provide a uniform colour space and to provide a profile connection space for colour management. However, several problems have been identified and this paper reports on the progress that has been made to improve and extend the CIECAM02 model to solve some of these problems.