For digital camera systems, transforming the native camera RGB signals into an intermediate working space is often required, with common examples being transformations into XYZ or sRGB. For scene-linear camera signals, by far the most common approach uses 3x3 matrices. For color pipelines designed for Rec709 displays, matrix-based input transforms can produce reasonable accuracy. However, the associated colorimetric errors can become significant for saturated colors, for example those beyond Rec709. To address this shortfall, a novel input color transformation method has been developed that uses two-dimensional lookup tables (LUTs). Because the surfaces associated with the 2D LUTs possess many degrees of freedom, highly accurate colorimetric transformations can be achieved. For several cinematic and broadcast cameras tested, the new method consistently shows a modest reduction of mean deltaE errors for lower-saturation colors. The improvement in accuracy becomes much more significant as saturation increases, such that mean deltaE errors are reduced by more than a factor of three.
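The two input-transform styles contrasted above can be sketched as follows. The matrix values, LUT size, and identity-style LUT contents are placeholders for illustration, not data from any tested camera.

```python
import numpy as np

# Illustrative 3x3 input matrix; the values are placeholders, not a real camera fit.
M = np.array([[ 0.9,  0.2, -0.1],
              [-0.1,  1.1,  0.0],
              [ 0.0, -0.2,  1.2]])

def matrix_transform(rgb):
    """Scene-linear camera RGB -> working space via a 3x3 matrix."""
    return M @ np.asarray(rgb, dtype=float)

def lut2d_bilinear(lut, u, v):
    """Bilinearly interpolate a 2D LUT at normalized coordinates (u, v) in [0, 1]."""
    n = lut.shape[0] - 1
    x, y = u * n, v * n
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, n), min(y0 + 1, n)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * lut[x0, y0] + fx * (1 - fy) * lut[x1, y0]
            + (1 - fx) * fy * lut[x0, y1] + fx * fy * lut[x1, y1])

# Identity-like 17x17 LUT for demonstration: each node stores its own coordinates.
grid = np.linspace(0.0, 1.0, 17)
lut = np.stack(np.meshgrid(grid, grid, indexing="ij"), axis=-1)

out = matrix_transform([0.5, 0.4, 0.3])     # matrix path
interp = lut2d_bilinear(lut, 0.25, 0.75)    # LUT path
```

The extra accuracy of the LUT approach comes from the fact that each node of the table can be fit independently, whereas the matrix imposes a single global linear map.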
The paramer-mismatch-based spectral gamut mapping framework is an approach that optimizes the spectral reproduction colorimetrically for multiple viewing conditions. Unfortunately, due to the pixelwise nature of this method, nearly identical neighboring pixels might be mapped to completely different colorants, which yields disturbing banding artifacts. The previously proposed solution to this problem adds some noise to the a* and b* channels of the input images prior to calculating the separation image. Even though this procedure solves the problem of banding artifacts, it adversely affects the graininess of the final print. In this paper, we propose an approach based on both colorimetric and spatial criteria to reduce banding artifacts in the final print. To our knowledge, the proposed method is the first attempt at joint spatio-spectral gamut mapping and separation. It leads to smoother spectral separations by preserving image edges, but is still not completely free of artifacts.
LED-illumination-based multispectral imaging has attracted much attention in recent years due to its fast computer-controlled switching ability, the availability of many different LEDs, robustness, and cost effectiveness. In this paper, we propose a system which uses an RGB camera along with two or three combinations of three different types of LEDs in order to acquire multispectral images of six or nine channels. Optimal LED combinations are selected so as to produce accurate estimates of spectral reflectance and/or color. The system is rather simple to realize. Moreover, it is faster, as it requires only two or three shots, unlike state-of-the-art multiplexed-LED-illumination systems, which require as many shots as the number of channels the system can acquire. The proposed system can be useful in general multispectral imaging applications. The system has been evaluated with both natural images and paintings. The results from the simulation experiments were promising, indicating that the proposed system is a practical and feasible method of multispectral imaging.
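The reflectance-estimation step that such a system relies on can be sketched with a simple least-squares estimator trained on synthetic data. The system matrix, training spectra, and channel count below are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 31 wavelength bands (e.g. 400-700 nm at 10 nm), 6 channels
# (an RGB camera under two LED combinations), 200 training spectra. All synthetic.
n_bands, n_channels, n_train = 31, 6, 200

# Hypothetical system matrix (channel sensitivity x illuminant SPD),
# here simply random nonnegative values rather than measured curves.
S = np.abs(rng.normal(size=(n_channels, n_bands)))

# Smooth-ish training reflectances: clipped random walks around 0.5.
R_train = np.clip(
    np.cumsum(rng.normal(scale=0.05, size=(n_train, n_bands)), axis=1) + 0.5,
    0.0, 1.0)

C_train = R_train @ S.T                  # simulated camera responses

# Linear estimator minimizing ||C_train @ W - R_train||^2 (least squares).
W, *_ = np.linalg.lstsq(C_train, R_train, rcond=None)

R_hat = C_train @ W                      # reconstructed training spectra
train_rmse = float(np.sqrt(np.mean((R_hat - R_train) ** 2)))
```

In practice the estimator would be trained on measured responses to known reflectance targets rather than simulated ones, and regularized variants (e.g. Wiener estimation) are common.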
For accurate color and spectral reflectance reproduction, we propose a novel eleven-band acquisition system using a nine-view stereo camera. The proposed system consists of eight monochrome cameras with eight different narrow band-pass filters and an RGB camera. To generate an eleven-band image, the nine captured stereo images are warped to correct the registration displacement caused by stereo parallax. In the correspondence search between stereo images, the phase-only correlation (POC) method is used. The most significant point of our method is that the captured RGB image is converted into narrow-band images for accurate correspondence search. The detected corresponding points are used to estimate the parameters of the image transformation, and an eleven-band image is generated. Compared with a conventional method, experimental results show that the accuracy of correspondence detection and spectral reflectance estimation is improved.
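The phase-only correlation step can be sketched as follows. The image data and shift are synthetic; the actual system applies POC between the transformed narrow-band stereo views.

```python
import numpy as np

def poc(f, g):
    """Phase-only correlation surface between two same-size images.

    The peak location gives the (circular) translation of g relative to f.
    """
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    cross = F * np.conj(G)
    r = cross / np.maximum(np.abs(cross), 1e-12)  # keep phase only
    return np.real(np.fft.ifft2(r))

# Synthetic demonstration: a random image and a circularly shifted copy.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, 3), axis=(0, 1))

surface = poc(shifted, img)
peak = np.unravel_index(np.argmax(surface), surface.shape)  # recovered shift
```

Discarding the magnitude spectrum is what makes POC robust to intensity differences between views, which is why converting the RGB image into comparable narrow-band images before matching helps.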
Spectral reflectance is a key material property and contributor to object appearance. While it has long been known that reflectance in a given wavelength interval correlates strongly with reflectances in neighboring intervals, this correlative property has previously been exploited only implicitly. This paper therefore presents a new approach to spectral analysis and synthesis that consists of first deriving a spectral correlation profile and then using it for a direct and full sampling of the corresponding spectral and color gamuts. The resulting technique can be used to generate natural-like spectra (or spectra following other, specific correlation properties), and it can also be incorporated into Bayesian models of spectral estimation.
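One way a correlation profile could drive spectrum synthesis is to sample a Gaussian model whose covariance encodes the band-to-band correlation. The exponential profile and length scale below are assumptions for illustration, not the paper's derived profile.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands = 31
idx = np.arange(n_bands)

# Hypothetical correlation profile: correlation decays exponentially with
# band distance; the length scale (in bands) is an assumption for illustration.
length_scale = 4.0
corr = np.exp(-np.abs(idx[:, None] - idx[None, :]) / length_scale)

# Sample zero-mean Gaussian vectors with that correlation (via a Cholesky
# factor), then squash logistically so the samples lie in (0, 1).
L = np.linalg.cholesky(corr + 1e-9 * np.eye(n_bands))
z = L @ rng.normal(size=(n_bands, 5))    # 5 sample spectra (columns)
spectra = 1.0 / (1.0 + np.exp(-z))
```

Substituting a profile measured from natural spectra for the exponential kernel would yield natural-like samples; substituting other profiles yields spectra with those correlation properties.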
The recovery of spectral reflectances from camera responses is usually composed of several distinct operations. We propose a new approach that connects edge-preserving denoising, deblurring, and spectral reconstruction. Each of the steps is based on actual physical properties of the camera system that can be obtained using established methods, and the filter eliminates the need for manual sharpening of the images. Results on both real images and synthetic data show significant improvements over previous methods for spectral reconstruction.
Thanks to technological advances, we may have more flexibility in determining the spectral power distribution (SPD) of light sources. Supposing any SPD is possible, we derive "extreme SPDs of light sources" aimed at specific purposes such as the lowest energy, the largest color gamut, or the lowest impact on fine artworks. We found that these SPDs always consist of multiple spikes when very high CRI is not required, whereas the SPD of black-body radiation is continuous in wavelength. To investigate the effect of such light sources on the human visual system and on camera systems, we employ two types of such light sources, namely Maximum White Luminous Efficacy of Radiation (MWLER), which gives the best energy efficiency, and Maximum Gamut Area (MGA), which gives the largest color gamut size. Both MWLER and MGA are composed of multiple spikes in wavelength. We generate such SPDs with respect to six types of existing light sources with the same CCT and CRI (where applicable), and by computer simulation we evaluate how sensitive these are for 10 sets of color matching functions (CMFs) given by Stiles and Burch, representing the human visual system, and for 4 sets of digital camera sensitivities. We assume that the color conversion matrix for each set of CMFs and each camera is adjusted to minimize errors on a Macbeth ColorChecker under black-body radiation, subject to a white point constraint. Under this assumption, we evaluate the colorimetric error under the two extreme SPDs in addition to black-body radiation and the existing light sources. We find that cameras give large errors (more than 20 in ΔE*ab) for these spiky light sources, which may not be acceptable to users even when the errors fall within a tolerable range for the human visual system. We conclude that such spiky light sources could be used without problems across variations of CMFs, but would be problematic for color reproduction by cameras.
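The colorimetric evaluation above rests on the ΔE*ab metric, which can be sketched as follows. The XYZ test values are arbitrary, and the D65 white point stands in for whatever adapting white the study used.

```python
import numpy as np

def xyz_to_lab(xyz, white):
    """CIE 1976 L*a*b* from tristimulus XYZ, given the reference white XYZ."""
    d = 6.0 / 29.0
    def f(t):
        return np.where(t > d ** 3, np.cbrt(t), t / (3 * d ** 2) + 4.0 / 29.0)
    fx, fy, fz = f(np.asarray(xyz, dtype=float) / np.asarray(white, dtype=float))
    return np.array([116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)])

def delta_e_ab(lab1, lab2):
    """Euclidean distance in L*a*b*: the ΔE*ab metric."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

white = np.array([95.047, 100.0, 108.883])          # D65 white, an assumption here
lab_ref = xyz_to_lab([41.24, 21.26, 1.93], white)   # arbitrary reference color
lab_test = xyz_to_lab([40.0, 22.0, 2.5], white)     # arbitrary reproduced color
dE = delta_e_ab(lab_ref, lab_test)
```

A ΔE*ab above roughly 2 is generally noticeable; the camera errors above 20 reported for the spiky SPDs are therefore far outside acceptable tolerances.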
Monitor brightness is affected by the surround condition. To investigate the surround luminance effect, the perceived brightness of six test stimuli with different luminance levels was estimated using a magnitude estimation technique. Each test stimulus was displayed on an LCD monitor. Nine surround conditions were controlled by an illuminator placed behind the monitor, with the surround ratio (SR) varied from 0.3 to 3.8. It was found that the perceived brightness of each test stimulus decreases when SR is higher than 1, compared to that under a dark surround. The CIECAM02 brightness predictor, Q, was tested and showed poor performance: CIECAM02 predicts that Q keeps increasing even when SR is higher than 1. Q in CIECAM02 is strongly influenced by the parameters c and LA. For this reason, new c values for the dim, average, and bright conditions are proposed as a log function of SR, based on the new brightness data.
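A log-form surround parameter of the kind the abstract proposes can be sketched as below. The coefficients are placeholders anchored only to the standard CIECAM02 average-surround value; they are not the paper's fitted function.

```python
import numpy as np

# CIECAM02 standard surround values for reference:
# c = 0.69 (average), 0.59 (dim), 0.525 (dark).
def surround_c(sr, a=0.69, b=0.08):
    """Hypothetical log-linear surround parameter: c(SR) = a + b*log10(SR).

    The coefficients a and b are placeholders chosen so that c(1) equals
    the standard average-surround value; they are not the paper's fit.
    """
    return a + b * np.log10(sr)

# Evaluate across the SR range studied in the experiment (0.3 to 3.8).
c_values = {sr: surround_c(sr) for sr in (0.3, 1.0, 3.8)}
```

Making c a continuous function of SR replaces the three discrete surround categories of CIECAM02, allowing Q to respond to surround luminance beyond SR = 1.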
Color appearance models are successfully used to model the color perception differences seen when the same stimuli are presented on different media, e.g. hard copy or a self-luminous display. It is currently unknown whether similar effects are present in gloss perception and whether there is a need for gloss appearance models. Gloss communication, and higher-level material appearance communication, is becoming more important every day with the increase in customized manufacturing and the need for the customer to preview a final product while short runs, time, and cost constraints prohibit the use of hard-copy proofs. Three experiments are proposed in order to analyze this phenomenon. The gloss matching performance of observers on real objects is first studied. Then, the same experiment is repeated with synthetic images. Finally, a cross-media matching experiment is performed, where observers have to match a real material with synthetic representations. The same trend was observed in the experiment using only real objects and in the cross-media situation: high matching accuracy was obtained for low-gloss samples, while the gloss of mid- and high-gloss samples was underestimated. The same accuracy for low-gloss samples was obtained in the experiment with only synthetic images, but mid- and high-gloss samples were overestimated. The sensitivity of the observers was highest when only real samples were used; it decreased when the display was used, due to the lack of visual disparity and multiple viewing conditions, and it was lowest in the last experiment, influenced by the multiple media and the above limitations.
There are two very different kinds of color constancy. One kind studies the ability of humans to be insensitive to the spectral composition of the scene illumination. The second studies computer-vision techniques for calculating the surface reflectances of objects under variable illumination. Camera-measured chromaticity has been used as a tool in computer-vision scene analysis. This paper measures the ColorChecker test target under uniform illumination to verify the accuracy of scene capture. We identify the limitations of sRGB camera standards, the dynamic-range limits of RAW scene captures, and the presence of camera veiling glare in areas darker than middle gray. Measurements of scene radiances and chromaticities with spot meters are much more accurate than camera capture, due to scene-dependent veiling glare. Camera capture data must be verified by calibration.