Accurately previewing the appearance of a print job can make the difference between producing saleable output and wasting expensive materials, and it is a challenge for which a host of solutions already exists. However, most of these have in common that they base their predictions on the inputs to a printing system (e.g., continuous-tone data in ink channels) rather than its outputs (i.e., the halftone data that is then printed), and that they are valid only for a given set of choices already made in the printing system (e.g., color separation and halftoning). Alternatively, attempting to make appearance predictions using general-purpose models such as Kubelka-Munk, Yule-Nielsen, and Neugebauer results in limited performance on systems whose behavior diverges from these models' assumptions, such as inkjet printing. As a result of such constraints, the resulting previews either work only under limited conditions or fail to predict some artifacts while erroneously predicting others that do not materialize in print. The approach presented here takes advantage of the flexibility of the HANS framework and of insights into spectral correlation to deliver a print preview solution that can be applied to any printing system, that allows fundamental imaging choices to be varied without re-computing model parameters, and that delivers ICC-profile-level accuracy.
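As a concrete illustration of the general-purpose models mentioned above, the Yule-Nielsen modified spectral Neugebauer (YNSN) prediction can be sketched in a few lines. The reflectance "spectra" and coverages below are illustrative toy values, not measured data:

```python
import numpy as np

def ynsn_reflectance(coverages, primary_reflectances, n=2.0):
    """Yule-Nielsen modified spectral Neugebauer (YNSN) prediction.

    coverages: fractional area coverages of the Neugebauer primaries
        (must sum to 1, e.g. from the Demichel equations).
    primary_reflectances: (p, w) reflectance spectra of the primaries.
    n: empirical Yule-Nielsen exponent modeling optical dot gain.
    """
    a = np.asarray(coverages, dtype=float)
    R = np.asarray(primary_reflectances, dtype=float)
    return (a @ R ** (1.0 / n)) ** n

# Toy single-ink example: the primaries are bare paper and solid ink,
# with three illustrative "bands" standing in for a full spectrum.
paper = np.array([0.90, 0.88, 0.85])
ink = np.array([0.10, 0.30, 0.70])
a = np.array([0.6, 0.4])          # coverages for 40% ink
pred = ynsn_reflectance(a, np.stack([paper, ink]))
```

With n = 1 the model reduces to the linear spectral Neugebauer mixture; larger n accounts for light scattering in the substrate, which is exactly where real systems like inkjet can diverge from the model's assumptions.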
Predicting the spectral reflectance and transmittance of halftone prints simultaneously is now possible thanks to a recently developed model based on flux-transfer matrices, called the Duplex Primary Reflectance-Transmittance (DPRT) model, which is valid for single-face as well as duplex printing. The model can be calibrated from either spectral reflectance measurements or spectral transmittance measurements, but it can also be calibrated from both by minimizing the distance between the theoretical and experimental transfer matrices. According to tests carried out on inkjet-printed paper, the predictive performance of the DPRT model, coupled with the new calibration method, is good enough to permit interesting applications in the graphic arts, such as the display of multiple images depending on whether the light source is in front of the duplex color print or behind it.
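The flux-transfer matrix machinery behind such models can be illustrated with a simplified symmetric two-flux layer. This is a generic sketch of the formalism, not the full DPRT model, and the layer parameters are hypothetical:

```python
import numpy as np
from functools import reduce

def layer_matrix(r, t, r_back=None, t_back=None):
    """Flux-transfer matrix of a layer with front-side reflectance r and
    transmittance t (back-side values default to the same, i.e. a
    symmetric layer). Maps the (downward, upward) flux pair at the
    layer's bottom face to the pair at its top face."""
    rb = r if r_back is None else r_back
    tb = t if t_back is None else t_back
    return (1.0 / t) * np.array([[1.0, -rb],
                                 [r, t * tb - r * rb]])

def stack_reflectance_transmittance(*matrices):
    """Reflectance and transmittance of a stack of layers, obtained by
    multiplying their transfer matrices (topmost layer first)."""
    M = reduce(np.matmul, matrices)
    return M[1, 0] / M[0, 0], 1.0 / M[0, 0]
```

The appeal of the formalism is that a stack of layers (ink film, substrate, backside ink) is just a matrix product, which reproduces the classic multiple-reflection series, e.g. two identical layers give R = r + t·r·t/(1 − r²).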
Four practical methods were evaluated for determining the spectral responsivity of a Nikon D200 digital camera: (1) direct measurement by exposure to a monochromator sequence at 5 nm intervals; (2) an LED colour target with 36 luminous circular patches; (3) filtered illumination, using 16 transmission filters of 20 nm bandwidth; and (4) a filtered camera, using the same set of filters. Taking the monochromator measurements as the reference, the results of the other three methods showed good correspondence, but systematic differences were observed under fluorescent illumination. Because of the wavelength shift in filter transmittance with angle of incidence, it is recommended that sources with smooth spectral power distributions be used for filter-based characterisation.
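Methods (3) and (4) reduce to a linear inverse problem: each filtered exposure integrates the unknown responsivity against a known effective spectrum. A minimal sketch on synthetic data, assuming Tikhonov (second-difference) smoothness regularization as one common way to stabilize the inversion; the filter shapes and responsivity below are illustrative, not the paper's measurements:

```python
import numpy as np

wl = np.arange(400, 701, 5)                 # wavelength grid, nm

# Synthetic "true" responsivity (e.g. a green channel) for the demo.
true_resp = np.exp(-0.5 * ((wl - 530.0) / 40.0) ** 2)

# Effective spectra reaching the sensor, one row per filtered
# exposure (source SPD x filter transmittance), modeled here as
# 16 smooth bands across the visible range.
centers = np.linspace(410, 690, 16)
S = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 15.0) ** 2)

# Recorded camera responses, one per filtered exposure.
c = S @ true_resp

# Tikhonov inversion with a second-difference smoothness penalty.
D = np.diff(np.eye(wl.size), n=2, axis=0)
lam = 1e-3
resp_est = np.linalg.solve(S.T @ S + lam * D.T @ D, S.T @ c)
```

The smoothness penalty is what makes 16 broadband measurements sufficient to estimate a 61-sample curve; spiky sources such as fluorescent lamps violate the implicit smoothness of the forward model, consistent with the systematic differences reported above.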
Andersen and Hardeberg proposed Hue Plane Preserving Colour Correction (HPPCC) [1], which maps RGBs to XYZs using a set of linear transforms, each learned and applied in a subregion of colour space defined by two adjacent hue planes. A hue plane is a geometrical half-plane bounded by the neutral axis and passing through a chromatic colour. A problem with the original HPPCC method is that the selection of chromatic colours was a user-defined choice (and the user might choose poorly), and the method as formulated was not open to optimization. In this paper we present a flexible method of hue plane preserving colour correction which we call Hue Plane Preserving Colour Correction using Constrained Least Squares (HPPCC-CLSQ). This colorimetric characterization method is likewise based on a series of 3×3 matrices, each responsible for transforming a subregion of camera RGB values, defined by two adjacent hue planes, to the corresponding subregion of estimated colorimetric XYZ values. The matrices are constrained to preserve the white point. In this new formulation, the subregions can be chosen flexibly in number and position in order to regularize and optimize the results, whilst continuity across the hue planes is enforced. The method is compared to a selection of other state-of-the-art characterization methods, and the results show that our method consistently gives high colorimetric accuracy for both synthetic and real camera data.
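The white-point-preservation constraint can be imposed on each 3×3 matrix by constrained least squares, for example by solving a small KKT system per XYZ column. A minimal sketch on synthetic data (not the authors' full HPPCC-CLSQ implementation, which additionally enforces continuity across hue planes):

```python
import numpy as np

def fit_matrix_white_preserving(rgb, xyz, rgb_white, xyz_white):
    """Least-squares 3x3 matrix M with rgb @ M ~= xyz, subject to the
    hard constraint rgb_white @ M == xyz_white (white point preserved).
    Each XYZ column is solved via its KKT conditions."""
    A = np.asarray(rgb, dtype=float)
    B = np.asarray(xyz, dtype=float)
    w = np.asarray(rgb_white, dtype=float)
    xw = np.asarray(xyz_white, dtype=float)
    AtA = A.T @ A
    M = np.zeros((3, 3))
    for j in range(3):
        # Stationarity: 2 AtA m + lam w = 2 A^T b_j; constraint: w.m = xw_j
        K = np.zeros((4, 4))
        K[:3, :3] = 2.0 * AtA
        K[:3, 3] = w
        K[3, :3] = w
        rhs = np.append(2.0 * A.T @ B[:, j], xw[j])
        M[:, j] = np.linalg.solve(K, rhs)[:3]
    return M

# Toy usage: with exact linear data the constrained fit recovers M_true.
rng = np.random.default_rng(0)
M_true = rng.normal(size=(3, 3))
rgb = rng.random((20, 3))
M_fit = fit_matrix_white_preserving(rgb, rgb @ M_true,
                                    np.ones(3), np.ones(3) @ M_true)
```

A hard linear constraint of this form costs nothing in solver complexity, which is why each subregion's matrix can hit the white point exactly while still fitting its training colours in the least-squares sense.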
In this paper, we characterize individual differences in the variation of the melanin component, which greatly affects apparent age. We consider the frequency of UV-protection use as a factor causing individual differences in aging. Exposure to UV rays is known to produce melanin pigment in the skin, which promotes signs of skin aging such as darkening and unevenness of skin color. In our previous work, we applied principal component analysis (PCA) to the skin pigmentation distribution and obtained feature values. By changing these feature values, we simulated the appearance of a human face at an arbitrary age. This revealed that the melanin component, especially around the cheeks, tends to increase with aging. However, the previous method used feature values averaged over each age group, so individual differences remained to be considered as the next step of this research. In this paper, we constructed a database of facial images of the same 77 subjects taken in 2003 and 2010. The frequency of UV-protection use was also recorded as a lifestyle habit. By applying the same analysis as in the previous method, we obtained the score shift between the 2003 and 2010 data. From the trends in this shift, we found that one-fourth of the subjects attained a lighter-skinned face after seven years when they used UV protection throughout the year.
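The PCA score-shift analysis can be sketched as follows. The pigmentation maps here are random stand-ins for real melanin-component images, and the number of components is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(1)
# Random stand-ins for vectorized melanin maps (rows = 77 subjects).
maps_2003 = rng.normal(0.3, 0.05, size=(77, 256))
maps_2010 = maps_2003 + rng.normal(0.02, 0.01, size=(77, 256))

# PCA basis learned on the 2003 data.
mean = maps_2003.mean(axis=0)
U, s, Vt = np.linalg.svd(maps_2003 - mean, full_matrices=False)
basis = Vt[:5]                        # first five principal components

scores_2003 = (maps_2003 - mean) @ basis.T
scores_2010 = (maps_2010 - mean) @ basis.T
shift = scores_2010 - scores_2003     # per-subject score shift, 2003->2010

# Simulate aging: move one subject along the average shift and
# reconstruct the corresponding melanin map.
simulated = mean + (scores_2003[0] + shift.mean(axis=0)) @ basis
```

Working per-subject in the score space, rather than with age-averaged scores, is what lets the shift vectors be grouped by the recorded UV-protection habit.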
Color discrepancies between the left and right images of a stereoscopic pair cause many problems, including a reduced three-dimensional effect and increased visual fatigue, so color matching is very important for three-dimensional display systems. Therefore, a hierarchical integrated color matching method based on image decomposition is proposed for stereoscopic images. In the proposed method, global and local color discrepancies in a stereoscopic image are effectively reduced by histogram matching and illuminant estimation using image decomposition. The stereoscopic image is first decomposed into a base layer and several texture layers. Each decomposed layer is then matched using cumulative histogram matching and a multi-scale retinex algorithm. Lastly, inverse decomposition is applied to each layer to reconstruct the corrected stereoscopic image. Experimental results show that the proposed method achieves better color matching performance than previous methods.
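The cumulative histogram matching step applied to a decomposed layer can be sketched as follows, assuming single-channel floating-point layers:

```python
import numpy as np

def cumulative_histogram_match(source, reference):
    """Remap source values so that their cumulative histogram matches
    that of the reference (one channel of one decomposed layer)."""
    s_vals, s_inv, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source quantile, look up the reference value at the
    # same quantile.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[s_inv].reshape(source.shape)
```

Applied to the base layer, this corrects global discrepancies without disturbing the texture layers, which are handled separately by the retinex step.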
Active-illumination-based multispectral imaging using LED light sources has received much attention in recent years due to the availability of a wide range of narrow-band color LEDs spanning the visible as well as the infrared region. We previously proposed fast multispectral imaging using LED illumination and an RGB camera (RGB-LEDMSI), along with a novel LED selection method. In this paper, we present and analyze results from real-world experiments on the proposed RGB-LEDMSI. We built a prototype 9-band RGB-LEDMSI system using commercial iQ-LED panels and a Nikon D600 camera. The experimental results from the prototype confirm the effectiveness of the proposed system.
The Spectral Edge method of image fusion fuses input image details while maintaining natural color. It is a derivative-based technique built on the structure tensor and lookup-table-based gradient field reconstruction, with many applications including RGB-NIR image fusion and remote sensing. In this paper, we propose adding an iterative step to the method: the output Spectral Edge image is used as the putative color image for another fusion step, and this is repeated for several iterations. We show that this produces an output image with a structure tensor field closer to that of the high-dimensional input than the output of the original method. We perform a psychophysical experiment using the iterative Spectral Edge method for RGB-NIR image fusion, which shows that the result of multiple iterations is preferred.
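The structure-tensor comparison used to judge closeness to the high-dimensional input can be sketched with simple finite-difference gradients; this is an illustrative implementation, not the paper's exact one:

```python
import numpy as np

def structure_tensor(channels):
    """Per-pixel 2x2 structure tensor, summed over channels.
    channels: (n, h, w) array; returns an (h, w, 2, 2) field."""
    gy, gx = np.gradient(channels.astype(float), axis=(1, 2))
    T = np.empty(channels.shape[1:] + (2, 2))
    T[..., 0, 0] = (gx * gx).sum(axis=0)
    T[..., 0, 1] = T[..., 1, 0] = (gx * gy).sum(axis=0)
    T[..., 1, 1] = (gy * gy).sum(axis=0)
    return T

def tensor_distance(a, b):
    """Mean Frobenius distance between the structure tensor fields of
    two images; smaller means the fused image's local contrast is
    closer to the input's."""
    d = structure_tensor(a) - structure_tensor(b)
    return np.sqrt((d ** 2).sum(axis=(-2, -1))).mean()

# A pure horizontal ramp has unit gradient in x and none in y.
ramp = np.tile(np.arange(5.0), (1, 5, 1))       # (channels, h, w)
T_ramp = structure_tensor(ramp)
```

Because the structure tensor pools gradient energy over all channels, a distance of this kind can compare a 3-channel fused output against a higher-dimensional input (e.g. RGB+NIR) directly, which is what makes it a natural convergence measure for the iteration.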
We propose a method for relighting fluorescent objects using a multiband imaging system. Fluorescence is often present in everyday articles, and its optical properties make it harder to reproduce than pure reflectance. Decomposing colors into separate fluorescent and reflective components is required for accurate color reproduction. In our method, bi-spectral images are captured using an eight-band camera and an eight-band lighting system to perform the decomposition. First, the spectral reflectance of the object is estimated. Next, the multiband image of the fluorescent components under the relighting illumination is generated as a weighted linear combination of the captured fluorescent images. The weight of each band is calculated from the spectral transmittance of the band-pass filter attached to the lighting system and the spectral power distributions (SPDs) of the illumination during image capture and relighting. Finally, the relit image is reproduced by composing the reflective and fluorescent component images. Experimental evaluations show that the spectral reflectance and the SPD of the fluorescent components are accurately estimated and that relit images are well reproduced.
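One plausible reading of the band-weight computation, sketched below, takes each weight as the band's energy under the relighting SPD relative to the capture SPD. The filter shapes and SPDs here are hypothetical, and this ratio form is an assumption rather than the authors' exact formula:

```python
import numpy as np

wl = np.arange(400, 701, 10)                  # wavelength grid, nm
rng = np.random.default_rng(2)

n_bands, h, w = 8, 4, 4
# Hypothetical band-pass transmittances of the 8-band lighting system.
centers = np.linspace(420, 680, n_bands)
T = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 12.0) ** 2)

spd_capture = np.ones(wl.size)                # flat capture illuminant
spd_relight = 1.0 + 0.002 * (wl - 400.0)      # tilted relighting illuminant

# Assumed weight: band energy under relighting relative to capture.
wgt = (T @ spd_relight) / (T @ spd_capture)

fluor_bands = rng.random((n_bands, h, w))     # captured fluorescent images
fluor_relit = np.tensordot(wgt, fluor_bands, axes=1)   # weighted sum
```

When the relighting SPD equals the capture SPD, all weights are 1 and the relit fluorescent image reduces to the sum of the captured band images, as one would expect.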
Spectral printing aims to achieve an illuminant-invariant match between the original and the reproduction. Due to limited printer spectral gamuts, an errorless spectral reproduction is usually impossible, and spectral gamut mapping is required to reduce perceptual errors. The recently proposed paramer-mismatch-based spectral gamut mapping (PMSGM) strategy minimizes such errors. However, because of its pixel-wise processing, it may produce severely different tonal values for spectrally similar adjacent pixels, causing unwanted edges (banding) in the final printout. While adding some noise to the a* and b* channels of the colorimetric (e.g., CIELAB) image rendered for the first illuminant, prior to gamut mapping, solves the banding problem, it also increases image graininess. In this article, the authors combine the PMSGM strategy with the subsequent spectral separation, considering the spatial neighborhood in both the tonal-value space and the illuminant-dependent perceptual spaces to compute tonal values directly. Their results show significant improvements over the PMSGM method in avoiding banding artifacts.