In this study we present a technique for embedding a digital watermark containing copyright information into a spectral image. The watermark is embedded in a transform domain of the spectral dimension of the image. The transform domain is derived by applying the principal component analysis (PCA) transform to the original image. The watermark is embedded by modifying the coefficients of the eigenvectors from the PCA transform. After modification, the image is reconstructed by the inverse PCA transform and thus contains the watermark. We analyze the watermark's imperceptibility and its robustness against attacks for various parameter values in both embedding and the attacks. The attacks include lossy image compression by the wavelet transform, median filtering, and mean filtering. Experimental results indicate that the watermarked image is very similar to the original and that the watermark can be extracted with reasonable visual fidelity after the attacks.
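The abstract's embed-and-reconstruct cycle can be sketched in miniature. This is a hedged illustration, not the paper's algorithm: a fixed 2x2 orthonormal basis stands in for the PCA eigenvectors, and the function names, the coefficient index `k`, and the strength `alpha` are all illustrative assumptions.

```python
import math

def matvec(M, v):
    """Multiply matrix M by vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(row) for row in zip(*M)]

def embed(x, E, k, alpha, w):
    """Project x onto the basis, add alpha*w to coefficient k, invert."""
    c = matvec(transpose(E), x)   # forward transform: c = E^T x
    c[k] += alpha * w             # modify one transform coefficient
    return matvec(E, c)           # inverse transform: x' = E c

def extract(x_marked, x_orig, E, k, alpha):
    """Recover w by comparing coefficient k of marked and original vectors."""
    ck  = matvec(transpose(E), x_marked)[k]
    ck0 = matvec(transpose(E), x_orig)[k]
    return (ck - ck0) / alpha

# Orthonormal stand-in for a PCA eigenvector basis (a 30-degree rotation).
t = math.radians(30)
E = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
x  = [10.0, 4.0]                          # one spectral pixel (toy 2 bands)
xw = embed(x, E, k=1, alpha=0.5, w=1.0)
print(round(extract(xw, x, E, k=1, alpha=0.5), 6))  # recovers w = 1.0
```

Because the basis is orthonormal, the inverse transform is just its transpose, so the embedded coefficient offset survives the round trip exactly.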
We demonstrate, for the multi-ink printers investigated, that many spectral reflectances can be produced approximately by a single printer through a large variety of different ink combinations. This spectral redundancy was evaluated for a pair of six-ink CMYKGO ink-jet printers. Using lookup tables, density maps were built illustrating the six-dimensional distribution of redundancy throughout colorant space. A tolerance of 0.01 RMS showed none of the inks in our CMYKGO systems to be fully replaceable by combinations of the other inks. However, when the tolerance was relaxed to 0.02 RMS, the degrees of freedom for matching spectra fell to five, because the five chromatic inks cover the entire spectral gamut without the need of the black ink. Systematic relationships among the inks are reported, detailing the likelihood that combinations of printer digital counts may be replaced by largely different ones while preserving spectral reflectance to within an RMS spectral reflectance factor tolerance.
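The matching criterion in the abstract reduces to a simple RMS test between reflectance vectors. A minimal sketch, with toy 4-band spectra standing in for measured reflectance factors:

```python
def rms_error(r1, r2):
    """Root-mean-square difference between two reflectance vectors."""
    return (sum((a - b) ** 2 for a, b in zip(r1, r2)) / len(r1)) ** 0.5

def spectra_match(r1, r2, tol=0.01):
    """True if two ink combinations' spectra agree within the RMS tolerance."""
    return rms_error(r1, r2) <= tol

r_a = [0.40, 0.42, 0.45, 0.50]   # toy 4-band reflectance factors
r_b = [0.41, 0.43, 0.44, 0.51]   # a different ink combination's spectrum
print(spectra_match(r_a, r_b, tol=0.02))  # matches under the relaxed tolerance
```

Two digital-count combinations whose predicted spectra pass this test are counted as redundant; tightening `tol` from 0.02 to 0.01 is exactly the shift that, per the abstract, restores the black ink's independent role.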
Deriving the actual multispectral data from the output of the acquisition system is a key problem in the field of multispectral imaging. Solving it requires a characterization method and the training set (if any) on which the method relies. In this paper we propose three novel approaches to selecting a training set for the characterization of a multispectral acquisition system. The first approach, which we call the Hue Analysis Method, is based on colorimetric considerations; the second and third, which we call the Camera Output Analysis Method and the Linear Distance Maximization Method, respectively, are based mainly on algebraic and geometrical considerations. In all three cases the selected training sets have relatively few samples and broad applicability. We also test our three approaches, together with an approach from another author and a random selection method, on data obtained from a real acquisition, and compare the reconstructed reflectances with measurements obtained using a spectrophotometer. Our results indicate that all of our methods can substantially improve on a random selection of the training set, and that the performance of the Linear Distance Maximization Method makes it the best choice among the methods tried for application in a general context.
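The abstract does not specify the Linear Distance Maximization Method, so the following is only one plausible reading of a distance-maximizing selection: a greedy farthest-point pick that keeps the training set small while spreading samples out in (for instance) camera-output space. All names and the seeding choice are illustrative assumptions.

```python
def dist(a, b):
    """Euclidean distance between two samples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_training_set(samples, k):
    """Greedily pick k samples, each maximizing its distance to those chosen."""
    chosen = [samples[0]]                  # arbitrary seed sample
    while len(chosen) < k:
        best = max((s for s in samples if s not in chosen),
                   key=lambda s: min(dist(s, c) for c in chosen))
        chosen.append(best)
    return chosen

pts = [(0, 0), (0.1, 0), (5, 5), (0, 4)]   # toy 2D camera outputs
print(select_training_set(pts, 2))          # seed plus the farthest sample
```

A spread-out training set of this kind tends to condition the characterization better than a random draw, which is consistent with the improvement the abstract reports.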
An image quality investigation of visible spectral imaging systems was performed. Spectral images were simulated using different combinations of imaging system parameters, with different numbers of imaging channels, wavelength steps, and noise levels, based on two practical spectral imaging systems. A mean opinion score (MOS) was determined from a subjective visual assessment experiment on the quality of spectral images rendered to a three-channel LCD display. A set of image distortion measures, including color difference for color images, was defined based on image quality concerns. The relationships between the distortion factors and the combinations of parameters in spectral imaging systems are discussed in detail. The MOS values and distortion measures were highly correlated. The results indicate that the image quality of spectral imaging systems was significantly affected by the number of channels when noise was present in the image capture stage. The selection of wavelength steps had no significant impact on final image quality, especially when no noise was involved. The results also showed that the contrast factor has a different impact on image quality for human portraits than for other, relatively complex scene images. An empirical metric is proposed to estimate the scaled image quality; the correlation between this metric and the subjective measure, MOS, was 0.97. The results also indicate that two distortion factor eigenvectors were sufficient to represent the four distortion factors used in this experiment, suggesting that further research is needed to find more efficient distortion factors.
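The reported 0.97 figure is a correlation between an objective metric and subjective MOS values. A minimal sketch of that check using the Pearson coefficient; the sample data below are made up for illustration:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

metric = [1.0, 2.0, 3.0, 4.0]   # hypothetical objective quality scores
mos    = [1.1, 2.1, 2.9, 4.2]   # hypothetical subjective MOS values
print(round(pearson(metric, mos), 3))
```

A coefficient near 1 means the metric ranks images essentially as observers do, which is the validation criterion the abstract applies.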
The spatial distributions of melanin and hemoglobin in human skin can be determined by image-based skin chromophore analysis, including independent component analysis (ICA) of a skin color image. The separation is based on a skin color model in the optical density domain that quantifies the change in the chromophores. In this paper, the analysis technique developed by Tsumura et al. was applied to many skin images, and the resulting distributions of skin chromophores, such as melanin and hemoglobin, agreed well with physiological knowledge. The effectiveness of cosmetic products was also evaluated by observing the changes in the amount of each chromophore. Finally, a simulation synthesizing changes in the skin chromophores was performed to demonstrate the validity of the technique.
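Once ICA has estimated the mixing directions, the per-pixel separation in the optical density domain is a linear unmixing. A minimal sketch under that assumption: densities are `-log(reflectance)` and a 2x2 mixing matrix maps (melanin, hemoglobin) amounts to two channel densities. The matrix values are illustrative placeholders, not physiological data.

```python
import math

# Illustrative mixing matrix: density response of each of two channels
# to unit amounts of (melanin, hemoglobin). In practice ICA estimates this.
MIX = [[1.0, 0.3],
       [0.4, 1.0]]

def invert2x2(m):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

def to_density(reflectance):
    """Beer-Lambert style conversion: optical density = -log(reflectance)."""
    return [-math.log(r) for r in reflectance]

def unmix(reflectance):
    """Recover (melanin, hemoglobin) amounts from a two-channel pixel."""
    d   = to_density(reflectance)
    inv = invert2x2(MIX)
    return [sum(inv[i][j] * d[j] for j in range(2)) for i in range(2)]
```

Mapping every pixel through `unmix` yields the melanin and hemoglobin spatial maps the abstract describes; a cosmetic's effect would then appear as a shift in one recovered component.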
In this article, we propose an effective method for gonio-spectral imaging of paper and cloth samples under oblique illumination conditions. High-resolution gonio-spectral images are synthesized from a basic spectral image and gonio-monochrome images. The method is based on gonio-spectral image fusion composed of two components: conventional spatial fusion and geometrical fusion. The proposed geometrical fusion synthesizes images at different geometries and is derived by modeling the optical reflection properties with a dichromatic reflection model. The validity of this method is confirmed by gonio-spectral measurements of paper samples, and experiments are performed on Japanese washi paper and European cloth.
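The dichromatic reflection model underlying the geometrical fusion step can be sketched as a two-term synthesis: observed color is a body (diffuse) component plus a surface (specular) component, each scaled by a geometry-dependent weight. The colors and weights below are illustrative placeholders only.

```python
def dichromatic(body_color, illum_color, m_body, m_surf):
    """Synthesize a pixel color for one viewing/illumination geometry."""
    return [m_body * b + m_surf * s for b, s in zip(body_color, illum_color)]

paper_body = [0.8, 0.7, 0.5]   # body reflectance of the sample (toy 3 bands)
illum      = [1.0, 1.0, 1.0]   # illuminant color (drives the specular term)

# Re-rendering under a more oblique geometry just means new weights:
print(dichromatic(paper_body, illum, m_body=0.9, m_surf=0.05))
print(dichromatic(paper_body, illum, m_body=0.6, m_surf=0.30))
```

Because only the two scalar weights change with geometry, images at unmeasured geometries can be synthesized from the same body and illuminant terms, which is the essence of the geometrical fusion the abstract describes.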
Can we assess artwork reproduction from the standpoint of a properly adjusted image management process? This study integrates the analysis of artwork reproductions obtained by three different image capturing systems. The artwork examined belongs to the bright-shadow, visually presentable category, and the reproduction quality analysis was performed both visually and instrumentally. The visual study included formal and methodological characteristics of the artwork. The purpose of this research is to determine whether there is any correlation between subjective and objective reproduction assessment when using different image capturing systems. Based on the results acquired, brightness was shown to be decisive for reproduction quality, a finding confirmed by both instrumental measurement and visual evaluation. However, artwork belonging to different visually presentable systems may require other criteria for evaluating reproduction quality.
In this study, an algorithm was developed for rendering texture images with high color fidelity. The algorithm is based on the dichromatic reflection model: it recovers the implicit geometrical information at each pixel position in the texture image by exploiting the interaction between the object surface and light. Using the recovered implicit geometrical information and a target color, a new texture image can then be synthesized; the synthesized image is further modified to give the correct texture strength. The algorithm developed in this study can be used for both color and gray texture images. It is suitable for applications in texture simulation and visualization, with the advantage of high color fidelity.
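The recolor-while-preserving-geometry idea can be sketched very roughly. This is a deliberate simplification of the described algorithm: a per-pixel shading factor (standing in for the recovered implicit geometrical information) is estimated from luminance and then applied to a target color. All names are illustrative.

```python
def recolor(pixels, target):
    """pixels: list of (r, g, b) in [0, 1]; target: (r, g, b) target color."""
    out = []
    for r, g, b in pixels:
        shading = (r + g + b) / 3.0          # crude per-pixel geometry term
        out.append(tuple(shading * t for t in target))
    return out

texture = [(0.2, 0.2, 0.2), (0.8, 0.8, 0.8)]   # a dark and a bright texel
print(recolor(texture, (1.0, 0.0, 0.0)))        # red texture, same shading
```

The dark texel stays dark and the bright one bright after recoloring, which is the sense in which texture (geometry) is preserved while color is replaced.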
A novel algorithm is proposed to retrieve the complete structure from long image sequences captured by a hand-held camera. Firstly, images taken around the target are divided into several subsets, each sharing common feature points. Secondly, a Euclidean reconstruction is obtained by iterative factorization using all points visible in each image of a given subset. Then results from different subsets are brought into a common coordinate frame by similarity transformations. Finally, global optimization is applied to refine the data and produce a jointly optimal 3D structure. A significant merit of the algorithm is that it places no restrictions on the input image sequence. The algorithm has been tested on both simulated data and real image sequences, and very realistic 3D models were obtained.
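The step that merges per-subset reconstructions applies a similarity transform (scale, rotation, translation) to each subset's points. A minimal 2D sketch; in the paper the transforms are 3D and estimated from points shared by overlapping subsets, whereas the parameters here are illustrative.

```python
import math

def similarity(points, s, theta, t):
    """Apply x' = s * R(theta) * x + t to each 2D point."""
    c, n = math.cos(theta), math.sin(theta)
    return [(s * (c * x - n * y) + t[0],
             s * (n * x + c * y) + t[1]) for x, y in points]

# One subset's reconstructed points, mapped into the common frame:
subset = [(1.0, 0.0), (0.0, 1.0)]
print(similarity(subset, s=2.0, theta=math.pi / 2, t=(1.0, 1.0)))
```

Because a similarity transform preserves angles and relative distances, each subset's internal Euclidean structure survives the merge; only its scale, orientation, and position in the common frame change.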