Work Presented at Electronic Imaging 2024
Volume: 67 | Article ID: 060408
Spectral Estimation: Its Behaviour as a SAT and Implementation in Colour Management
DOI: 10.2352/J.ImagingSci.Technol.2023.67.6.060408 | Published Online: November 2023
Abstract

Methods for estimating spectral reflectance from XYZ colorimetry were evaluated using a range of different types of training datasets. The results show that when a measurement dataset with similar primary colorants (and therefore similar reflectance curves) is used for training, the RMSE and the metameric differences under different illuminants are lowest. This study demonstrates that a training dataset can be mapped to represent spectral data for a group of print data by matching material components (spectral similarity) with the test data, yielding spectral estimates with satisfactory spectral and colorimetric outcomes. The findings suggest that polynomial bases or colorimetrically weighted bases with a least-squares fit produce estimated reflectances with low metameric mismatch under different illuminants. For the two best performing spectral estimation methods, their ability to predict tristimulus values was assessed against tristimulus values calculated from the measured reflectances under a destination illuminant. Their performance was also compared with the colour predictions obtained from different CATs and MATs under varying lighting conditions. The results show that a spectral estimation method with a specific training dataset can serve as a good alternative for predicting XYZ under different illuminants with reduced metameric mismatch, i.e. it can be used as a material adjustment transform. These results finally support a proposed spectral estimation workflow that can be integrated into colour management and that is simple to implement, fast in processing, and spectrally accurate with low metameric mismatch.

  Cite this article 

Tanzima Habib, Phil Green, Peter Nussbaum, "Spectral Estimation: Its Behaviour as a SAT and Implementation in Colour Management," in Journal of Imaging Science and Technology, 2023, pp. 1–23, https://doi.org/10.2352/J.ImagingSci.Technol.2023.67.6.060408

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2023
 Open access
  Article timeline 
  • Received: August 2023
  • Accepted: December 2023
  • Published: November 2023
1. Introduction
Spectral data is of increasing importance in colour reproduction workflows, for instance, in brand identity colours and spot colour reproduction [1], colour characterisation or separation [2–5], gamut mapping [6], scene-referred image reproduction [7–9], and soft proofing [10, 11], where it may aid in increasing accuracy and reducing metameric mismatches. When measured spectral data is not available, it can be estimated from colorimetric data using a spectral estimation method in conjunction with suitable training spectral data.
Spectral estimation methods are widely used in colour imaging. Some of the most common uses of spectral estimation methods in this domain are estimating the reflectances of material surfaces or objects in a scene, or estimating the spectral sensitivity functions of a camera sensor. In this paper, we focus on spectral estimation of reflectance from tristimulus values such that the estimated reflectance is representative of the material object, in particular the colorants and substrate that combine to form the spectral stimulus. There is a plethora of spectral estimation methods in the literature. The pseudo inverse is the simplest least-squares solution for spectral estimation [12, 13], while the most commonly used method is principal component analysis (PCA) [14–17]. Fairman and Brill proposed an efficient PCA-based method to estimate reflectance from tristimulus values, which was further modified by Agahian et al. by weighting the training reflectances based on minimizing the colorimetric difference from the test value. Using Cohen's Matrix R theory to calculate fundamental stimuli and metameric black, Zhao and Berns showed how both colorimetric and spectral transformations can be used to estimate accurate spectra from a captured image [18]. The Wiener estimation method has been used to estimate spectra from RGB images [19], and there have been methods developed to impose a non-negativity constraint on estimated spectra [20, 21]. Lopez-Alvarez et al. demonstrated the effectiveness of the Wiener estimation method in accurately estimating skylight spectral curves using a limited set of training reflectances [22]. They highlighted that this method eliminates the requirement for measuring spectral sensitivity or calculating linear bases and is robust to noise. Dupont studied spectral estimation from colorimetric values using different optimization methods, including genetic algorithms, and used a metric that minimized colour difference under two different lights [23]. Other spectral estimation algorithms use optimization to minimize spectral and colorimetric errors [24–26], and there are methods which use interpolation [27–29]. Spectral estimation methods can also be applied to achieve colour-constant estimated spectra, which is a desirable property [30]. van Trigt stated that reflectance curves must preserve some amount of colour constancy, making the observed colour shift more or less a characteristic of the metameric set [31]. Based on that, van Trigt proposed a spectral estimation method which creates a generic reflectance curve for a tristimulus value by generating the reflectance with the least variation, i.e. the smoothest reflectance, for that tristimulus value [31, 32]. Preserving colour constancy is a different task from estimating reflectances that match the spectral characteristics of a given material.
Estimated spectra from colorimetric values are essentially metameric reflectances, in that they match in colour under the colorimetry of the illuminant used to train the model. If a spectral estimation method produces a reflectance that is an exact match to the measured reflectance of an object, then there will be no metameric mismatch under changing lighting conditions. It can be argued that these two reflectances have the same characteristics and represent a perfect match of the optical properties of the material. Based on these considerations, in the case of reflectance estimation from tristimulus values, we can select training datasets such that they have similar material components to those of the test datasets. For example, for print datasets, matching printing conditions could be a way to match the important components that comprise a printed material, such as ink pigments and substrate, with or without regard to the printing process. It will also be important to find out what kinds of similarity are sufficient, so that standardisation of training datasets in a spectral estimation workflow can be considered in the future.
The perception of colour in humans is a function of the colour stimulus and the cone sensitivities. Illuminant metamerism (i.e. when two objects with different reflectance functions are perceived as similar in colour under one light but different under another because of the difference in spectral power distribution between the two lights) can be an issue in cross-media colour reproduction. When the spectral reflectance of an object is not available, its tristimulus values under different illuminants can be predicted by a sensor adjustment transform, which may be a chromatic adaptation transform (CAT) or a material adjustment transform (MAT).
CATs predict corresponding colours. Corresponding colours are two tristimulus values that result in the same perceived colour when the two samples are seen under test and reference adapting conditions [33], and they are derived from experimental data where observers match colours under different illuminants [34–36]. The Bradford CAT, CAT02 and CAT16 are examples of CATs that have used corresponding colour data to optimize the transformation matrices used to convert tristimulus values to and from a sharpened cone space. The adaptation to the destination colorimetry is performed in cone space before converting to the adapted tristimulus value. These CATs draw their inspiration from von Kries' hypothesis on chromatic adaptation, which states that there is a linear relationship between the stimulus and the visual response of the three cone types [37, 38]. The Linear Bradford transform, without the non-linear exponent, is recommended by the ICC v4 specification [39]. CAT02 and CAT16 are part of the CIECAM02 [40] and CIECAM16 [41] colour appearance models, respectively. CAT16 and CAT02 are analogous but differ in their transformation matrices [36].
In the case of MATs, a material match of the object (i.e. how the tristimulus values of an object change due to changes in either illuminant and/or observer [42]) is predicted rather than a perceived colour match [43]. Logvinenko defined a metameric mismatch volume, i.e., for a given colour under one illuminant, the corresponding volume that contains all the possible colours under another illuminant [44]. This theory was evaluated by Zhang et al. by choosing all metameric reflectances under one illuminant from a large collection of measured reflectances and creating the corresponding metameric mismatch volumes by predicting colours under different illuminants [45]. It was found that the centroid of this volume can be used to predict colours with less metameric mismatch than well-known CATs [45]. Logvinenko also defined an object-colour manifold, which is a six-stimulus value (light colour and object colour together) and can be split into a three-dimensional material colour which shifts when the illumination changes [46]. Extending this concept to include a change in observer colour matching functions, Derhak developed a material adjustment transform that uses a colour equivalency representation called Wpt (Waypoint) [43]. Oleari developed CATs by optimizing the conversion of tristimulus values under different viewing conditions to an ABC colour space, such that colour constancy is preserved [47]. Derhak argues that Oleari's CATs are actually MATs because they optimize cone excitations [43]. van Trigt's reflectance estimation method generates smooth reflectances with constant values near the endpoints of the visible range, which improves colour constancy, especially under illuminant A [31]. This method was modified by Burns and used to estimate reflectances in order to predict colour under a destination illuminant, with a final step towards chromatic adaptation that scales the relative luminance Y of the destination tristimulus value to match the Y value of the source tristimulus value [48]. This modified spectral estimation method produces strictly positive and smooth reflectances. Although this procedure does not use corresponding colour data, it is considered to be a CAT rather than a MAT since it does not aim to match material attributes.
As discussed previously, estimated spectra from colorimetric values are basically metameric reflectances, in that they match in tristimulus values under the illuminant used to train the model. Thus, it is interesting to evaluate whether, with careful selection of training data (i.e. a good material match with the test data), a spectral estimation method can be considered a material adjustment transform. However, it is easier to find training datasets for some applications than others; e.g. for spectral estimation of natural objects it would be important to have a training dataset with a wide range of representative reflectances, but for a narrow application such as colour chips or print datasets, the different material components such as pigments, substrate, fluorescent whitening agents etc. can be employed to categorise training reflectance datasets into groups of similar material types. It will be important to understand what kind of material component similarities between the training and test data are sufficient to obtain an acceptable metameric match. This will pave the way to having one training or reference spectral dataset to predict spectral reflectances for a group of print data that can be used in a colour management pipeline.
Colour management is built upon communicating relevant data for precise interpretation of colour content and utilizing colour conversion techniques to achieve a desired reproduction between imaging devices. The profile connection space (PCS) facilitates the accurate conversion of colour data from a source device to a destination device. In ICC.1 (i.e. version 4) colour management, the PCS is colorimetric (either XYZ or CIELAB), which limits the integration of spectral reproduction workflows directly into colour management. ICC.2 (i.e. version 5), the new colour management architecture, allows spectral connection and processing through programmable calculator elements inside an ICC profile. This makes it possible to integrate spectral data into colour management workflows. The need for spectral estimation arises when the source profile is based on colorimetric PCS data, or when intermediate processing steps require spectral data. When converting from colorimetric to spectral data in such a workflow, the spectral estimation method has to be simple, accurate and fast. The spectral estimation methods evaluated here are least-squares fits, i.e. linear combinations of bases, that can be easily encoded using matrices in a colour management pipeline or other practical applications.
In this study, we evaluate the performance of different spectral estimation methods along with the training and test datasets whose material characteristics are known. We also evaluate the colorimetric performance of the two best performing spectral estimation methods under varying illuminants together with different training and test datasets and compare the results with other well known CATs and MATs. With the results obtained, we aim to answer the following questions:
Can a training dataset, chosen based on material similarities with different test datasets, be grouped together? Therefore, can standardisation of such training data selection be proposed for spectral estimation in colour management?
Can such a spectral estimation method be used as an alternative to a material adjustment transform? What is their performance in predicting corresponding colours compared to a CAT?
What is a suitable spectral estimation workflow that can be easily integrated into colour management?
2. Spectral Estimation Methods
In a least-squares fit, the task is to find a solution x to an overdetermined set of equations Ax ≈ b by minimizing the residuals ‖Ax − b‖, where A ∈ ℝ^(n×m) and b ∈ ℝ^n are known [49]. This equation can be rewritten for the purpose of obtaining spectral estimates from tristimulus values as shown in Eq. (1), where S ∈ ℝ^n is the spectral reflectance and n is the number of wavelengths, T ∈ ℝ^3 is the tristimulus value and M ∈ ℝ^(n×3) is the linear mapping or basis that minimizes the residuals. In this case, for a training dataset with l samples, S and T are known; we first determine the matrix M that will be used to estimate spectral reflectance from a test tristimulus value. Depending on the number of training samples, the matrix S will then be of size n × l and the matrix T of size 3 × l.
(1)
S=MT.
A set of XYZ tristimulus values T is the summation over wavelength of the product of the illuminant spectral power distribution E, the colour matching functions O and the surface reflectance S. Let Aᵀ be the weighted colour matching functions given by Aᵀ = kOᵀdiag(E), where O ∈ ℝ^(n×3), E ∈ ℝ^n and k is the normalizing constant. The tristimulus values can be represented in matrix form as shown in Eq. (2).
(2)
T = AᵀS.
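As a concrete illustration of Eq. (2), the short sketch below builds the weighted colour matching functions Aᵀ = kOᵀdiag(E) and computes tristimulus values for a set of reflectances. The arrays here are random stand-ins and the variable names are ours, not the paper's.

    import numpy as np

    # Hypothetical inputs: n wavelengths (e.g. 380-730 nm at 10 nm), l samples.
    n, l = 36, 100
    rng = np.random.default_rng(0)
    cmf = rng.random((n, 3))          # stand-in for the CIE 1931 2-degree CMFs (O)
    illum = rng.random(n)             # stand-in for the illuminant D50 SPD (E)
    refl = rng.random((n, l))         # stand-in for measured reflectances S

    k = 100.0 / (illum @ cmf[:, 1])   # normalise so a perfect reflector has Y = 100
    A_T = k * cmf.T * illum           # A^T = k * O^T * diag(E), shape 3 x n
    T = A_T @ refl                    # Eq. (2): T = A^T S, shape 3 x l
    print(T.shape)                    # (3, 100)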
The simplest spectral estimation method is the pseudo-inverse method, which takes advantage of Eq. (1) such that, given a spectral reflectance training dataset and the corresponding tristimulus values from Eq. (2), M can be calculated using Eq. (3). Since T is not a square matrix, the Moore-Penrose inverse, or pseudo-inverse, is used to invert it.
(3)
M = STᵀ(TTᵀ)⁻¹.
M can then be used to recover spectral reflectance Ŝ from any test tristimulus value T as shown in Eq. (4).
(4)
Ŝ=MT.
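Eqs. (3) and (4) can be sketched in a few lines, assuming a training set S of size n × l and its tristimulus values T of size 3 × l as above; this is an illustrative sketch, not the authors' code.

    import numpy as np

    def pseudo_inverse_basis(S_train, T_train):
        """Eq. (3): M = S T^T (T T^T)^-1, mapping tristimulus values to reflectance."""
        return S_train @ T_train.T @ np.linalg.inv(T_train @ T_train.T)

    def estimate_reflectance(M, T_test):
        """Eq. (4): S_hat = M T for one or more test tristimulus values (3 x m)."""
        return M @ T_test

    # Example with random stand-in data (36 wavelengths, 500 training samples).
    rng = np.random.default_rng(1)
    S_train = rng.random((36, 500))
    A_T = rng.random((3, 36))                 # weighted CMFs for the training illuminant
    T_train = A_T @ S_train
    M = pseudo_inverse_basis(S_train, T_train)
    S_hat = estimate_reflectance(M, T_train[:, :5])   # recover spectra for 5 samples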
Babaei et al. argue that instead of creating a general basis M by giving equal weights to all the reflectance samples in the training dataset, accuracy can be increased by weighting the training tristimulus values in proportion to their similarity or dissimilarity with respect to the test tristimulus value. This is achieved by calculating the inverse of the colour difference ΔEab between the training tristimulus values and the test tristimulus value for which spectral reflectance is to be estimated [12]. The weights matrix W is a diagonal matrix as shown in Eq. (5), where l is the total number of samples in the training dataset and e = 0.01 is a small constant to avoid division by 0. The new weighted basis M is calculated by replacing T with the weighted tristimulus values TW and S with the weighted reflectances SW in Eq. (3). We refer to this as the weighted pseudo-inverse spectral estimation method.
(5)
W = diag( 1∕(ΔEab,1 + e), 1∕(ΔEab,2 + e), …, 1∕(ΔEab,l + e) )
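The weighted variant could be sketched as follows, assuming CIELAB values of the training samples and of the test colour are available for the ΔEab weights of Eq. (5); the conversion to CIELAB and the array names are assumptions of this sketch. Note that a new basis is computed for every test tristimulus value.

    import numpy as np

    def weighted_pinverse_estimate(S_train, T_train, lab_train, lab_test, t_test, e=0.01):
        """Weighted pseudo-inverse estimate of one reflectance, using Eq. (5) weights."""
        dE = np.linalg.norm(lab_train - lab_test, axis=1)   # Delta E_ab to each training sample
        w = 1.0 / (dE + e)                                  # diagonal of W
        Sw = S_train * w                                    # S W (scale each training column)
        Tw = T_train * w                                    # T W
        M = Sw @ Tw.T @ np.linalg.inv(Tw @ Tw.T)            # weighted basis, Eq. (3)
        return M @ t_test                                   # estimated reflectance for t_test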
The pseudo-inverse spectral estimation method can also be modified to a polynomial fit. In order to apply a polynomial, the terms of a tristimulus value, i.e. T = (X, Y, Z), have to be expanded according to the chosen polynomial order as shown in Table I. These expanded tristimulus values are used in Eq. (3) to obtain the basis matrix M. M can then be used to estimate spectral reflectance, where the terms of the test tristimulus value also have to be expanded according to the respective polynomial order in Eq. (4). Increasing the order can lead to overfitting, and hence it is important to keep the number of terms significantly lower than the number of samples. Also, in an optimisation problem, overfitting is handled by adding a term to the cost function in order to force the coefficient estimates toward zero and achieve better generalisation; this is called regularisation. In this work with spectral data of print datasets, the general least-squares solution is considered without any regularisation or iterative optimisation.
Table I.
Polynomial expansion.
Sl. No. | Polynomial Order | Terms
1. | Second | 1, X, Y, Z, XY, XZ, YZ, X², Y², Z²
2. | Third | 1, X, Y, Z, XY, XZ, YZ, X², Y², Z², XY², XZ², X²Y, X²Z, Y²Z, YZ², XYZ, X³, Y³, Z³
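A sketch of the polynomial expansion in Table I is shown below; the expanded terms then replace T in Eqs. (3) and (4). The term ordering follows Table I, but the helper name and layout are our own.

    import numpy as np

    def expand_xyz(T, order=3):
        """Expand a 3 x l array of XYZ values into polynomial terms (Table I)."""
        X, Y, Z = T
        terms = [np.ones_like(X), X, Y, Z, X*Y, X*Z, Y*Z, X**2, Y**2, Z**2]
        if order == 3:
            terms += [X*Y**2, X*Z**2, X**2*Y, X**2*Z, Y**2*Z, Y*Z**2, X*Y*Z,
                      X**3, Y**3, Z**3]
        return np.vstack(terms)          # (10 or 20) x l

    # The expanded matrix replaces T in Eq. (3): M = S Tp^T (Tp Tp^T)^-1, and test
    # tristimulus values must be expanded the same way before applying Eq. (4).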
Another classical spectral estimation method is the Wiener inverse, where the correlation matrix μ of the spectral reflectances of the training dataset S is used to generate the basis M; thus, M is given by Eq. (6). By employing a simple way to determine spectral similarity, this method produces spectral reflectance estimates that are more or less insensitive to the illuminant [50].
(6)
M = μA(AᵀμA)⁻¹.
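Eq. (6) could be sketched as follows; the paper does not spell out how the correlation matrix μ is estimated, so the autocorrelation of the training reflectances is assumed here.

    import numpy as np

    def wiener_basis(S_train, A):
        """Eq. (6): M = mu A (A^T mu A)^-1, with mu taken as the spectral
        autocorrelation of the training reflectances (one common choice)."""
        mu = S_train @ S_train.T / S_train.shape[1]   # n x n
        return mu @ A @ np.linalg.inv(A.T @ mu @ A)   # n x 3

    # S_hat = wiener_basis(S_train, A) @ T_test recovers reflectance from XYZ,
    # where A (n x 3) holds the weighted colour matching functions of Eq. (2).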
Principal component analysis (PCA) determines a projection matrix which maximizes the variance in the dataset. Fairman and Brill introduced a classical mean-centred PCA on spectral reflectances. The mean reflectance V₀ of the training dataset S is determined. The first 3 eigenvectors V are calculated from the mean-centred spectral reflectances of the training dataset (S − V₀). Then, a column vector C ∈ ℝ³ that contains the principal component coordinates has the relationship shown in Eq. (7) [15].
(7)
S = VC + V₀.
The tristimulus constrained principal component coordinates can be calculated by substituting Eq. (7) in Eq. (2) and rearranging for C as shown in Eq. (8), where T is the test tristimulus value.
(8)
C = (AᵀV)⁻¹(T − AᵀV₀).
Once C is synthesised, spectral reflectance estimate Ŝ can be obtained by Eq. (9).
(9)
Ŝ = V₀ + VC.
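A sketch of the tristimulus-constrained PCA of Eqs. (7)–(9) follows, assuming A is the n × 3 matrix of weighted colour matching functions so that Aᵀ matches Eq. (2); this is an illustrative reading of the method, not the authors' implementation.

    import numpy as np

    def pca_estimate(S_train, A, t_test):
        """Estimate one reflectance from XYZ using 3 mean-centred principal components."""
        V0 = S_train.mean(axis=1, keepdims=True)            # mean reflectance, n x 1
        U, _, _ = np.linalg.svd(S_train - V0, full_matrices=False)
        V = U[:, :3]                                         # first 3 eigenvectors, n x 3
        C = np.linalg.inv(A.T @ V) @ (t_test.reshape(3, 1) - A.T @ V0)   # Eq. (8)
        return V0 + V @ C                                    # Eq. (9), n x 1

The weighted PCA variant described next would apply the Eq. (5) weights to the columns of S_train before computing V₀ and V.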
Agahian et al. proposed a weighted version of the PCA spectral estimation that uses the weights matrix W in Eq. (5) to calculate weighted reflectances SW. The weighted coordinates C are obtained by calculating the principal components V and mean reflectance V₀ from the weighted reflectances in Eq. (8) [17]. This weighted PCA spectral estimation method increases the accuracy of the reflectance estimates compared to classical PCA.
The Waypoint (Wpt) method uses the spectral reflectance decomposition defined by Chau, which represents spectral reflectance as the sum of a scaled non-selective reflectance and a characteristic reflectance [42, 51]. The non-selective reflectance is a reflectance vector that reflects the same amount of light at every wavelength, making it invariant to wavelength. The characteristic reflectance, which is the wavelength-selective component, is obtained by normalizing a reflectance vector such that the minimum value becomes 0 and the maximum value becomes 1; this was called the primary reflectance by Chau. Wpt coordinates are represented by (W, p, t), where W represents perceptive lightness, and p and t represent perceptive chromaticness, that is, a combination of perceptive chroma and hue at constant perceptive lightness [43]. Wpt coordinates in polar form are represented by (W, c, h), where c is Wpt chroma and h is Wpt hue, given by c = √(p² + t²) and h = arctan2(t, p), respectively [42]. The tristimulus values in this case are first converted to polar Wpt coordinates (W, c, h) using the Wpt normalizing matrix determined for the source observing conditions. Wpt hue is used to determine its corresponding characteristic reflectance. This method uses Munsell reflectances to determine characteristic reflectances, but the latter can also be determined from other measured reflectances. These characteristic reflectances are divided into groups having constant hues that form a hue-plane. One of the characteristic reflectances is selected from this group to represent that hue-plane. The Wpt coordinates (W, c, h) of the characteristic reflectances and the non-selective reflectance are calculated. Using the W and c coordinates of the characteristic reflectance and the W coordinate of the non-selective reflectance, the scalar of spectral whiteness g and the scalar of spectral saturation s are determined. A reflectance vector is then estimated by scaling the non-selective reflectance with g, scaling the characteristic reflectance with s, and finally combining them as described in Ref. [52].
Burns altered van Trigt’s method [31] for estimating spectral reflectance from tristimulus values, which involves using optimization to identify the unique reflectance curve with the smallest slope squared integrated over the visible wavelengths such that this curve matches the tristimulus values of the source [48]. Burns implemented this optimization in log reflectance, which ensures that the resulting curve is strictly positive [48]. This spectral estimation method is also evaluated and the results are compared to the performance of weighted pseudo inverse and third order polynomial spectral estimation methods in achieving least metameric mismatch.
3. Chromatic Adaptation
Chromatic adaptation is the mechanism by which the human visual system tries to preserve an object's perceived colour under different viewing conditions [53]. This adaptation mechanism has to be applied in a colour reproduction workflow to reproduce a consistent colour appearance when the illuminants of the input and output colorimetry differ. In ICC.1 colour management, the reference intermediate colour space, or Profile Connection Space (PCS), is defined as CIE colorimetry based on illuminant D50 and the CIE 1931 2° Standard Observer. Hence, if the illuminant or observer of the input or output encodings differ, they have to be transformed to or from the PCS. If the transform is a CAT, the result is a corresponding colour match and not an exact colorimetric match [54]. These transforms can be represented as 3 × 3 linear matrix transforms. CAT matrices are calculated by mapping corresponding colours obtained from memory-based or haploscopic experiments. These corresponding colours are colour pairs, i.e., the colour of a stimulus under a source illuminant that matches in appearance the colour of another stimulus under a destination illuminant. The foundation for this specific way of modelling chromatic adaptation, with scaling factors applied to cone excitations, was laid down by Johannes von Kries, and most modern CATs are built upon it. Some currently popular CATs are Linear Bradford, CAT02 and CAT16, which are discussed in this section.
The Linear Bradford CAT is recommended by the ICC v4 specification; it is based on the Bradford CAT derived by Lam, which includes a non-linear correction in the blue region that is dropped in the Linear Bradford CAT [35]. The transformation matrix that converts tristimulus values to cone excitations for the Linear Bradford CAT can be found in the ICC v4 specification [39].
CIE TC8-01 [55] proposed CIECAM02 as a colour appearance model, which uses CAT02 for chromatic adaptation. The transformation matrix (M) of CAT02 is used to convert tristimulus values to sharpened cone responses (RGB). Considering full adaptation, CAT02 can then be calculated similarly to the von Kries transformation as shown in Eq. (10), where (Rw1, Gw1, Bw1) and (Rw2, Gw2, Bw2) are the sharpened cone responses of the source and destination white points, respectively; (X1, Y1, Z1) is the source tristimulus value and (X2, Y2, Z2) is the destination tristimulus value.
(10)
[X2, Y2, Z2]ᵀ = M⁻¹ · diag(Rw2∕Rw1, Gw2∕Gw1, Bw2∕Bw1) · M · [X1, Y1, Z1]ᵀ
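Eq. (10) amounts to a von Kries style scaling in the sharpened cone space and could be sketched as below for any of the CATs mentioned; the actual CAT02, CAT16 or Linear Bradford matrices are not reproduced here and would be supplied as M_cat.

    import numpy as np

    def von_kries_cat(xyz_src, xyz_white_src, xyz_white_dst, M_cat):
        """Eq. (10): adapt XYZ from a source to a destination white point (full adaptation)."""
        rgb_w_src = M_cat @ xyz_white_src        # sharpened cone responses of the whites
        rgb_w_dst = M_cat @ xyz_white_dst
        scale = np.diag(rgb_w_dst / rgb_w_src)   # von Kries scaling factors
        return np.linalg.inv(M_cat) @ scale @ M_cat @ xyz_src

Swapping M_cat between the CAT02, CAT16 and Linear Bradford matrices reproduces the different one-step CATs discussed in this section.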
CAT16 was developed by Li et al. to solve the computational issues that arose due to CAT02 [41]. CAT16 transformation matrix was optimized using several corresponding colour datasets. A chromatically adapted tristimulus value considering full adaptation and when luminance of the source and destination illuminants are equal, is obtained in a similar manner as shown in Eq. (10) where the transformation matrix of CAT02 is replaced by the transformation matrix of CAT16 [41].
Based on the CAM02 model, in which illuminant E (corresponding to an equi-energy stimulus) is used as a reference or intermediate illuminant, a two-step CAT16 transformation was also developed [36], in which an intermediate transformation to and from illuminant E is included.
Derhak et al. define a generalized term, sensor adjustment transform (SAT), that includes both CATs and MATs. A CAT uses the mapping of corresponding colour datasets to achieve an appearance match under changing illuminants, as discussed above, while a MAT uses the least dissimilar colour matching proposed by Logvinenko to achieve material constancy under changing illuminants. A MAT is therefore based on the concept that when the illuminant varies, the appearance of a stimulus changes, but due to some secondary mechanism the human visual system can still, to some degree, identify the material [43]. Based on this, Derhak et al. propose the Wpt transform, where the colour of an object is mapped to coordinates (W, p, t), which form a colour equivalency representation and a waypoint for navigating between the source and destination viewing conditions. The optimized Wpt-based MAT matrices from source colorimetry to the Wpt representation are given in Eqs. (11) and (12), where M2,D65, M2,D50, M2,A and M2,F11 are the transformations from the CIE 1931 2° Standard Observer and illuminants D65, D50, A and F11, respectively, to Wpt coordinates [56]. The destination tristimulus values can be obtained by applying the inverse of these matrices to the Wpt coordinates.
(11)
M2,D65 = [0.02964 0.97487 0.00280; 4.83916 4.73122 0.12117; 0.54248 1.30671 1.67368],
M2,D50 = [0.06265 0.03839 0.02669; 4.68561 4.82563 0.37293; 0.28350 1.50053 1.15101]
(12)
M2,A = [0.33810 1.30006 0.20048; 4.40232 5.32134 1.36425; 0.41103 2.17849 4.85343],
M2,F11 = [0.12366 1.05659 0.10608; 4.38611 4.63611 0.32299; 0.37476 1.29098 2.59413]
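Following the description above (source colorimetry to Wpt with the source matrix, then back to colorimetry with the inverse of the destination matrix), a Wpt-based MAT between two observing conditions could be sketched as follows. This is our reading of Eqs. (11)–(12) and Ref. [56], not code from the paper.

    import numpy as np

    def wpt_mat(xyz_src, M_src, M_dst):
        """Material adjustment via Wpt: XYZ (source conditions) -> Wpt -> XYZ (destination)."""
        wpt = M_src @ xyz_src                 # source colorimetry to Wpt coordinates
        return np.linalg.inv(M_dst) @ wpt     # Wpt coordinates to destination colorimetry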
Oleari's MATs are mathematical transformation matrices used to convert colours from one lighting condition to another so that they appear the same to the human eye [47]. These matrices were obtained by optimizing the conversion of tristimulus values under different viewing conditions to an ABC colour space, such that colour constancy is preserved and, hence, the same perceived colour is produced for different cone excitations under different illuminants and observers. Oleari's transformation matrices from a source colorimetry to a destination colorimetry can be found in the publication [47].
Burns defines a CAT that is performed using spectral estimation [48]. This method is intended as an improvement over traditional CATs: it ensures that the tristimulus values predicted under a destination illuminant do not fall outside the spectral locus, and it does not depend on corresponding colour datasets for training or fine-tuning [48]. Burns modified the approach to spectral estimation proposed by van Trigt [31], which uses optimization to find the unique reflectance curve with minimum slope squared, integrated over the wavelengths of the visible range, under the constraint that the reflectance curve produces the source tristimulus value. Burns carried out this optimization in the log space of the reflectance curve so that the conversion from log space yields a strictly positive reflectance curve. Two spectral power distributions are also estimated that lead to the source and destination white points, respectively. Using these two illuminants, the estimated reflectance curve and the colour matching functions, the source and destination tristimulus values are calculated. The destination tristimulus value is further adjusted such that its chromaticity is preserved and its relative luminance Y matches the Y value of the source tristimulus value. This adjusted tristimulus value is the predicted corresponding colour. Although this method doesn't use any corresponding colour datasets, it also doesn't try to achieve material constancy. It is regarded as a CAT because Burns' aim was to use it to predict corresponding colours well.
The metameric mismatch results obtained from the CATs and MATs discussed in this section are compared to the results obtained from the two spectral estimation methods.
4. Tests for Evaluating Spectral Estimation Methods
Three sets of tests are carried out with different aims for the use of spectral estimation methods. Eight spectral estimation methods discussed in Section 2, namely the Wiener method (Wiener), classical PCA (cPCA), weighted PCA (wPCA), pseudo inverse (PInverse), weighted pseudo inverse (wPInverse), second order polynomial (Polynomial 2), third order polynomial (Polynomial 3) and Waypoint-based (Waypoint-R) methods, were applied to model reflectance or emission spectrum recovery from tristimulus values obtained under illuminant D50 and CIE 1931 2° Standard Observer colorimetry. The datasets used and the tests carried out are described below.
4.1 Datasets
To evaluate spectral estimation of surface reflectance from tristimulus values, seven datasets were selected. The Munsell dataset (M1) [57] is based on measurements of 1600 colour chips with a wide range of pigments. FOGRA51 (F1) [58] is a characterisation dataset for offset litho print on premium coated paper. The web offset print on lightweight coated dataset (W1) has 1600 samples. These three datasets are used for training purposes. There are two cold-set offset on newsprint datasets (N1 & N2), printed by different offset machines at two newspaper plants at different times, which are used for both training and testing purposes [59]. Similarly, there are two digital print on textile datasets (T1 & T2), produced with two different textile printers on Haysign textile and Neschen Varitex textile, respectively, which are used for both training and testing purposes, as summarised in Table II. Figure 1 shows the reflectance plots of the newsprint datasets (a) N1 and (b) N2, the web offset print on lightweight coated dataset (c) W1, the textile datasets (d) T1 and (e) T2, and the FOGRA51 dataset (f) F1. A reflectance can be considered smooth when the values of adjacent wavelengths vary gradually without sharp jumps. Both newsprint datasets N1 and N2 and the lightweight coated dataset W1 have smooth and broad reflectance peaks, as shown in Fig. 1(a), (b) and (c). Such reflectances are less sensitive to colour shifts due to illuminant change, as they allow more flexibility in integrating the peak wavelengths from adjacent wavelengths. Both textile substrates and the premium coated paper include fluorescence, creating the sharp peak in the blue region of the reflectance plots shown in Fig. 1(d), (e) and (f). These datasets cover different printing conditions (such as printing process, inks and substrate used), have at least 1485 colour samples each that are distinguishable under varied standard lighting conditions, and have good coverage of the printable gamut. Figure 2 shows the plots of CIE a*b* coordinates of the Munsell dataset (a) M1, the newsprint datasets (b) N1 and (c) N2, the web offset print on lightweight coated dataset (d) W1, the textile datasets (e) T1 and (f) T2, and the FOGRA51 dataset (g) F1.
Figure 1.
Plots of reflectances of newsprint datasets (a) N1 and (b) N2, web offset print on lightweight coated dataset (c) W1, textile datasets (d) T1 and (e) T2, and FOGRA51 dataset (f) F1, respectively.
Figure 2.
Plots of CIE ab coordinates of Munsell dataset (a) M1, newsprint datasets (b) N1 and (c) N2, web offset print on lightweight coated dataset (d) W1, textile datasets (e) T1 and (f) T2, and FOGRA51 dataset (g) F1, respectively.
Table II.
Description of spectral reflectance datasets.
Dataset | Substrate (L*, a*, b*) | No. of samples | Spectral range | Use in Test 1 | Use in Test 2
Munsell glossy colour chip (M1) | – | 1600 | 380 nm–780 nm, 5 nm interval | Training & test | Training
Offset litho on premium coated (F1) | (95, 1.5, −6) | 1617 | 380 nm–730 nm, 10 nm interval | Training & test | Training
Web offset on lightweight coated (W1) | (88.8, −0.18, 3.7) | 1600 | 400 nm–700 nm, 10 nm interval | Training & test | Training
Cold-set offset on newsprint (N1) | (81.9, −0.79, 5.08) | 1485 | 380 nm–730 nm, 10 nm interval | Training & test | Training & test
Cold-set offset on newsprint (N2) | (82.9, 0.31, 4.45) | 1485 | 380 nm–730 nm, 10 nm interval | Training & test | Training & test
Digital print on textile (T1) | (87, 4.55, −19.33) | 1485 | 380 nm–780 nm, 10 nm interval | Training & test | Training & test
Digital print on textile (T2) | (94.52, 2.26, −14.7) | 1485 | 380 nm–780 nm, 10 nm interval | Training & test | Training & test
As well as evaluating spectral estimation for print datasets, a preliminary evaluation of the use of spectral estimation of display radiance spectra from tristimulus values was carried out. Two datasets (E1 & E2) from different LED displays calibrated to D50 RGB primaries were used. These are emission spectra and are therefore additive. The datasets contain spectral emission measured from RGB ramps, 0–255 in steps of 1, of blue, green, red and white, respectively. The emission spectra were measured with a spectrophotometer over the spectral range 380 nm–730 nm at a 10 nm interval and were interpolated to a 1 nm interval. Additive mixing of the primaries was used to create additional spectra combining different mixes of the blue, green and red primaries, which were added to the respective datasets, and their corresponding tristimulus values were calculated using these radiance spectra and the CIE 1931 2° Standard Observer.
4.2 Tristimulus Value Calculation
The tristimulus value calculations must use the spectral range 360 nm–830 nm in steps of 1 nm [60]. However, in many practical applications reflectance measurements are obtained over different, often truncated ranges and at intervals larger than 5 nm (often 10 nm). As shown in Table II, the datasets used here were measured over 380 nm–780 nm at intervals of 5 nm and 10 nm, 380 nm–730 nm at intervals of 10 nm, and 400 nm–700 nm at intervals of 10 nm. It has been shown that the spectral range of 400 nm–700 nm can produce tristimulus values with large errors when compared to the procedure for calculating tristimulus values recommended by the CIE [61, 62]. For this reason, reflectance measurements were extrapolated to the range 380 nm–730 nm using the linear extrapolation recommended by the CIE [63]. It has also been found that spectral power distributions of illuminants with narrow peaks, particularly certain fluorescent illuminants, cannot be resampled from 1 nm to a 10 nm interval without creating large errors in the resulting tristimulus values [60, 64]. Therefore, optimum weighting tables [65] at a 10 nm interval were generated using the CIE illuminant and observer functions to ensure that the tristimulus values were correctly calculated [60]. These optimum weights have bandpass corrections applied to the weighting factors, which are essentially the products of the illuminant and the observer functions. These tables can be applied directly to the reflectances to obtain tristimulus values.
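As an illustration of such a weighting table, the sketch below applies a bandpass correction to 10 nm weighting factors and then computes XYZ directly from a reflectance. The Stearns and Stearns correction used here is a common choice and is our assumption; the paper relies on the CIE optimum weighting tables of Ref. [65].

    import numpy as np

    def stearns_bandpass(w, alpha=0.083):
        """Stearns & Stearns bandpass correction applied to 10 nm weighting factors."""
        wc = w.copy()
        wc[1:-1] = -alpha * w[:-2] + (1 + 2 * alpha) * w[1:-1] - alpha * w[2:]
        wc[0] = (1 + alpha) * w[0] - alpha * w[1]
        wc[-1] = (1 + alpha) * w[-1] - alpha * w[-2]
        return wc

    def xyz_from_weights(weights, refl):
        """weights: n x 3 products k * E * (xbar, ybar, zbar) at 10 nm; refl: n x m."""
        wc = np.apply_along_axis(stearns_bandpass, 0, weights)   # correct each column
        return wc.T @ refl                                       # 3 x m tristimulus values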
4.3 Evaluation Metrics
The measured reflectances and their corresponding estimates are metameric under illuminant D50, as the tristimulus values used for spectral estimation are calculated using illuminant D50 and the CIE 1931 2° Standard Observer. Therefore, to evaluate the spectral estimation methods on their colorimetric performance, the colour differences ΔEab between the colorimetric values computed from the measured and estimated reflectances under illuminants other than D50 are reported. These colour differences represent the metameric difference and are a good approximation of how two metameric reflectances that match in colour under one illuminant can differ in colour under another illuminant [66].
A measure of colour inconstancy, i.e., the colour difference ΔE00 between the tristimulus values of a spectrum (measured or estimated) obtained under a source and a destination illuminant, respectively, is also reported. Colour inconstancy [67] is a measure of the change in the colorimetry of an object under a destination illuminant compared to its colorimetry under a source illuminant. In product manufacturing, a colour-constant material is preferred [67]. Therefore, we assess how colour inconstant the estimated reflectances are compared to the colour inconstancy of the reference reflectances. We consider equal or lower colour inconstancy values to indicate better performance of the spectral estimation.
To assess the colorimetric performance of spectral estimation for the display datasets, the observer metamerism difference, denoted ΔOM, is reported. In this case, the light energy emitted by the display is measured, i.e. emission spectra are used; therefore, an observer metamerism difference is obtained instead of the illuminant-based metameric difference used for surface reflectances. ΔOM is calculated by taking the CIE 1931 2° Standard Observer as the reference observer and the CIE 1964 10° Standard Observer as the test observer, since these functions differ especially in the blue region and in the size of the visual field used [68].
The spectral performance of the spectral estimation methods is reported as the root mean square error (RMSE), which is scale dependent, i.e., it is a good metric to compare the accuracy of datasets with similar scale of reflectance values but not otherwise. RMSE is expressed in Eq. (13), where S is measured reflectance, Ŝ is the estimated reflectance and n is the number of wavelengths.
(13)
RMSE = √( (1∕n) Σⱼ (Ŝⱼ − Sⱼ)² ), with the sum taken over j = 1, …, n.
Another spectral metric, the goodness-of-fit coefficient (GFC), is based on Schwartz's inequality [69]. The GFC ranges from 0 to 1, where 1 corresponds to the estimated spectrum being equal to the original spectrum. It was also found by Hernandez et al. that GFC ≥ 0.995 is required to achieve a colorimetrically accurate estimated spectrum; what they call a "good" spectral fit requires GFC ≥ 0.999 and an "excellent" fit requires GFC ≥ 0.9999.
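The two spectral metrics can be written compactly as below (Eq. (13) for RMSE and the GFC as the normalised inner product of the measured and estimated spectra); the function names are ours.

    import numpy as np

    def rmse(S, S_hat):
        """Eq. (13): root mean square error over wavelengths (axis 0)."""
        return np.sqrt(np.mean((S_hat - S) ** 2, axis=0))

    def gfc(S, S_hat):
        """Goodness-of-fit coefficient: |<S, S_hat>| / (||S|| ||S_hat||); 1 = identical shape."""
        num = np.abs(np.sum(S * S_hat, axis=0))
        den = np.linalg.norm(S, axis=0) * np.linalg.norm(S_hat, axis=0)
        return num / den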
4.4 Test 1
In Test 1, each spectral reflectance dataset in Table II was divided into training and test datasets using k-fold cross-validation with k = 5, i.e. 80% of the dataset was used for training and the remaining 20% for testing in each iteration. For each spectral estimation method, the mean and maximum values of each metric were calculated for the estimated reflectances obtained from each dataset. This test evaluates the spectral estimation methods for the case where the reflectances used for training belong to the same dataset as the test data, for example having the same printing process, inks and substrate. RMSE, GFC and ΔEab between the measured and estimated spectra are determined.
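The Test 1 protocol could be sketched as follows, using the pseudo-inverse basis of Eq. (3) and RMSE as the metric; the 5-fold split and the helper names are illustrative, and the other estimators and metrics would be evaluated in the same loop.

    import numpy as np

    def five_fold_rmse(S, A_T, k=5, seed=0):
        """Mean RMSE of pseudo-inverse estimation under k-fold cross-validation."""
        n_wl, n_samples = S.shape
        idx = np.random.default_rng(seed).permutation(n_samples)
        folds = np.array_split(idx, k)
        errors = []
        for f in folds:
            test, train = f, np.setdiff1d(idx, f)
            T_train, T_test = A_T @ S[:, train], A_T @ S[:, test]
            M = S[:, train] @ T_train.T @ np.linalg.inv(T_train @ T_train.T)   # Eq. (3)
            S_hat = M @ T_test                                                 # Eq. (4)
            errors.append(np.sqrt(np.mean((S_hat - S[:, test]) ** 2, axis=0)))
        return np.mean(np.concatenate(errors))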
4.5 Test 2
In Test 2, the aim is to assess the spectral estimation methods when the training and test datasets are different. To train the spectral estimation models in this scenario, a reflectance dataset whose material components differ from those of the test dataset to varying degrees was used. This is to understand how close the material match of a training dataset has to be in order to minimize metameric mismatch under different lighting conditions. Nine different combinations of training and test datasets, as described in Table III, were evaluated using all the different spectral estimation methods, and the results for ΔEab and RMSE are reported.
Table III.
Description of nine cases evaluated in Test 2 where reflectance is estimated for a test dataset using the spectral estimation methods trained with another dataset.
Identifier | Test data | Training data | Printing conditions | Substrates (Test/Training)
N2–M1 | N2 | M1 | Dissimilar | Newsprint/Colour chip
N2–W1 | N2 | W1 | Dissimilar | Newsprint/Light coated paper
N2–N1 | N2 | N1 | Similar | Newsprint/Newsprint
N1–N2 | N1 | N2 | Similar | Newsprint/Newsprint
T2–M1 | T2 | M1 | Dissimilar | Textile/Colour chip
T2–F1 | T2 | F1 | Dissimilar | Textile/Premium coated paper
T2–W1 | T2 | W1 | Dissimilar | Textile/Light coated paper
T2–T1 | T2 | T1 | Similar | Textile/Textile
T1–T2 | T1 | T2 | Similar | Textile/Textile
4.6 Test 3
In Test 3, spectral estimation of emission spectra is carried out using the two display datasets: E1, measurements from a standalone WLED display with quantum dot technology, and E2, measurements from a standalone WLED display with red phosphor and green and blue LEDs. The spectral estimation methods were evaluated for two cases: first, when the same dataset is used for training and testing through k-fold cross-validation, and second, when one display dataset is used for training the model and the other is used as the test set. The RMSE and ΔOM are reported for both scenarios.
5. Evaluation Procedure for Spectral Estimation as a MAT
Two spectral estimation methods using a least-squares fit, weighted pseudo inverse (wPInverse) and third order polynomial (Polynomial 3), were evaluated with various training and test datasets, and their performance in predicting colour appearance under different illuminants was assessed. These two methods were chosen for their simplicity and computational efficiency, as well as for their performance in terms of lowest RMSE and metameric differences compared to other classical methods, such as the Wiener method, PCA-based methods and Wpt-based spectral estimation, as demonstrated in Section 6.
Seven datasets, as shown in Table II, were selected for training and test purposes. These datasets contain reflectances of Munsell colour chips and of different print datasets with varying material components such as substrates, inks and processes. The datasets can be categorised according to their material components. The main purpose is to understand how well such spectral estimation models perform in generating estimated reflectances that are a close match to those of the object of interest, and whether these estimates predict colours under different illuminants more accurately than colour predictions using alternative sensor adjustment transforms. The Munsell dataset (M1), a newsprint dataset (N2) and a textile print dataset (T2) were used in the estimation with k = 5 cross-validation, where a dataset is divided into 5 sets, one set (20%) is used for testing and the remaining 80% for training, repeated for each set. Six other cases where the training dataset does not belong to the test dataset were also used. N2 is used as a test dataset, while the N1, W1 and M1 datasets were used for training the spectral estimation models. Similarly, T2 was used as a test dataset and the T1, F1 and M1 datasets were used for training.
The results from the two estimation methods were compared with predictions from different CATs and MATs. These colour predictions were evaluated under five CIE illuminants: the two standard daylight illuminants D50 and D65, illuminant A representing tungsten filament lighting, F11 representing a fluorescent lamp, and LED-V1 representing a violet-pumped phosphor-type LED lamp. LED-V1 is one of the illuminants representing typical LED lamps published by the CIE in 2018 [60]. Fifteen combinations of these illuminants are used, where each illuminant is the source and, for each source, three other illuminants are used as the destination. The chosen (Source - Destination) illuminant combinations are (D50-D65), (D50-A), (D50-F11), (D65-D50), (D65-A), (D65-F11), (A-D50), (A-D65), (A-F11), (F11-D50), (F11-D65), (F11-A), (LED-V1-D50), (LED-V1-D65) and (LED-V1-A). For each pair of source and destination illuminants, the tristimulus values of the test datasets were calculated using their measured reflectances, the source illuminant and the CIE 1931 2° Standard Observer. They were then chromatically adapted to the destination illuminant using different CATs: the Linear Bradford transform (L-Bradford), CAT02 and CAT16 with one-step and two-step (via equi-energy) transforms (with full adaptation), the Oleari MAT, the Waypoint MAT, and Burns' CAT by spectral estimation (Burns-CAT). Colour predictions using estimated reflectances from the intermediate spectral estimation step (Burns-R) in Burns' CAT were also compared. For the comparisons, the colour difference ΔE00 was calculated between the destination tristimulus values obtained from the measured reflectances and the adapted or estimated tristimulus values under the destination illuminant. This gives a measure of the metameric mismatch between the reference and the adapted or estimated colours with changing lighting conditions.
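A condensed sketch of this comparison for one source/destination illuminant pair is given below: the reference destination XYZ comes from the measured reflectance, the CAT prediction from a von Kries style transform as in Eq. (10), and the spectral-estimation prediction from a pseudo-inverse basis trained under the source illuminant. ΔEab is used instead of the paper's ΔE00 purely to keep the sketch short, and all array names are ours.

    import numpy as np

    def xyz_to_lab(xyz, white):
        f = lambda t: np.where(t > (6/29)**3, np.cbrt(t), t / (3*(6/29)**2) + 4/29)
        x, y, z = f(xyz[0]/white[0]), f(xyz[1]/white[1]), f(xyz[2]/white[2])
        return np.array([116*y - 16, 500*(x - y), 200*(y - z)])

    def compare_sat(R_test, S_train, A_src, A_dst, M_cat, w_src, w_dst):
        """Metameric mismatch (Delta E_ab) of a CAT and of spectral estimation as a MAT.
        A_src/A_dst are the 3 x n weighted CMF matrices of the source/destination
        illuminants; w_src/w_dst are the corresponding white-point XYZ values."""
        T_src, T_ref = A_src @ R_test, A_dst @ R_test
        # CAT prediction (von Kries style, Eq. (10))
        scale = np.diag((M_cat @ w_dst) / (M_cat @ w_src))
        T_cat = np.linalg.inv(M_cat) @ scale @ M_cat @ T_src
        # Spectral estimation prediction: basis trained under the source illuminant
        T_train = A_src @ S_train
        M = S_train @ T_train.T @ np.linalg.inv(T_train @ T_train.T)
        T_est = A_dst @ (M @ T_src)
        lab = lambda t: xyz_to_lab(t, w_dst)
        de = lambda t: np.linalg.norm(lab(t) - lab(T_ref), axis=0)
        return de(T_cat), de(T_est)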
It is important to note that the spectral estimation methods estimated reflectances from tristimulus values obtained under the respective source illuminant colorimetry, so the estimated reflectances were metameric matches under the source illuminant, i.e., ΔE00 in this case is 0. This allows for a more accurate assessment of the performance of the spectral estimation methods, as the estimated reflectances are obtained from the same source colorimetry as the adapted colours.
Moreover, tristimulus values were calculated from spectral data at a 10 nm interval using the CIE recommendations [70], where optimum weighting tables [65] were created from CIE 1 nm data of the illuminants and colour matching functions, as explained previously.
6. Results and Analysis of Spectral Estimation Methods
6.1 Test 1
In Test 1, each spectral reflectance dataset in Table II was divided into training and test datasets, and the identifiers used to denote test and training data are defined as (Test-Training), e.g. (M1-M1). Tables IV and V show the mean and maximum RMSE, respectively, calculated between the measured reflectances and the estimated reflectances obtained in Test 1. Datasets with lower variability and smoother spectra, such as the newsprint datasets, are better estimated. Among all the datasets, M1 has the highest mean and maximum RMSE, and this is thought to be due to the greater pigment variation represented in the Munsell samples compared to the print datasets, which use only four process colours. For each dataset, the best and second best results across the spectral estimation methods are considered. Polynomial 3 performs best, as it has either the lowest or the second lowest mean and maximum RMSE values. wPInverse is the next best method based on the frequency of the lowest mean and maximum RMSE. Also, the mean RMSE results from Polynomial 2 and wPCA are comparable to the best cases. However, the maximum RMSE from wPCA is significantly worse for the M1 dataset. ΔEab under illuminant D50 calculated between the measured and estimated reflectances is practically 0, as the reflectances were estimated from tristimulus values with illuminant D50 colorimetry; in this particular application using print datasets, the ΔEab under illuminant D50 colorimetry between measured and estimated reflectances was of the order of 1E-12 or lower. Table VI shows the mean GFC calculated between the measured reflectances and the estimated reflectances obtained in Test 1. According to the GFC results, Polynomial 3 performs best, followed by the wPCA and wPInverse methods. All print datasets that produce colours by mixing process colours or primaries have mean GFC values for these three methods within acceptable limits, while the M1 data has a mean GFC lower than 0.995. The newsprint data, with a smaller colour gamut, has a mean GFC greater than 0.999 for these three methods, which is considered good.
Table IV.
Mean RMSE between test and estimated spectra from Test 1.
Dataset | Wiener | PCA | wPCA | PInverse | wPInverse | Polynomial 2 | Polynomial 3 | Waypoint-R
M1–M1 | 0.0410 | 0.0395 | 0.0278 | 0.0381 | 0.0236 | 0.0286 | 0.0249 | 0.0405
F1–F1 | 0.0337 | 0.0212 | 0.0095 | 0.0246 | 0.0091 | 0.0094 | 0.0072 | 0.0364
W1–W1 | 0.0241 | 0.0158 | 0.0073 | 0.0183 | 0.0063 | 0.0076 | 0.0062 | 0.0440
N1–N1 | 0.0220 | 0.0120 | 0.0058 | 0.0135 | 0.0056 | 0.0055 | 0.0047 | 0.0397
N2–N2 | 0.0216 | 0.0124 | 0.0059 | 0.0139 | 0.0060 | 0.0056 | 0.0046 | 0.0572
T1–T1 | 0.0330 | 0.0254 | 0.0118 | 0.0275 | 0.0126 | 0.0140 | 0.0105 | 0.0476
T2–T2 | 0.0394 | 0.0298 | 0.0139 | 0.0319 | 0.0132 | 0.0158 | 0.0119 | 0.0420
Table V.
Maximum RMSE between test and estimated spectra from Test 1.
Dataset | Wiener | PCA | wPCA | PInverse | wPInverse | Polynomial 2 | Polynomial 3 | Waypoint-R
M1–M1 | 0.3098 | 0.2944 | 1.1696 | 0.3080 | 0.3060 | 0.2919 | 0.2922 | 0.3057
F1–F1 | 0.1012 | 0.0752 | 0.0469 | 0.0957 | 0.0310 | 0.0377 | 0.0348 | 0.1646
W1–W1 | 0.0883 | 0.0720 | 0.0436 | 0.0838 | 0.0313 | 0.0288 | 0.0263 | 0.2485
N1–N1 | 0.0662 | 0.0448 | 0.0267 | 0.0529 | 0.0180 | 0.0218 | 0.0207 | 0.1085
N2–N2 | 0.0689 | 0.0515 | 0.0267 | 0.0587 | 0.0200 | 0.0232 | 0.0224 | 0.1440
T1–T1 | 0.1172 | 0.0944 | 0.0427 | 0.1064 | 0.0345 | 0.0521 | 0.0297 | 0.1962
T2–T2 | 0.1767 | 0.1345 | 0.0528 | 0.1608 | 0.0571 | 0.0735 | 0.0390 | 0.2233
Table VI.
Mean GFC between test and estimated spectra from Test 1.
Dataset | Wiener | PCA | wPCA | PInverse | wPInverse | Polynomial 2 | Polynomial 3 | Waypoint-R
M1–M1 | 0.9778 | 0.9828 | 0.9724 | 0.9815 | 0.9896 | 0.9863 | 0.9885 | 0.9861
W1–W1 | 0.9842 | 0.9831 | 0.9984 | 0.9893 | 0.9976 | 0.9955 | 0.9983 | 0.9913
F1–F1 | 0.9870 | 0.9887 | 0.9986 | 0.9918 | 0.9984 | 0.9975 | 0.9986 | 0.9892
N2–N1 | 0.9962 | 0.9985 | 0.9997 | 0.9985 | 0.9997 | 0.9997 | 0.9998 | 0.9904
N1–N2 | 0.9960 | 0.9984 | 0.9997 | 0.9983 | 0.9996 | 0.9997 | 0.9998 | 0.9881
T1–T1 | 0.9826 | 0.9909 | 0.9985 | 0.9868 | 0.9940 | 0.9970 | 0.9983 | 0.9850
T2–T2 | 0.9805 | 0.9900 | 0.9968 | 0.9882 | 0.9972 | 0.9969 | 0.9982 | 0.9887
Tables VII and VIII show the mean (maximum) ΔEab as the measure of metameric difference under the destination illuminants D65 and A, respectively, for estimated reflectances obtained using source illuminant D50 as described in Test 1. The metameric difference increases when the spectral power distribution of the destination illuminant is very different from that of the source illuminant used for estimation. Under illuminant D65, the mean ΔEab is below 1 for each spectral estimation method; however, the maximum ΔEab under illuminant D65 varies significantly among the different methods. Polynomial 3 performs best, as it results in the least metameric mismatch in every case. Here, the maximum ΔEab for Polynomial 3 is within 1 under illuminant D65 and within 3 under illuminant A, except for the M1-M1 case. wPInverse is the second best based on the lowest metameric values, although the Polynomial 2 and wPCA metameric differences are comparable. The Waypoint-R method has the second best mean and maximum ΔEab under both illuminants D65 and A for dataset M1.
Table VII.
Mean (max) ΔEab, metameric difference under illuminant D65 from Test 1.
Dataset | Wiener | PCA | wPCA | PInverse | wPInverse | Polynomial 2 | Polynomial 3 | Waypoint-R
M1–M1 | 0.59 (4.46) | 0.65 (4.02) | 0.3 (3.04) | 0.6 (4.33) | 0.32 (2.83) | 0.44 (3.04) | 0.32 (2.38) | 0.33 (2.52)
F1–F1 | 0.99 (3.99) | 0.84 (5.53) | 0.28 (1.5) | 0.71 (3.87) | 0.29 (1.27) | 0.34 (2.02) | 0.23 (0.96) | 0.48 (1.76)
W1–W1 | 0.82 (3.44) | 0.69 (4.38) | 0.27 (1.51) | 0.62 (3.26) | 0.25 (1.28) | 0.31 (1.78) | 0.24 (1.07) | 0.96 (3.92)
N1–N1 | 0.41 (1.42) | 0.28 (1.04) | 0.13 (0.6) | 0.27 (1.17) | 0.12 (0.43) | 0.12 (0.48) | 0.1 (0.46) | 0.66 (1.18)
N2–N2 | 0.38 (1.49) | 0.3 (1.16) | 0.13 (0.6) | 0.29 (1.3) | 0.13 (0.5) | 0.13 (0.51) | 0.1 (0.51) | 0.83 (1.86)
T1–T1 | 0.66 (3.07) | 0.66 (2.59) | 0.25 (1.27) | 0.6 (2.91) | 0.25 (1.35) | 0.35 (1.4) | 0.23 (1.04) | 0.61 (2.08)
T2–T2 | 0.79 (3.42) | 0.85 (2.78) | 0.34 (1.35) | 0.71 (3.15) | 0.32 (1.1) | 0.42 (1.69) | 0.31 (0.95) | 0.63 (1.95)
Table VIII.
Mean (max) ΔEab, metameric difference under illuminant A from Test 1.
Dataset | Wiener | PCA | wPCA | PInverse | wPInverse | Polynomial 2 | Polynomial 3 | Waypoint-R
M1–M1 | 1.50 (12.31) | 1.67 (12.09) | 0.83 (13.8) | 1.52 (12.08) | 0.81 (8.26) | 1.13 (9.47) | 0.81 (7.07) | 0.83 (6.78)
F1–F1 | 0.99 (15.04) | 2.40 (14.39) | 0.82 (5.61) | 2.10 (14.43) | 0.83 (4.41) | 0.95 (5.68) | 0.66 (2.68) | 1.40 (6.71)
W1–W1 | 0.82 (11.95) | 1.97 (12.08) | 0.76 (5.46) | 1.79 (11.33) | 0.68 (3.38) | 0.85 (4.98) | 0.67 (3) | 2.74 (11.05)
N1–N1 | 0.41 (4.86) | 0.83 (3.5) | 0.38 (1.86) | 0.81 (4.01) | 0.36 (1.34) | 0.36 (1.43) | 0.31 (1.41) | 1.63 (3.52)
N2–N2 | 0.38 (5.33) | 0.88 (4.01) | 0.39 (1.84) | 0.84 (4.6) | 0.37 (1.52) | 0.37 (1.56) | 0.30 (1.55) | 2.20 (5.29)
T1–T1 | 0.66 (9.95) | 1.71 (8.02) | 0.62 (3.64) | 1.56 (9.29) | 0.57 (4.07) | 0.84 (3.46) | 0.54 (2.2) | 1.24 (6.49)
T2–T2 | 0.79 (11.03) | 2.21 (8.75) | 0.82 (4.15) | 1.84 (10.02) | 0.74 (3.4) | 1.00 (3.97) | 0.71 (2.03) | 1.35 (5.5)
6.2 Test 2
In Test 2, reflectance datasets that were not part of the test dataset were used to train the different spectral estimation models. Here, the identifiers used to denote test and training data come from two different datasets and are defined as (Test-Training), e.g. (N2-M1). Tables IX and X summarise the mean and maximum RMSE between reference and estimated reflectances from Test 2. For the newsprint datasets N1 and N2, when a different newsprint dataset is used for training, the mean RMSE is lowest and Polynomial 3 performs best. Similarly, for the textile datasets T1 and T2, when a different textile dataset is used for training, the mean RMSE is lowest for wPInverse. Reflectances of the textile datasets have sharper peaks in the blue region due to the presence of fluorescent whitening agents, and the accuracy of the least-squares fit is reduced in comparison to the newsprint datasets. When the W1 dataset is used to estimate reflectances for a newsprint dataset, Polynomial 2, Polynomial 3 and wPInverse perform similarly, although the mean and maximum RMSE increase compared to using a newsprint dataset for training. Interestingly, when the F1 dataset is used to estimate reflectances for a textile dataset, the mean and maximum RMSE are comparable to the case when a textile dataset is used for training. Both the textile and F1 datasets have fluorescent whitening agents, although the fluorescent emission is greater for the textile.
Table IX.
Mean RMSE between test and estimated spectra from Test 2.
Dataset | Wiener | PCA | wPCA | PInverse | wPInverse | Polynomial 2 | Polynomial 3 | Waypoint-R
N2–M1 | 0.0585 | 0.0489 | 0.0557 | 0.0516 | 0.0542 | 0.0586 | 0.0595 | 0.0573
N2–W1 | 0.0331 | 0.0198 | 0.0186 | 0.0199 | 0.0175 | 0.0174 | 0.0176 | 0.0513
N2–N1 | 0.0303 | 0.0178 | 0.0140 | 0.0190 | 0.0135 | 0.0137 | 0.0132 | 0.0572
N1–N2 | 0.0231 | 0.0180 | 0.0139 | 0.0187 | 0.0140 | 0.0136 | 0.0132 | 0.0482
T2–M1 | 0.0587 | 0.0571 | 0.0617 | 0.0564 | 0.0539 | 0.0587 | 0.0611 | 0.0646
T2–F1 | 0.0417 | 0.0411 | 0.0349 | 0.0375 | 0.0311 | 0.0320 | 0.0313 | 0.0478
T2–W1 | 0.0416 | 0.0427 | 0.0385 | 0.0405 | 0.0357 | 0.0346 | 0.0362 | 0.0561
T2–T1 | 0.0462 | 0.0364 | 0.0324 | 0.0385 | 0.0291 | 0.0320 | 0.0327 | 0.0646
T1–T2 | 0.0315 | 0.0401 | 0.0359 | 0.0354 | 0.0311 | 0.0324 | 0.0332 | 0.0758
Table X.
Maximum RMSE between test and estimated spectra from Test 2.
Dataset | Wiener | PCA | wPCA | PInverse | wPInverse | Polynomial 2 | Polynomial 3 | Waypoint-R
N2–M1 | 0.1360 | 0.1181 | 0.1362 | 0.1292 | 0.1314 | 0.1427 | 0.1482 | 0.1419
N2–W1 | 0.0847 | 0.0566 | 0.0576 | 0.0657 | 0.0552 | 0.0553 | 0.0535 | 0.2098
N2–N1 | 0.0880 | 0.0667 | 0.0379 | 0.0747 | 0.0370 | 0.0412 | 0.0393 | 0.1440
N1–N2 | 0.0491 | 0.0530 | 0.0371 | 0.0632 | 0.0385 | 0.0373 | 0.0370 | 0.1182
T2–M1 | 0.2198 | 0.1796 | 0.2853 | 0.2098 | 0.1659 | 0.1913 | 0.2805 | 0.2637
T2–F1 | 0.1458 | 0.1275 | 0.1208 | 0.1471 | 0.1241 | 0.1155 | 0.1208 | 0.2518
T2–W1 | 0.1549 | 0.1647 | 0.1591 | 0.1881 | 0.1688 | 0.1456 | 0.1886 | 0.2708
T2–T1 | 0.2003 | 0.1770 | 0.1277 | 0.1888 | 0.1281 | 0.1225 | 0.1045 | 0.2637
T1–T2 | 0.0923 | 0.1115 | 0.0942 | 0.1184 | 0.0920 | 0.1148 | 0.1084 | 0.2713
Tables XI, XII, XIII and XIV summarise the mean and maximum ΔEab as the metameric difference under CIE illuminants D65, A, F11 (typical fluorescent lamp) and LED-V1 (violet-pumped phosphor-type LED lamp) [60], respectively, obtained from Test 2. If we consider the metameric differences under illuminant D65, Polynomial 3 creates the smallest metameric mismatch overall. wPInverse is the next best method, while the Polynomial 2 and wPCA results follow closely. A similar trend is seen in the metameric differences under illuminants A and F11.
Table XI.
Mean (max) ΔEab, metameric difference under illuminant D65 from Test 2.
Dataset | Wiener | PCA | wPCA | PInverse | wPInverse | Polynomial 2 | Polynomial 3 | Waypoint-R
N2–M1 | 1.04 (2.62) | 0.84 (2.31) | 0.91 (1.78) | 0.93 (2.49) | 0.93 (1.9) | 1.02 (1.95) | 1.03 (1.73) | 0.90 (2)
N2–W1 | 0.57 (1.77) | 0.33 (1.29) | 0.27 (1.16) | 0.30 (1.29) | 0.20 (0.7) | 0.23 (0.86) | 0.22 (0.84) | 0.83 (2.99)
N2–N1 | 0.5 (1.65) | 0.31 (1.23) | 0.17 (0.7) | 0.31 (1.42) | 0.16 (0.55) | 0.17 (0.43) | 0.14 (0.4) | 0.83 (1.86)
N1–N2 | 0.32 (1.27) | 0.3 (0.96) | 0.15 (0.65) | 0.28 (1.06) | 0.15 (0.47) | 0.16 (0.58) | 0.14 (0.57) | 0.76 (1.77)
T2–M1 | 1.12 (3.78) | 1 (3.68) | 0.92 (3.04) | 1.03 (3.7) | 0.88 (2.63) | 0.98 (2.94) | 0.93 (2.52) | 0.86 (3.12)
T2–F1 | 1.15 (4.23) | 1.09 (3.2) | 0.73 (2.32) | 0.89 (3.91) | 0.64 (2.25) | 0.67 (2.12) | 0.61 (1.99) | 0.77 (2.69)
T2–W1 | 0.86 (3.79) | 0.95 (3.14) | 0.73 (2.52) | 0.80 (3.46) | 0.63 (2.03) | 0.61 (2.06) | 0.65 (2.23) | 0.99 (3.93)
T2–T1 | 0.87 (3.56) | 0.79 (2.96) | 0.47 (1.68) | 0.75 (3.2) | 0.44 (1.51) | 0.48 (1.85) | 0.44 (1.37) | 0.86 (3.12)
T1–T2 | 0.63 (3.23) | 0.81 (2.66) | 0.5 (1.22) | 0.65 (2.95) | 0.47 (1.51) | 0.44 (1.57) | 0.43 (1.51) | 0.76 (2.77)
Table XII.
Mean (max) ΔEab, metameric difference under illuminant A from Test 2.
Dataset | Wiener | PCA | wPCA | PInverse | wPInverse | Polynomial 2 | Polynomial 3 | Waypoint-R
N2–M12.83 (6.43) 2.26 (5.97) 2.39 (5.01)2.50 (6.11)2.48 (5.15)2.72 (5.07)2.74 (4.94)2.35 (5.13)
N2–W11.66 (5.62)0.93 (3.84)0.73 (3.27)0.86 (4.62) 0.54 (1.85) 0.57 (2.25)0.57 (2.14)2.31 (8.9)
N2–N11.61 (5.77)0.92 (4.21)0.51 (2.21)0.92 (4.93)0.47 (1.92)0.49 (1.25) 0.42 (1.22) 2.20 (5.29)
N1–N21 (4.47)0.87 (3.22)0.44 (2.04)0.81 (3.71)0.42 (1.4)0.45 (1.75) 0.40 (1.73) 1.99 (4.95)
T2–M12.61 (11.35)2.4 (11.16)2.05 (9.04)2.43 (11.07)1.93 (7.19)2.25 (8.87)2.14 (6.84) 1.78 (7.75)
T2–F12.62 (12.57)2.79 (8.8)1.78 (5.63)2.06 (11.37)1.34 (5.31)1.49 (4.77) 1.34 (4.24) 1.48 (6.97)
T2–W12.1 (11.31)2.47 (8.86)1.89 (7.24)1.98 (10.21)1.54 (5.72) 1.41 (4.85) 1.64 (6.87)2.47 (10.5)
T2–T12.54 (10.35)2.11 (8.28)1.21(4.51)2.04 (9.74)1.11 (3.54)1.23 (3.73) 1.12 (3.3) 1.78 (7.75)
T1–T21.65 (10.27)2.19 (8.05)1.37 (3.45)1.75 (9.28)1.26 (3.38)1.18 (3.89) 1.14 (3.22) 1.57 (6.94)
The M1 dataset has reflectances that are not spectrally similar to the newsprint or textile reflectances. When the M1 dataset is used as training data, Waypoint-R performs best. If the cases where M1 is used as training data are excluded, the polynomial and weight-based methods all produce a mean ΔEab of less than 1 under the four different illuminants for the newsprint datasets. Similarly, for the textile datasets, if the M1 and W1 training datasets are excluded, the mean ΔEab is below 2.4 under all four illuminants when the polynomial and weight-based methods are used, although in this case the maximum ΔEab is high under illuminant F11. This is because the polynomial and weight-based spectral estimation methods model the non-linearity present in print datasets well.
The estimation results for the textile dataset show slightly increased metameric differences because the test reflectances have sharper peaks in the blue region and are not as smooth as the newsprint data; as a result, sensitivity to colour shift under different illuminants increases, which particularly affects the accuracy of the predicted hue and chroma. This can be seen in Figure 3, where (a), (b) and (c) plot measured CIELAB L*, C* and hab values against the corresponding values from estimated reflectances of T2 using M1 as the training dataset. The CIELAB L*, C* and hab values have been calculated under illuminants D65 (blue dots), A (red dots) and F11 (yellow dots). The plot shows that the largest deviations occur for hue and chroma under illuminant F11 (yellow dots), while lightness values are largely preserved. A similar trend is seen in the other cases; however, the deviations decrease as spectral estimation performance improves with a better-matched training dataset, i.e. a better material match.
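For reference, the C*ab and hab values plotted in Figure 3 are derived from CIELAB coordinates in the usual way; a minimal sketch with illustrative input values only:

import numpy as np

def lab_to_lch(lab):
    """CIELAB L*, C*ab and hue angle hab (degrees) from L*, a*, b*."""
    L, a, b = lab
    C = np.hypot(a, b)
    h = np.degrees(np.arctan2(b, a)) % 360.0
    return L, C, h

print(lab_to_lch((52.0, 12.5, -30.0)))   # hypothetical L*, a*, b* values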
Table XIII.
Mean (max) ΔEab, metameric difference under illuminant F11 from Test 2.
Dataset | Wiener | PCA | wPCA | PInverse | wPInverse | Polynomial 2 | Polynomial 3 | Waypoint-R
N2–M11.78 (7.93)1.52 (7.49)1.49 (5.4)1.66 (7.83)1.60 (5.72)1.57 (5.44)1.82 (5.75) 1.51 (7.32)
N2–W11.23 (6.28)1.23 (4.43)1.07 (3.76)1.11 (5.39) 0.87 (3.9) 0.96 (3.82)0.94 (3.71)1.94 (10.71)
N2–N11.43 (7.68)1.03 (5.57)0.61 (3.29)1.01 (6.61)0.58 (3.04)0.59 (2.79) 0.52 (1.64) 0.98 (5.42)
N1–N22.01 (6.41)0.91 (4.18)0.57 (1.73)1.01 (5.16)0.57 (1.63)0.62 (1.86) 0.54 (1.41) 1.22 (4.15)
T2–M13.12 (16.83)2.89 (16.12)2.80 (13.43)3.03 (16.73) 2.73 (13.53) 3.14 (13.82)2.99 (12.77)3.00 (13.92)
T2–F11.90 (15.95)1.79 (12)1.82 (7.83)1.70 (14.98) 1.19 (9.07) 1.63 (8.37)1.62 (7.83)1.84 (13.92)
T2–W13.35 (15.58)2.90 (13.01)2.39 (10.19)2.71 (14.54) 2.17 (10.07) 2.48 (10.18)2.26 (9.82)3.17 (14.92)
T2–T13.75 (15.13)2.88 (12.99)2.22 (8.12)3.11 (14.65)2.23 (8.13)2.31 (7.82) 2.06 (8.45) 2.80 (12.01)
T1–T22.93 (12.69)3.14 (10.15)2.37 (5.56)2.36 (11.88)2.17 (4.9)2.24 (5.61) 2.08 (6.4) 2.11 (7.14)
Table XIV.
Mean (max) ΔEab, metameric difference under illuminant LED-V1 from Test 2.
Dataset | Wiener | PCA | wPCA | PInverse | wPInverse | Polynomial 2 | Polynomial 3 | Waypoint-R
N2–M12.65 (6.46)2.44 (6.18)2.39 (4.95)2.39 (6.17)2.38 (4.62)2.61 (4.82)2.61 (4.54) 2.14 (4.95)
N2–W11.55 (6.17)1.01 (4.16)0.80 (3.11)0.94 (5.19) 0.61 (2.07) 0.64 (2.14)0.64 (2.03)2.40 (8.44)
N2–N11.59 (6.39)0.93 (4.63)0.49 (2.37)0.95 (5.45)0.45 (2.09)0.46 (1.36) 0.39 (1.2) 1.77 (4.22)
N1–N21.24 (5.06)0.88 (3.57)0.43 (1.93)0.88 (4.17)0.43 (1.29)0.43 (1.77) 0.38 (1.6) 1.51 (3.43)
T2–M13.29 (11.56)3.27 (11.29)2.97 (8.84)3.21 (11.25)2.75 (7.1)3.00 (8.92)2.83 (6.71) 2.65 (6.87)
T2–F12.74 (13.21)3.08 (9.04)2.14 (6.76)2.33 (12.1)1.65 (5.59)1.81 (4.4)1.67 (4.59)1.95 (6.88)
T2–W12.45 (12)2.90 (9.29)2.33 (8.88)2.42 (10.93)1.95 (7.48) 1.83 (4.59) 1.99 (8.32)2.76 (9.58)
T2–T14.11 (12.2)2.63 (9.31)1.67 (7.21)3.04 (11.29)1.84 (6.63)1.64 (4.42) 1.52 (4.73) 2.40 (8.33)
T1–T22.20 (11.76)2.56 (8.82)1.81 (4.24)2.12 (10.66) 1.66 (4.09) 1.68 (4.9)1.66 (4.44)2.37 (6.35)
Table XV.
Mean ΔE00 as a colour inconstancy measure between tristimulus values under the source illuminant and tristimulus values under the destination illuminant, calculated using the measured reflectances of the dataset for the reference case and the estimated reflectances for the cases where spectral estimation is carried out. Source illuminants are in the first row and destination illuminants are in the second row. The values shown in bold are the colour inconstancy found in dataset T2 and the underlined values show cases where the colour inconstancy is similar to that of dataset T2.
Source illuminant: D50 | D65 | A | F11 | LED-V1
Destination illuminants: D65, A, F11 | D50, A, F11 | D50, D65, F11 | D50, D65, A | D50, D65, A
Reference1.352.881.721.334.052.612.884.092.321.241.683.182.673.90.35
Waypoint-R1.362.831.531.3442.512.844.062.041.121.543.052.633.850.36
Burns-CAT1.352.791.741.374.113.112.663.891.040.781.711.311.982.860.2
M1–M1Burns-R1.452.881.571.444.222.362.724.022.411.471.743.722.423.770.26
wPInverse1.352.791.431.333.972.382.783.992.121.071.413.082.593.830.33
Polynomial 31.352.841.571.334.012.412.844.062.241.261.573.212.563.810.36
F1–F1Reference1.683.502.541.685.073.743.505.072.152.082.983.103.775.291.05
Reference1.703.322.561.704.943.783.324.942.391.982.963.403.695.111.48
T2–T2Burns-CAT1.492.91.861.54.373.352.794.231.010.962.141.572.323.450.17
Burns-R1.733.191.521.714.832.723.134.832.61.451.684.042.544.010.27
Waypoint-R1.572.91.611.564.42.752.884.421.131.563.182.563.850.40
T2–F1wPInverse1.653.392.601.654.943.703.434.962.072.853.312.923.474.861.11
Polynomial 31.673.432.631.675.003.80 3.475.032.112.783.392.983.574.861.06
Waypoint-R1.542.861.291.534.32.52.884.372.061.131.563.182.563.850.40
T2–M1wPInverse1.512.791.421.514.212.572.794.232.071.221.373.102.493.770.37
Polynomial 31.52.821.721.54.232.682.824.232.241.371.633.212.543.740.38
Figure 3.
(a), (b) and (c) are measured CIELAB L*, C* and hab values plotted against estimated CIELAB L*, C* and hab values, respectively, with the CIELAB L*, C* and hab values calculated under illuminants D65 (blue dots), A (red dots) and F11 (yellow dots), using reflectances of test dataset T2 estimated with training dataset M1 and the Polynomial 3 spectral estimation method.
Figure 4 shows the reflectances corresponding to the 95th percentile of ΔEab metameric difference under illuminant D65 for the newsprint dataset, where reference reflectances (red) and estimated reflectances (black) are plotted for the cases (a) N2-N1 using wPInverse, (b) N2-N1 using Polynomial 3, (c) N2-M1 using wPInverse and (d) N2-M1 using Polynomial 3. Similarly, Figure 5 shows the reflectances corresponding to the 95th percentile of ΔEab metameric difference under illuminant D65 for the textile dataset, for the cases (a) T2-T1 using wPInverse, (b) T2-T1 using Polynomial 3, (c) T2-M1 using wPInverse and (d) T2-M1 using Polynomial 3. These figures show clearly that the newsprint dataset (N2), with its smoother and broader peaks and smaller gamut (see Fig. 2), is better estimated than the textile dataset (T2), which contains fluorescent whitening agents. Comparing plots (a) and (b), where the training data is similar to the test data, with plots (c) and (d), where it is not, in Figs. 4 and 5 suggests that spectral estimation errors decrease when the training data has material characteristics similar to those of the test data.
Figure 4.
Plots of newsprint reference reflectances (red) and estimated reflectances (black) for (a) N2-N1 using wPInverse, (b) N2-N1 using Polynomial 3, (c) N2-M1 using wPInverse and (d) N2-M1 using Polynomial 3, respectively.
Figure 5.
Plots of textile reference reflectances (red) and estimated reflectances (black) for (a) T2-T1 using wPInverse, (b) T2-T1 using Polynomial 3, (c) T2-M1 using wPInverse and (d) T2-M1 using Polynomial 3, respectively.
6.3
Test 3
Test 3 was carried out to evaluate the different spectral estimation methods on radiance spectra of the display datasets. When spectral estimation is carried out with training and test data belonging to the same dataset, the results are very good, with an overall mean RMSE of 0.0142 and a 95th percentile observer metamerism difference that is almost negligible (ΔEab = 0.1) for both the E1 and E2 datasets. If the radiance spectra of another display dataset, considered to be of similar technology, are used for training, all the spectral estimation methods perform equally poorly, with a mean RMSE of 1.28, an overall mean observer metamerism difference of ΔEab = 1.88 and a 95th percentile of ΔEab = 4.6. Because display radiance has narrow, sharp peaks, a small inaccuracy or shift leads to a major colorimetric mismatch when the observer changes. Although the spectral estimation results are good when training and test data come from the same dataset, predicting radiance spectra by additive mixing of the primaries also works well. This is because display technology uses additive colour mixing, combining the primary colours red, green and blue (RGB) at different intensities to generate a broad range of colours; the spectral power distribution emitted by the display can therefore be expressed as a linear combination of these primaries, with the intensities adjusted using channelwise nonlinearities or gamma correction in conjunction with a linear transformation [71]. Printers, in contrast, use subtractive colour mixing, in which the combined primary colorants absorb certain wavelengths of light; this mixing depends on factors such as ink absorption, density and printing technology, leading to a complex non-linear relationship between the output colour and the printer primaries. As a result, we do not further consider spectral estimation workflows in colour management for display technologies.
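The additive model described above can be written down directly: apply the channelwise nonlinearity to the drive values and form a linear combination of the measured primary spectra. A minimal sketch under these assumptions, with a placeholder gamma value and random stand-ins for the measured primary spectra:

import numpy as np

def display_radiance(rgb, primaries, gamma=2.2):
    """Predict display radiance by additive mixing of the primary spectra.

    rgb:       drive values in [0, 1] for the R, G and B channels.
    primaries: (3, n_wavelengths) spectra of the primaries at full drive.
    gamma:     placeholder channelwise nonlinearity (a real display would use
               its measured tone curves instead).
    """
    linear = np.power(np.clip(rgb, 0.0, 1.0), gamma)   # channelwise nonlinearity
    return linear @ primaries                          # additive mixture

# Hypothetical primaries on a 5 nm grid from 380 to 780 nm.
wl = np.arange(380, 781, 5)
primaries = np.random.rand(3, len(wl))
print(display_radiance(np.array([0.2, 0.5, 0.8]), primaries).shape)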
6.4
Discussion on Colour Inconstancy
Colour inconstancy is an important measure for understanding how the colour of an object changes under varying lighting conditions, and reducing it is a step towards creating colour-constant objects. As we are matching the object reflectance, the aim here is to match the colour inconstancy of the reflectances in the original dataset. Table XV shows the mean ΔE00 between tristimulus values predicted using the reflectances and a source illuminant and the corresponding tristimulus values predicted under a destination illuminant; this measure represents the overall colour inconstancy present in the dataset. The mean ΔE00 values for dataset M1 from all the estimation methods are comparable to the reference mean colour inconstancy values. For Burns-CAT, the final luminance scaling step tends to reduce colour inconstancy. The Burns-R spectral estimation method produces the same reflectance for a given tristimulus value and its illuminant, while the other methods produce different metameric reflectance estimates depending on the training dataset. The F1 and T2 datasets have similar overall mean colour inconstancy, while M1 is slightly more colour constant, owing to its smoother and broader reflectances reducing colour shifts between certain pairs of illuminants. If F1 is used as the training dataset for reflectance estimation of T2, the wPInverse and Polynomial 3 results match the reference colour inconstancy of T2 very well. If M1 is used as the training dataset for T2, the colour inconstancy is reduced and the results behave similarly to those obtained from the two Burns methods and Waypoint-R. A similar trend is seen when the newsprint dataset N2 is estimated using the W1 and M1 datasets, where W1 and N2 are similarly colour inconstant and M1 is less so, although the difference is not as marked as for the textile dataset. If the aim is to match the material, the training dataset should be similar in material components to the test dataset; if the aim is colour constancy, a dataset of reflectances with smooth, broad peaks, low colour inconstancy and high variability can be used.
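Structurally, each entry of Table XV is a mean colour difference between CIELAB coordinates of the same samples computed under the source and the destination illuminant (assuming here that CIELAB is taken under each illuminant relative to its own white). The sketch below shows only that outer averaging step, with a simple ΔEab stand-in for the colour-difference function; the paper reports ΔE00, so a CIEDE2000 implementation would be passed in instead, and the CIELAB values themselves would come from a reflectance-to-colorimetry step such as the one sketched in Section 6.2:

import numpy as np

def mean_inconstancy(labs_src, labs_dst, delta_e):
    """Mean colour difference between CIELAB values of the same samples
    computed under a source and a destination illuminant.

    delta_e: any colour-difference function with signature (lab1, lab2) -> float.
    """
    return float(np.mean([delta_e(a, b) for a, b in zip(labs_src, labs_dst)]))

# Hypothetical example with a Euclidean (dEab) stand-in for dE00.
de_ab = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
labs_under_d50 = np.random.rand(10, 3) * [100, 60, 60]
labs_under_a = labs_under_d50 + np.random.randn(10, 3)
print(mean_inconstancy(labs_under_d50, labs_under_a, de_ab))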
Table XVI.
Mean and 95th percentile of ΔE00 between reference tristimulus values and chromatically adapted/estimated tristimulus values for M1 dataset. Source illuminants are in the first row and destination illuminants are in the second row.
Mean ΔE00
Source illuminant: D50 | D65 | A | F11 | LED-V1
Destination illuminants: D65, A, F11 | D50, A, F11 | D50, D65, F11 | D50, D65, A | D50, D65, A
M1–M1CAT020.592.011.870.612.631.921.892.372.631.931.992.661.982.450.56
CAT160.782.731.930.803.562.232.553.242.872.002.332.992.643.310.54
L-Bradford0.521.741.930.532.251.981.732.182.351.992.072.321.692.110.55
Oleari-MAT0.611.772.490.592.132.241.842.283.162.442.203.01
Waypoint-MAT0.330.801.340.331.091.420.871.221.551.361.401.50
Burns-CAT0.551.732.030.562.282.231.732.252.432.052.242.361.902.390.55
Burns-R0.270.681.220.260.921.210.821.141.661.421.441.570.720.960.34
wPInverse0.190.490.870.190.660.940.530.740.920.890.950.860.490.660.2
Polynomial 30.210.540.980.20.731.050.580.791.060.971.051.010.540.720.23
95th Percentile ΔE00
M1-M1CAT021.374.485.081.416.065.344.045.265.545.325.565.754.555.611.24
CAT161.695.775.241.717.665.785.356.935.985.506.136.545.897.361.23
L-Bradford1.203.565.251.224.855.443.484.545.185.375.625.123.694.691.23
Oleari-MAT1.183.204.711.124.044.503.524.797.594.474.496.02
Waypoint-MAT1.102.633.821.053.533.952.874.064.793.773.914.72
Burns-CAT1.374.345.031.375.685.374.415.705.455.125.485.244.986.381.25
Burns-R0.651.753.430.602.263.092.203.015.033.633.723.801.892.610.86
wPInverse0.481.322.480.471.712.691.381.902.912.492.682.571.231.720.46
Polynomial 30.621.692.470.642.282.821.742.392.812.402.752.571.522.090.67
7.
Results and Analysis of Spectral Estimation as a MAT
The reflectances estimated from tristimulus values under the source illuminant using the two spectral estimation methods were then used to compute tristimulus values under the destination illuminant, as described in Section 3 above. The Burns spectral estimation method (Burns-R) and different CATs and MATs were also used to predict destination tristimulus values from source tristimulus values. As stated in Section 3, pre-computed transformation matrices were used for the Waypoint and Oleari MATs; since transformation matrices to and from the LED-V1 illuminant are not available, these entries are left blank. The reference tristimulus values are the XYZ values obtained using the measured spectral reflectances and the respective destination illuminant. The CIE 1931 2° Standard Observer was used as the colour matching functions throughout. When full adaptation is used, the one-step and two-step CAT02 and CAT16 transforms produce identical tristimulus values at the precision shown, and hence the results for the two-step CAT02 and CAT16 are not shown. The test and training data used to obtain the estimated reflectances are denoted by their identifiers as (test-training), e.g. (M1-M1), where M1 is both test and training data, or (N2-M1), where N2 is the test data and M1 is the training data. Their results as a MAT are discussed next.
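Once the Polynomial 3 basis matrix has been fitted, the adjustment just described reduces to a polynomial expansion followed by two matrix operations: expand the source XYZ into third-order terms and multiply by the basis matrix to obtain an estimated reflectance, then integrate that reflectance with the destination illuminant and the observer. A minimal sketch with a hypothetical fitted basis matrix M and placeholder spectral data; the 20-term expansion shown is one common choice for a full third-order polynomial in three variables, and the exact set and ordering of terms must of course match Table I of the paper and the matrix that was actually fitted:

import numpy as np

def poly3_terms(xyz):
    """Third-order polynomial expansion of an XYZ triplet (one common 20-term choice)."""
    x, y, z = xyz
    return np.array([
        1.0, x, y, z,
        x * x, y * y, z * z, x * y, x * z, y * z,
        x ** 3, y ** 3, z ** 3,
        x * x * y, x * x * z, y * y * x, y * y * z, z * z * x, z * z * y,
        x * y * z,
    ])

def estimate_reflectance(xyz_src, M):
    """Estimated reflectance = basis matrix M times the polynomial terms."""
    return M @ poly3_terms(xyz_src)

def xyz_under(refl, illum, cmfs):
    """Tristimulus values of a reflectance under a given illuminant and observer."""
    k = 100.0 / np.sum(illum * cmfs[:, 1])
    return k * (refl * illum) @ cmfs

# Hypothetical data: 31 wavelengths and a fitted 31 x 20 basis matrix.
wl = np.arange(400, 701, 10)
M = np.random.rand(len(wl), 20)            # placeholder for the fitted basis matrix
cmfs = np.random.rand(len(wl), 3)          # placeholder for the 1931 2-deg CMFs
illum_dst = np.ones(len(wl))               # placeholder for the destination illuminant
refl_est = estimate_reflectance(np.array([41.2, 35.7, 20.1]), M)
print(xyz_under(refl_est, illum_dst, cmfs))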
Table XVI summarises the mean and 95th percentile of ΔE00 for dataset M1, where part of the same dataset was used for training, denoted by (M1-M1), i.e. (test data-training data). The two spectral estimation methods, wPInverse and Polynomial 3, give comparable mean and 95th percentile ΔE00 values, and they produce the lowest values of both compared to the other CATs and MATs. The Burns-R and Waypoint-MAT mean results are comparable and show less metameric mismatch than the other CATs and Oleari-MAT, although they are higher than Polynomial 3 and wPInverse. The Burns-CAT metameric mismatch results are comparable to CAT02.
Table XVII.
Mean and 95th percentile of ΔE00 between reference destination tristimulus values and adjusted tristimulus values obtained by Zhang et al. using the centroid method [45] for the M1 dataset, and results obtained using the wPInverse and Polynomial 3 spectral estimation methods for (M1-M1), going from source illuminants A and F11 to destination illuminant D65, respectively.
Method | Mean ΔE00 (A–D65) | Mean ΔE00 (F11–D65) | 95th percentile ΔE00 (A–D65) | 95th percentile ΔE00 (F11–D65)
Centroid method (Zhang et al.) | 1.27 | 1.82 | 3.42 | 4.99
wPInverse (M1–M1) | 0.74 | 0.95 | 1.90 | 2.68
Polynomial 3 (M1–M1) | 0.79 | 1.05 | 2.39 | 2.75
Zhang et al. [45] evaluated colour predictions under changing lights obtained using the centroid of the metameric mismatch volume for the Munsell dataset (M1) and found a mean ΔE00 of 1.27 and a 95th percentile of 3.42 when the destination illuminant is D65 and the source illuminant is A, as shown in Table XVII. The mean ΔE00 was 1.82 and the 95th percentile was 4.99 when the destination illuminant is D65 and the source illuminant is F11. For the same source and destination illuminant pairs, the wPInverse and Polynomial 3 spectral estimation methods produce a mean ΔE00 of less than 1.03 and a 95th percentile of less than 2.8. The two spectral estimation methods thus minimise metameric mismatch when the training data is similar to the test data, as is the case for M1-M1 in Table XVI.
Tables XVIII and XIX summarise the mean and maximum ΔE00, respectively, obtained from the different CATs and MATs and from estimated reflectances for the newsprint dataset N2, under different combinations of source illuminants (first row) and destination illuminants (second row). The two spectral estimation methods, wPInverse and Polynomial 3, were used with training datasets that were either the same as (N2) or different from (M1, W1, N1) the test dataset N2, as described in Section 5. Both spectral estimation methods produce the smallest mean and maximum ΔE00 when the training dataset is either from the same dataset (N2-N2) or from another newsprint dataset (N2-N1): the grand mean ΔE00 across all combinations of source and destination illuminant pairs is less than 0.5, while the average of the maximum ΔE00 values is less than 2. When the training dataset differs in material components from newsprint, as in (N2-W1), the mean and maximum ΔE00 of the two spectral estimation methods are still considerably better than those of the other CATs and MATs; their grand mean ΔE00 is less than 0.8 and the average of the maximum ΔE00 values is less than 3. When the training dataset consists of colour chips with higher variability in reflectances, i.e. (N2-M1), the metameric mismatch of the estimated reflectances for the newsprint dataset increases, and the mean and maximum ΔE00 are comparable to the results obtained from the other CATs, MATs and Burns-R. The overall good estimation results with low metameric mismatch may be because all the training and test datasets in this case have smooth reflectances with broader peaks.
Table XVIII.
Mean ΔE00 between reference tristimulus values and chromatically adapted/estimated tristimulus values for dataset N2 (newsprint). Source illuminants are in the first row and destination illuminants are in the second row.
Source illuminant: D50 | D65 | A | F11 | LED-V1
Destination illuminants: D65, A, F11 | D50, A, F11 | D50, D65, F11 | D50, D65, A | D50, D65, A | Avg
N2-N2CAT020.962.591.770.953.472.032.783.762.701.812.072.582.473.281.152.29
CAT161.022.831.871.013.762.252.974.012.701.922.322.622.693.561.172.45
L-Bradford0.932.381.810.913.212.102.623.602.531.852.162.382.203.001.142.19
Oleari-MAT0.531.752.540.531.982.791.761.991.452.572.781.561.85
Waypoint-MAT0.922.371.320.903.191.732.623.632.471.301.682.132.02
Burns-CAT0.962.611.850.943.462.222.833.852.581.872.252.452.523.341.112.32
Burns-R0.802.101.200.792.791.652.293.171.951.662.071.901.762.371.101.84
wPInverse0.110.340.370.110.450.460.370.480.260.390.470.250.360.460.080.33
Polynomial 30.10.290.330.10.380.420.320.420.20.350.430.20.310.40.040.29
N2-M1wPInverse0.782.111.180.772.811.332.313.162.461.181.332.141.822.471.011.79
Polynomial 30.872.341.320.853.141.482.553.482.961.281.432.522.002.711.132.00
N2-W1wPInverse0.180.480.740.170.650.790.520.700.780.770.800.770.520.640.250.58
Polynomial 30.200.540.810.200.720.880.580.780.790.830.860.790.560.700.250.63
N2-N1wPInverse0.140.410.490.140.540.560.440.580.490.520.570.500.400.520.120.43
Polynomial 30.130.390.440.130.520.490.430.560.510.460.50.510.370.490.110.40
Table XIX.
Maximum ΔE00 between reference tristimulus values and chromatically adapted/estimated tristimulus values for dataset N2 (newsprint). Source illuminants are in the first row and destination illuminants are in the second row.
Source illuminant: D50 | D65 | A | F11 | LED-V1
Destination illuminants: D65, A, F11 | D50, A, F11 | D50, D65, F11 | D50, D65, A | D50, D65, A | Avg
N2-N2CAT021.914.574.671.835.925.165.297.206.324.975.845.684.846.552.364.87
CAT161.815.094.841.736.775.605.266.976.125.206.505.766.217.632.345.19
L-Bradford1.804.544.781.755.875.385.306.995.435.106.184.784.326.152.364.72
Oleari-MAT1.243.795.141.184.855.933.894.963.525.305.933.124.07
Waypoint-MAT1.784.783.841.726.304.695.327.337.803.654.616.654.87
Burns-CAT1.794.644.961.716.185.605.196.896.495.346.656.135.316.802.365.07
Burns-R1.744.713.411.756.104.685.096.744.094.705.463.664.115.422.294.26
wPInverse0.621.692.260.632.262.691.972.531.142.392.651.121.742.290.481.76
Polynomial 30.722.021.970.752.662.652.433.030.992.202.720.892.272.880.201.89
N2–M1wPInverse1.754.773.231.696.234.315.327.156.943.134.035.594.526.042.144.46
Polynomial 31.935.333.741.907.074.625.897.827.693.664.315.754.956.702.284.91
N2–W1wPInverse0.932.602.701.093.674.272.143.062.432.402.812.462.082.710.992.42
Polynomial 31.042.683.081.033.493.963.274.232.403.434.212.403.044.000.912.88
N2–N1wPInverse0.581.762.010.572.312.521.942.441.522.182.621.541.852.270.421.77
Polynomial 30.571.612.170.582.102.691.912.411.552.362.711.531.852.350.381.78
Tables XX and XXI summarise the mean and maximum ΔE00 obtained from the different CATs and MATs and from estimated reflectances for the textile dataset T2, under different combinations of source illuminants (first row) and destination illuminants (second row). The two spectral estimation methods, wPInverse and Polynomial 3, were used with training datasets that were either the same as (T2) or different from (M1, F1, T1) the test dataset T2, as described in Section 5. Both spectral estimation methods produce the smallest mean and maximum ΔE00 when the training dataset is from the same dataset (T2-T2), with a grand mean ΔE00 of less than 0.8 and an average maximum ΔE00 of less than 3.4 across all combinations of illuminants. When a different textile dataset is used as training data, i.e. (T2-T1), the mean and maximum ΔE00 are still better than those of the other CATs, MATs and Burns-R; the grand mean ΔE00 is less than 1.2 and the average of the maximum ΔE00 values is less than 4.3 across all combinations of illuminants. When the training dataset differs somewhat in material components, as in (T2-F1), which has an optically brightened substrate (premium coated paper rather than textile), the mean and maximum ΔE00 values are still considerably better than those of the other CATs, MATs and Burns-R, and comparable to those obtained from (T2-T1); here the grand mean ΔE00 is less than 1.3 and the average maximum ΔE00 is less than 4.3 across all combinations of illuminants. The training datasets T1 and F1 have spectral peaks in the blue region similar to those in the T2 reflectances, which makes them a better choice for training than M1, whose reflectances have smoother and broader peaks. Therefore, when M1 is used as the training dataset, i.e. (T2-M1), the mean and maximum ΔE00 are worse than the results from the fluorescent datasets with closer reflectance characteristics. The (T2-M1) results are similar to the other results from the CATs and MATs, and especially comparable to those obtained from Burns-R; this may be due to the generalisation afforded by the smoothness and higher variability of the M1 reflectances.
Table XX.
Mean ΔE00 between reference tristimulus values and chromatically adapted/estimated tristimulus values for dataset T2 (textile). Source illuminants are in the first row and destination illuminants are in the second row.
Source illuminant: D50 | D65 | A | F11 | LED-V1
Destination illuminants: D65, A, F11 | D50, A, F11 | D50, D65, F11 | D50, D65, A | D50, D65, A | Avg
T2–T2CAT020.842.322.910.833.113.312.323.112.833.013.462.762.583.161.672.55
CAT160.982.842.990.983.783.532.823.762.863.123.772.822.983.671.682.84
L-Bradford0.802.032.960.792.753.372.152.942.513.073.552.422.172.681.662.39
Oleari-MAT0.611.823.380.602.183.611.822.212.693.403.612.852.40
Waypoint-MAT0.741.662.150.722.312.671.822.611.872.122.621.771.92
Burns-CAT0.862.303.120.853.093.702.423.322.503.213.922.362.643.211.632.61
Burns-R0.721.462.140.682.042.571.672.471.522.412.961.581.671.931.661.83
wPInverse0.240.580.890.240.801.100.640.900.500.951.150.530.640.850.280.69
Polynomial 30.250.591.020.240.811.200.680.950.661.161.380.700.630.880.160.75
T2–M1wPInverse0.581.301.730.561.812.121.422.031.441.772.151.441.591.921.371.55
Polynomial 30.621.471.870.612.032.241.602.251.691.922.301.661.662.021.401.69
T2–F1wPInverse0.410.941.320.411.311.581.031.441.071.391.651.051.131.380.761.12
Polynomial 30.451.121.560.451.521.871.211.641.061.631.931.051.371.630.761.28
T2–T1wPInverse0.300.741.430.290.991.620.851.181.161.511.71.21.281.540.771.10
Polynomial 30.300.771.390.291.021.580.911.251.091.561.791.131.111.420.531.08
Table XXI.
Maximum ΔE00 between reference tristimulus values and chromatically adapted/estimated tristimulus values for dataset T2 (textile). Source illuminants are in the first row and destination illuminants are in the second row.
Source illuminant: D50 | D65 | A | F11 | LED-V1
Destination illuminants: D65, A, F11 | D50, A, F11 | D50, D65, F11 | D50, D65, A | D50, D65, A | Avg
T2–T2CAT022.665.509.812.597.5710.116.379.218.4710.4311.377.565.897.795.397.38
CAT162.596.9210.112.509.1810.786.499.058.0611.0212.477.267.749.705.387.95
L-Bradford2.615.059.932.547.2610.346.059.158.0610.5611.737.294.937.315.397.21
Oleari-MAT2.105.908.892.037.9910.615.878.035.688.7610.416.256.88
Waypoint-MAT2.585.738.652.547.8610.296.699.7710.478.1510.049.417.68
Burns-CAT2.575.7810.582.487.6511.566.209.017.6211.5113.426.416.788.545.397.70
Burns-R2.455.196.722.407.208.855.908.605.638.4310.335.504.926.425.336.26
wPInverse1.292.553.921.353.855.243.14.681.883.875.242.163.174.841.023.21
Polynomial 31.082.414.171.053.465.282.994.252.514.395.292.362.84.110.753.13
T2–M1wPInverse2.545.577.262.497.618.366.479.167.247.188.776.284.747.114.846.37
Polynomial 32.766.226.232.718.538.727.069.777.166.218.335.655.217.784.716.47
T2-F1wPInverse1.692.955.111.684.266.123.435.362.725.46.723.183.643.874.084.01
Polynomial 31.563.654.921.464.745.894.25.412.524.895.782.624.715.994.084.16
T2-T1wPInverse1.463.075.251.414.236.633.995.743.605.537.053.594.516.342.054.30
Polynomial 31.513.135.311.384.196.604.115.793.025.977.523.504.125.891.864.26
Table XXII summarises the mean, median, 95th percentile and maximum ΔE00 obtained across all datasets and combinations of source and destination illuminants for the two spectral estimation methods wPInverse and Polynomial 3. The grand mean ΔE00 is below 1 and the average of the maximum ΔE00 values is below 4; the overall metameric mismatch would be lower still if only those cases where the training data is similar to the test data were considered.
Table XXII.
Mean, median, 95th percentile and maximum of ΔE00 between reference tristimulus values and estimated tristimulus values obtained across all datasets and all 12 pairs of illuminants.
Method | Mean | Median | 95th percentile | Maximum
wPInverse | 0.85 | 0.71 | 2.04 | 3.62
Polynomial 3 | 0.89 | 0.72 | 2.24 | 3.83
A MAT is expected to perform well when there is a good degree of material similarity between the training reflectances and the test reflectances. On this basis, five categories of MAT are possible: (1) spectral-estimation-based MATs that predict similar reflectance characteristics, (2) spectral-estimation-based MATs that predict less similar reflectance characteristics, (3) matrix-based MATs optimised to predict similar reflectance characteristics, (4) matrix-based MATs optimised to predict less similar reflectance characteristics, and (5) matrix-based CATs optimised to predict corresponding colours. wPInverse and Polynomial 3 fall under category 1 or 2 depending on the material similarity of the training data used. Burns-R falls under category 2. Oleari-MAT and Waypoint-MAT fall under category 3 or 4 depending on how they are optimised. Burns-CAT falls under category 4. CAT02, CAT16 and L-Bradford fall under category 5.
8.
Prediction of Corresponding Colour Data
Table XXIII shows the mean colour differences (in ΔE94, for comparison with previous work) obtained for different corresponding colour datasets (CSAJ, Helson, Lam & Rigg and LUTCHI) using different CATs and MATs. The mean ΔE94 is also given for the spectral estimation methods wPInverse and Polynomial 3, where the reflectances were first estimated from the source corresponding colour using three different training datasets, M1, N2 and T2, respectively. The datasets selected for this analysis and their source and destination illuminants are CSAJ (D65, A), Helson (C, A), Lam & Rigg (D65, A) and LUTCHI (D65, D50). A small inaccuracy arises from the use of standard illuminants in the spectral estimation, since the measured illuminants in the corresponding colour datasets differ slightly from the standard illuminants. The spectral estimation methods with all the different training datasets perform similarly to or better than Oleari-MAT and Wpt-MAT in matching the corresponding colour data, and all the CATs, including Burns-CAT, perform better in this case. Similar tests were carried out by Derhak et al. using a Wpt-based spectral reconstruction CAT, which also utilises Burns' spectral estimation and lightness scaling of the final tristimulus value to the source tristimulus value [42]; the results obtained were comparable to those in this work. Such a CAT, using characterisation reflectances for print, would fall under category 1.
Table XXIII.
Overall ΔE94 between destination tristimulus values of corresponding colours and adapted/estimated tristimulus values under varying destination lights. The spectral estimation methods estimated reflectances of the corresponding colours using M1, N2 and T2 as training data, respectively.
Method | CSAJ | Helson | Lam & Rigg | LUTCHI | Average
CAT02 | 3.67 | 3.60 | 2.98 | 3.41 | 3.42
CAT16 | 3.94 | 4.13 | 3.47 | 3.30 | 3.71
L-Bradford | 3.71 | 3.47 | 2.84 | 3.43 | 3.36
Burns-CAT | 3.81 | 4.19 | 3.15 | 3.68 | 3.71
Oleari-MAT | 4.63 | 4.72 | 3.89 | 4.27 | 4.38
Wpt-MAT | 4.28 | 4.02 | 3.83 | 4.53 | 4.17
wPInverse (M1) | 4.14 | 3.76 | 3.48 | 4.51 | 3.97
Polynomial 3 (M1) | 4.22 | 3.77 | 3.51 | 4.46 | 3.99
wPInverse (N2) | 4.71 | 3.65 | 3.42 | 4.29 | 4.02
Polynomial 3 (N2) | 4.39 | 3.76 | 3.47 | 4.34 | 3.99
wPInverse (T2) | 4.69 | 3.65 | 3.46 | 4.36 | 4.04
Polynomial 3 (T2) | 4.35 | 3.57 | 3.31 | 4.37 | 3.90
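For comparison, the classical matrix CATs in Table XXIII (CAT02, CAT16, L-Bradford) share the von Kries structure: transform XYZ to a cone-like space, scale each channel by the ratio of the destination white to the source white, and transform back. A minimal sketch of a one-step, fully adapted transform using the linearised Bradford matrix; the white points shown are illustrative values for D65 and A, and this sketch is not necessarily the exact variant used in the paper:

import numpy as np

# Linearised Bradford "cone" matrix.
M_BFD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def von_kries_cat(xyz, white_src, white_dst, M=M_BFD):
    """One-step, fully adapted von Kries transform (linear Bradford by default)."""
    rgb_src_white = M @ white_src
    rgb_dst_white = M @ white_dst
    scale = np.diag(rgb_dst_white / rgb_src_white)   # per-channel gain
    return np.linalg.inv(M) @ scale @ M @ xyz

# Illustrative white points (Y = 100): D65 and A.
white_d65 = np.array([95.047, 100.0, 108.883])
white_a = np.array([109.850, 100.0, 35.585])
print(von_kries_cat(np.array([41.2, 35.7, 20.1]), white_a, white_d65))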
9.
A Spectral Estimation Workflow for Colour Management
In the previous sections, we have established that the spectral estimation methods wPInverse and Polynomial 3 perform very well in predicting colour under changing illuminants when the training dataset is carefully chosen by matching material characteristics. However, the weight matrix W in the case of wPInverse has to be calculated for each pixel value, which is both computationally time consuming and complicated to implement inside an ICC profile. Therefore, owing to its ease of implementation and low computation time, we propose a spectral estimation workflow that integrates the Polynomial 3 method into colour management using matrices and the stack-based calculator element programming of ICC.2. We define an absolute rendering transform from colorimetric PCS XYZ values to the corresponding reflectances in a BToD3Tag. The BToD3Tag defines a colour transform from a spectrally-based PCS, determined by the spectralPCS and PCS fields in the header, to device colours [72]. The DataColourSpace tag in the header is used to define the number of wavelengths. A multiProcessElementType, which incorporates a CalculatorElement tag, is defined in the BToD3Tag. The multiProcessElementType defines and processes a sequence of processing elements in the order of their position; each processing element passes its results to the next, with the final element providing the result of the multiProcessElementType. The basis matrix M from Eq. (4) is stored using a matrixElement tag in an external file, which is imported when the ICC XML file is compiled to create the ICC profile. A macro is defined to expand the XYZ input to the third-order polynomial terms in Table I. The spectral estimation model is encoded in the main function: the input channels are expanded using the polynomial expansion macro, and then a matrix multiplication (mtx) is called to multiply matrix M by the polynomial terms of XYZ. The code inside the main function under the CalculatorElement tag is shown in Figure 6. The DToB3Tag defines a colour transform from device colours to a spectrally-based PCS to achieve an absolute rendering; therefore, a transform from reflectance values to XYZ using the illuminant and observer functions can be defined in the DToB3Tag if colorimetric output is required. Across datasets M1, N2 and T2, the average roundtrip error in ΔE00 between the starting colorimetry and the XYZ values obtained after applying the spectral estimation and converting the estimated reflectances back to colorimetry is 0.026 at 32-bit precision using the ICC.2 workflow, and 4.15E-10 when computed at double precision in MATLAB.
Figure 6.
Polynomial 3 spectral estimation using Calculator element programming.
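Outside an ICC engine, the same transform and its roundtrip check can be mirrored in a few lines: apply the basis matrix to the expanded XYZ (the BToD3 direction), integrate the estimated reflectance with the PCS illuminant and observer (the DToB3 direction), and compare the result with the starting colorimetry. The sketch below is structural only, with placeholder data, a trivial stand-in for the polynomial expansion and a Euclidean XYZ difference in place of ΔE00:

import numpy as np

def roundtrip_error(xyz_batch, expand, M, illum_pcs, cmfs, delta_e):
    """Colorimetric roundtrip check for an encoded spectral estimation transform.

    expand:  the polynomial expansion used when fitting the basis matrix M.
    M:       basis matrix mapping expanded XYZ to reflectance.
    Returns the mean colour difference between the input XYZ and the XYZ
    recomputed from the estimated reflectance under the PCS illuminant.
    """
    k = 100.0 / np.sum(illum_pcs * cmfs[:, 1])
    errs = []
    for xyz in xyz_batch:
        refl = M @ expand(xyz)                      # BToD3: XYZ -> reflectance
        xyz_back = k * (refl * illum_pcs) @ cmfs    # DToB3: reflectance -> XYZ
        errs.append(delta_e(xyz, xyz_back))
    return float(np.mean(errs))

# Hypothetical usage with placeholder data and a Euclidean stand-in for dE00.
wl = np.arange(400, 701, 10)
M = np.random.rand(len(wl), 4)                      # placeholder basis matrix
expand = lambda t: np.array([1.0, *t])              # trivial first-order stand-in
cmfs = np.random.rand(len(wl), 3)
illum = np.ones(len(wl))
de = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
print(roundtrip_error(np.random.rand(5, 3) * 100, expand, M, illum, cmfs, de))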
10.
Conclusion
This study presents reflectances estimated using different spectral estimation methods and training datasets with known material components or printing conditions. The results show that if the training dataset has material components similar to those of the test dataset, the estimated reflectances have reduced metameric mismatch under different illuminants. It is also observed that when the training reflectances are somewhat dissimilar (such as using training reflectances of web-offset on lightweight coated paper to estimate reflectances of newsprint data), the results still lead to acceptable metameric differences, because the reflectances are smooth and have similar peak wavelengths with some difference in amplitude. Similarly acceptable metameric differences are obtained when training reflectances of offset-litho on premium coated paper are used to estimate reflectances of the textile dataset, where both exhibit fluorescence and therefore have reflectance curves with similar sharp peaks in the blue region. This suggests that, based on similarities in material components, many spectral datasets or printing conditions can be grouped together, with a single reflectance dataset serving as a common training dataset for reflectance estimation of that group. The weighted pseudo-inverse (wPInverse) and third-order polynomial (Polynomial 3) spectral estimation methods performed best in all the tests, with Polynomial 2 and wPCA not significantly different. These methods are also neither computationally costly nor complex to implement.
This study also compares the sensor adjustment performance of spectral estimation with that of chromatic adaptation transforms (CATs) and material adjustment transforms (MATs). The two spectral estimation methods, wPInverse and Polynomial 3, with good training data selection, performed significantly better than the other CATs and MATs in minimizing metameric mismatch. When the training dataset is not a good material match or has high variability, such as using Munsell colour chip reflectances to estimate reflectances for the newsprint or textile datasets, the metameric mismatch increases, although the performance remains similar to that of the other CATs. This paper has shown the importance of the training dataset, and that by considering spectral similarities between datasets, simple spectral estimation methods such as the weighted pseudo-inverse or third-order polynomial can be used as a sensor adjustment transform that minimizes metameric mismatch.
We show that even when the training dataset does not exactly match the material components of the test data, i.e. when the two do not belong to the same dataset but share some spectral similarities, spectral estimation produces reflectances with low metameric mismatch, and such methods can be a good alternative to traditional CATs and MATs for predicting tristimulus values under different illuminants when material constancy is the aim. When spectral data is not required, a more general MAT, i.e. a 3 × 3 transform, can be used. Depending on the application, specific training data can also be chosen to optimise an existing MAT and improve material constancy.
Simple spectral estimation methods were evaluated such that they can be easily integrated into a colour management workflow. Among the best performing spectral estimation methods, Polynomial 3 can be easily and efficiently encoded in an ICC profile as matrices using the calculator element programming of ICC.2. In future work, regularisation of these methods can be considered. Regularisation requires proper selection of the regularisation term to achieve both good generalisation and accuracy, and it can be assessed whether regularisation helps spectral estimation preserve material constancy or colour constancy. The final matrices after regularisation can then be encoded into ICC profiles.
This study provides an insight into the need for standardising spectral estimation usage in colour management, where every step is essential. These steps include selecting a reference training dataset that can represent spectral data for a group of print data, determining the spectral estimation method, identifying the application’s goal (e.g. material constancy or colour constancy), and understanding the workflow to encode it in an ICC profile.
Acknowledgment
This project received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 814158. The authors would like to thank Max Derhak, Chris Bai, William Li, Danny Rich and Markus Barbieri, who provided data and comments at an early stage of this work.
References
1LinC. R.XuJ. F.XuJ. L.2012Prediction algorithm of spectral reflectance of spot color ink based on color parallel and superposition modelAdv. Mater. Res.430117611821176–8210.4028/www.scientific.net/AMR.430-432.1176
2TzengD.-Y.“Spectral-based color separation algorithm development for multiple-ink color reproduction,” Unpublished doctoral dissertation (Rochester Institute of Technology, Rochester, 1999)
3GerhardtJ.HardebergJ. Y.2008Spectral color reproduction minimizing spectral and perceptual color differencesColor Res. Appl.33494504494–504
4WangB.XuH.LuoM. R.GuoJ.2011Spectral-based color separation method for a multi-ink printerChinese Opt. Lett.906330110.3788/COL201109.063301
5SunB.LiuH.ZhouS.2014Spectral separation for multispectral image reproduction based on constrained optimization methodJ. Spectroscopy201410.1155/2014/345193
6UrbanP.RosenM. R.BernsR. S.Spectral gamut mapping framework based on human color visionProc. IS&T/SID CIC16: Sixteenth Color Imaging Conf.2008Vol. 2008IS&TSpringfield, VA548553548–53
7HallR.GreenbergD.1983A testbed for realistic image synthesisIEEE Comput. Graph. Appl.3102010–2010.1109/MCG.1983.263292
8WilkieK. D. A. C. A.PurgathoferW.Tone reproduction and physically based spectral renderingEurographics2002WileyHoboken, NJ
9MengJ.SimonF.HanikaJ.DachsbacherC.2015Physically meaningful rendering using tristimulus coloursComput. Graph. Forum34314031–40
10EliavD.RothS.ChorinM. B.2007Multi-primary spectral display for soft proofingJ. Imaging Sci. Technol.51492501492–50110.2352/J.ImagingSci.Technol.(2007)51:6(492)
11Van SongT. P.AndraudC.Ortiz-SegoviaM. V.Towards spectral prediction of 2.5 d prints for soft-proofing applications2016 Sixth Int’l. Conf. on Image Processing Theory, Tools and Applications (IPTA)2016IEEEPiscataway, NJ161–610.1109/IPTA.2016.7820957
12BabaeiV.AmirshahiS. H.AgahianF.2011Using weighted pseudo-inverse method for reconstruction of reflectance spectra and analyzing the dataset in terms of normalityColor Res. Appl.36295305295–30510.1002/col.20613
13WuG.ShenX.LiuZ.YangS.ZhuM.2016Reflectance spectra recovery from tristimulus values by extraction of color feature matchOpt. Quantum Electronics481131–1310.1007/s11082-015-0274-3
14SchettiniR.BaroloB.1996Estimating reflectance functions from tristimulus valuesAppl. Signal Process.3104115104–15
15FairmanH. S.BrillM. H.2004The principal components of reflectancesColor Res. Appl.29104110104–1010.1002/col.10230
16AyalaF.EchávarriJ. F.RenetP.NegueruelaA. I.2006Use of three tristimulus values from surface reflectance spectra to calculate the principal components for reconstructing these spectra by using only three eigenvectorsJ. Opt. Soc. Am. A23202020262020–610.1364/JOSAA.23.002020
17AgahianF.AmirshahiS. A.AmirshahiS. H.2008Reconstruction of reflectance spectra using weighted principal component analysisColor Res. Appl.33360371360–7110.1002/col.20431
18ZhaoY.BernsR. S.2007Image-based spectral reflectance reconstruction using the matrix R methodColor Res. Appl.32343351343–5110.1002/col.20341
19StigellP.MiyataK.Hauta-KasariM.2007Wiener estimation method in estimating of spectral reflectance from RGB imagesPattern Recognition and Image Analysis17233242233–4210.1134/S1054661807020101
20AmirshahiS. H.AmirhahiS. A.2010Adaptive non-negative bases for reconstruction of spectral data from colorimetric informationOptical Review17562569562–910.1007/s10043-010-0101-9
21ZuffiS.SantiniS.SchettiniR.Correcting for not feasible values in reflectance function estimation using linear modelsAIC Colour 05 – 10th Congress of the Int’l. Colour Association2005Vol. 5162916321629–32
22López-ÁlvarezM. A.Hernández-AndrésJ.ValeroE. M.RomeroJ.2007Selecting algorithms, sensors, and linear bases for optimum spectral recovery of skylightJ. Opt. Soc. Am. A24942956942–5610.1364/JOSAA.24.000942
23DupontD.2002Study of the reconstruction of reflectance curves based on tristimulus values: comparison of methods of optimizationColor Res. Appl.27889988–9910.1002/col.10031; Endorsed by Inter-Society Color Council, The Colour Group (Great Britain), Canadian Society for Color, Color Science Association of Japan, Dutch Society for the Study of Color, The Swedish Colour Centre Foundation, Colour Society of Australia, Centre Français de la Couleur
24ZuffiS.SchettiniR.2003Reflectance function estimation from tristimulus valuesProc. SPIE5293222231222–3110.1117/12.527723
25BiancoS.2010Reflectance spectra recovery from tristimulus values by adaptive estimation with metameric shape correctionJ. Opt. Soc. Am. A27186818771868–7710.1364/JOSAA.27.001868
26HajipourA.Shams-NateriA.2017Effect of classification by competitive neural network on reconstruction of reflectance spectra using principal component analysisColor Res. Appl.42182188182–810.1002/col.22050
27AbedF. M.AmirshahiS. H.AbedM. R. M.2009Reconstruction of reflectance data using an interpolation techniqueJ. Opt. Soc. Am. A26613624613–2410.1364/JOSAA.26.000613
28ChenE.ChouT.-R.Spectral reflectance recovery of various materials based on linear interpolation with nonparametric metameric spectra of extreme pointsProc. 3rd Int’l. Conf. on Cryptography, Security and Privacy2019ACMNew York, NY247255247–5510.1145/3309074.3309118
29KimB. G.HanJ.-w.ParkS.-b.2012Spectral reflectivity recovery from the tristimulus values using a hybrid methodJ. Opt. Soc. Am. A29261226212612–2110.1364/JOSAA.29.002612
30BernsR. S.JrF. W. BillmeyerSacherR. S.1985Methods for generating spectral reflectance functions leading to color-constant propertiesColor Res. Appl.10738373–8310.1002/col.5080100205
31van TrigtC.1990Smoothest reflectance functions. I. Definition and main resultsJ. Opt. Soc. Am. A7189119041891–90410.1364/JOSAA.7.001891
32van TrigtC.1990Smoothest reflectance functions. II. Complete resultsJ. Opt. Soc. Am. A7220822222208–2210.1364/JOSAA.7.002208
33LuoM. R.RhodesP. A.1999Corresponding-colour datasetsColor Res. Appl.24295296295–610.1002/(SICI)1520-6378(199908)24:4<295::AID-COL10>3.0.CO;2-K
34LuoM.HuntR.1998A chromatic adaptation transform and a colour inconstancy indexColor Res. Appl.23154158154–810.1002/(SICI)1520-6378(199806)23:3<154::AID-COL7>3.0.CO;2-P; Endorsed by Inter-Society Color Council, The Colour Group (Great Britain), Canadian Society for Color, Color Science Association of Japan, Dutch Society for the Study of Color, The Swedish Colour Centre Foundation, Colour Society of Australia, Centre Français de la Couleur
35SusstrunkS. E.HolmJ. M.FinlaysonG. D.2000Chromatic adaptation performance of different RGB sensorsProc. SPIE4300172183172–8310.1117/12.410788
36LiC.XuY.WangZ.LuoM. R.CuiG.MelgosaM.BrillM. H.PointerM.2018Comparing two-step and one-step chromatic adaptation transforms using the CAT16 modelColor Res. Appl.43633642633–4210.1002/col.22226
37MacAdamD. L.1963Chromatic adaptation. II. Nonlinear hypothesisJ. Opt. Soc. Am.53144114451441–510.1364/JOSA.53.001441
38FairchildM. D.Von Kries 2020: Evolution of degree of chromatic adaptationProc. IS&T CIC28: Twenty-Eighth Color and Imaging Conf.2020Vol. 2020IS&TSpringfield, VA252257252–7
39ISO 15076-1:2010“Image technology colour management - Architecture, profile format, and data structure,” ISO, Geneva (2010)
40MoroneyN.FairchildM.HuntR.LiC.The CIECAM02 color appearance modelProc. IS&T/SID CIC10: Tenth Color Imaging Conf.2002IS&TSpringfield, VA232723–7
41LiC.LiZ.WangZ.XuY.LuoM. R.CuiG.MelgosaM.BrillM. H.PointerM.2017Comprehensive color solutions: CAM16, CAT16, and CAM16-UCSColor Res. Appl.42703718703–1810.1002/col.22131
42DerhakM. W.LuoE. L.GreenP. J.Fast chromatic adaptation transform utilizing Wpt (Waypoint) based spectral reconstructionLondon Imaging Meeting20202020IS&TSpringfield, VA495349–53
43DerhakM. W.BernsR. S.2015Introducing Wpt (Waypoint): A color equivalency representation for defining a material adjustment transformColor Res. Appl.40535549535–4910.1002/col.21928
44LogvinenkoA. D.FuntB.GodauC.2013Metamer mismatchingIEEE Trans. Image Process.23344334–4310.1109/TIP.2013.2283148
45ZhangX.FuntB.MirzaeiH.2016Metamer mismatching in practice versus theoryJ. Opt. Soc. Am. A33A238A247A238–4710.1364/JOSAA.33.00A238
46LogvinenkoA. D.2009An object-color spaceJ. Vision9555–10.1167/9.11.5
47OleariC.MelgosaM.HuertasR.2011Generalization of color-difference formulas for any illuminant and any observer by assuming perfect color constancy in a color-vision model based on the OSA-UCS systemJ. Opt. Soc. Am. A28222622342226–3410.1364/JOSAA.28.002226
48BurnsS. A.2019Chromatic adaptation transform by spectral reconstructionColor Res. Appl.44682693682–9310.1002/col.22384
49El GhaouiL.LebretH.1997Robust solutions to least-squares problems with uncertain dataSIAM J. Matrix Anal. Appl.18103510641035–6410.1137/S0895479896298130
50KangH. R.Computational Color Technology2006SPIE PressBellingham
51ChauW.-K. W.Colour reproduction for reflective images,” Ph.D. thesis (1999)
52DerhakM. W.Spectrally Based Material Color Equivalency: Modeling and Manipulation2015Rochester Institute of Technology
53FairchildM. D.Color Appearance Models2013John Wiley & Sons
54GreenP.HabibT.Chromatic adaptation in colour managementInt’l. Workshop on Computational Color Imaging2019Vol 11418SpringerCham134144134–4410.1007/978-3-030-13940-7_11
55MoroneyN.FairchildM.HuntR.LiC.“A Colour Appearance Model for Color Management Systems: CIECAM02,” Technical Report, Commission Internationale de l’Elcairage (2004)
56TechnologyR. I.“(n.d.). (Wpt) Waypoint Normalization,” https://www.rit.edu/science/sites/rit.edu.science/files/2019-03/MCSL_Waypoint.pdf. (Accessed: 2023-01-20)
57Eastern FinlandU.(n.d.) “Munsell colors glossy (all) (Spectrofotometer measured),” https://sites.uef.fi/spectral/munsell-colors-glossy-all-spectrofotometer-measured/ (Accessed: 2023 -01-20)
58FOGRA51 spectral dtaset(n.d.). https://www.color.org/chardata/fogra51.xalter (Accessed: 2023-01-20)
59NussbaumP.HardebergJ. Y.2012Print quality evaluation and applied colour management in coldset offset newspaper printColor Res. Appl.37829182–9110.1002/col.20674
60CIE 15: 2018Colorimetry (CIE Central Bureau, Vienna, 2018)
61WangZ.ZhaoB.LiJ.LuoM. R.PointerM. R.MelgosaM.LiC.2017Interpolation, Extrapolation, and Truncation in Computations of CIE Tristimulus ValuesColor Res. Appl.42101810–810.1002/col.22016
62YangH.ZhangJ.YangZ.ZhouJ.XieW.CuiS.2021Methods for improving the accuracy of CIE tristimulus values of object color by calculation Part II: Improvement on measurement wavelength rangesJ. Engineered Fibers and Fabrics161558925020985964
63CIE Publ. 167:2005, “Recommended practice for tabulating spectral data for use in colour computations,” Technical Report, Vienna: CIE Central Bureau (2005)
64LiC.CIETC1-71 perspective: An overview on accurately computing tristimulus valuesProc. IS&T/SID CIC16: Sixteenth Color Imaging Conf.2008Vol. 16IS&TSpringfield, VA222722–710.2352/CIC.2008.16.1.art00005
65LiC.LuoM. R.RiggB.2004A new method for computing optimum weights for calculating CIE tristimulus valuesColor Res. Appl.299110391–10310.1002/col.10229; Endorsed by Inter-Society Color Council, The Colour Group (Great Britain), Canadian Society for Color, Color Science Association of Japan, Dutch Society for the Study of Color, The Swedish Colour Centre Foundation, Colour Society of Australia, Centre Français de la Couleur
66ISO 3664:2009 “Graphic technology and photography—Viewing conditions,” (ISO, Geneva, 2009)
67LuoM.RiggB.SmithK.2003CMC colour inconstancy index; CMCCON02Coloration Technology119280285280–510.1111/j.1478-4408.2003.tb00184.x
68SchandaJ.LuoM. R.CIE 1931 and 1964 standard colorimetric observers: History, data, and recent assessmentsEncyclopedia of Color Science and Technology2016SpringerCham125129125–9
69Hernández-AndrésJ.RomeroJ.LeeR. L.2001Colorimetric and spectroradiometric characteristics of narrow-field-of-view clear skylight in Granada, SpainJ. Opt. Soc. Am. A18412420412–2010.1364/JOSAA.18.000412
70CIE 15: Colorimetry, CIE Central Bureau, Vienna (2004)
71SharmaG.Rodríguez-PardoC. E.2021Geometry of multiprimary display colors I: Gamut and color controlIEEE Access9965739659796573–9710.1109/ACCESS.2021.3093395
72ICC “Specification ICC.2:2019 Image technology colour management — Extensions to architecture, profile format and data structure,” Int’l. Color Consortium (2019)