Papers Presented at Archiving 2023
Volume: 67 | Article ID: 030403
An Experiment-based Comparative Analysis of Pigment Classification Algorithms using Hyperspectral Imaging
DOI: 10.2352/J.ImagingSci.Technol.2023.67.3.030403 | Published Online: May 2023
Abstract

Hyperspectral imaging techniques are widely used in cultural heritage for documentation and material analysis. Pigment classification of an artwork is an essential task. Several algorithms have been used for hyperspectral data classification, and the effectiveness of each algorithm depends on the application domain. However, very few have been applied to pigment classification tasks in the cultural heritage domain. Most of these algorithms work effectively for spectral shape differences but might not perform well for spectra that differ in magnitude, or for spectra that are nearly identical in shape yet belong to two different pigments. In this work, we evaluate the performance of different supervised algorithms and a few machine learning models for the pigment classification of a mockup using hyperspectral imaging. The results obtained show the importance of choosing appropriate algorithms for pigment classification.

  Cite this article 

Dipendra J. Mandal, Marius Pedersen, Sony George, Hilda Deborah, Clotilde Boust, "An Experiment-based Comparative Analysis of Pigment Classification Algorithms using Hyperspectral Imaging," in Journal of Imaging Science and Technology, 2023, pp. 030403-1 - 030403-18, https://doi.org/10.2352/J.ImagingSci.Technol.2023.67.3.030403

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2023
 Open access
  Article timeline 
  • Received: January 2023
  • Accepted: April 2023
  • Published: May 2023
1.
Introduction
Hyperspectral Imaging (HSI) technology, initially developed for remote sensing applications, is increasingly used in the Cultural Heritage (CH) domain for analyzing artwork and has shown great potential for scientific analysis. In CH, proper pigment classification of artwork materials such as paintings is essential for conservators to precisely analyze an object and its historical value. Generally, reflection, transmission, and absorption of electromagnetic energy by a given material produce a unique spectrum as a function of wavelength. The shape of the spectrum is distinctive because every material has a different chemical composition and an inherent physical structure [1]. For pigment classification using HSI, supervised classification algorithms are mainly used; they compare the spectrum within a region of interest against spectral library spectra with a specific tolerance [2, 3].
Many supervised classification algorithms exist for HSI, mostly in remote sensing applications, for example, mineral identification [4, 5]. However, a few of these algorithms have been adopted, directly or with some modification, in other application domains such as medical imaging [6, 7], food and agriculture [8-10], and forensics [11]. Moreover, to the best of our knowledge, only a few have been implemented in the CH domain, especially for pigment classification of artwork such as paintings. HSI acquisition for CH is usually performed under controlled laboratory conditions, where the distance between the camera and the object is relatively small and one has control over illumination type and geometry. In contrast, for remote sensing, HSI data are collected using natural illumination with a considerably larger distance between the camera and target, causing temporal illumination variations and atmospheric effects. Due to these differences between the two application domains, various classification algorithms adopted in remote sensing cannot be directly adapted or might not work effectively for CH applications. For example, an algorithm insensitive to intensity variation can perform well in remote sensing. However, it might not perform with the same accuracy for CH objects because magnitude measures are essential in CH. Faded or aged pigments [12], pure pigments mixed with different binding mediums [13], mixed pigments (e.g., pigments mixed in different weight percentages of lead white [14]), etc., can have variations in magnitude, which are essential to determine for both diagnostic and conservation purposes. Very few of these algorithms have been used for pigment identification of artwork using HSI, and therefore it is necessary to explore and evaluate them. Furthermore, many materials associated with CH lack pure endmembers, particularly when they undergo weathering [15], aging [16-18], or restoration processes over time [19]. Therefore, accurately determining the composition of a specific material, or differentiating it from other materials within an image, can pose challenges, making the task of identifying and mapping materials in HSI more difficult.
Deep learning has recently provided new possibilities by solving more complex problems in many applications [20, 21]. In CH, the spectra of pigments are affected by the type of medium used as binder; spectra might look identical, i.e., show only a small shift in peak position or a small change in magnitude [13], and under such conditions most supervised algorithms do not perform well for classification. However, distinguishing such conditions might be important for art historians and conservators when selecting the proper conservation methods. Also, in the case of fading, there might only be a minor change in the magnitude of a spectrum. In medical imaging, Zhi et al. [22] used a Support Vector Machine (SVM) for tongue diagnosis using HSI, where the spectra obtained from the surface of the tongue under different conditions changed mainly in magnitude. Devassy et al. [9] used a One-dimensional Convolutional Neural Network (1D-CNN) to classify strawberries and found that the results were better than those of supervised algorithms. To the best of our knowledge, using deep learning-based models for pigment classification of artwork is not yet common practice, and it is therefore worthwhile to explore their potential.
This paper presents a comparative experimental analysis of various supervised algorithms and machine learning models for pigment classification on a mockup using HSI in the Visible Near-Infrared (VNIR) region. The algorithms used are the Euclidean Distance (ED), Spectral Angle Mapper (SAM), Spectral Correlation Mapper (SCM), Spectral Information Divergence (SID), Spectral Similarity Scale (SSS), and the hybrid combinations SID-SAM and SID-SCM. We also used the Jeffries-Matusita (JM) distance function combined with SAM (JM-SAM). The machine learning models used are SVM, a Fully Connected Neural Network (FC-NN), and a 1D-CNN. The rest of this paper is structured as follows. Section 2 provides an overview of data processing techniques and algorithms, followed by details about the algorithms used in Section 3. Object details, imaging technology, and the experimental framework used are given in Section 4. Section 5 covers the results with a discussion. Finally, Section 6 presents our conclusions, followed by future work.
2.
Overview of Algorithms and Processing Techniques
Generally, a spectral matching technique is employed for pigment classification, i.e., finding the spectral similarity between two spectra at any given pixel in an image. The best fit indicates the most likely reference material for a given pixel. The distinction between the different algorithms used for classification lies in their ability to account for shape and magnitude differences between two spectra. This section provides an overview of the classification algorithms employed with HSI in various application domains.
Shivakumar et al. [23] compared the performance of SAM and SCM for classifying nine different classes in remote sensing applications using HSI. There was spectral overlap between the datasets for some of the classes, and they identified that SCM was more efficient than SAM for classes with highly similar spectra. Similarly, SCM was compared with SAM for mineral analysis [24], and it was found that the SCM algorithm delivered better results due to its wider data range, from −1 to 1. Qin et al. [25] used SID methods to identify lesions in citrus using HSI. Devassy et al. [26] explored the performance of five different algorithms, namely SAM, SCM, Euclidean Distance (ED), SID, and Binary Encoding (BE), for the task of ink classification using HSI. The overall accuracy (averaged over all inks used) for the SAM algorithm was high compared to all other methods used. None of the methods worked effectively to distinguish inks that had nearly similar spectral signatures differing only in magnitude.
Given two vectors (spectra), Change Vector Analysis (CVA) computes the change in spectral vectors and compares their magnitude with a specified threshold value [27]. It was originally designed for only two spectral dimensions (two spectral bands); however, using the directional cosine approach, it can be extended to an N-dimensional space [28] and is computed using Eq. (1).
(1)
\alpha_i = \cos^{-1}\left(\frac{t_i - r_i}{\sqrt{\sum_{i=1}^{n_b}(t_i - r_i)^2}}\right),
where t_i and r_i are the test and reference spectra, and n_b is the total number of bands with i = 1, 2, ..., n_b. In this method, we obtain a number of angles α_i equal to the number of bands, which makes the computation complex; details on this and its drawbacks are explained in Ref. [29]. de Carvalho Júnior et al. [29], in their study of change detection methods in a tropical environment using HSI, proposed a new approach that calculates the spectral direction of change using the SAM and SCM methods and the magnitude using the Mahalanobis distance and the Euclidean distance. The best result was obtained using SAM for similarity and ED for magnitude.
Many hybrid approaches to HSI classification have shown improved results. Using a hybrid of SAM and SID was found to produce better results than using either alone [30]. Naresh et al. [31] computed the hybrid of SCM and SID (SID-SCM) for the classification of vigna species and compared their result with the hybrid of SAM and SID. They experimented with various spectral regions and found that results were better for the 400-700 nm region. Zhang et al. [32] used a hybrid approach combining Minimum Noise Fraction (MNF) and SAM methods to identify defective tomatoes.
Li et al. [33] proposed a new method called Extended Spectral Angle Mapper (ESAM) for detecting disease in citrus plants in multi- and hyperspectral datasets. The result was compared with a supervised method, the Mahalanobis distance, and an unsupervised method, k-means; ESAM was found to have better accuracy (86%) than the other two methods (around 64%). The Jeffries-Matusita (JM) distance [34] is mainly used as a separability criterion and for optimal band selection, so that only the most distinct bands are selected for the data classification task [35, 36]. The JM method is a pairwise distance measure that applies mostly to two-class cases. Authors have proposed several extensions of JM [37] for use in multiclass classification; the most common is to take the average JM distance computed over all pairs of classes. Deborah et al. [38] evaluated the performance of four different distance functions, namely Root Mean Square Error (RMSE), Goodness-of-Fit Coefficient (GFC) [39], Jeffrey divergence, and Levenshtein distance, on both synthetic and real hyperspectral datasets to find a suitable distance measure for spectral image processing. They found that for magnitude changes, only RMSE, followed by Jeffrey divergence, performed in the desired way.
Deborah et al. [40] compared different distance functions for pigment classification tasks on HSI datasets in the presence of spectral noise and variations. Aiming to identify the appropriate methods based on suitable selection criteria, they found the Euclidean distance of the Cumulative Spectrum (ECS) to be the most suitable distance function for spectral data. However, their study evaluated these distance functions on artificially simulated spectra and some real spectra from pigment patches of the Kremer pigment chart [41]; these charts are screen printed, and the pigments are usually in a water-based binder, which might not be an exact representation of the real spectra obtained from an artwork. The Bhattacharyya Distance (BD) measures the separability between two classes and has been used frequently in remote sensing applications [42, 43]. BD was used to select the number of bands required for efficient classification, and then SAM and SVM were used for the identification of stress symptoms in plants [44].
In recent years, machine learning-based classification methods have become popular and are extensively used in many different applications. SVM is one of the machine learning approaches used for classification tasks and has shown efficient results, especially when the training data size is relatively small [5, 45]. Deep learning-based CNN models can learn spectral features more effectively using deeper layers, and in many cases such methods can give higher classification accuracy than traditional algorithms. Pouyet et al. [46] used a Deep Neural Network (DNN) and compared the result with SAM for pigment identification and mapping using HSI in the SWIR region, finding that the DNN model produced better results than SAM. Devassy et al. [9], in their study of strawberry classification based on sugar content, found that the SID and SAM algorithms, which rely on the spectrum's geometry, did not perform well, as the two reference spectra were nearly identical in shape and differed only slightly in magnitude in the NIR region. They also showed that 1D-CNN based classification gives better accuracy (96%) compared to SAM (60%) and SID (58%). Table I summarizes the algorithms used for HSI data processing, their areas of study, and details of the classification/network parameters.
Table I.
Summary of algorithms used for HSI datasets with its applications and model parameters; Th: threshold value, BS: batch size, LR: learning rate, DR: dropout rate, ReLu: rectified linear unit, HL: hidden layer, CL: convolutional layer, FCL: fully connected layer, KS: kernel size.
Algorithm | Application | Wavelength | Parameters
----------|-------------|------------|-----------
ED | Ink classification [26] | 400-1000 nm |
SAM | Pigment classification [49] | 400-1000 nm | Th: 0.1
SAM | Mineral classification [50] | 380-2500 nm |
SAM | Ink classification [26] | 400-1000 nm |
SAM | Minerals and land classification [51] | |
SCM | Ink classification [26] | 400-1000 nm |
SCM | Pigment identification [3] | 370-1100 nm |
SCM | Pigment mapping [52] | 400-2500 nm | Th: 0.1
SID | Mineral classification [50] | 380-2500 nm |
SID | Ink classification [26] | 400-1000 nm |
SID | Minerals and land classification [51] | |
SID | Crops classification [53] | 200-2400 nm |
SSS | Crops classification [54] | |
SID-SAM | Crop classification [30] | 400-2500 nm |
SID-SAM | Mineral classification [50] | 380-2500 nm |
SID-SAM | Dye- and pigment-based inkjet prints [55] | 400-1000 nm |
SID-SCM | Plant classification [31] | 350-2500 nm |
SID-SCM | Mineral classification [50] | 380-2500 nm |
JM-SAM | Landcover classification [56] | |
JM-SAM | Mineral classification [50] | 380-2500 nm |
JM-SAM | Ink classification [26] | 400-1000 nm |
JM-SAM | Dye- and pigment-based inkjet prints [55] | 400-2500 nm |
SVM | Tongue diagnosis [22] | 400-1000 nm |
SVM | Crops classification [57] | | Polynomial kernel
FC-NN | Aerial images classification [58] | | BS: 500, LR: 0.05, DR: 0.25, ReLU
FC-NN | Pigment classification [46] | 1000-2500 nm | HL: 4, LR: 0.001, Adam, ReLU/Sigmoid
1D-CNN | Soil texture classification [59] | 400-1000 nm | CL: 4, FCL: 2, Softmax
1D-CNN | Classification of strawberry [9] | 380-2500 nm | Filters: 8, HL: 2, BS: 32, KS: 3
3.
Classification Algorithms
In this section, we describe the algorithms used in our experiment.
3.1
Euclidean Distance (ED)
Classification can be computed by calculating the minimum distance between the spectrum to be classified and the reference spectrum of the class. For a given n-dimensional image spectrum t_i and a reference spectrum r_i, the ED between them is defined using Eq. (2), where n_b is the number of spectral bands. ED is sensitive to the magnitude of the difference between two spectra, but not to their shape [47].
(2)
\mathrm{ED} = \sqrt{\sum_{i=1}^{n_b}(t_i - r_i)^2}
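For concreteness, a minimal NumPy sketch of Eq. (2) (the names t and r for the test and reference spectra are ours):

```python
import numpy as np

def euclidean_distance(t: np.ndarray, r: np.ndarray) -> float:
    """Euclidean distance between a test spectrum t and a reference
    spectrum r, each of shape (nb,), following Eq. (2)."""
    return float(np.sqrt(np.sum((t - r) ** 2)))
```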
3.2
Spectral Angle Mapper (SAM)
SAM is one of the most popular spectral classification methods in CH applications due to its simple and rapid approach to mapping spectral similarity. SAM, developed by Boardman [48], measures the spectral similarity between any two spectra (test and reference). The arccosine angle between the two spectra is calculated by treating them as N-dimensional vectors in space, where N is the number of spectral bands. The angle between two spectra is calculated using Eq. (3), where α is the spectral angle in radians, t_i is the image spectrum, r_i is the reference spectrum, and n_b is the total number of bands. A smaller angle indicates a closer match between the spectra. Kruse et al. [48] describe a simplified representation of the spectral angle mapper algorithm using a two-dimensional scatter plot for two-band image data. Since the SAM algorithm measures the angle between two vectors, and the angle does not change with the length of the vectors, it is insensitive to gain; therefore, this algorithm does not consider magnitude shifts in the spectrum (see de Carvalho and Meneses [24]).
(3)
\alpha = \cos^{-1}\left(\frac{\sum_{i=1}^{n_b} t_i r_i}{\sqrt{\sum_{i=1}^{n_b} t_i^2}\,\sqrt{\sum_{i=1}^{n_b} r_i^2}}\right)
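A corresponding sketch of Eq. (3):

```python
import numpy as np

def spectral_angle_mapper(t: np.ndarray, r: np.ndarray) -> float:
    """Spectral angle in radians between test spectrum t and reference
    spectrum r, following Eq. (3)."""
    cos_alpha = np.dot(t, r) / (np.linalg.norm(t) * np.linalg.norm(r))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return float(np.arccos(np.clip(cos_alpha, -1.0, 1.0)))
```

Because the angle is independent of vector length, spectral_angle_mapper(t, 2.0 * t) returns 0, illustrating the gain insensitivity discussed above.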
3.3
Spectral Correlation Mapper (SCM)
SCM calculates the Pearson correlation coefficient between two spectra. It standardizes the data by centering the test and reference spectra on their means. By applying the arccosine, it can be expressed as an angle. This algorithm excludes negative correlations and retains shading-effect minimization characteristics similar to SAM, resulting in better classification results [24, 29]. SCM can be computed using Eq. (4), where α is the arccosine of the spectral correlation measure in radians, t_i and t̄ are the image spectrum and its sample mean, r_i and r̄ are the reference spectrum and its sample mean, and n_b is the total number of bands.
(4)
\alpha = \cos^{-1}\left(\frac{\sum_{i=1}^{n_b}(t_i - \bar{t})(r_i - \bar{r})}{\sqrt{\sum_{i=1}^{n_b}(t_i - \bar{t})^2}\,\sqrt{\sum_{i=1}^{n_b}(r_i - \bar{r})^2}}\right).
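A sketch of Eq. (4); the only difference from the SAM sketch is the mean-centering, which turns the cosine into a Pearson correlation:

```python
import numpy as np

def spectral_correlation_mapper(t: np.ndarray, r: np.ndarray) -> float:
    """Arccosine of the Pearson correlation between test spectrum t and
    reference spectrum r, following Eq. (4)."""
    tc = t - t.mean()   # center the test spectrum on its mean
    rc = r - r.mean()   # center the reference spectrum on its mean
    corr = np.dot(tc, rc) / (np.linalg.norm(tc) * np.linalg.norm(rc))
    return float(np.arccos(np.clip(corr, -1.0, 1.0)))
```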
3.4
Spectral Information Divergence (SID)
SID measures the spectral similarity between the test and reference spectra for each pixel based on the concept of divergence, i.e., by measuring the probabilistic discrepancy between them. The probability distributions of the test and reference spectra are expressed as Eqs. (5) and (6), respectively [60].
(5)
p_i = \frac{t_i}{\sum_{i=1}^{n_b} t_i}
(6)
q_i = \frac{r_i}{\sum_{i=1}^{n_b} r_i},
where t_i is the image spectrum, r_i is the reference spectrum, and n_b is the total number of bands. Using these two probability distributions, SID can be calculated with Eq. (7).
(7)
\mathrm{SID} = \sum_{i=1}^{n_b} p_i \log\frac{p_i}{q_i} + \sum_{i=1}^{n_b} q_i \log\frac{q_i}{p_i}
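A sketch of Eqs. (5)-(7); the small eps term is our own guard against zero-valued bands:

```python
import numpy as np

def spectral_information_divergence(t: np.ndarray, r: np.ndarray,
                                    eps: float = 1e-12) -> float:
    """SID between test spectrum t and reference spectrum r, following
    Eqs. (5)-(7). eps avoids division by zero and log(0) for bands with
    zero response."""
    p = (t + eps) / np.sum(t + eps)   # Eq. (5)
    q = (r + eps) / np.sum(r + eps)   # Eq. (6)
    # Eq. (7): sum of the two relative entropies
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```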
3.5
Spectral Similarity Scale (SSS)
SSS evaluates the shape and magnitude differences between two spectra. Granahan et al. [54, 61] used SSS to analyze hyperspectral atmospheric correction techniques. This algorithm uses the Euclidean distance for magnitude and the correlation for comparing the shape of the spectra, combining the two with equal weighting [62]. SSS ranges from a minimum of zero to a maximum of the square root of two; the smaller the value, the higher the similarity between the spectra, i.e., if two spectra are collinear, their SSS value will be equal to zero. SSS can be computed using Eq. (8).
(8)
\mathrm{SSS} = \sqrt{d_e^2 + \hat{r}^2}
Here, d_e is the Euclidean distance between the two spectra, computed using Eq. (9); its value ranges from 0 to 1 due to the factor 1/n_b.
(9)
d_e = \sqrt{\frac{1}{n_b}\sum_{i=1}^{n_b}(t_i - r_i)^2}
Equation (10) computes the value of r̂, where r is the correlation coefficient between the two spectra, computed using Eq. (11).
(10)
\hat{r} = \sqrt{1 - r^2}
(11)
r^2 = \left(\frac{\sum_{i=1}^{n_b}(t_i - \bar{t})(r_i - \bar{r})}{\sqrt{\sum_{i=1}^{n_b}(t_i - \bar{t})^2}\,\sqrt{\sum_{i=1}^{n_b}(r_i - \bar{r})^2}}\right)^2
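A sketch of Eqs. (8)-(11):

```python
import numpy as np

def spectral_similarity_scale(t: np.ndarray, r: np.ndarray) -> float:
    """SSS combining magnitude (Euclidean distance) and shape
    (correlation) with equal weight, following Eqs. (8)-(11)."""
    nb = t.size
    de = np.sqrt(np.sum((t - r) ** 2) / nb)     # Eq. (9)
    corr = np.corrcoef(t, r)[0, 1]              # Pearson correlation r
    r_hat_sq = 1.0 - corr ** 2                  # Eqs. (10)-(11)
    return float(np.sqrt(de ** 2 + r_hat_sq))   # Eq. (8)
```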
3.6
SID-SAM
As the name suggests, SID-SAM is computed by multiplying SID by either the tangent or the sine of SAM, i.e., by computing the perpendicular distance between the two vectors (test and reference). Both measures produce similar results [30]. This hybrid computation makes two similar spectra even more comparable and two dissimilar spectra more distinctive, thus significantly improving spectral discriminability. SID-SAM can be computed as either Eq. (12) or Eq. (13), where SID and SAM are computed using Eqs. (7) and (3), respectively.
(12)
\mathrm{SID\text{-}SAM} = \mathrm{SID} \times \tan(\mathrm{SAM})
(13)
\mathrm{SID\text{-}SAM} = \mathrm{SID} \times \sin(\mathrm{SAM})
3.7
SID-SCM
Similar to SID-SAM, we also tested the hybrid combination SID-SCM, computed by multiplying SID by either the tangent or the sine of SCM [31]. SID-SCM can be computed as either Eq. (14) or Eq. (15), where SID and SCM are computed using Eqs. (7) and (4), respectively. A sketch of both hybrid measures follows the equations.
(14)
\mathrm{SID\text{-}SCM} = \mathrm{SID} \times \tan(\mathrm{SCM})
(15)
\mathrm{SID\text{-}SCM} = \mathrm{SID} \times \sin(\mathrm{SCM})
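A sketch of both hybrid measures, reusing spectral_angle_mapper, spectral_correlation_mapper, and spectral_information_divergence from the sketches above:

```python
import numpy as np

def sid_sam(t: np.ndarray, r: np.ndarray, use_tan: bool = True) -> float:
    """Hybrid SID-SAM, following Eqs. (12)-(13)."""
    angle = spectral_angle_mapper(t, r)
    sid = spectral_information_divergence(t, r)
    return sid * (np.tan(angle) if use_tan else np.sin(angle))

def sid_scm(t: np.ndarray, r: np.ndarray, use_tan: bool = True) -> float:
    """Hybrid SID-SCM, following Eqs. (14)-(15); identical except that
    the SCM angle replaces the SAM angle."""
    angle = spectral_correlation_mapper(t, r)
    sid = spectral_information_divergence(t, r)
    return sid * (np.tan(angle) if use_tan else np.sin(angle))
```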
3.8
Jeffries-Matusita Spectral Angle Mapper (JM-SAM)
Like SID-SAM, JM-SAM is a hybrid similarity measure in which the spectral capabilities of both algorithms are orthogonally projected using either a tangent or a sine function [56]. A smaller JM-SAM value indicates a stronger match between the reference and test spectra. It can be computed using either Eq. (16) or Eq. (17).
(16)
\mathrm{JM\text{-}SAM} = \mathrm{JMD} \times \tan(\mathrm{SAM})
(17)
\mathrm{JM\text{-}SAM} = \mathrm{JMD} \times \sin(\mathrm{SAM})
Here, the Jeffries-Matusita distance (JMD) is one of the spectral separability measures commonly used in remote sensing applications and can be computed using Eq. (18), where B is the Bhattacharyya distance, computed using Eq. (19), and SAM is computed using Eq. (3).
(18)
\mathrm{JMD} = 2\,(1 - e^{-B})
(19)
B = \frac{1}{8}(\mu_t - \mu_r)^T \left(\frac{\sigma_t + \sigma_r}{2}\right)^{-1}(\mu_t - \mu_r) + \frac{1}{2}\ln\left(\frac{\left|\frac{\sigma_t + \sigma_r}{2}\right|}{\sqrt{|\sigma_t|\,|\sigma_r|}}\right)
Here, μ_t and μ_r are the means of the test and reference spectra, respectively; σ_t and σ_r are the covariance matrices of the test and reference classes, respectively.
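A sketch of Eqs. (16)-(19), assuming each class is represented by a set of sample spectra of shape (n_samples, n_b) and reusing spectral_angle_mapper from above; with many bands and few samples the covariance matrices can be near-singular, so this is illustrative only:

```python
import numpy as np

def bhattacharyya(mu_t, cov_t, mu_r, cov_r):
    """Bhattacharyya distance B between two classes, following Eq. (19)."""
    cov_m = 0.5 * (cov_t + cov_r)
    diff = mu_t - mu_r
    term1 = 0.125 * diff @ np.linalg.solve(cov_m, diff)
    # slogdet is numerically safer than det for many-band covariances
    _, logdet_m = np.linalg.slogdet(cov_m)
    _, logdet_t = np.linalg.slogdet(cov_t)
    _, logdet_r = np.linalg.slogdet(cov_r)
    term2 = 0.5 * (logdet_m - 0.5 * (logdet_t + logdet_r))
    return term1 + term2

def jm_sam(samples_t, samples_r, use_tan=True):
    """JM-SAM from two sets of class spectra, each of shape
    (n_samples, nb), following Eqs. (16)-(18)."""
    mu_t, mu_r = samples_t.mean(axis=0), samples_r.mean(axis=0)
    cov_t = np.cov(samples_t, rowvar=False)
    cov_r = np.cov(samples_r, rowvar=False)
    b = bhattacharyya(mu_t, cov_t, mu_r, cov_r)
    jmd = 2.0 * (1.0 - np.exp(-b))                  # Eq. (18)
    angle = spectral_angle_mapper(mu_t, mu_r)       # Eq. (3)
    return jmd * (np.tan(angle) if use_tan else np.sin(angle))
```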
3.9
Support Vector Machine (SVM)
SVM is a supervised classification algorithm used in machine learning and has been applied successfully to HSI classification tasks [63-65]. It is usually used to separate two or more data classes using a hyperplane. Objects to be classified are represented as vectors in an n-dimensional space. The SVM method then draws a hyperplane such that all points of one class are on one side of the hyperplane and points of the other class are on the other side. Of course, there could be multiple such hyperplanes. SVM tries to find the one that best separates the classes by maximizing the distance between the hyperplane and the closest data points of each class, called support vectors. This method is similar to a neural network, but instead of computing weights and biases for every point, SVM determines the decision boundaries for classification from the support vectors alone.
3.10
Fully Connected Neural Network (FC-NN)
In the FC-NN architecture, all the nodes in one layer are connected to the nodes in the next layer. The data are fed into the first layer of the neural network, whose neurons pass the data to a second layer, which does its task, and so on until the final layer. Each neuron assigns a weight to its input; the weighted inputs are summed, and a bias is added to offset the output. These parameters are tuned by optimization during training, that is, the classification error (also called loss) is computed, and the weights and biases are tuned over many iterations to minimize this loss. The goal of a neural network is to adjust its weights and biases so that it produces the desired output when applied to new unseen data. One of the common problems when training a network is overfitting the dataset (i.e., poor generalization): instead of learning, the network memorizes the data. To avoid it, one needs to use regularization, e.g., early stopping, dropout layers, and changes to the network structure and parameters (weight constraints) [66]. A dropout function added to the network disables neurons randomly, forcing the network to learn to make accurate predictions with only the randomly remaining neurons and helping to prevent overfitting. For further details, see [67, 68].
Figure 1.
The architecture of a typical CNN consisting of a convolutional layer, a max pooling layer, and a fully connected layer.
Figure 2.
Pigment mockup; P1: Viridian, P2: Cerulean Blue, P3: Green Earth, P4: Yellow Ochre Light, P5: Burnt Umber, P6: Ultramarine Blue Deep, P7: Lead White Hue, P8: Genuine Vermilion, P9: Cobalt Blue Deep and P10: Ivory Black.
3.11
One-dimensional Convolutional Neural Network (1D-CNN)
CNN is one of the most popular neural networks used for various computer vision and machine learning tasks [69-71]. A CNN architecture is built from three main layers: a convolutional layer, a pooling layer, and a fully connected layer. As the name suggests, the convolutional layer performs a linear operation between matrices, that is, a convolution between the input neurons and a kernel, generating an output activation map. For a 1D-CNN, only 1D convolutions are performed, that is, scalar multiplications and additions. In this layer, the number of weights equals the size of the kernel and does not depend on the number of input neurons, as in FC-NN. The feature map generated by this layer is passed through a pooling layer, which reduces the dimension of the feature map while retaining the most important information; this introduces translation invariance and reduces overfitting. A fully connected layer takes the output of the pooling layers, flattens it into one long vector that serves as input for the next stage, applies weights to predict the correct label, and finally outputs the probabilities for each class using the activation function. Figure 1 shows the architecture of a general CNN [72].
4.
Materials and Methods
In this section, we describe the mockup, the laboratory HSI acquisition setup, the data post-processing steps, and the classification procedure.
4.1
Test Object
As shown in Figure 2, a pigment mockup was prepared and used in a laboratory environment. We used pigment tubes composed of high-stability pigments and oil, purchased from Zecchi [73]. The pigments were selected on the basis of their popularity in CH research articles, their spectral characteristics, and in consultation with experts. Viridian (V), Cerulean Blue (CB), Green Earth (GE), Yellow Ochre Light (YOL), Burnt Umber (BU), Ultramarine Blue Deep (UBD), Lead White Hue (LWH), Genuine Vermilion (GV), Cobalt Blue Deep (CBD), and Ivory Black (IB) are the pigments used in the mockup. The linen canvas was primed with three layers of white gesso.
4.2
Experimental Setup
Hyperspectral data were obtained in a laboratory environment using the HySpex VNIR-1800 line scanner from Norsk Elektro Optikk [74]. The obtained datacube covers a spectral range from 400 to 1000 nm with 186 spectral bands and a spectral resolution of 3.26 nm. In this experiment, a close-range 30 cm lens was used; it captures 1800 spatial pixels across a linear field of view of approximately 86 mm. A translation-stage setup was used, with the pigment mockup lying on a horizontal surface. A standard multistep Spectralon reference target [75], consisting of four shades with 99, 50, 25, and 12% reflectance values, was kept alongside the mockup during acquisition. This reference target with known reflectance factors is used to compute the normalized reflectance at the pixel level.
4.3
Data Processing
The raw hyperspectral data were post-processed for radiometric calibration using the HySpex RAD software, which removes electronic noise, i.e., dark current, and converts the raw images to absolute sensor radiance values. Illumination correction, i.e., correction for spatial variability in illumination, was performed with the help of the standard reference target. Further data processing steps differ for supervised and ML-based classification and are explained in the following sections; a simplified sketch of the reflectance normalization step is given below.
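As a rough illustration of the reflectance computation, a simplified flat-field sketch (the function and argument names are ours, and the actual pipeline also corrects spatial illumination variation and applies the HySpex RAD calibration beforehand):

```python
import numpy as np

def to_reflectance(radiance_cube, white_rows, white_cols, white_factor=0.99):
    """Convert a radiance cube of shape (rows, cols, bands) to normalized
    reflectance by dividing by the mean radiance of pixels on a reference
    panel with a known reflectance factor (here the 99% Spectralon shade).
    white_rows/white_cols are slices selecting the panel region."""
    panel = radiance_cube[white_rows, white_cols, :]
    white_spectrum = panel.reshape(-1, radiance_cube.shape[-1]).mean(axis=0)
    return radiance_cube * (white_factor / white_spectrum)
```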
4.3.1
Data Processing for Supervised Classification
To build a spectral library, a region of interest approximately the size of the patches (10 × 10 mm) was considered, and the mean spectra from these regions were saved in the library. To evaluate classification performance, a confusion matrix was computed. The overall methodology is illustrated by a block diagram in Figure 3. All data processing steps were computed using the open-source software Spectral Python [76]. A minimal sketch of the per-pixel matching step is given below.
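A minimal, loop-based sketch of this match-to-library step (the function and argument names are ours; Spectral Python also provides optimized implementations of several of the similarity measures):

```python
import numpy as np

def classify_cube(cube, library, measure, threshold, unknown=-1):
    """Assign each pixel the library spectrum with the smallest
    similarity value, or `unknown` if that value exceeds the threshold.
    cube: (rows, cols, nb); library: (n_classes, nb); measure: any of
    the pairwise functions of Section 3."""
    rows, cols, _ = cube.shape
    labels = np.full((rows, cols), unknown, dtype=int)
    for i in range(rows):
        for j in range(cols):
            scores = [measure(cube[i, j], ref) for ref in library]
            best = int(np.argmin(scores))
            if scores[best] <= threshold:
                labels[i, j] = best
    return labels
```

The resulting label map can then be compared against ground-truth patch labels to build the confusion matrix described above.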
Figure 3.
Workflow diagram for data processing.
Figure 4.
A snippet of a mockup with ten pigments and substrate; Colors are approximated as RGB rendering using spectral python for bands 75, 46, and 19 of HSI datasets.
Selecting the appropriate threshold value for a classification algorithm is critical, as it may vary depending on the application. For example, Li et al. [33] chose 0.1 as the threshold value for SAM in their citrus disease detection analysis because, during preliminary testing, they found that a value of 0.15 produced many false positives. A similar empirical approach was followed by de Carvalho Júnior et al. [29] and Fung and LeDrew [77]. Thus, we also determined the optimal threshold for each of these algorithms through empirical observation. First, we selected a small segment of the HSI dataset of the mockup, as shown in Figure 4. Next, the reference spectrum was extracted from a flat region of the mockup by averaging 11 × 11 pixels. Finally, we ran the classification for all algorithms with different threshold values and evaluated their accuracy using the confusion matrix.
In CH applications such as pigment classification for a painting, misclassification, i.e., a pigment being classified as the wrong pigment, is even more critical than a pigment remaining unclassified. Hence, the classification error should be minimal for any given algorithm. We therefore considered the classification accuracy for a pigment classified as the correct pigment (P_P_), misclassification (MC_), a pigment classified as unknown (P_UN_), unknown classified as a pigment (UN_P_), and unknown classified as unknown (UN_UN_). Figure 5 plots the accuracy of these parameters for the SID algorithm, and we can observe that for threshold values between 0.01 and 0.03, the accuracy for pigment classified as pigment and unknown classified as unknown is high. Also, for misclassification values in the range of 0.1-0.3, the number of pigments classified as unknown is minimal, and unknown classified as unknown is relatively high and constant. A similar conclusion can be drawn by visualizing the classification results shown in Figure 6. The optimal threshold values used for the different algorithms in our experiment are listed in Table II, and the graph for each algorithm is given in Appendix B.
Figure 5.
Graph for accuracy of five parameters used to determine the optimal threshold value for the SID algorithm.
Figure 6.
Classification result for ten pigments patches obtained using SID algorithm for a different set of threshold values.
Table II.
The selected threshold value for eight different classification algorithms.
Algorithm | Threshold value
----------|----------------
ED | 0.9
SAM | 0.1
SCM | 0.8
SID | 0.03
SSS | 1.1
SID-SAM | 0.003
SID-SCM | 0.005
JM-SAM | 0.09
4.3.2
Data Processing for ML Classification
The normalized reflectance HSI data need to be processed before being fed to the model; the data were labeled for the different classes using a label encoder. For our dataset, we used one-hot encoding, meaning that for each class one value is hot (i.e., 1) and the rest are cold (i.e., 0). We divided the dataset into training and testing sets with an 80-20 split, and the data were further normalized. We then built and implemented the models; first, the training dataset is used to train the model, and the weights and biases of the neurons are updated with each epoch until we obtain a considerably low MSE and higher accuracy. Finally, the test dataset is used to validate the model. The block diagram in Figure 7 illustrates the overall workflow; a sketch of these preprocessing steps is given after the figure. The training spectra of the 10 pigments and the substrate, plotted over a spatial region of approximately 100 × 100 pixels with 186 spectral bands, are shown in Appendix A.
Figure 7.
Workflow diagram for ML data processing.
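A minimal sketch of these preprocessing steps (the file names, the stratification, and the random seed are our own illustrative choices):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical

# X: spectra of shape (n_pixels, 186); y: integer labels 0..10 for the
# ten pigments and the substrate (file names here are hypothetical).
X = np.load("spectra.npy")
y = np.load("labels.npy")

# 80-20 train/test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Normalize with statistics computed on the training set only
mean, std = X_train.mean(axis=0), X_train.std(axis=0)
X_train, X_test = (X_train - mean) / std, (X_test - mean) / std

# One-hot encoding: one value "hot" (1) per class, the rest "cold" (0)
y_train_oh = to_categorical(y_train, num_classes=11)
y_test_oh = to_categorical(y_test, num_classes=11)
```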
The SVM model was implemented in Python using the Sklearn library. We tuned the model for three key hyperparameters, namely kernel type, regularization, and gamma, using Sklearn's GridSearchCV. This function cross-validates the model to avoid overfitting using k-fold cross-validation and evaluates the performance of each combination of the given hyperparameters over a grid. Table III shows the details of the hyperparameters, and a sketch of this search follows the table.
Table III.
SVM key hyperparameters, the range used for tuning, and the optimum value selected for classification; RBF: Gaussian Kernel Radial Basis Function.
Hyperparameter | Range used | Optimum value selected
---------------|------------|-----------------------
Kernel | "Polynomial", "RBF", "Sigmoid", "Linear" | RBF
Regularization | 0.1, 1, 10, 100, 1000 | 100
Gamma | 1, 0.1, 0.01, 0.001 | 1
k-fold | 5 | 5
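A sketch of this grid search using the values of Table III (Sklearn's kernel identifiers differ slightly from the names in the table, and SVC expects integer class labels rather than the one-hot vectors used for the neural networks):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    "kernel": ["poly", "rbf", "sigmoid", "linear"],
    "C": [0.1, 1, 10, 100, 1000],     # regularization
    "gamma": [1, 0.1, 0.01, 0.001],
}
search = GridSearchCV(SVC(), param_grid, cv=5)  # 5-fold cross-validation
search.fit(X_train, y_train)
print(search.best_params_)  # Table III: {'C': 100, 'gamma': 1, 'kernel': 'rbf'}
```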
For FC-NN, we built a sequential model with three dense layers: the first layer with 32 nodes and the hyperbolic tangent (tanh) activation function, followed by batch normalization; the second layer with 16 nodes and the tanh activation function, followed by batch normalization and dropout; and the third layer with 11 nodes and a softmax activation function. The activation function introduces non-linearity into the network so that it can learn the relationship between input and output. The hyperbolic tangent is a non-linear function with an s-shaped graph whose output ranges from −1 to 1. One reason for using the tanh function is that it is zero-centered, which makes the optimization process much more manageable. The softmax activation function converts a vector of values into a probability distribution and is used in the output layer for multiclass classification. For details on activation functions, please refer to [78]. For multiclass classification, the categorical cross-entropy loss function is usually used; for the optimization algorithm, which updates the weights and biases, we used adaptive moment estimation (Adam), as it is the best among the adaptive optimizers in most cases [79, 80]. The network architecture used in our experiment is shown in Figure 8, and a Keras sketch of it follows the figure. The model was implemented in Python using Keras, a neural network application programming interface.
Figure 8.
The architecture of the FC-NN classifier used in our experiment.
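Following the description above, a minimal Keras sketch of this architecture (the dropout rate and the training settings at the end are not stated in the text and are illustrative placeholders; X_train and y_train_oh are the preprocessed arrays from Section 4.3.2):

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import BatchNormalization, Dense, Dropout

model = Sequential([
    Dense(32, activation="tanh", input_shape=(186,)),  # 186 spectral bands
    BatchNormalization(),
    Dense(16, activation="tanh"),
    BatchNormalization(),
    Dropout(0.25),                     # rate not stated in the text
    Dense(11, activation="softmax"),   # 10 pigments + substrate
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train_oh, epochs=100, batch_size=32,
          validation_split=0.1)        # illustrative training settings
```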
The proposed 1D-CNN model was tuned for hyperparameters using KerasTuner [81]. We tuned the model for the number of convolutional layers, their filter sizes, dropout, dense-layer filter size, learning rate, and number of epochs. Figure 9 illustrates a block diagram of the tuned model with the hyperparameters used; a comparable Keras sketch follows the figure. We used Adam as the optimizer with a learning rate of 0.001 and categorical cross-entropy as the loss function.
Figure 9.
The architecture of tuned 1D-CNN model.
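For comparison, a Keras sketch of a small 1D-CNN of this kind; the layer counts and filter sizes here are placeholders rather than the KerasTuner-selected values shown in Figure 9, and only the Adam learning rate and the loss follow the text:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (Conv1D, Dense, Dropout, Flatten,
                                     MaxPooling1D)
from tensorflow.keras.optimizers import Adam

model = Sequential([
    Conv1D(8, kernel_size=3, activation="relu", input_shape=(186, 1)),
    MaxPooling1D(pool_size=2),
    Conv1D(16, kernel_size=3, activation="relu"),
    MaxPooling1D(pool_size=2),
    Flatten(),
    Dropout(0.25),
    Dense(32, activation="relu"),
    Dense(11, activation="softmax"),
])
model.compile(optimizer=Adam(learning_rate=0.001),  # as stated in the text
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# Conv1D expects a channel axis: reshape spectra to (n_samples, 186, 1),
# e.g. X_train[..., None], before calling model.fit.
```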
5.
Result and Discussion
This section looks in detail at the classified images, the accuracy obtained for each pigment, and the overall accuracy of the algorithms used. Figure 10 shows the classification accuracy of each pigment for the different algorithms. The classification results for each of these algorithms are given in Appendix D. We can observe that the average accuracy (over the 10 pigments) is high for all three machine learning algorithms. Of these three, FC-NN has the highest accuracy, followed by 1D-CNN and SVM. Of the eight supervised algorithms used, SCM and SAM have high accuracy, followed by SID, SID-SAM, SID-SCM, and SSS. ED and JM-SAM have the lowest classification accuracy.
Figure 10.
Classification accuracy for each pigment for all 11 algorithms used; average represents the accuracy for an average of 10 pigments for a given algorithm.
Apart from the machine learning algorithms, the other eight algorithms have difficulty classifying pigment 6 (P6) and pigment 9 (P9). We can see in Figure 11 that the spectra of these two pigments are similar, with only a small difference in magnitude. This is a common issue with supervised classification algorithms [9, 23]. For the distance-based algorithms ED, SSS, and JM-SAM, the classification accuracy for the similar spectra (P6 and P9) is the lowest. We also observed that the classification accuracy of these distance-based algorithms is low for pigment 7 (Lead White Hue), which has a spectrum similar to the substrate (S) and is misclassified as substrate, as shown in the confusion matrices in Figure 12. Pigment 10, as shown in Fig. 11, has a reflectance value below 0.05 for almost the entire wavelength region (450-1000 nm), and this low magnitude appears to influence the classification accuracy of the supervised algorithms. Spectra for all pigments and the substrate used are provided in Appendix C.
Figure 11.
Normalized reflectance spectra for pigment, used as a reference for supervised classification; P6, P7, P9, P10, and S represent pigments 6, 7, 9, 10, and substrate, respectively.
Figure 12.
Confusion matrix of (a) ED, (b) SSS, and (c) JM-SAM.
Figure 13.
Classification results for pigment P1 (in red), P2 (in green), and P3 (in blue). (a), (b), and (c) are obtained using algorithms SID, SID-SAM, and SID-SCM, respectively.
Classification accuracies for the SID algorithm and its hybrid combinations (SID-SAM and SID-SCM) are lower for pigments P1 and P3. Figure 13 shows the classification results for pigments P1, P2, and P3 for SID, SID-SAM, and SID-SCM. Black represents the unclassified pixels, and we can observe that all three algorithms leave similar areas unclassified for P1 and P3. From the confusion matrices shown in Figure 14, we can see that for P1 and P3, the unclassified (UC) percentage is the second-highest value for all three algorithms.
Figure 14.
Confusion matrix; (a): SID, (b): SID-SAM, and (c): SID-SCM.
Figure 15.
Spectra for pigment P1 and P3; solid red line (P1 Ref.) and solid blue line (P3 Ref.) are reference spectra for P1 and P3, respectively; red dashed line (P1 C) and solid green line (P1 UC) are spectra for classified and unclassified pixels of pigment P1; solid orange line (P3 C) and solid black line (P3 UC) are spectra for classified and unclassified pixels of pigment P3; dashed blue (P2 Ref.) and solid grey (P2 C) are spectra for reference and classified pixels of pigment 2.
Figure 15 shows the spectra of the reference, classified pixels, and unclassified pixels for pigments P1, P2, and P3. It can be observed that the spectra differ in the range of 800-1000 nm. The solid red line represents the reference spectrum, whereas the red dashed lines are spectra of classified pixels and the solid green lines of unclassified pixels for P1. Similarly, the solid blue line is the reference spectrum for P3, and the solid orange and solid black lines are spectra of classified and unclassified pixels, respectively. We also plotted the range for P2, which is almost entirely classified: the dashed blue line is the reference spectrum for P2, and the solid grey lines are spectra of classified pixels.
The SID algorithm uses a divergence measure to match the reference and target pixels; the smaller the divergence value, the more likely the pixels are similar. We used a threshold of 0.03, meaning that only pixels with a value less than 0.03 will be classified, while pixels with a value greater than the threshold remain unclassified. We computed the divergence value between the spectra of classified and unclassified pixels and the reference spectrum for P1, P2, and P3. The spectra used in the calculation are shown in Figure 16, and the computed divergences are shown in Table IV. We can see that the spectra that are not classified in the case of P1 and P3 have divergence values greater than the threshold. We could raise this value to get more pixels classified, but this would result in more misclassification and an increase in unknown classified as a pigment, as shown in Fig. 5.
Figure 16.
Spectrum of P1, P2, and P3; Ref., C and UC represent reference, classified and unclassified, respectively.
Table IV.
SID value computed between the spectrum of reference pixels and those of classified and unclassified pixels for P1, P2, and P3. The remark indicates whether the obtained SID value is smaller or greater than the threshold value of 0.03.
Spectra | SID value | Remark
--------|-----------|-------
P1 Ref. & P1 C | 0.005 | < 0.03
P1 Ref. & P1 UC | 0.052 | > 0.03
P2 Ref. & P2 C | 0.003 | < 0.03
P3 Ref. & P3 C | 0.014 | < 0.03
P3 Ref. & P3 UC | 0.034 | > 0.03
6.
General Discussion
Experimental results show that the ML algorithms outperform the supervised algorithms used. The limitation of the supervised algorithms is that they do not perform well if pigments have nearly identical spectra (P6 and P9) or if the magnitude of the spectrum is very low (P10, reflectance factor below 0.05). We found that for nearly identical spectra, SCM is a better measure than SAM, possibly because SCM considers values from −1 to 1, whereas the cosine of SAM only varies from 0 to 1. Apart from pigments P1 and P3, we found that SID's hybrid approaches with SAM and SCM give almost similar results on our dataset. Due to the threshold value selected for classification, the accuracy for P1 and P3 is lower than for the other pigments; i.e., for SID, classifying the remaining P1 and P3 pixels would require a threshold greater than the divergence values given in Table IV. The classification accuracy of the spectral distance-based algorithms, such as ED, SSS, and JM-SAM, was the lowest. This could be because these algorithms misclassify between the white pigment (P7) and the substrate (S), which is not the case for the other supervised algorithms.
ML-based algorithms need to be trained, which requires a large amount of data. The classification result depends on how well the model is trained, i.e., on whether the training datasets are large enough for the model to learn sufficiently distinct features. For ML-based algorithms to perform well and avoid overfitting, the model needs to be tuned to appropriate values of different hyperparameters, which takes considerable computing time. This adds to the computational cost and complexity of ML-based algorithms. On the other hand, supervised algorithms do not require such a training set and are simple and easy to compute. Therefore, for pigments with less complex spectra (i.e., fewer nearly identical spectra), supervised algorithms such as SCM and SAM might be a good fit for the classification task.
7.
Conclusion
HSI is a non-invasive imaging technique used for the documentation and analysis of artwork for various tasks, such as pigment classification. This is essential, as it assists conservators and curators in precisely analyzing an object and its historical value. In this paper, we evaluated spectral processing algorithms for the pigment classification of a mockup using HSI. We analyzed eight spectral image classification algorithms, i.e., ED, SAM, SCM, SID, SSS, SID-SAM, SID-SCM, and JM-SAM, and three machine learning-based algorithms, SVM, FC-NN, and 1D-CNN, for their classification accuracy. In general, the machine learning algorithms outperformed the others. Supervised algorithms work well for pigments whose spectra are very distinct in shape from each other. However, these algorithms perform poorly for pigments with similar (nearly identical) spectra or spectra that differ only in magnitude. Machine learning-based algorithms can overcome this limitation by extracting features from each training sample and thus perform better for pigment classification. In our experiment, we trained the networks for ten pigments; extending the models' scope to a more extensive range of pigments would be beneficial. Additionally, exploring diverse scenarios, such as mixed and aged pigments, would be worthwhile and could be addressed in more comprehensive future research. By doing so, the supervised algorithms and machine learning models mentioned earlier can be refined to be more applicable to real-world cases in cultural heritage.
Acknowledgment
This work was carried out at the Norwegian Colour and Visual Computing Laboratory (Colourlab), within the Department of Computer Science (IDI), as part of the CHANGE (Cultural Heritage Analysis for New Generations) project. It has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 813789.
Appendix A.
Reflectance Spectra of 10 Pigments and Substrate Used to Train SVM, FC-NN and 1D-CNN
Figure A.1.
Training spectra of 10 pigments and a substrate, plotted over spatial region of approximately 100 × 100 pixels with 186 spectral bands; Viridian (V), Cerulean Blue (CB), Green Earth (GE), Yellow Ochre Light (YOL), Burnt Umber (BU), Ultramarine Blue Deep (UBD), Lead White Hue (LWH), Genuine Vermilion (GV), Cobalt Blue Deep (CBD), Ivory Black (IB), and Substrate (S).
Appendix B.
Graph Used for Determining the Optimal Threshold Value for Different Algorithms
Figure B.1.
Classification accuracy graph of different algorithms at varying threshold values. The graph shows the accuracy of each algorithm in terms of pigment classified as a pigment (P_P_), unknown region classified as unknown (UN_UN_), pigment classified as unknown (P_UN_), unknown classified as a pigment (UN_P_) and pigment classifying as another pigment, i.e., misclassification (MC_).
Appendix C.
Normalized Reflectance Spectrum of 10 Pigments and Substrate
Figure C.1.
Normalized reflectance spectrum for ten pigments and a substrate; P1: Viridian, P2: Cerulean Blue, P3: Green Earth, P4: Yellow Ochre Light, P5: Burnt Umber, P6: Ultramarine Blue Deep, P7: Lead White Hue, P8: Genuine Vermilion, P9: Cobalt Blue Deep, P10: Ivory Black, and S: Substrate.
Appendix D.
Classification Result for All Used Algorithms
Figure D.1.
Classification results obtained using various supervised and machine-learning algorithms; Euclidean Distance (ED), Spectral Angle Mapper (SAM), Spectral Correlation Mapper (SCM), Spectral Information Divergence (SID), Spectral Similarity Scale (SSS), Jeffries Matusita-Spectral Angle Mapper (JMSAM), Support Vector Machine (SVM), Fully Connected Neural Network (FC-NN) and One-dimensional Convolutional Neural Network (1D-CNN).
References
1. Shaw, G. and Manolakis, D., "Signal processing for hyperspectral image exploitation," IEEE Signal Process. Mag. 19, 12-16 (2002). doi: 10.1109/79.974715
2. Tan, L. and Hou, M.-l., "A study on the application of SAM classification algorithm in seal of calligraphy and painting based on hyperspectral technology," 2016 4th Int'l. Workshop on Earth Observation and Remote Sensing Applications (EORSA), 415-418 (IEEE, Piscataway, NJ, 2016). doi: 10.1109/EORSA.2016.7552841
3. Balas, C., Epitropou, G., Tsapras, A., and Hadjinicolaou, N., "Hyperspectral imaging and spectral classification for pigment identification and mapping in paintings by El Greco and his workshop," Multimedia Tools Appl. 77, 9737-9751 (2018). doi: 10.1007/s11042-017-5564-2
4. Tripathi, M. K. and Govil, H., "Evaluation of AVIRIS-NG hyperspectral images for mineral identification and mapping," Heliyon 5, e02931 (2019). doi: 10.1016/j.heliyon.2019.e02931
5. Melgani, F. and Bruzzone, L., "Classification of hyperspectral remote sensing images with support vector machines," IEEE Trans. Geosci. Remote Sens. 42, 1778-1790 (2004). doi: 10.1109/TGRS.2004.831865
6. Liu, N., Guo, Y., Jiang, H., and Yi, W., "Gastric cancer diagnosis using hyperspectral imaging with principal component analysis and spectral angle mapper," J. Biomed. Opt. 25, 066005 (2020). doi: 10.1117/1.JBO.25.6.066005
7. Zhi, L., Zhang, D., Yan, J.-q., Li, Q.-L., and Tang, Q.-l., "Classification of hyperspectral medical tongue images for tongue diagnosis," Comput. Med. Imaging Graph. 31, 672-678 (2007). doi: 10.1016/j.compmedimag.2007.07.008
8. Park, B., Windham, W. R., Lawrence, K. C., and Smith, D. P., "Contaminant classification of poultry hyperspectral imagery using a spectral angle mapper algorithm," Biosyst. Eng. 96, 323-333 (2007). doi: 10.1016/j.biosystemseng.2006.11.012
9. Devassy, B. M. and George, S., "Contactless classification of strawberry using hyperspectral imaging," CEUR Workshop Proc. (CEUR-WS.org, Aachen, 2020).
10. Park, B., Windham, W. R., Lawrence, K. C., and Smith, D. P., "Contaminant classification of poultry hyperspectral imagery using a spectral angle mapper algorithm," Biosyst. Eng. 96, 323-333 (2007). doi: 10.1016/j.biosystemseng.2006.11.012
11. Deepthi, D., Devassy, B. M., George, S., Nussbaum, P., and Thomas, T., "Classification of forensic hyperspectral paper data using hybrid spectral similarity algorithms," J. Chemom. 36, e3387 (2022). doi: 10.1002/cem.3387
12. van der Weerd, J., van Loon, A., and Boon, J. J., "FTIR studies of the effects of pigments on the aging of oil," Stud. Conserv. 50, 3-22 (2005). doi: 10.1179/sic.2005.50.1.3
13. Cosentino, A., "FORS spectral database of historical pigments in different binders," E-conserv. J. 2, 57-68 (2014). doi: 10.18236/econs2.201410
14. Cavaleri, T., Giovagnoli, A., and Nervo, M., "Pigments and mixtures identification by visible reflectance spectroscopy," Procedia Chem. 8, 45-54 (2013). doi: 10.1016/j.proche.2013.03.007
15. Saunders, D. and Kirby, J., "The effect of relative humidity on artists' pigments," Natl. Gallery Tech. Bull. 25, 62-72 (2004). http://www.nationalgallery.org.uk/technical-bulletin/saunders_kirby2004
16. Lyu, S., Yang, X., Pan, N., Hou, M., Wu, W., Peng, M., and Zhao, X., "Spectral heat aging model to estimate the age of seals on painting and calligraphy," J. Cult. Herit. 46, 119-130 (2020). doi: 10.1016/j.culher.2020.08.005
17. Mass, J. L., Opila, R., Buckley, B., Cotte, M., Church, J., and Mehta, A., "The photodegradation of cadmium yellow paints in Henri Matisse's Le Bonheur de vivre (1905-1906)," Appl. Phys. A 111, 59-68 (2013). doi: 10.1007/s00339-012-7418-0
18. Cuttle, C., "Damage to museum objects due to light exposure," Int. J. Light. Res. Technol. 28, 1-9 (1996). doi: 10.1177/14771535960280010301
19. Simonot, L. and Elias, M., "Color change due to surface state modification," Color Res. Appl. 28, 45-49 (2003). doi: 10.1002/col.10113
20. Shinde, P. P. and Shah, S., "A review of machine learning and deep learning applications," 2018 Fourth Int'l. Conf. on Computing Communication Control and Automation (ICCUBEA), 1-6 (IEEE, Piscataway, NJ, 2018). doi: 10.1109/ICCUBEA.2018.8697857
21. Cai, L., Gao, J., and Zhao, D., "A review of the application of deep learning in medical image classification and segmentation," Ann. Transl. Med. 8, 713 (2020). doi: 10.21037/atm.2020.02.44
22. Zhi, L., Zhang, D., Yan, J.-q., Li, Q.-L., and Tang, Q.-l., "Classification of hyperspectral medical tongue images for tongue diagnosis," Comput. Med. Imaging Graph. 31, 672-678 (2007). doi: 10.1016/j.compmedimag.2007.07.008
23. Shivakumar, B. R. and Rajashekararadhya, S. V., "Performance evaluation of spectral angle mapper and spectral correlation mapper classifiers over multiple remote sensor data," 2017 Second Int'l. Conf. on Electrical, Computer and Communication Technologies (ICECCT), 1-6 (IEEE, Piscataway, NJ, 2017). doi: 10.1109/ICECCT.2017.8117946
24. de Carvalho, O. A. and Meneses, P. R., "Spectral correlation mapper (SCM): an improvement on the spectral angle mapper (SAM)," Summaries of the 9th JPL Airborne Earth Science Workshop, JPL Publication 00-18, Vol. 9 (JPL, Pasadena, CA, 2000).
25. Qin, J., Burks, T. F., Ritenour, M. A., and Bonn, W. G., "Detection of citrus canker using hyperspectral reflectance imaging with spectral information divergence," J. Food Eng. 93, 183-191 (2009). doi: 10.1016/j.jfoodeng.2009.01.014
26. Devassy, B. M., George, S., and Hardeberg, J. Y., "Comparison of ink classification capabilities of classic hyperspectral similarity features," 2019 Int'l. Conf. on Document Analysis and Recognition Workshops (ICDARW), Vol. 8, 25-30 (IEEE, Piscataway, NJ, 2019). doi: 10.1109/ICDARW.2019.70137
27. Malila, W. A., "Change vector analysis: an approach for detecting forest changes with Landsat," LARS Symposia, 326-335 (IEEE, Piscataway, NJ, 1980).
28. Chen, J., Gong, P., He, C., Pu, R., and Shi, P., "Land-use/land-cover change detection using improved change-vector analysis," Photogramm. Eng. Remote Sens. 69, 369-379 (2003). doi: 10.14358/PERS.69.4.369
29. de Carvalho Júnior, O., Guimarães, R., Gillespie, A., Silva, N., and Gomes, R., "A new approach to change vector analysis using distance and similarity measures," Remote Sens. 3, 2473-2493 (2011). doi: 10.3390/rs3112473
30. Du, Y., Chang, C.-I., Ren, H., Chang, C.-C., Jensen, J. O., and D'Amico, F. M., "New hyperspectral discrimination measure for spectral characterization," Opt. Eng. 43, 1777-1786 (2004). doi: 10.1117/1.1805563
31. Kumar, M. N., Seshasai, M. V. R., Prasad, K. S. V., Kamala, V., Ramana, K. V., Dwivedi, R. S., and Roy, P. S., "A new hybrid spectral similarity measure for discrimination among vigna species," Int. J. Remote Sens. 32, 4041-4053 (2011). doi: 10.1080/01431161.2010.484431
32. Zhang, M., Qin, Z., Liu, X., and Ustin, S. L., "Detection of stress in tomatoes induced by late blight disease in California, USA, using hyperspectral remote sensing," Int. J. Appl. Earth Obs. Geoinf. 4, 295-310 (2003). doi: 10.1016/S0303-2434(03)00008-4
33. Li, H., Lee, W. S., Wang, K., Ehsani, R., and Yang, C., "'Extended spectral angle mapping (ESAM)' for citrus greening disease detection using airborne hyperspectral imaging," Precis. Agric. 15, 162-183 (2014). doi: 10.1007/s11119-013-9325-6
34. Dabboor, M., Howell, S., Shokr, M., and Yackel, J., "The Jeffries-Matusita distance for the case of complex Wishart distribution as a separability criterion for fully polarimetric SAR data," Int. J. Remote Sens. 35, 6859-6873 (2014).
35. Ullah, S., Groen, T. A., Schlerf, M., Skidmore, A. K., Nieuwenhuis, W., and Vaiphasa, C., "Using a genetic algorithm as an optimal band selector in the mid and thermal infrared (2.5-14 μm) to discriminate vegetation species," Sensors 12, 8755-8769 (2012). doi: 10.3390/s120708755
36. Venkataraman, S., Bjerke, H., Copenhaver, K., and Glaser, J., "Optimal band selection of hyperspectral data for transgenic corn identification," MAPPS/ASPRS 2006 Fall Conf., 6-10 (ASPRS, Baton Rouge, LA, 2006).
37. Bruzzone, L., Roli, F., and Serpico, S. B., "An extension of the Jeffreys-Matusita distance to multiclass cases for feature selection," IEEE Trans. Geosci. Remote Sens. 33, 1318-1321 (1995). doi: 10.1109/36.477187
38. Deborah, H., Richard, N., and Hardeberg, J. Y., "On the quality evaluation of spectral image processing algorithms," 2014 Tenth Int'l. Conf. on Signal-Image Technology and Internet-Based Systems, 133-140 (IEEE, Piscataway, NJ, 2014). doi: 10.1109/SITIS.2014.50
39. Romero, J., García-Beltrán, A., and Hernández-Andrés, J., "Linear bases for representation of natural and artificial illuminants," J. Opt. Soc. Am. A 14, 1007-1014 (1997). doi: 10.1364/JOSAA.14.001007
40. Deborah, H., Richard, N., and Hardeberg, J., "A comprehensive evaluation of spectral distance functions and metrics for hyperspectral image processing," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 8, 3224-3234 (2015). doi: 10.1109/JSTARS.2015.2403257
41. KREMER Pigments, Books & Color Charts. Accessed: 6 October 2022.
42. Herold, M., Gardner, M. E., and Roberts, D. A., "Spectral resolution requirements for mapping urban areas," IEEE Trans. Geosci. Remote Sens. 41, 1907-1919 (2003). doi: 10.1109/TGRS.2003.815238
43. Miao, X., Gong, P., Swope, S., Pu, R., and Carruthers, R., "Detection of yellow starthistle through band selection and feature extraction from hyperspectral imagery," Photogramm. Eng. Remote Sens. 73, 1005-1015 (2007).
44. Mewes, T., Franke, J., and Menz, G., "Spectral requirements on airborne hyperspectral remote sensing data for wheat disease detection," Precis. Agric. 12, 795-812 (2011). doi: 10.1007/s11119-011-9222-9
45. Gualtieri, J. A. and Chettri, S., "Support vector machines for classification of hyperspectral data," IGARSS 2000, IEEE 2000 Int'l. Geoscience and Remote Sensing Symposium, Vol. 2, 813-815 (IEEE, Piscataway, NJ, 2000). doi: 10.1109/IGARSS.2000.861712
46. Pouyet, E., Miteva, T., Rohani, N., and de Viguerie, L., "Artificial intelligence for pigment classification task in the short-wave infrared range," Sensors 21, 6150 (2021). doi: 10.3390/s21186150
47. Sweet, J. N., "The spectral similarity scale and its application to the classification of hyperspectral remote sensing data," IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, 92-99 (IEEE, Piscataway, NJ, 2003). doi: 10.1109/WARSD.2003.1295179
48. Kruse, F. A., Lefkoff, A. B., Boardman, J. W., Heidebrecht, K. B., Shapiro, A. T., Barloon, P. J., and Goetz, A. F. H., "The spectral image processing system (SIPS)—interactive visualization and analysis of imaging spectrometer data," Remote Sens. Environ. 44, 145-163 (1993). doi: 10.1016/0034-4257(93)90013-N
49. Mandal, D. J., George, S., Pedersen, M., and Boust, C., "Influence of acquisition parameters on pigment classification using hyperspectral imaging," J. Imaging Sci. Technol. 65, 334-346 (2021). doi: 10.2352/J.ImagingSci.Technol.2021.65.5.050406
50. Adep, R. N., Vijayan, A. P., Shetty, A., and Ramesh, H., "Performance evaluation of hyperspectral classification algorithms on AVIRIS mineral data," Perspectives Sci. 8, 722-726 (2016). doi: 10.1016/j.pisc.2016.06.070
51. Chang, C.-I., "Spectral information divergence for hyperspectral image analysis," IEEE 1999 Int'l. Geoscience and Remote Sensing Symposium (IGARSS'99), Vol. 1, 509-511 (IEEE, Piscataway, NJ, 1999). doi: 10.1109/IGARSS.1999.773549
52. Deborah, H., George, S., and Hardeberg, J. Y., "Pigment mapping of The Scream (1893) based on hyperspectral imaging," Int'l. Conf. on Image and Signal Processing, 247-256 (Springer, Cham, 2014). doi: 10.1007/978-3-319-07998-1_28
53. Zhang, E., Zhang, X., Yang, S., and Wang, S., "Improving hyperspectral image classification using spectral information divergence," IEEE Geosci. Remote Sens. Lett. 11, 249-253 (2013). doi: 10.1109/LGRS.2013.2255097
54. Granahan, J. C. and Sweet, J. N., "An evaluation of atmospheric correction techniques using the spectral similarity scale," IGARSS 2001, IEEE 2001 Int'l. Geoscience and Remote Sensing Symposium, Vol. 5, 2022-2024 (IEEE, Piscataway, NJ, 2001). doi: 10.1109/IGARSS.2001.977890
55. Krauz, L., Páta, P., and Kaiser, J., "Assessing the spectral characteristics of dye- and pigment-based inkjet prints by VNIR hyperspectral imaging," Sensors 22, 603 (2022). doi: 10.3390/s22020603
56. Padma, S. and Sanjeevi, S., "Jeffries-Matusita-Spectral Angle Mapper (JM-SAM) spectral matching for species level mapping at Bhitarkanika, Muthupet and Pichavaram mangroves," Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 40, 1403-1411 (2014). doi: 10.5194/isprsarchives-XL-8-1403-2014
57. Gualtieri, J. A. and Chettri, S., "Support vector machines for classification of hyperspectral data," IGARSS 2000, IEEE 2000 Int'l. Geoscience and Remote Sensing Symposium, Vol. 2, 813-815 (IEEE, Piscataway, NJ, 2000). doi: 10.1109/IGARSS.2000.861712
58. Hamza, M. A., Alzahrani, J. S., Al-Rasheed, A., Alshahrani, R., Alamgeer, M., Motwakel, A., Yaseen, I., and Eldesouki, M. I., "Optimal and fully connected deep neural networks based classification model for unmanned aerial vehicle using hyperspectral remote sensing images," Can. J. Remote Sens. 48, 681-693 (2022). doi: 10.1080/07038992.2022.2116566
59. Riese, F. and Keller, S., "Soil texture classification with 1D convolutional neural networks based on hyperspectral data," ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. IV-2/W5, 615-621 (2019). doi: 10.5194/isprs-annals-IV-2-W5-615-2019
60. Chang, C.-I., "An information-theoretic approach to spectral variability, similarity, and discrimination for hyperspectral image analysis," IEEE Trans. Inf. Theory 46, 1927-1932 (2000). doi: 10.1109/18.857802
61. Sweet, J., Granahan, J., and Sharp, M., "An objective standard for hyperspectral image quality," Proc. AVIRIS Workshop (JPL, Pasadena, CA, 2000).
62. Kerekes, J. P., Cisz, A. P., and Simmons, R. E., "A comparative evaluation of spectral quality metrics for hyperspectral imagery," Proc. SPIE 5806, 469-480 (2005). doi: 10.1117/12.605916
63. Zhang, J., Zhang, Y., and Zhou, T., "Classification of hyperspectral data using support vector machine," Proc. 2001 Int'l. Conf. on Image Processing, Vol. 1, 882-885 (IEEE, Piscataway, NJ, 2001). doi: 10.1109/ICIP.2001.959187
64. Melgani, F. and Bruzzone, L., "Classification of hyperspectral remote sensing images with support vector machines," IEEE Trans. Geosci. Remote Sens. 42, 1778-1790 (2004). doi: 10.1109/TGRS.2004.831865
65. Ding, S. and Chen, L., "Classification of hyperspectral remote sensing images with support vector machines and particle swarm optimization," 2009 Int'l. Conf. on Information Engineering and Computer Science, 1-5 (IEEE, Piscataway, NJ, 2009). doi: 10.1109/ICIECS.2009.5363456
66. Jabbar, H. and Khan, R. Z., "Methods to avoid over-fitting and under-fitting in supervised machine learning (comparative study)," Comput. Sci. Commun. Instrum. Devices 70 (2015).
67. Bau, D., Zhu, J.-Y., Strobelt, H., Lapedriza, A., Zhou, B., and Torralba, A., "Understanding the role of individual units in a deep neural network," Proc. Natl. Acad. Sci. 117, 30071-30078 (2020). doi: 10.1073/pnas.1907375117
68. Schwing, A. G. and Urtasun, R., "Fully connected deep structured networks," arXiv preprint arXiv:1503.02351 (2015).
69. Ronneberger, O., Fischer, P., and Brox, T., "U-Net: convolutional networks for biomedical image segmentation," Int'l. Conf. on Medical Image Computing and Computer-Assisted Intervention, 234-241 (Springer, Cham, 2015). doi: 10.1007/978-3-319-24574-4_28
70. Yu, S., Jia, S., and Xu, C., "Convolutional neural networks for hyperspectral image classification," Neurocomputing 219, 88-98 (2017). doi: 10.1016/j.neucom.2016.09.010
71. Hong, D., Gao, L., Yao, J., Zhang, B., Plaza, A., and Chanussot, J., "Graph convolutional networks for hyperspectral image classification," IEEE Trans. Geosci. Remote Sens. 59, 5966-5978 (2021). doi: 10.1109/TGRS.2020.3015157
72. Hu, W., Huang, Y., Li, W., Zhang, F., and Li, H., "Deep convolutional neural networks for hyperspectral image classification," J. Sensors 2015, 1-12 (2015). doi: 10.1155/2015/258619
73. ZECCHI. Accessed: 10 June 2020.
74. Norsk Elektro Optikk. Accessed: 20 December 2020.
75. Spectralon multi-step targets. Accessed: 11 September 2020.
76. Welcome to Spectral Python (SPy). Accessed: 08 August 2020.
77. Fung, T. and LeDrew, E., "Application of principal components analysis to change detection," Photogramm. Eng. Remote Sens. 53, 1649-1658 (1987).
78. Szandała, T., "Review and comparison of commonly used activation functions for deep neural networks," Bio-inspired Neurocomputing, 203-224 (Springer, Cham, 2021). doi: 10.1007/978-981-15-5495-7_11
79. Kingma, D. P. and Ba, J., "Adam: a method for stochastic optimization," arXiv preprint arXiv:1412.6980 (2014).
80. Yaqub, M., Feng, J., Zia, M. S., Arshid, K., Jia, K., Rehman, Z. U., and Mehmood, A., "State-of-the-art CNN optimizer for brain tumor segmentation in magnetic resonance images," Brain Sci. 10, 427 (2020). doi: 10.3390/brainsci10070427
81. O'Malley, T., Bursztein, E., Long, J., Chollet, F., Jin, H., and Invernizzi, L., KerasTuner (2019). Accessed: 5 December 2022.