Spectral reconstruction (SR) algorithms attempt to map RGB images to hyperspectral images. Classically, simple pixel-based regression is used to solve for this SR mapping; more recently, patch-based Deep Neural Networks (DNNs) have been considered, with a modest performance increment. For either method, the training process typically minimizes a Mean-Squared-Error (MSE) loss. Curiously, in recent research, SR algorithms are evaluated and ranked on a relative percentage error, the so-called Mean Relative Absolute Error (MRAE), which behaves very differently from the MSE loss function. The most recent DNN approaches, perhaps unsurprisingly, directly optimize for this MRAE error during training so as to match the new evaluation criterion.

In this paper, we show how pixel-based regression methods can also be reformulated so that they too optimize a relative spectral error. Our Relative Error Least-Squares (RELS) approach minimizes an error that is similar to MRAE. Experiments demonstrate that regression models based on RELS deliver better spectral recovery, with up to a 10% improvement in mean performance and a 20% improvement in worst-case performance, depending on the method.
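To make the contrast between the MSE training loss, the MRAE evaluation metric, and a relative-error regression concrete, the following is a minimal NumPy sketch. The names (`mrae`, `rels_regression`, `eps`) are illustrative assumptions rather than the paper's implementation; in particular, `rels_regression` realizes the relative-error idea as a per-band weighted least-squares fit, which is one natural reading of RELS, not necessarily the authors' exact formulation.

```python
import numpy as np

def mrae(gt, rec, eps=1e-6):
    """Mean Relative Absolute Error between ground-truth and reconstructed spectra."""
    return np.mean(np.abs(gt - rec) / (gt + eps))

def mse_regression(rgb, spectra):
    """Ordinary least-squares map from RGB to spectra, minimizing squared error.
    rgb: (N, 3) camera responses; spectra: (N, K) ground-truth spectra."""
    M, *_ = np.linalg.lstsq(rgb, spectra, rcond=None)
    return M  # (3, K); reconstruct with rgb @ M

def rels_regression(rgb, spectra, eps=1e-6):
    """Relative-error least squares: each training sample and band is weighted by
    the reciprocal of its ground-truth value, so large spectral values do not
    dominate the fit. Solved band-by-band as weighted least squares."""
    N, K = spectra.shape
    M = np.zeros((rgb.shape[1], K))
    for k in range(K):
        w = 1.0 / (spectra[:, k] + eps)      # relative-error weights
        A = rgb * w[:, None]                 # weighted design matrix
        b = spectra[:, k] * w                # weighted targets
        M[:, k], *_ = np.linalg.lstsq(A, b, rcond=None)
    return M
```

With this formulation, each summand of the fitting error is (reconstructed minus true, divided by true) squared, i.e. a squared relative error per band, which is in the same spirit as the MRAE metric used for evaluation.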
With advances in sensor technology, multispectral cameras are becoming more widely available and more widely used. Having more information than a three-channel camera provides has its advantages, but the data must be handled appropriately. In this work, we are interested in multispectral camera characterization. We measure characterization performance with two methods: linear mapping and spectral reconstruction. Linear mapping is the standard approach for 3-channel camera characterization, and we apply the same method to a multispectral camera. We also investigate whether spectral reconstruction from the camera data, rather than linear mapping, improves the performance of color reproduction. The recovery of reflectance spectra is an under-determined problem, and certain assumptions are required to obtain a unique solution. Linear methods are generally used for spectral reconstruction from camera data and are trained on known spectra. These methods can perform well when the test data consists of a subset of the training spectra; however, their performance is significantly reduced when the test data differ. In this paper, we also investigate the role of the training spectra in camera characterization. Five different spectral reflectance datasets are used to train the camera characterization models. Finally, we compare the linear mapping and spectral reconstruction methods for multispectral camera characterization and also test the camera characterization framework on hyperspectral images of natural scenes.
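As a rough illustration of the two characterization routes compared here, the sketch below fits (a) a direct linear map from C-channel camera responses to CIE XYZ and (b) a linear spectral reconstruction followed by projection onto illuminant-weighted colour-matching functions. The function names, the plain least-squares fits, and the `cmf_weighted` input are assumptions for illustration; the regression methods, regularization, and datasets used in the paper may differ.

```python
import numpy as np

def linear_characterization(cam, xyz):
    """Route 1: direct linear mapping.
    cam: (N, C) camera responses to N training patches; xyz: (N, 3) CIE XYZ.
    Returns a (C, 3) matrix; apply as cam @ M."""
    M, *_ = np.linalg.lstsq(cam, xyz, rcond=None)
    return M

def spectral_characterization(cam, reflectances, cmf_weighted):
    """Route 2: spectral reconstruction then colorimetry.
    reflectances: (N, W) training reflectance spectra sampled at W wavelengths;
    cmf_weighted: (W, 3) colour-matching functions multiplied by the illuminant
    spectrum, sampled at the same W wavelengths.
    Returns the composite (C, 3) map from camera responses to XYZ."""
    R, *_ = np.linalg.lstsq(cam, reflectances, rcond=None)   # (C, W) recovery matrix
    return R @ cmf_weighted
```

In this framing, the question the paper asks is whether the composite map of route 2, trained on a given reflectance dataset, reproduces colors better than the direct map of route 1 when the test spectra resemble, or differ from, the training spectra.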
In order to investigate the factors necessary for reproducing actual star images in a planetarium, the authors conducted a psychophysical experiment using projection stimuli generated by varying three parameters of the stars: color, luminance, and size. A reference projection pattern was designed to be perceptually (rather than physically) faithful to the actual starry sky by a group of observers with abundant astronomical observation experience. A reproduction system was constructed to project ten types of star image patterns onto a planetarium dome using different parameter settings, and evaluation experiments with twenty observers were conducted. The results indicate that the fidelity of the reproduction was sensitive to the intensity of the stars: whether a star was brighter or darker than in the reference pattern, the result was a loss of fidelity. In addition, although fidelity improved when the projected stars were small, stars projected larger than the reference pattern were judged markedly worse. As for differences in color, the evaluation results suggested a wide tolerance before fidelity was lost.