With advances in sensor technology, multispectral cameras are becoming more widely available and more widely used. Having more channels than a three-channel camera is an advantage, but the data must be handled appropriately. In this work, we are interested in multispectral camera characterization. We measure characterization performance with two methods: linear mapping and spectral reconstruction. Linear mapping is the standard approach for 3-channel camera characterization, and we apply the same method to a multispectral camera. We also investigate whether spectral reconstruction from the camera data, rather than linear mapping, improves color reproduction. The recovery of reflectance spectra is an under-determined problem, and certain assumptions are required to obtain a unique solution. Linear methods are generally used for spectral reconstruction from camera data and rely on training with known spectra. These methods can perform well when the test data consist of a subset of the training spectra; however, their performance drops significantly when the test data differ. In this paper, we also investigate the role of the training spectra in camera characterization. Five different spectral reflectance datasets are used to train the camera characterization models. Finally, we compare the linear mapping and spectral reconstruction methods for multispectral camera characterization and also test the characterization framework on hyperspectral images of natural scenes.
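The two characterization routes contrasted in the abstract can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the channel counts, the random "sensitivities," and the random stand-in for color-matching functions are all hypothetical, chosen only to show that linear mapping regresses camera responses straight to tristimulus values, while spectral reconstruction first recovers a full reflectance spectrum and projects it to tristimulus values afterwards.

```python
import numpy as np

# Hypothetical setup: an 8-channel multispectral camera and 31-band
# reflectance spectra (e.g. 400-700 nm in 10 nm steps).
np.random.seed(0)
n_channels, n_bands, n_train = 8, 31, 100

# Simulated training data (random, for illustration only).
reflectances = np.random.rand(n_train, n_bands)       # rows = known training spectra
sensitivities = np.random.rand(n_channels, n_bands)   # stand-in camera sensitivities
camera_resp = reflectances @ sensitivities.T          # camera responses (n_train x n_channels)

# Stand-in for colour-matching functions projecting a spectrum to 3 values.
cmf = np.random.rand(3, n_bands)
xyz = reflectances @ cmf.T                            # target tristimulus values

# Route 1 - linear mapping: least-squares matrix M taking camera
# responses directly to tristimulus values.
M, *_ = np.linalg.lstsq(camera_resp, xyz, rcond=None)         # shape (8, 3)

# Route 2 - spectral reconstruction: linear regression matrix W recovering
# full spectra from camera responses (the under-determined step that
# depends on the training spectra), then projecting to tristimulus values.
W, *_ = np.linalg.lstsq(camera_resp, reflectances, rcond=None)  # shape (8, 31)

xyz_via_mapping = camera_resp @ M
xyz_via_spectra = (camera_resp @ W) @ cmf.T

err_mapping = np.mean(np.abs(xyz_via_mapping - xyz))
err_spectra = np.mean(np.abs(xyz_via_spectra - xyz))
print(err_mapping, err_spectra)
```

Because route 2 depends on the recovered spectra, its accuracy on unseen data hinges on how representative the training reflectances are, which is exactly the sensitivity the abstract investigates with five different training datasets.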
To investigate the factors necessary for reproducing actual star images in a planetarium, the authors conducted a psychophysical experiment using projection stimuli generated by varying three parameters of the stars: color, luminance, and size. A reference projection pattern was designed to be perceptually (rather than physically) faithful to the actual starry sky by a group with extensive astronomical observation experience. A reproduction system was constructed to project ten types of star image patterns, each with different parameters, onto a planetarium dome, and evaluation experiments with twenty observers were conducted. The results indicate that the fidelity of the reproduction was sensitive to the intensity of the stars: a change in either direction (stars brighter or darker than the reference pattern) resulted in a loss of fidelity. In addition, although fidelity improved when the projected stars were small, stars projected larger than the reference pattern were evaluated markedly negatively. As for differences in color, the evaluation results suggested a wide tolerance before fidelity was lost. © 2017 Society for Imaging Science and Technology. [DOI: 10.2352/J.ImagingSci.Technol.2017.61.6.060401]
This paper describes a method of improving the color quality of color images by colorizing them. In particular, color quality may suffer from improper white balance and other factors such as inadequate camera characterization. Colorization generally refers to the problem of turning a luminance image into a realistic-looking color image, and impressive results have been reported in the computer vision literature. Based on the assumption that if colorization can successfully predict colors from luminance data alone, then it should certainly be able to predict colors from color data, the proposed method employs colorization to 'color' color images. Tests show that the proposed method quite effectively removes color casts, including spatially varying ones, created by changes in the illumination. The colorization method itself is based on training a deep neural network to learn the connection between the colors in an improperly balanced image and those in a properly balanced one. Unlike many traditional white-balance methods, the proposed method is image-in-image-out: it neither explicitly estimates the chromaticity of the illumination nor applies a von Kries-type adaptation step. The colorization method is also spatially varying and so handles spatially varying illumination conditions without further modification.
There are currently no standards for the characterization and calibration of cameras used on unmanned aerial systems (UASs). Without such standards, the color information in images captured with these devices is not meaningful. By providing standard color calibration targets, code, and procedures, users will be able to obtain color images that provide information valuable for agriculture, infrastructure, water quality, and even cultural heritage applications. The objective of this project is to develop test targets and a methodology for color calibrating UAS cameras. We are working to develop application-specific color targets, the necessary code, and a qualitative procedure for conducting UAS camera calibration. To generate the color targets, we will follow the approaches used in the development of ISO 17321-1: Graphic technology and photography — Colour characterisation of digital still cameras (DSCs) — Part 1: Stimuli, metrology and test procedures, as well as research evaluating application-specific camera targets. This report reviews why a new industry standard is needed and the questions that must be addressed in developing one.