A lighting-based multispectral imaging system using an RGB camera and a projector is one of the most practical and low-cost ways to acquire multispectral observations for estimating a scene's spectral reflectance. However, existing projector-based systems assume that the spectral power distribution (SPD) of each projector primary is known, which requires additional equipment such as a spectrometer to measure the SPD. In this paper, we present a method for jointly estimating the spectral reflectance and the SPD of each projector primary. In addition to adopting a common spectral reflectance basis model, we model the projector's SPD with a low-dimensional model whose basis functions are derived from a newly collected database of projector SPDs. The spectral reflectances and the projector's SPDs are then alternately estimated based on the two basis models. We experimentally evaluate the performance of our joint estimation using different numbers of projected illuminations and investigate the potential of spectral reflectance estimation using a projector with unknown SPDs.
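The alternating estimation described above is, at its core, a bilinear alternating least-squares loop: fix the SPD coefficients and solve for reflectance coefficients, then fix the reflectances and solve for the SPDs. The following is a minimal synthetic sketch of that idea, not the paper's actual algorithm; the camera sensitivities and basis functions are random placeholders, and the dimensions (6 reflectance and 4 SPD basis functions) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W, K, P = 31, 5, 20          # wavelength samples, projector primaries, surfaces
C = rng.random((3, W))       # synthetic RGB camera sensitivities
Br = rng.random((W, 6))      # reflectance basis functions (6-dim model)
Bl = rng.random((W, 4))      # projector-SPD basis functions (4-dim model)

# Synthetic ground truth: per-surface reflectance and per-primary SPD weights.
A_true = rng.random((6, P))
B_true = rng.random((4, K))
# Observation y[c,k,p] = sum_w C[c,w] * SPD_k[w] * reflectance_p[w]
Y = np.einsum('cw,wk,wp->ckp', C, Bl @ B_true, Br @ A_true)

def residual(A, B):
    return np.linalg.norm(np.einsum('cw,wk,wp->ckp', C, Bl @ B, Br @ A) - Y)

# Alternating least squares: each half-step is a linear least-squares solve,
# so the residual is monotonically non-increasing.
A_est, B_est = np.ones((6, P)), np.ones((4, K))
res0 = residual(A_est, B_est)
for _ in range(20):
    Lhat = Bl @ B_est
    # Fix SPDs: stack the K primaries' RGB equations, solve per surface.
    M = np.concatenate([C @ (Lhat[:, [k]] * Br) for k in range(K)])  # (3K, 6)
    A_est = np.linalg.lstsq(M, Y.transpose(1, 0, 2).reshape(3 * K, P),
                            rcond=None)[0]
    Rhat = Br @ A_est
    # Fix reflectances: stack the P surfaces' equations, solve per primary.
    N = np.concatenate([C @ (Rhat[:, [p]] * Bl) for p in range(P)])  # (3P, 4)
    B_est = np.linalg.lstsq(N, Y.transpose(2, 0, 1).reshape(3 * P, K),
                            rcond=None)[0]
```

Note that the bilinear model has an inherent scale ambiguity between reflectance and SPD, so the fit is only recoverable up to scale; the paper's basis models and multiple illuminations constrain this in practice.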
This paper proposes a compact and reliable method for estimating the bispectral Donaldson matrices of fluorescent objects using multispectral imaging data. We suppose an image acquisition system that allows multiple illuminant projections onto the object surface and multiple response channels in the visible range. The Donaldson matrix is modeled as a two-dimensional array over the excitation range (350–700 nm) and the reflection and emission ranges (400–700 nm). The observation model is described using the spectral sensitivities of a camera and the spectral functions of reflectance, emission, and excitation. The problem of estimating the spectral functions is formulated as a least-squares problem that minimizes both the residual error of the observations and the roughness of the spectral functions. An iterative algorithm is developed to obtain optimal estimates of all the spectral functions. The performance of the proposed method is examined in detail in simulation experiments using multispectral imaging data.
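The core of such a formulation, minimizing residual error plus a roughness penalty, is a regularized least-squares solve. Below is a minimal sketch of just that core (not the paper's full bispectral algorithm), using a second-difference operator as the roughness measure and a hypothetical smoothing weight `lam`:

```python
import numpy as np

def smooth_lstsq(M, y, lam=1.0):
    """Solve min_f ||M f - y||^2 + lam * ||D2 f||^2, where D2 is the
    second-difference operator penalizing roughness of the spectral
    function f. Closed form: (M'M + lam D2'D2) f = M'y."""
    n = M.shape[1]
    D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2, n) second differences
    return np.linalg.solve(M.T @ M + lam * D2.T @ D2, M.T @ y)

# Toy usage: denoise a directly observed smooth spectral function (M = I).
x = np.linspace(0, np.pi, 31)
truth = np.sin(x)
noisy = truth + 0.05 * np.random.default_rng(1).standard_normal(31)
est = smooth_lstsq(np.eye(31), noisy, lam=5.0)
```

Because the estimate minimizes the penalized objective, its roughness can never exceed that of the raw observation; larger `lam` trades fidelity for smoothness.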
Spectral reconstruction (SR) algorithms attempt to map RGB images to hyperspectral images. Classically, simple pixel-based regression is used to solve for this SR mapping; more recently, patch-based deep neural networks (DNNs) have been considered (with a modest performance increment). For either method, the 'training' process typically minimizes a Mean-Squared-Error (MSE) loss. Curiously, in recent research, SR algorithms are evaluated and ranked based on a relative percentage error, the so-called Mean Relative Absolute Error (MRAE), which behaves very differently from the MSE loss function. The most recent DNN approaches - perhaps unsurprisingly - directly optimize for this MRAE error in training so as to match the evaluation criterion.

In this paper, we show how pixel-based regression methods can also be reformulated so that they, too, optimize a relative spectral error. Our Relative Error Least-Squares (RELS) approach minimizes an error that is similar to MRAE. Experiments demonstrate that regression models based on RELS deliver better spectral recovery, with up to a 10% improvement in mean performance and a 20% improvement in worst-case performance, depending on the method.
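A relative-error least-squares objective of this kind can be cast as an ordinary weighted least squares in which each equation is scaled by the reciprocal of its target value. The sketch below illustrates that general idea; it is an assumption-laden illustration, not the paper's RELS formulation, and the `eps` floor guarding near-zero targets is a hypothetical choice.

```python
import numpy as np

def rels(A, y, eps=1e-8):
    """Relative-error least squares: minimize sum_i ((A x - y)_i / y_i)^2.
    Implemented as ordinary least squares with each row scaled by 1/|y_i|."""
    w = 1.0 / np.maximum(np.abs(y), eps)   # eps avoids division by ~0 targets
    return np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0]

# Toy usage: on a consistent system the exact solution is recovered.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
x_true = np.array([0.5, 1.5])
x_hat = rels(A, A @ x_true)
```

Unlike plain MSE, this weighting makes errors on dim spectral channels count as much as errors on bright ones, which is the behavior MRAE-style evaluation rewards.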
Digital preservation has evolved from an early-stage field based heavily on research and the sharing of information into a nascent industry based on practical activity. In this transition, there is a risk that the vital activity of sharing information and expertise declines in favor of the day-to-day practicalities of caring for content. This work explores how the Preservation Action Registries (PAR) initiative can not only help to bridge that gap but, in doing so, create new opportunities that make automated digital preservation a practical reality even for non-expert users. We describe a proof-of-principle demonstration of the automated application of a digital preservation policy, and of subsequent changes to that policy.
Similarity analysis is a major issue in computer vision. Similarity is expressed as a scalar distance measure that quantifies the resemblance between two objects. This distance is used in many areas, such as image compression, image matching, biometrics, shape recognition, object recognition, the manufacturing industry, and data analysis. Several studies have shown that the choice of similarity measure depends on the type of data. This paper presents an evaluation of several similarity measures from the literature, together with a proposed similarity function that takes image features into account. The features concerned are textures and key points. The data used in this study come from multispectral imaging, using visible and thermal infrared images.
Time domain continuous imaging (TDCI) models scene appearance as a set of continuous waveforms, each recording how the value of an individual pixel changes over time. When a set of timestamped still images is converted into a TDCI stream, a pixel value change record is created whenever the pixel value differs from its previous value by more than the error model classifies as noise. Virtual exposures may then be rendered from the TDCI stream for arbitrary time intervals by integrating the area under the pixel value waveforms. Using conventional cameras, multispectral and high dynamic range imaging both involve combining multiple exposures; the needed variations in exposure and/or spectral filtering generally skew the time periods represented by the component exposures or compromise capture quality in other ways. This paper describes a simple approach in which the image data are converted to a TDCI representation to support generation of a higher-quality fusion of the separate captures.
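Rendering a virtual exposure from change records reduces to integrating a piecewise-constant waveform over the requested interval. The helper below is a minimal sketch of that integration under the simplifying assumption that each pixel's waveform is a sorted list of `(timestamp, value)` records, each value holding until the next record; the actual TDCI representation and error model are richer than this.

```python
def virtual_exposure(changes, t0, t1):
    """Mean pixel value over [t0, t1] for a piecewise-constant waveform.

    `changes` is a sorted list of (timestamp, value) records; each value
    holds from its timestamp until the next record's timestamp.
    """
    total = 0.0
    for i, (t, v) in enumerate(changes):
        t_next = changes[i + 1][0] if i + 1 < len(changes) else t1
        lo, hi = max(t, t0), min(t_next, t1)
        if hi > lo:                       # segment overlaps the exposure
            total += v * (hi - lo)        # area under this constant segment
    return total / (t1 - t0)
```

Because the interval endpoints are arbitrary, exposures for different spectral channels can be rendered over exactly the same time span, which is what makes the fusion step well aligned in time.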
We demonstrate the sufficiency of using as few as five LEDs of distinct spectra for color-accurate multispectral lighting reproduction and solve for the optimal set of five from among 11 such commercially available LEDs. We leverage published spectral reflectance, illuminant, and camera spectral sensitivity datasets to show that two approaches to lighting reproduction - directly matching illuminant spectra, and matching material color appearance as observed by one or more cameras or a human observer - yield the same LED selections. Our proposed optimal set of five LEDs includes red, green, and blue LEDs with narrow emission spectra, along with white and amber LEDs with broader spectra.
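With only 11 candidate LEDs and subsets of size five, the selection can be brute-forced: score every subset by how well a least-squares mixture of its spectra matches a target illuminant SPD. The sketch below shows that search on synthetic spectra; it is an illustration of the subset-selection idea, not the paper's method, and a practical version would constrain the mixing weights (drive levels) to be nonnegative.

```python
import itertools
import numpy as np

def best_led_subset(led_spds, target, k=5):
    """Brute-force the k-column subset of `led_spds` (wavelengths x LEDs)
    whose least-squares mixture best matches the target illuminant SPD."""
    best_err, best_idx, best_w = np.inf, None, None
    for idx in itertools.combinations(range(led_spds.shape[1]), k):
        A = led_spds[:, idx]
        w = np.linalg.lstsq(A, target, rcond=None)[0]  # mixture weights
        err = np.linalg.norm(A @ w - target)           # spectral mismatch
        if err < best_err:
            best_err, best_idx, best_w = err, idx, w
    return best_idx, best_w, best_err
```

With C(11, 5) = 462 candidate subsets, exhaustive search is cheap; richer objectives (e.g., color appearance under several cameras) only change the per-subset scoring function.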