In complex scenes, light reflected by surfaces causes secondary illumination, which contributes significantly to the actual light in the space (the "light field"). Secondary illumination depends on the primary illumination, geometry, and materials of a space. Hence, primary and secondary illumination can have non-identical spectral properties and render object colors differently. Lighting technology and research predominantly rely on the color rendering properties of the illuminant; little attention has been given to the impact of secondary illumination on the "effective color rendering" within light fields. Here we measure the primary and secondary illumination for a simple spatial geometry and demonstrate empirically their differential "effective color rendering" properties. We found that color distortions due to secondary illumination from chromatic furnishing materials led to systematic and significant color shifts, and to major differences between the lamp-specified color rendition and color temperature and the actual light-field-based "effective color rendering" and "effective color temperature". On the basis of these results, we propose a methodological shift from assessing only the color rendering and color temperature of illuminants to also assessing the "effective color rendering and temperature" in context.
We present a hybrid multi-line scan approach that enables simultaneous acquisition of light field and photometric stereo data. While light fields mostly capture large-scale surface deviations and rely on visible surface structures, photometric stereo is primarily sensitive to fine surface deviations and does not rely on visible structures. Combining both approaches yields solid performance across a large range of depths, from macro- to microscopic scales. Contrary to traditional photometric stereo, which relies on strobed illumination, our approach uses two constant light sources that nevertheless generate multiple illumination geometries in different portions of the camera's field of view. The object moves on a conveyor belt during acquisition, and the multi-line scan sensor observes it from several viewing angles. The object's movement causes each object point to be illuminated from several illumination directions. Hence, during acquisition each object point is captured under all feasible viewing angles and lighting conditions. In our system, surface normals are derived using Lambert's cosine law. However, because no illumination variation spans the direction orthogonal to the transport direction, surface normals can be inferred only along the transport direction. We present a variational approach to 3D depth reconstruction designed specifically for our hybrid setup; it jointly takes into account light field and photometric stereo depth cues and provides one globally consistent solution. Depth maps obtained by the proposed algorithm show both large-scale accuracy and sensitivity to fine surface details.
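The normal-estimation step described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: it assumes a Lambertian surface observed under two known light directions lying in the transport plane, so only the normal's components in that plane are recoverable, mirroring the restriction stated in the abstract. All function and variable names are illustrative.

```python
import numpy as np

def normals_in_transport_plane(I, L):
    """Recover the albedo-scaled surface normal in the transport (x-z) plane.

    Under Lambert's cosine law, I_k = rho * dot(n, l_k) for each light k
    (assuming both dot products are positive, i.e. no self-shadowing).
    I: (2,) observed intensities; L: (2, 2) rows are (x, z) light directions.
    Returns the unit normal's (x, z) components and the albedo rho.
    """
    b = np.linalg.solve(L, I)      # b = rho * (n_x, n_z)
    rho = np.linalg.norm(b)
    return b / rho, rho

# Synthetic check: a known tilted normal under two symmetric oblique lights.
n_true = np.array([0.6, 0.8])                    # unit vector in the x-z plane
L = np.array([[ np.sin(0.3), np.cos(0.3)],
              [-np.sin(0.3), np.cos(0.3)]])      # two constant light directions
I = 0.9 * L @ n_true                             # simulate with albedo rho = 0.9
n_est, rho_est = normals_in_transport_plane(I, L)
```

In the actual system the two constant sources produce different illumination geometries across the sensor's field of view, and the conveyor motion supplies the multiple observations per point; the 2x2 solve above stands in for that per-point inversion.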