We present a hybrid multi-line scan approach that enables simultaneous acquisition of light field and photometric stereo data. While light fields capture mostly large-scale surface deviations and rely on visible surface structures, photometric stereo is primarily sensitive to fine surface deviations and does not rely on visible structures. Combining both approaches yields solid performance across a large variety of depths, ranging from macroscopic to microscopic scales. Unlike traditional photometric stereo, which relies on strobed illumination, our approach uses two constant light sources that nevertheless generate multiple illumination geometries in different portions of the camera's field of view. During acquisition, the object moves on a conveyor belt. Owing to the multi-line scan sensor, the object is observed from several viewing angles, and its movement causes each object point to be illuminated from several directions. Hence, during the acquisition process each object point is captured under all feasible viewing angles and lighting conditions. In our system, surface normals are derived using Lambert's cosine law. However, due to the lack of illumination variation orthogonal to the transport direction, surface normals can be inferred only along the transport direction. We present a variational approach for 3D depth reconstruction, designed specifically for our hybrid setup, that jointly takes into account light field as well as photometric stereo depth cues and provides one globally consistent solution. Depth maps obtained by the proposed algorithm exhibit both large-scale accuracy and sensitivity to fine surface details.
Doris Antensteiner, Svorad Štolc, Kristián Valentín, Bernhard Blaschitz, Reinhold Huber-Mörk, Thomas Pock, "High-Precision 3D Sensing with Hybrid Light Field & Photometric Stereo Approach in Multi-Line Scan Framework," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Intelligent Robotics and Industrial Applications using Computer Vision, 2017, pp. 52-60, https://doi.org/10.2352/ISSN.2470-1173.2017.9.IRIACV-268
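As a rough illustration of the Lambert's-law normal recovery described in the abstract, the following Python sketch (with hypothetical function names and illustrative light/surface values, not taken from the paper) solves a two-light Lambertian model restricted to the transport plane. Because both lights lie in that plane, only the transport-direction component of the surface normal is recoverable, mirroring the limitation stated above:

```python
import numpy as np

def normal_slope_from_two_lights(I1, I2, l1, l2):
    """Recover the albedo-scaled 2D surface normal (transport plane only)
    from two Lambertian intensity measurements I = rho * (n . l).

    I1, I2 : measured intensities under lights l1, l2
    l1, l2 : 2D unit light directions in the transport-height plane
    Returns the unit normal n and the albedo rho.
    """
    L = np.array([l1, l2], dtype=float)                       # 2x2 light matrix
    b = np.linalg.solve(L, np.array([I1, I2], dtype=float))   # b = rho * n
    rho = np.linalg.norm(b)
    return b / rho, rho

# Synthetic check: a surface patch tilted 10 degrees in transport direction,
# observed under two lights at +/-30 degrees (all values illustrative).
theta = np.deg2rad(10.0)
n_true = np.array([np.sin(theta), np.cos(theta)])
rho_true = 0.8
l1 = np.array([np.sin(np.deg2rad(30.0)), np.cos(np.deg2rad(30.0))])
l2 = np.array([np.sin(np.deg2rad(-30.0)), np.cos(np.deg2rad(-30.0))])
I1 = rho_true * n_true.dot(l1)
I2 = rho_true * n_true.dot(l2)

n_est, rho_est = normal_slope_from_two_lights(I1, I2, l1, l2)
print(n_est, rho_est)
```

With per-pixel normals in hand, the slope along the transport direction can be integrated into a photometric-stereo depth cue, which the paper's variational formulation then fuses with the light-field depth cue.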