Multi-line scan systems have been introduced as linear light field cameras and subsequently applied to 3D ranging in industrial inline applications. Until now, there has been no viable calibration method to determine the intrinsic and extrinsic parameters of such a system that would allow (i) metric measurements and (ii) geometric rectification of the line-scan images. Our work closes this gap by exploiting special properties of a typical multi-line scan setup, which internally uses a fast area-scan sensor that can also be operated in line-scan mode. This allows standard calibration approaches to be used to determine the intrinsic camera parameters. We introduce a novel method to compute the extrinsic camera parameters w.r.t. the transport direction. Subsequently, the images of all constructed line-scan views are rectified. The rectification takes the estimated camera model parameters into account in order to generate an EPI-corrected linear light field that is suitable for accurate 3D reconstruction. Furthermore, we introduce a novel calibration target, characterized by an asymmetric central element, together with a tailored fast detection algorithm. The proposed method significantly improves the 3D reconstruction quality and allows for absolute 3D measurements in metric units using the multi-line scan setup. Its performance is demonstrated on several representative real-world examples.
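The key enabler is that the internal area-scan sensor can be calibrated with standard tools. A minimal sketch of this step under stated assumptions, using OpenCV with an ordinary chessboard as a stand-in for the proposed asymmetric target (`area_scan_frames` and `line_scan_view` are hypothetical inputs, not identifiers from the paper):

```python
import cv2
import numpy as np

# Stand-in planar target: a chessboard replaces the paper's asymmetric target
# for illustration only. pattern_size / square_size_mm are assumed values.
pattern_size = (9, 6)       # inner corners per row / column
square_size_mm = 10.0       # physical edge length of one square

# Planar 3D target points (Z = 0), scaled to metric units.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size_mm

obj_points, img_points = [], []
for frame in area_scan_frames:  # hypothetical list of grayscale area-scan images
    found, corners = cv2.findChessboardCorners(frame, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Standard area-scan calibration: the resulting intrinsics K and distortion d
# also apply to the sensor rows that form the line-scan views.
rms, K, d, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, area_scan_frames[0].shape[::-1], None, None)

# Rectify one constructed line-scan view: removing the lens distortion is the
# part of the EPI correction that depends only on the intrinsic parameters.
rectified_view = cv2.undistort(line_scan_view, K, d)
```

The extrinsic estimation w.r.t. the transport direction and the full EPI rectification are specific to the paper's method and are not reproduced in this sketch.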
In this paper, we introduce a multiple-view depth generation method for heterogeneous cameras based on 3D reconstruction. The main goal of this research is to generate accurate depth images at each color camera viewpoint by using depth cameras placed at different positions. The conventional filter-based framework suffers from critical problems such as truncated depth regions and mixed depth values, which degrade not only the quality of the depth images but also that of the synthesized intermediate views. The proposed framework is instead based on 3D reconstruction from multiple depth cameras. The system setup consists of two camera layers arranged in parallel: four color cameras on the lower layer and two depth cameras on the upper layer. First, we estimate accurate camera parameters using a camera calibration method in an offline process. In the online process, we capture synchronized color and depth images from the heterogeneous multiple-camera system. Next, we generate 3D point clouds from the 2D depth images and register them with the iterative closest point (ICP) method, obtaining an integrated 3D point cloud model. After that, we create a volumetric surface model from the sparse 3D point clouds using a truncated signed distance function (TSDF). Finally, we estimate the depth image at each color view by projecting the volumetric 3D model. In the experimental results and discussion section, we verify that the proposed framework not only resolves the aforementioned problems but also has several advantages over the conventional framework.
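A minimal sketch of this online pipeline under stated assumptions, using Open3D (`depth_a`, `depth_b`, `T_b_to_a`, and `color_pose` are hypothetical placeholders; Poisson surface reconstruction stands in for the TSDF step, since Open3D's TSDF volume integrates depth images rather than raw point clouds):

```python
import numpy as np
import open3d as o3d

# Illustrative pinhole intrinsics; real values come from the offline calibration.
# depth_a, depth_b (o3d.geometry.Image), T_b_to_a, and color_pose are
# hypothetical placeholders, not identifiers from the paper.
intr = o3d.camera.PinholeCameraIntrinsic(640, 480, 525.0, 525.0, 319.5, 239.5)

# Step 1: back-project each 2D depth image into a 3D point cloud.
pcd_a = o3d.geometry.PointCloud.create_from_depth_image(depth_a, intr)
pcd_b = o3d.geometry.PointCloud.create_from_depth_image(depth_b, intr)

# Step 2: register cloud B onto cloud A with point-to-point ICP, seeded with
# the calibrated extrinsic guess T_b_to_a, then merge into one cloud.
icp = o3d.pipelines.registration.registration_icp(
    pcd_b, pcd_a, 0.05, T_b_to_a,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
merged = pcd_a + pcd_b.transform(icp.transformation)

# Step 3: fit a volumetric surface to the sparse merged cloud. Poisson
# reconstruction is used here in place of the paper's TSDF fusion.
merged.estimate_normals()
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(merged, depth=8)

# Step 4: synthesize the depth image at a color view by projecting the model
# vertices through the color camera (a simple point-splat z-buffer).
pts = np.asarray(mesh.vertices)
pts_cam = np.c_[pts, np.ones(len(pts))] @ color_pose.T   # world -> color camera
front = pts_cam[:, 2] > 1e-6                             # keep points in front
p, K = pts_cam[front], intr.intrinsic_matrix
u = np.round(K[0, 0] * p[:, 0] / p[:, 2] + K[0, 2]).astype(int)
v = np.round(K[1, 1] * p[:, 1] / p[:, 2] + K[1, 2]).astype(int)
depth_view = np.full((intr.height, intr.width), np.inf)
ok = (u >= 0) & (u < intr.width) & (v >= 0) & (v < intr.height)
np.minimum.at(depth_view, (v[ok], u[ok]), p[ok, 2])
```

A mesh rasterizer would produce denser depth maps than this vertex splatting, but the sketch captures the projection step that replaces the conventional filter-based view warping.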