In this paper, we introduce a multiple-view depth generation method for heterogeneous cameras based on 3D reconstruction. The main goal of this research is to generate accurate depth images at each color camera viewpoint by using depth cameras placed at different positions. The conventional filter-based framework suffers from critical problems such as truncated depth regions and mixed depth values, which degrade not only the quality of the depth images but also that of synthesized intermediate views. The proposed framework is based on 3D reconstruction from multiple depth cameras. The system setup consists of two camera layers arranged in parallel: four color cameras on the lower layer and two depth cameras on the upper layer. First, we estimate accurate camera parameters through camera calibration in an offline process. In the online process, we capture synchronized color and depth images from the heterogeneous multiple-camera system. Next, we generate 3D point clouds from the 2D depth images and register them with the iterative closest point (ICP) method, obtaining an integrated 3D point cloud model. After that, we create a volumetric surface model from the sparse 3D point cloud using the truncated signed distance function (TSDF). Finally, we estimate the depth image at each color view by projecting the volumetric 3D model. In the experimental results and discussion section, we verify that the proposed framework not only resolves the aforementioned problems but also has several advantages over the conventional framework.
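The first step of the online pipeline, generating a 3D point cloud from a 2D depth image, can be sketched as a standard pinhole back-projection. This is a minimal illustration, not the paper's implementation; the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are hypothetical stand-ins for the values estimated in the offline calibration step.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a 2D depth image into a 3D point cloud
    with the pinhole camera model (hypothetical intrinsics)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx  # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy  # Y = (v - cy) * Z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy 2x2 depth image, unit focal length, principal point at the origin
depth = np.array([[1.0, 2.0],
                  [0.0, 4.0]])
pc = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

Point clouds produced this way for each depth camera are what the ICP step then registers into the integrated model.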
In this paper, we present a new real-time depth estimation method using a stereo color camera and a ToF depth sensor. First, we obtain initial depth information from the ToF depth sensor. We then exploit this initial depth information to narrow the disparity range by performing 3-D warping from the position of the ToF camera to that of the stereo camera, which accelerates the algorithm. We construct the cost volume by computing the intensity difference and the truncated absolute difference of gradients. After narrowing the disparity range, we aggregate the cost volume. Experimental results show that the proposed method represents disparity details well and improves quality in areas where stereo matching is vulnerable.
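The cost-volume construction over a narrowed disparity range can be sketched as follows. This is a minimal illustration under simplifying assumptions: only the truncated absolute intensity term is shown (the paper also adds a gradient term), a single global range `[d_min, d_max]` stands in for the per-pixel range obtained from the warped ToF depth, and the truncation threshold `tau` is a hypothetical parameter.

```python
import numpy as np

def truncated_cost_volume(left, right, d_min, d_max, tau=10.0):
    """Cost volume over a narrowed disparity range [d_min, d_max] using
    the truncated absolute intensity difference; tau is a hypothetical
    truncation threshold."""
    h, w = left.shape
    # Pixels with no valid match at a disparity keep the maximum cost tau
    cost = np.full((d_max - d_min + 1, h, w), tau)
    for i, d in enumerate(range(d_min, d_max + 1)):
        if d == 0:
            cost[i] = np.minimum(np.abs(left - right), tau)
        else:
            # Left pixel (y, x) is compared against right pixel (y, x - d)
            cost[i, :, d:] = np.minimum(np.abs(left[:, d:] - right[:, :w - d]), tau)
    # Winner-take-all over the raw costs (before any aggregation step)
    disparity = d_min + np.argmin(cost, axis=0)
    return cost, disparity

# Synthetic pair: the left image is the right image shifted right by 2 pixels
right = np.tile(np.arange(8.0), (4, 1))
left = np.tile(np.maximum(np.arange(8.0) - 2.0, 0.0), (4, 1))
cost, disp = truncated_cost_volume(left, right, d_min=0, d_max=3)
```

In the actual method, the narrowed range keeps the cost volume small, which is what makes aggregation over it feasible in real time.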