In this paper, we introduce a multiple-view depth generation method for heterogeneous cameras based on 3D reconstruction. The main goal of this research is to generate accurate depth images at each color-camera viewpoint by using depth cameras placed at different positions. The conventional filter-based framework suffers from critical problems such as truncated depth regions and mixed depth values, which degrade not only the quality of the depth images but also that of the synthesized intermediate views. The proposed framework is based on 3D reconstruction from multiple depth cameras. The system setup consists of two camera layers arranged in parallel: four color cameras on the lower layer and two depth cameras on the upper layer. First, in the offline process, we estimate accurate camera parameters using camera calibration. In the online process, we capture synchronized color and depth images from the heterogeneous multiple-camera system. Next, we generate 3D point clouds from the 2D depth images and register them with the iterative closest point (ICP) method to obtain an integrated 3D point cloud model. After that, we create a volumetric surface model from the sparse 3D point cloud using the truncated signed distance function (TSDF). Finally, we estimate the depth image at each color viewpoint by projecting the volumetric 3D model. In the experimental results and discussion section, we verify that the proposed framework not only resolves the aforementioned problems but also offers several advantages over the conventional framework.
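The point-cloud registration step can be illustrated with a minimal point-to-point ICP sketch in NumPy. This is not the authors' implementation: the function names (`best_rigid_transform`, `icp`) and the brute-force nearest-neighbour matching are illustrative assumptions; a real system would use a k-d tree and outlier rejection.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """One point-to-point alignment step (Kabsch/SVD):
    find R, t minimizing sum ||R @ s + t - d||^2 over matched pairs."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Minimal ICP: alternate nearest-neighbour matching and rigid fitting."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every point in cur
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        # accumulate the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Given two depth-camera point clouds with a reasonable initial pose (e.g. from the offline calibration), `icp` returns the rotation and translation that map the source cloud onto the target, which is what allows the two partial clouds to be merged into one integrated model.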
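The volumetric surface step can likewise be sketched as Curless-Levoy-style TSDF fusion of one depth frame into a voxel grid. Again a hedged sketch, not the paper's code: the function name `integrate_tsdf`, the flat voxel layout, and the truncation constant are assumptions, and voxel centres are taken to be already expressed in the camera frame.

```python
import numpy as np

def integrate_tsdf(tsdf, weight, voxel_centers, depth, K, trunc=0.05):
    """Fuse one depth image into a TSDF volume.
    tsdf, weight  : (N,) running TSDF values and integration weights
    voxel_centers : (N, 3) voxel centres in the camera frame
    depth         : (H, W) depth map in metres
    K             : 3x3 camera intrinsic matrix"""
    H, W = depth.shape
    # project every voxel centre into the depth image
    proj = voxel_centers @ K.T
    z = proj[:, 2]
    u = np.round(proj[:, 0] / z).astype(int)
    v = np.round(proj[:, 1] / z).astype(int)
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.where(valid, depth[v.clip(0, H - 1), u.clip(0, W - 1)], 0.0)
    valid &= d > 0
    # signed distance to the observed surface, truncated and normalised to [-1, 1]
    sdf = np.clip(d - z, -trunc, trunc) / trunc
    upd = valid & (d - z > -trunc)   # skip voxels far behind the surface
    # running weighted average across fused frames (per-frame weight 1)
    tsdf[upd] = (tsdf[upd] * weight[upd] + sdf[upd]) / (weight[upd] + 1)
    weight[upd] += 1
    return tsdf, weight
```

Voxels in front of the observed surface receive positive values, voxels just behind it negative ones, so the zero crossing of the fused field traces the surface; projecting that field into each color view then yields the per-view depth image the paper targets.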