In this paper, we propose a novel hole filling method for view synthesis using deep convolutional neural networks (DCNNs). The hole filling networks learn an end-to-end mapping between hole regions and ground truth images. Hole regions are initially filled with background information; the hole filling networks then restore high-quality results from this coarse initialization. The proposed hole filling networks consist of three layers: a patch and feature extraction layer, a non-linear mapping layer, and a restoration layer. Experimental results demonstrate that the
proposed DCNN-based hole filling method significantly improves hole filling performance compared to conventional hole filling methods. Furthermore, the responses of the filters learned by the proposed DCNN show that the proposed framework can provide visually plausible image structures and textures in hole regions.
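The three-layer pipeline described above can be sketched as a small feed-forward network. The following is a minimal, hypothetical NumPy illustration, not the authors' implementation: the filter sizes (9x9, 1x1, 5x5), channel counts, and patch size are assumptions for illustration only, and the weights are random rather than learned.

```python
import numpy as np

def conv2d(x, w, b):
    """Valid 2-D convolution: x is (C_in, H, W), w is (C_out, C_in, k, k), b is (C_out,)."""
    c_out, c_in, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for u in range(k):
                for v in range(k):
                    out[o] += w[o, i, u, v] * x[i, u:u + H, v:v + W]
        out[o] += b[o]
    return out

def relu(x):
    return np.maximum(x, 0.0)

def hole_filling_net(patch, params):
    """Three-layer mapping: feature extraction -> non-linear mapping -> restoration."""
    (w1, b1), (w2, b2), (w3, b3) = params
    f1 = relu(conv2d(patch, w1, b1))  # patch and feature extraction layer
    f2 = relu(conv2d(f1, w2, b2))     # non-linear mapping layer
    return conv2d(f2, w3, b3)         # restoration layer

rng = np.random.default_rng(0)
params = [
    (rng.standard_normal((8, 1, 9, 9)) * 0.01, np.zeros(8)),  # assumed sizes
    (rng.standard_normal((8, 8, 1, 1)) * 0.01, np.zeros(8)),
    (rng.standard_normal((1, 8, 5, 5)) * 0.01, np.zeros(1)),
]
# A coarsely filled hole patch (grayscale) would be fed through the network.
coarse = rng.standard_normal((1, 33, 33))
restored = hole_filling_net(coarse, params)
print(restored.shape)  # (1, 21, 21) for these assumed filter sizes
```

With valid convolutions the output shrinks by (9-1) + (1-1) + (5-1) = 12 pixels per side; in practice padding or overlapping patches would keep the restored region aligned with the hole.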