The objective of this paper is to investigate dynamic computation of the zero-parallax setting (ZPS) for multi-view autostereoscopic displays in order to effectively alleviate blurry 3D vision for images with large disparity. Saliency detection techniques yield a saliency map, a topographic representation of the visually dominant locations in a scene. With a saliency map, we can predict which regions attract viewers' attention, i.e., the regions of interest. Recently, deep learning techniques have been applied to saliency detection, and deep learning-based salient object detection methods have the advantage of highlighting most of the salient objects. With the help of a depth map, the spatial distribution of salient objects can be computed. In this paper, we compare two dynamic ZPS techniques based on visual attention: 1) maximum saliency computation using the Graph-Based Visual Saliency (GBVS) algorithm and 2) the spatial distribution of salient objects computed by a convolutional neural network (CNN)-based model. Experiments show that both methods help improve the 3D effect of autostereoscopic displays. Moreover, the dynamic ZPS technique based on the spatial distribution of salient objects achieves better 3D performance than the maximum saliency-based method.
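To make the two strategies concrete, the following is a minimal Python sketch, assuming a normalized saliency map (from GBVS or a CNN-based detector) and a co-registered depth map are available as NumPy arrays; the function names, the relative threshold, and the saliency-weighted averaging rule are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def zps_from_max_saliency(saliency, depth):
    """Strategy 1: place the zero-parallax plane at the depth of the
    single most salient pixel in the saliency map."""
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    return depth[y, x]

def zps_from_salient_distribution(saliency, depth, rel_threshold=0.5):
    """Strategy 2: place the zero-parallax plane at the saliency-weighted
    mean depth of the salient regions, i.e., use the spatial distribution
    of salient objects rather than a single peak."""
    mask = saliency >= rel_threshold * saliency.max()  # assumed thresholding rule
    return np.average(depth[mask], weights=saliency[mask])
```

Placing the zero-parallax plane at the returned depth keeps the attended objects near the screen plane, where disparity is smallest, which is the intuition behind both techniques for reducing blur on large-disparity content.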
Because they provide motion parallax and viewing convenience, multi-view autostereoscopic displays have become popular in recent years. Increasing the number of views improves the quality of 3D images/videos and produces smoother motion parallax. The tradeoff is that a larger number of view images must be generated in real time, which demands a huge amount of computing resources. In practice, people tend to focus on the distinctive objects in a scene, so the same level of perceived motion parallax can be achieved by using more views to present distinctive objects and fewer views for the rest. As a result, fewer computing resources are required to render multi-view images. Exploiting this principle, a new multi-view rendering scheme based on visual saliency is proposed for autostereoscopic displays. The new method uses saliency maps to extract distinctive regions with different saliency levels in a scene and dynamically controls the number of views generated for them: points in regions with high saliency use more views, while points in regions with low saliency use fewer views. By controlling the number of views in use for different salient regions, the proposed scheme maintains low computational complexity without significant degradation of the 3D experience. In this paper, a 2D+Z format based multi-view rendering system using saliency maps is presented to illustrate the feasibility of the new scheme. Subjective assessment results demonstrate that the saliency map based multi-view system shows only slight degradation in 3D performance compared with a true 28-view system while achieving a 55% reduction in computation complexity.
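A minimal sketch of the view-allocation idea follows, assuming the saliency map is quantized into discrete levels that are mapped linearly to view counts; the level count, the 8-to-28 view range, and the function name are illustrative assumptions rather than the system's actual parameters.

```python
import numpy as np

def views_per_pixel(saliency, min_views=8, max_views=28, levels=3):
    """Assign a view count to each pixel: regions with higher saliency
    are rendered for more views, low-saliency regions for fewer."""
    norm = saliency / saliency.max()
    # Quantize the normalized saliency map into discrete levels
    # (0 = least salient, levels - 1 = most salient).
    bin_edges = np.linspace(0.0, 1.0, levels + 1)[1:-1]
    level = np.digitize(norm, bin_edges)
    # Map each level to a view count between min_views and max_views.
    counts = np.linspace(min_views, max_views, levels).astype(int)
    return counts[level]

saliency = np.random.rand(270, 480)    # stand-in for a real saliency map
view_map = views_per_pixel(saliency)
print(view_map.min(), view_map.max())  # e.g., 8 ... 28
```

Under such an allocation the renderer synthesizes all 28 views only for the most salient points, which is how a scheme of this kind can trade view count for computation without visibly degrading the attended regions.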