As depth imaging is integrated into more and more consumer devices, manufacturers have to tackle new challenges. Applications such as computational bokeh and augmented reality require dense and precisely segmented depth maps to achieve good results. Modern devices use a multitude of different technologies to estimate depth maps, such as time-of-flight sensors, stereoscopic cameras, structured light sensors, phase-detect pixels, or a combination thereof. There is therefore a need to evaluate the quality of depth maps regardless of the technology used to produce them. The aim of our work is to propose an end-result evaluation method based on a single scene, using a specifically designed chart. We consider the depth maps embedded in the photographs, which are not visible to the user but are used by specialized software in association with the RGB pictures. Some of the aspects considered are spatial alignment between RGB and depth, depth consistency, and robustness to texture variations. This work also provides a comparison of perceptual and automatic evaluations.
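To illustrate the kind of end-result measurements mentioned above, the sketch below computes two simple scores: depth consistency over a patch of the chart assumed to be planar, and a rough proxy for RGB/depth spatial alignment based on coinciding edges. The region coordinates, the normalised standard deviation, and the edge-agreement ratio are assumptions for illustration, not the metrics actually defined in the work.

```python
import numpy as np

def depth_consistency(depth_map, region):
    """Spread of depth values over a chart patch assumed to be planar
    and fronto-parallel (hypothetical setup); lower is more consistent."""
    r0, r1, c0, c1 = region
    patch = depth_map[r0:r1, c0:c1].astype(np.float64)
    # A perfectly consistent depth map gives one value over the patch,
    # so the relative spread serves as a simple error score.
    return patch.std() / (patch.mean() + 1e-9)

def rgb_depth_edge_alignment(rgb_gray, depth_map, threshold=0.1):
    """Fraction of strong RGB edges that coincide with a depth
    discontinuity (assumed alignment proxy); higher is better."""
    gx_rgb = np.abs(np.diff(rgb_gray.astype(np.float64), axis=1))
    gx_d = np.abs(np.diff(depth_map.astype(np.float64), axis=1))
    rgb_edges = gx_rgb > threshold * gx_rgb.max()
    depth_edges = gx_d > threshold * gx_d.max()
    # Agreement counted only over RGB edge pixels.
    return (rgb_edges & depth_edges).sum() / max(rgb_edges.sum(), 1)
```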
360° images and regular 2D images look appealing in virtual reality, yet they fail to represent depth or to convey how depth can be used to give the user an experience from two-dimensional images. We propose an approach for creating a stereogram from a computer-generated depth map using an approximation algorithm, and we then use these stereo pairs to give a complete VR experience, with forward and backward navigation driven by mobile sensors. First, the image is segmented into two images from which we generate the disparity map, and the depth image is then generated from it. After the depth image is created, the stereo pair, i.e. the left and right images for the eyes, is produced. The resulting image is then handled by the Cardboard SDK, which provides VR support on Android devices using the Google Cardboard headset. With the VR image shown in the stereoscopic device, we use the device's accelerometer to determine its movement while head-mounted. Unlike other VR navigation systems (HTC Vive, Oculus) that rely on external sensors, our approach uses the built-in sensors for motion processing. Using the accelerometer readings, the user is able to move around virtually in the constructed image. The result of this experiment is that the image displayed in VR changes visually according to the viewer's physical movement.
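As a minimal sketch of the stereo-pair step described above, the function below shifts pixels horizontally in proportion to depth, in the spirit of depth-image-based rendering. The maximum disparity value, the backward-warping strategy, and the convention that larger depth values mean closer objects are assumptions made for the example, not the paper's exact approximation algorithm.

```python
import numpy as np

def depth_to_stereo_pair(image, depth, max_disparity=16):
    """Create left/right views by shifting columns in proportion to
    depth (a simple depth-image-based-rendering sketch).

    image : H x W x 3 RGB array
    depth : H x W array, larger values assumed to mean closer objects
    max_disparity : assumed pixel shift for the closest depth
    """
    h, w = depth.shape
    # Normalise depth to [0, 1] and convert to a per-pixel disparity.
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-9)
    disparity = (d * max_disparity).astype(np.int32)

    cols = np.arange(w)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        # Backward warp: sample the source column shifted by +/- half
        # the disparity, clamped to the image border.
        src_left = np.clip(cols + disparity[y] // 2, 0, w - 1)
        src_right = np.clip(cols - disparity[y] // 2, 0, w - 1)
        left[y] = image[y, src_left]
        right[y] = image[y, src_right]
    return left, right
```

The two returned views would then be composited side by side for the Cardboard viewer; more elaborate renderers also fill the disocclusions that simple column shifting leaves behind.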