As depth imaging is integrated into more and more consumer devices, manufacturers have to tackle new challenges. Applications such as computational bokeh and augmented reality require dense and precisely segmented depth maps to achieve good results. Modern devices use a multitude of different technologies to estimate depth maps, such as time-of-flight sensors, stereoscopic cameras, structured light sensors, phase-detect pixels, or a combination thereof. Therefore, there is a need to evaluate the quality of depth maps regardless of the technology used to produce them. The aim of our work is to propose an end-result evaluation method based on a single scene, using a specifically designed chart. We consider the depth maps embedded in the photographs, which are not visible to the user but are used by specialized software in association with the RGB pictures. Some of the aspects considered are spatial alignment between RGB and depth, depth consistency, and robustness to texture variations. This work also provides a comparison of perceptual and automatic evaluations.