How we visually perceive non-emissive objects in our surroundings depends on the interaction of light with the optical characteristics of the materials that comprise them. Macroscopic surface roughness can also influence appearance through shadowing and interreflections. In this work, we use a structured light scanner to estimate the surface structure of near-planar surfaces, namely printed textiles. We compare our scans, both qualitatively and quantitatively, to those from a commercial high-grade profilometer based on the confocal principle. We achieve results comparable to the profilometer on samples with moderately complex surfaces. We discuss the possible sources of error in the scans of complex surfaces, thus providing guidelines for robust depth estimation. This comparison can help other researchers build more robust acquisition setups by understanding and minimizing the errors inherent to the reconstruction methods.
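The quantitative comparison described above boils down to measuring the disagreement between two co-registered depth maps. As a minimal sketch (not the authors' actual evaluation protocol), one might compute RMSE and mean absolute error after removing the constant height offset between the two devices; the function name and toy data below are illustrative assumptions:

```python
import numpy as np

def depth_error_metrics(scan, reference):
    """Compare a structured-light depth map against a profilometer
    reference of the same resolution (both in the same units)."""
    diff = scan - reference
    # Remove any constant height offset between the two devices
    diff = diff - diff.mean()
    rmse = np.sqrt(np.mean(diff ** 2))
    mae = np.mean(np.abs(diff))
    return rmse, mae

# Toy example: a synthetic reference surface and a noisy, offset scan of it
ref = np.fromfunction(lambda y, x: np.sin(x / 8.0), (64, 64))
scan = ref + np.random.default_rng(0).normal(0, 0.01, ref.shape) + 0.5
rmse, mae = depth_error_metrics(scan, ref)
```

With the offset removed, the residual error reflects only the scanner noise, which is what a cross-device comparison is after.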
The estimated depth map provides valuable information for many computer vision applications such as autonomous driving, semantic segmentation, and 3D object reconstruction. Since a light field camera captures both the spatial and angular information of light rays, we can estimate a depth map from these properties of the light field image. However, estimating a depth map from a light field image is limited by the short baseline and low resolution. Although many approaches have been developed, they still fall short in computational cost and depth accuracy. In this paper, we propose a network-based light field depth estimation technique using epipolar plane images (EPIs). Since the light field image consists of many sub-aperture images in a 2D spatial plane, we can stack the sub-aperture images along different directions to handle the occlusion problem. However, the commonly used light field sub-aperture images are not numerous enough to construct large training datasets. To increase the number of sub-aperture images for stacking, we train the network with augmented light field datasets. To illustrate the effectiveness of our approach, we perform an extensive experimental evaluation on synthetic and real light field scenes. The experimental results show that our method outperforms other depth estimation techniques.
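The directional stacking mentioned above can be sketched as follows. Assuming a 4D grayscale light field indexed as `lf[u, v, y, x]` (the abstract does not specify the layout, so this indexing and the choice of directions are illustrative), horizontal, vertical, and diagonal stacks of sub-aperture images might be built like this:

```python
import numpy as np

def directional_stacks(lf):
    """Build sub-aperture image stacks along different angular directions
    of a 4D light field lf[u, v, y, x]. Each stack has shape (n, y, x),
    so occluders sweep across it in a direction-dependent way."""
    uc, vc = lf.shape[0] // 2, lf.shape[1] // 2
    horizontal = lf[uc, :, :, :]               # fix u, vary v
    vertical = lf[:, vc, :, :]                 # fix v, vary u
    n = min(lf.shape[0], lf.shape[1])
    diagonal = np.stack([lf[i, i] for i in range(n)])  # vary u and v together
    return horizontal, vertical, diagonal

# Toy 9x9 angular grid of 32x32 sub-aperture images
lf = np.random.default_rng(1).random((9, 9, 32, 32))
h, v, d = directional_stacks(lf)
```

Stacks taken along different directions see occluding edges from different sides, which is why combining them helps around occlusion boundaries.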
Light field images are formed by capturing densely sampled rays projected from the object to the camera sensor. In this paper, we propose a novel method for depth estimation from light field epipolar plane images (EPIs). In contrast to conventional depth estimation methods such as stereo matching, the depth map is generated via EPIs without high computational complexity. To obtain an accurate depth value from an EPI, the optimal angular value has to be found for each pixel. Considering every candidate angle when selecting the optimal one causes high computational complexity. Instead of considering all candidate values, we reduce the angular candidates by analyzing the EPI patterns. In addition, to improve the quality of the depth map estimated from EPIs, occluded areas are handled before computing the matching cost. As a pre-processing step, the mean and variance are computed within a window of a specific size to detect and replace occluded areas. To validate the efficiency of our algorithm, we experiment with computer-graphics and dense light field datasets. The experimental results show that our algorithm achieves better performance than conventional depth estimation methods.
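The mean/variance pre-processing step can be sketched as below. The window size, variance threshold, and the choice to replace flagged pixels with the local mean are assumptions for illustration; the abstract only states that mean and variance within a window are used to detect and replace occluded areas:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def occlusion_preprocess(epi, window=3, var_thresh=1.0):
    """Flag high-variance pixels in an EPI as likely occlusions and
    replace them with the local mean (illustrative threshold/replacement)."""
    mean = uniform_filter(epi, size=window)
    sq_mean = uniform_filter(epi ** 2, size=window)
    var = np.maximum(sq_mean - mean ** 2, 0.0)  # clip float round-off
    occluded = var > var_thresh
    out = np.where(occluded, mean, epi)
    return out, occluded

# Toy EPI: a flat region with one outlier, as an occlusion stand-in
epi = np.zeros((16, 16))
epi[8, 8] = 10.0
out, occluded = occlusion_preprocess(epi)
```

Suppressing such outliers before the matching cost is computed keeps occluded pixels from dominating the cost and corrupting the estimated angle.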