3D cameras that capture range information, in addition to color information, are increasingly prevalent in the consumer marketplace and are available on many consumer mobile imaging platforms. An interesting and important application enabled by 3D cameras is photogrammetry, where the physical distance between points can be computed from captured imagery. However, for consumer photogrammetry to succeed in the marketplace, it must meet users' real-world expectations of accuracy and consistency and perform well under challenging lighting conditions, varying object-to-camera distances, etc. These requirements are exceedingly difficult to meet because range data are noisy, especially when passive stereo or multi-camera systems are used for range estimation. In this paper, we present a novel and robust algorithm for point-to-point 3D measurement with range camera systems. Our algorithm exploits the intuition that users typically specify the endpoints of an object of interest for measurement, and that the line connecting the two points also belongs to the same object. We analyze the 3D structure of the points along this line using robust PCA and improve measurement accuracy by fitting the endpoints to this model before computing the measurement. We also handle situations where users attempt to measure a gap, such as the span between the arms of a sofa or the width of a doorway, which violates our assumption. Finally, we evaluate the proposed algorithm on a dataset of over 1800 measurements collected by humans on the Dell Venue 8 tablet with Intel RealSense Snapshot technology. Our results show significant improvements in both accuracy and consistency of measurement, which are critical to making consumer photogrammetry a reality in the marketplace.
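The core idea — robustly fitting a 3D line to the range points sampled between the two user-selected endpoints, then snapping the noisy endpoints onto that fit before measuring — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it substitutes a RANSAC-style line fit refined by PCA for the paper's robust PCA analysis, and all function names, thresholds, and iteration counts are illustrative assumptions.

```python
import numpy as np

def robust_line_fit(points, n_iters=100, inlier_thresh=0.01, seed=0):
    """Fit a 3D line to noisy points: RANSAC to find inliers, then
    PCA (via SVD) on the inliers for the final direction.
    (Stand-in for the paper's robust PCA; parameters are assumptions.)"""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        # Candidate line through two randomly chosen points.
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d /= norm
        # Perpendicular distance of every point to the candidate line.
        rel = points - points[i]
        proj = rel @ d
        dists = np.linalg.norm(rel - np.outer(proj, d), axis=1)
        inliers = dists < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine: line through the inlier centroid along the principal axis.
    pts = points[best_inliers]
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def measure(p1, p2, line_points):
    """Snap the user-selected endpoints onto the robustly fitted line,
    then return the point-to-point Euclidean distance."""
    c, d = robust_line_fit(line_points)
    q1 = c + ((p1 - c) @ d) * d
    q2 = c + ((p2 - c) @ d) * d
    return np.linalg.norm(q1 - q2)
```

Projecting both endpoints onto the same fitted line suppresses independent depth noise at each endpoint, which is what drives the consistency gains the abstract reports; the gap-measurement case (e.g., a doorway) would require detecting that the intermediate points do not lie on one object and falling back to a different model.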
Research on the role of human stereopsis has largely focused on laboratory studies that control or eliminate other cues to depth. However, in everyday environments we rarely rely on a single source of depth information. Despite this, few studies have assessed the impact of binocular vision on depth judgments in real-world scenarios presented in simulation. Here we conducted a series of experiments to determine whether, and to what extent, stereoscopic depth benefits tasks commonly performed by helicopter aircrew. We assessed the impact of binocular vision and stereopsis on perception of (1) relative and (2) absolute distance above the ground (altitude) using natural and simulated stereoscopic-3D (S3D) imagery. Consistent with the literature, the results showed that binocular vision provides very weak input to absolute altitude estimates at high altitudes (10-100 ft). In contrast, estimates of relative altitude at low altitudes (0-5 ft) were critically dependent on stereopsis, irrespective of terrain type. These findings are consistent with the view that stereopsis provides important information for altitude judgments close to the ground, while at high altitudes these judgments are based primarily on the perception of 2D cues.