Toward the realization of an airbag system for UAV crashes, this paper proposes a method for estimating the UAV's crash site from the video sequence acquired by a camera attached to the UAV. The crash point can be regarded as the divergence point of the optical flows. In an accumulator, the cells through which the optical flows (treated as straight lines) pass are incremented by one. After this process has been performed for all optical flows, the cell with the most votes is taken as the crash point (divergence point) in the image plane. Experiments with a hand-held camera show that the accuracy of the estimated crash site increases as the camera approaches the target plane. Overall, the experimental results are promising.
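As a rough illustration of the voting scheme described above (not the authors' implementation), the following Python sketch rasterizes each optical-flow vector as a line into an accumulator array and takes the peak cell as the estimated divergence point. The image size, the flow samples, and the function name estimate_divergence_point are all hypothetical.

```python
import numpy as np

def estimate_divergence_point(flows, width, height):
    """Vote each optical-flow vector, extended to a full line, into an
    accumulator and return the cell with the most votes."""
    acc = np.zeros((height, width), dtype=np.int32)
    # Sample each line densely enough to cross the whole image.
    span = max(width, height)
    t = np.linspace(-span, span, 4 * span)
    for x, y, u, v in flows:
        norm = np.hypot(u, v)
        if norm < 1e-6:                      # skip near-zero flow vectors
            continue
        xs = np.round(x + t * u / norm).astype(int)
        ys = np.round(y + t * v / norm).astype(int)
        inside = (xs >= 0) & (xs < width) & (ys >= 0) & (ys < height)
        # Each flow line votes for a given cell at most once.
        cells = np.unique(np.stack([ys[inside], xs[inside]], axis=1), axis=0)
        acc[cells[:, 0], cells[:, 1]] += 1
    cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return cx, cy

# Hypothetical usage: 200 flow vectors diverging from (160, 120) in a 320x240 image.
rng = np.random.default_rng(0)
pts = rng.uniform([0.0, 0.0], [320.0, 240.0], size=(200, 2))
flows = [(x, y, x - 160.0, y - 120.0) for x, y in pts]
print(estimate_divergence_point(flows, width=320, height=240))  # ~ (160, 120)
```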
3D scene reconstruction using RGB-D camera-based Simultaneous Localization and Mapping (SLAM) is actively studied today. KinectFusion, a GPU-based real-time 3D scene reconstruction framework, serves as the basis of many RGB-D SLAM algorithms. One of its main limitations is that it relies only on geometric information during camera pose estimation. In this paper, we utilize both geometric and photometric information for point cloud alignment. To extract photometric information from the color images, we combine a local and a global optical flow method, Lucas-Kanade and Horn-Schunck respectively, to obtain a flow field that is not only dense but also robust to noise. Experimental results show that our method provides dense and accurate photometric information.
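One common way to combine a local (Lucas-Kanade) and a global (Horn-Schunck) formulation is a combined local-global iteration, in which the motion-tensor entries of the data term are averaged locally before the global smoothness iterations. The Python sketch below follows that idea under these assumptions; it is not necessarily the authors' exact combination, and the parameters alpha, sigma, and the iteration count are purely illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def clg_flow(I1, I2, alpha=15.0, sigma=1.5, iters=200):
    """Combined local-global optical flow sketch: the data term is averaged
    locally (Lucas-Kanade style) and regularized globally (Horn-Schunck style)."""
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)
    # Spatial and temporal image derivatives.
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    # Local averaging of the motion-tensor entries (Lucas-Kanade ingredient).
    Jxx = gaussian_filter(Ix * Ix, sigma)
    Jxy = gaussian_filter(Ix * Iy, sigma)
    Jyy = gaussian_filter(Iy * Iy, sigma)
    Jxt = gaussian_filter(Ix * It, sigma)
    Jyt = gaussian_filter(Iy * It, sigma)
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(iters):
        # Global smoothness: neighbourhood average of the flow (Horn-Schunck ingredient).
        u_avg = uniform_filter(u, size=3)
        v_avg = uniform_filter(v, size=3)
        den = alpha ** 2 + Jxx + Jyy
        u = u_avg - (Jxx * u_avg + Jxy * v_avg + Jxt) / den
        v = v_avg - (Jxy * u_avg + Jyy * v_avg + Jyt) / den
    return u, v
```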
Light-field cameras capture four-dimensional spatio-angular information of the light field. They provide multiple viewpoints, or sub-apertures, that are more useful for visual analysis and understanding than the single view of a traditional camera. Optical flow is a common way to obtain scene structure cues from two images; however, sub-pixel displacements and occlusions are two inevitable challenges when estimating optical flow from light-field sub-apertures. In this paper, we develop a light-field flow model and propose an edge-aware light-field flow estimation framework for joint depth estimation and occlusion detection. It consists of three steps: i) an optical flow volume with sub-pixel accuracy is extracted from the sub-apertures by edge-preserving interpolation, and occlusion regions are detected through consistency checking; ii) robust light-field flow and depth estimates are initialized by a winner-take-all strategy and a weighted voting mechanism; iii) the final depth map is refined by a weighted median filter based on the guided filter. Experimental results demonstrate the effectiveness and robustness of our method.
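As one concrete example of the consistency checking mentioned in step i), the sketch below flags a pixel as occluded when its forward flow between two sub-apertures is not cancelled by the backward flow sampled at the warped position. The nearest-pixel warping and the threshold tau are simplifying assumptions and not the paper's exact procedure, which works at sub-pixel accuracy.

```python
import numpy as np

def occlusion_by_consistency(flow_fw, flow_bw, tau=1.0):
    """Flag pixels whose forward flow is not undone by the backward flow.

    flow_fw, flow_bw: (H, W, 2) arrays of (dx, dy) between two sub-apertures.
    Returns a boolean (H, W) mask, True where a pixel is likely occluded.
    """
    H, W = flow_fw.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    # Forward-warped target coordinates, rounded to the nearest pixel.
    xt = np.clip(np.round(xs + flow_fw[..., 0]).astype(int), 0, W - 1)
    yt = np.clip(np.round(ys + flow_fw[..., 1]).astype(int), 0, H - 1)
    # Backward flow sampled at the forward-warped position.
    bw = flow_bw[yt, xt]
    # Round-trip error: forward + backward flow should be ~0 for visible pixels.
    err = np.hypot(flow_fw[..., 0] + bw[..., 0], flow_fw[..., 1] + bw[..., 1])
    return err > tau
```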