Taking the Jurong Dongshan River as an illustrative case, we employed a DJI Phantom 4 RTK SE unmanned aerial vehicle (UAV) for single-lens oblique photography. Our investigation examined the influence of varying flight altitudes (FA: 30 m, 60 m and 90 m) and the configuration of photo control points on the two-dimensional (2D) and three-dimensional (3D) mapping accuracy of river embankments, slopes, and hydraulic structures, and analyzed the outcomes of 2D and 3D modeling under different FA conditions augmented with supplementary photographs. Our observations revealed that, without photo control operations, the horizontal and elevation accuracy at 30 m FA was higher than at 60 m and 90 m FA, and accuracy diminished as FA increased. Specifically, the horizontal accuracy of the embankment photo control points exceeded that of the slope photo control points, whereas the elevation accuracy of the embankment photo control points was superior at FAs of 60 m and 90 m. The geographical location deviation of hydraulic structures (irrigation intake gates) in the 2D model was larger than that obtained in the 3D model. Notably, the incorporation of additional detailed photographs significantly improved the modeling quality of the UAV aerial survey data, especially in capturing intricate plant and slope details. It is recommended that the Phantom 4 RTK SE be used at an FA of 90 m to establish a foundational channel model, that additional detailed photographs of crucial structures, slopes, etc. be captured, and that geographic location information be obtained from the 3D models.
Unmanned Aerial Vehicles (UAVs) are gaining popularity in a wide range of civilian and military applications. This emerging interest is pushing the development of effective collision avoidance systems, which are especially crucial in a crowded airspace setting. Because of the cost and weight limitations associated with UAVs' payload, optical sensors, i.e., simple digital cameras, are widely used in UAV collision avoidance systems. This requires algorithms that detect and track moving objects in video and can run efficiently on board. In this paper, we present a new approach to detect and track UAVs from a single camera mounted on a different UAV. Initially, we estimate background motion via a perspective transformation model and then identify moving object candidates in the background-subtracted image using a deep learning classifier trained on manually labeled datasets. For each moving object candidate, we find spatio-temporal traits through optical flow matching and then prune candidates whose motion patterns match the background. A Kalman filter is applied to the pruned moving objects to improve temporal consistency among the candidate detections. The algorithm was validated on video datasets taken from a UAV. Results demonstrate that our algorithm can effectively detect and track small UAVs with limited computing resources.
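The final stage of the pipeline above applies a Kalman filter to pruned detections to smooth their trajectories over time. The abstract does not give the filter's motion model or noise parameters, so the following is a minimal sketch under the common assumption of a constant-velocity model over pixel coordinates, with illustrative (not the authors') noise settings; it is written in pure Python for self-containment.

```python
def mat_mul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

class KalmanTracker2D:
    """Constant-velocity Kalman filter over pixel detections (x, y).

    State is [x, y, vx, vy]. Process noise q and measurement noise r
    are illustrative assumptions, not values from the paper.
    """
    def __init__(self, x, y, dt=1.0, q=1e-2, r=1.0):
        self.x = [[x], [y], [0.0], [0.0]]
        # Large initial velocity uncertainty: velocity is unobserved at start.
        self.P = [[1, 0, 0, 0], [0, 1, 0, 0],
                  [0, 0, 100, 0], [0, 0, 0, 100]]
        self.F = [[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]]
        self.Q = [[q if i == j else 0.0 for j in range(4)] for i in range(4)]
        self.H = [[1, 0, 0, 0], [0, 1, 0, 0]]   # observe position only
        self.R = [[r, 0], [0, r]]

    def predict(self):
        """Propagate state and covariance one frame forward."""
        self.x = mat_mul(self.F, self.x)
        self.P = mat_add(mat_mul(mat_mul(self.F, self.P),
                                 transpose(self.F)), self.Q)
        return self.x[0][0], self.x[1][0]

    def update(self, zx, zy):
        """Correct the state with a new detection (zx, zy)."""
        Hx = mat_mul(self.H, self.x)
        y = [[zx - Hx[0][0]], [zy - Hx[1][0]]]          # innovation
        PHt = mat_mul(self.P, transpose(self.H))
        S = mat_add(mat_mul(self.H, PHt), self.R)       # innovation covariance
        det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
        S_inv = [[S[1][1] / det, -S[0][1] / det],
                 [-S[1][0] / det, S[0][0] / det]]
        K = mat_mul(PHt, S_inv)                          # Kalman gain
        self.x = mat_add(self.x, mat_mul(K, y))
        KH = mat_mul(K, self.H)
        self.P = mat_mul([[ (1.0 if i == j else 0.0) - KH[i][j]
                            for j in range(4)] for i in range(4)], self.P)

# Hypothetical usage: detections of a target moving at (2, 1) px/frame.
detections = [(2.0 * i, 1.0 * i) for i in range(10)]
trk = KalmanTracker2D(*detections[0])
for zx, zy in detections[1:]:
    trk.predict()
    trk.update(zx, zy)
est_x, est_y = trk.x[0][0], trk.x[1][0]
```

In a full tracker, detections that survive the motion-pattern pruning step would be associated with existing filters frame by frame, and tracks with no matching detection would coast on `predict()` alone, which is what supplies the temporal consistency the abstract describes.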
ERL Emergency is an outdoor multi-domain robotic competition inspired by the 2011 Fukushima accident. The ERL Emergency Challenge requires teams of land, underwater and flying robots to work together to survey the scene, collect environmental data, and identify critical hazards. To prepare teams for this multidisciplinary task, a series of summer schools and workshops has been arranged. In this paper, the challenges and hands-on results of bringing students and researchers to collaborate successfully in unknown environments and in new research areas are explained. As a case study, results from the 2015 euRathlon/SHERPA workshop in Oulu are given.