We present real-time light detection and ranging (LIDAR) imaging enabled by a single-chip solid-state beam scanner. The beam scanner integrates a fully functional 32-channel optical phased array, 36 optical amplifiers, and a tunable laser with a central wavelength of ~1310 nm, all on a 7.5 x 3 mm^2 single chip fabricated with III-V-on-silicon processes. The phased array is calibrated with a self-evolving genetic algorithm to enable beam forming and steering in two dimensions. Distance is measured by digital signal processing of the time of flight (TOF) of pulsed light, using a system consisting of an avalanche photodiode (APD), a trans-impedance amplifier (TIA), an analog-to-digital converter (ADC), and a processor. The LIDAR module built on this system acquires point cloud images at 120 x 20 resolution and 20 frames per second at distances up to 20 meters. This work presents the first demonstration of a chip-scale LIDAR solution without any moving parts, bulk external light source, or amplifier, making ultra-low-cost and compact LIDAR technology a reality.
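To illustrate the genetic-algorithm calibration named in this abstract, the following is a minimal sketch, not the authors' implementation. It evolves a population of 32-element phase vectors against a simulated far-field array factor; in the real system the fitness would be a photodetector reading, and the emitter pitch, population size, and mutation rate used here are assumed values for illustration.

```python
import numpy as np

# Hypothetical GA calibration of a 32-channel optical phased array.
# The fitness function below is a simulated far-field intensity; on
# hardware it would be replaced by a measured detector signal.

N_CHANNELS = 32
WAVELENGTH = 1.31e-6          # ~1310 nm central wavelength
PITCH = 2.0e-6                # assumed emitter pitch (illustrative)
TARGET_ANGLE = 0.0            # steer the beam to broadside

rng = np.random.default_rng(0)
# Unknown static phase errors that calibration must compensate (simulated).
phase_errors = rng.uniform(0, 2 * np.pi, N_CHANNELS)

def far_field_intensity(phases, angle=TARGET_ANGLE):
    """Array-factor intensity at 'angle' for the given control phases."""
    k = 2 * np.pi / WAVELENGTH
    n = np.arange(N_CHANNELS)
    field = np.sum(np.exp(1j * (k * PITCH * n * np.sin(angle)
                                + phases + phase_errors)))
    return np.abs(field) ** 2

def evolve(pop_size=50, generations=200, mutation=0.3):
    pop = rng.uniform(0, 2 * np.pi, (pop_size, N_CHANNELS))
    for _ in range(generations):
        fitness = np.array([far_field_intensity(p) for p in pop])
        # Elitist selection: keep the fittest half as parents.
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]
        # Uniform crossover between random parent pairs.
        idx = rng.integers(0, len(parents), (pop_size, 2))
        mask = rng.random((pop_size, N_CHANNELS)) < 0.5
        pop = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        # Gaussian mutation keeps exploring the phase space.
        pop = (pop + mutation * rng.normal(size=pop.shape)) % (2 * np.pi)
    return pop[np.argmax([far_field_intensity(p) for p in pop])]

best_phases = evolve()
print("calibrated intensity:", far_field_intensity(best_phases))
print("ideal intensity:     ", N_CHANNELS ** 2)
```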
Grid mapping is widely used to represent the environment surrounding a car or a robot for autonomous navigation. This paper describes an algorithm for evidential occupancy grid (OG) mapping that fuses measurements from different sensors based on the Dempster-Shafer theory and is intended for scenes with both stationary and moving (dynamic) objects. Conventional OG mapping algorithms tend to struggle in the presence of moving objects because they do not explicitly distinguish between moving and stationary objects. In contrast, evidential OG mapping allows for dynamic and ambiguous states (e.g., a LIDAR measurement cannot differentiate between moving and stationary objects) that are better aligned with the measurements sensors actually make. In this paper, we present a framework for fusing measurements as they are received from disparate sensors (e.g., radar, camera, and LIDAR) using evidential grid mapping. With this approach, we can form a live map of the environment and also alleviate the problem of having to synchronize sensors in time. We also design a new inverse sensor model for radar that extracts more information from object-level measurements by incorporating knowledge of the sensor's characteristics. We have implemented our algorithm in the OpenVX framework to enable seamless integration into embedded platforms. Test results show compelling performance, especially in the presence of moving objects.
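Since the abstract rests on Dempster-Shafer fusion, here is a minimal sketch of Dempster's rule of combination for per-cell evidence, assuming a simple frame of discernment {F (free), O (occupied)}; the paper's full frame also carries dynamic states, and the example mass values below are hypothetical.

```python
from itertools import product

# Dempster's rule of combination for two mass functions.
# Focal elements are frozensets; frozenset({"F", "O"}) is the full
# frame Theta and represents ignorance.

def combine(m1, m2):
    """Fuse two mass functions with Dempster's rule of combination."""
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # Normalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

F, O = frozenset({"F"}), frozenset({"O"})
THETA = F | O

# Hypothetical per-cell evidence: LIDAR strongly suggests occupied,
# radar is less certain and leaves more mass on ignorance.
lidar = {O: 0.7, THETA: 0.3}
radar = {O: 0.4, F: 0.1, THETA: 0.5}
print(combine(lidar, radar))
```

Because the rule is associative, measurements can be folded into each cell as they arrive from each sensor, which is what makes the "fuse on receipt" live-map scheme above possible without time synchronization.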
Automatic tools for plant phenotyping have received increased interest in recent years due to the need to understand the relationship between plant genotype and phenotype. Building upon our previous work, we present a robust, deep-learning method to accurately estimate the height of biomass sorghum throughout the entirety of its growing season. We mount a vertically oriented LiDAR sensor onboard an agricultural robot to obtain 3D point clouds of the crop fields. From each 3D point cloud, we generate a height contour and a density map corresponding to a single row of plants in the field. We then train a multiview neural network on these inputs to estimate plant height. Our method accurately estimates height from emergence through canopy closure. We extensively validate the algorithm through several ground-truthing campaigns on biomass sorghum, achieving an absolute height estimation error of 7.47% against ground truth obtained via conventional breeder methods on 2715 plots of sorghum with varying genetic strains and treatments.
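As an illustration of the two network inputs this abstract describes, the following sketch (not the authors' code) turns a row's 3D point cloud into a height contour (tallest return per along-row bin) and a density map (point counts over the along-row/height plane); the bin counts, row length, and axis convention are assumed.

```python
import numpy as np

# Hypothetical feature extraction for one crop row: x runs along the
# row and z is height above ground, both in meters.

def row_features(points, row_length=10.0, max_height=4.5,
                 n_x_bins=200, n_z_bins=64):
    """points: (N, 3) array -> (height contour, density map)."""
    x, z = points[:, 0], points[:, 2]
    x_edges = np.linspace(0.0, row_length, n_x_bins + 1)
    z_edges = np.linspace(0.0, max_height, n_z_bins + 1)

    # Height contour: tallest LiDAR return in each along-row bin.
    bins = np.clip(np.digitize(x, x_edges) - 1, 0, n_x_bins - 1)
    contour = np.zeros(n_x_bins)
    np.maximum.at(contour, bins, z)

    # Density map: 2D histogram of points over (along-row, height) cells.
    density, _, _ = np.histogram2d(x, z, bins=[x_edges, z_edges])
    return contour, density

# Quick check with synthetic points
pts = np.random.rand(5000, 3) * [10.0, 1.0, 3.0]
contour, density = row_features(pts)
print(contour.shape, density.shape)   # (200,) (200, 64)
```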
Efficient plant phenotyping methods are necessary to accelerate the development of high-yield biofuel crops. Manual measurement of plant phenotypes such as height is inefficient, labor intensive, and error prone. We present a robust, LiDAR-based approach to estimate the height of biomass sorghum plants. A vertically oriented laser rangefinder onboard an agricultural robot captures LiDAR scans of the environment as the robot traverses between crop rows. These scans are used to generate height contours for a single row of plants corresponding to a given genetic strain. We apply ground segmentation, iterative peak detection, and peak filtering to estimate the average height of each row. Our LiDAR-based approach can estimate height at all stages of the growing period, from emergence (~10 cm) through canopy closure (~4 m). The algorithm has been extensively validated by several ground-truthing campaigns on biomass sorghum, encompassing both the typical methods employed by breeders and higher-accuracy measurement methods. We achieve an absolute height estimation error of 8.46% against "by-eye" ground truth over 2842 plots, 5.65% against high-granularity ground truth from agronomists over 12 plots, and 7.2% against ground truth from multiple agronomists over 12 plots.
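The following is a minimal sketch of the per-row pipeline named in this abstract (ground segmentation, peak detection, peak filtering), not the authors' implementation; the percentile-based ground estimate and all thresholds are assumed values.

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical per-row height estimation from a 1D height contour.

def estimate_row_height(contour, min_plant_height=0.10,
                        max_plant_height=4.0, min_separation_bins=5):
    """contour: 1D array of raw heights (m) along one crop row."""
    # Ground segmentation: take a low percentile as the ground level.
    ground = np.percentile(contour, 5)
    canopy = contour - ground

    # Peak detection on the ground-removed contour.
    peaks, _ = find_peaks(canopy, height=min_plant_height,
                          distance=min_separation_bins)

    # Peak filtering: discard returns above the plausible plant range
    # (e.g. dust, insects, or sensor artifacts).
    heights = canopy[peaks]
    heights = heights[heights <= max_plant_height]

    # Average the surviving peaks to get the row height.
    return float(np.mean(heights)) if heights.size else 0.0

# Synthetic contour: ~2 m canopy with periodic plant tops
contour = 0.1 + 2.0 * np.abs(np.sin(np.linspace(0, 20, 400)))
print(f"estimated row height: {estimate_row_height(contour):.2f} m")
```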
With the mandatory introduction of the May 2011 directive for the reassessment of bridges in Germany, the federal and state administrations have a duty to prove the stability of their bridge stock. Verification of bridge stability must account for the newly increased traffic loads. Particularly for older bridges, verification can often only be achieved if the calculative surplus load capacity of the original structural design is taken into account in the recalculation. One option for considering these reserves is the exact determination of the dead weight of the bridge. This case study demonstrates how the problem can be solved in practice. To determine the dead weight of a concrete bridge, its volume has to be calculated. As a first step, a 3D laser scanner is used to record the internal geometry of a hollow-box bridge girder. The thickness of the concrete members is determined with the non-destructive ultrasonic echo technique. The structure must be segmented into approximately equidistant parts to allow an economic and efficient investigation. The segmentation of the point cloud, carried out in a 2D model, was described in the first part of this publication. The subject of this presentation is the merging of the 2D cross sections into a 3D model, from which the weight of the bridge can be calculated.
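To make the final step concrete, here is a hedged sketch of the weight calculation from segmented cross sections: the areas at approximately equidistant stations along the bridge axis are integrated to a volume and multiplied by a material density. The station spacing, section areas, and concrete density below are assumed, illustrative values, not data from the case study.

```python
import numpy as np

# Hypothetical dead-weight estimate from cross-sectional areas.
CONCRETE_DENSITY = 2500.0      # kg/m^3, assumed for reinforced concrete

def bridge_weight(areas, stations):
    """areas: section areas (m^2) at axial positions 'stations' (m)."""
    # Trapezoidal rule interpolates linearly between adjacent sections.
    volume = np.trapz(areas, stations)
    return volume * CONCRETE_DENSITY

# Example: 11 equidistant sections over a 50 m span
stations = np.linspace(0.0, 50.0, 11)
areas = 6.0 + 0.5 * np.sin(stations / 10.0)   # slowly varying girder area
print(f"estimated dead weight: {bridge_weight(areas, stations) / 1000:.1f} t")
```

Denser station spacing reduces the interpolation error between sections, which is why the segmentation into approximately equidistant parts matters for the accuracy of the recovered dead weight.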