We present a visual SLAM pipeline that is efficient, robust, and accurate, applied to the trained-parking use case. In this use case the SLAM algorithm builds a "trained" map on a first pass, typically driven by the driver; on subsequent passes the algorithm localizes against this trained trajectory, allowing the vehicle to autonomously follow the trained path. A visual SLAM system is an attractive option for autonomous vehicles because it utilizes relatively cheap sensors that are typically already mounted on the vehicle for other tasks. However, a visual SLAM approach has its challenges; in this paper we look specifically at the localization task in difficult conditions. The system is designed to operate in an uncontrolled environment. Between map generation and localization the scene may change significantly: dynamic objects may differ, structure may be missing or moved, and the scene may look different due to changes in illumination or weather. These are the so-called hard cases. We present a real-time approach designed to tackle these hard cases, which has been evaluated both at the bench and in-car.
Catherine Enright, "Visual SLAM and Localization – The Hard Cases," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Autonomous Vehicles and Machines, 2018, pp. 281-1–281-5, https://doi.org/10.2352/ISSN.2470-1173.2018.17.AVM-281