Modern vehicles, and driver-assisted cars in particular, rely heavily on advanced sensors for navigation, localization, and obstacle detection. Two of the most important sensors are the inertial measurement unit (IMU) and the Global Positioning System (GPS) receiver. The former is prone to dead-reckoning errors caused by wheel slippage and rough terrain, while the latter can be noisy and depends on good satellite visibility. Adding camera sensors enables the use of visual data for navigation tasks such as lane tracking and obstacle avoidance, localization tasks such as motion and pose estimation, and general mapping and path planning. The approach proposed in this paper allows camera systems to work in conjunction with, or to replace, both IMU and GPS sensors. The proposed visual odometry and deep learning localization algorithms improve navigation and localization capabilities over current state-of-the-art methods. These algorithms can be used directly in today's advanced driver assistance systems and take us one step closer to full autonomy.
Suvam Bag, Vishwas Venkatachalapathy, and Raymond W. Ptucha, "Motion Estimation Using Visual Odometry and Deep Learning Localization," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Autonomous Vehicles and Machines, 2017, pp. 62–69, https://doi.org/10.2352/ISSN.2470-1173.2017.19.AVM-022