Modern vehicles, and driver-assisted cars in particular, rely heavily on advanced sensors for navigation, localization, and obstacle detection. Two of the most important sensors are the Inertial Measurement Unit (IMU) and the Global Positioning System (GPS) receiver. The former is susceptible to errors from wheel slippage and rough terrain, while the latter can be noisy and depends on good satellite visibility. Adding camera sensors enables the use of visual data for navigation tasks such as lane tracking and obstacle avoidance, for localization tasks such as motion and pose estimation, and for general mapping and path planning. The approach proposed in this paper allows camera systems to work in conjunction with, or to replace, both IMU and GPS sensors. The proposed visual odometry and deep-learning localization algorithms improve navigation and localization capabilities over current state-of-the-art methods. These algorithms can be used directly in today's advanced driver assistance systems, and take us one step closer to full autonomy.
As most robot navigation systems for large-scale outdoor applications are built on high-end sensors, implementing a low-cost autonomous ground-based vehicle remains challenging. This paper presents an autonomous navigation system that uses only a stereo camera and a low-cost GPS receiver. The proposed method consists of Visual Odometry (VO), pose estimation, obstacle detection, local path planning, and a waypoint follower. VO computes the relative pose between two consecutive pairs of stereo images; however, it inevitably suffers from drift (error accumulation) over time. A low-cost GPS provides absolute locations that can be used to correct this drift. We fuse VO and GPS data with an Extended Kalman Filter (EKF) to achieve more accurate localization both locally and globally. To detect obstacles, we generate a dense depth map by stereo disparity estimation and transform it into a 2D occupancy grid map. Local path planning computes temporary waypoints to avoid obstacles, and a waypoint follower steers the robot towards the goal point. We evaluated the proposed method on a mobile robot platform in real-time experiments in an outdoor environment. Experimental results show that the mobile vision and control system is capable of autonomously traversing roads in this outdoor environment.
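To make the VO/GPS fusion concrete, here is a minimal sketch of the kind of EKF the second paper describes: VO supplies relative body-frame motion increments for the prediction step, and GPS supplies absolute position fixes for the update step. The 2D state, the class name, and all noise values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class VoGpsEKF:
    """Minimal 2D EKF: VO provides relative motion (dx, dy, dtheta) in the
    body frame (prediction); GPS provides absolute (x, y) fixes (update).
    State: [x, y, theta] in a world frame. Noise matrices are placeholders."""

    def __init__(self, x0, P0, Q, R):
        self.x = np.asarray(x0, dtype=float)  # state mean [x, y, theta]
        self.P = np.asarray(P0, dtype=float)  # state covariance (3x3)
        self.Q = Q                            # VO process noise (3x3)
        self.R = R                            # GPS measurement noise (2x2)

    def predict(self, dx, dy, dtheta):
        """Propagate the state with a VO increment given in the body frame."""
        c, s = np.cos(self.x[2]), np.sin(self.x[2])
        self.x += np.array([c * dx - s * dy, s * dx + c * dy, dtheta])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1.0, 0.0, -s * dx - c * dy],
                      [0.0, 1.0,  c * dx - s * dy],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, gps_xy):
        """Correct the state with an absolute GPS position fix (x, y)."""
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])          # GPS observes position only
        y = np.asarray(gps_xy) - H @ self.x      # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x += K @ y
        self.P = (np.eye(3) - K @ H) @ self.P
```

The design mirrors the paper's division of labor: the locally smooth but drifting VO drives the high-rate prediction, while the globally anchored but noisy GPS supplies low-rate corrections that bound the accumulated error.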
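Similarly, the following is a rough sketch of projecting a dense disparity map into the 2D occupancy grid used for obstacle detection above. The camera convention (x right, y down, z forward), the flat-ground assumption, and every parameter value are hypothetical placeholders rather than values from the paper.

```python
import numpy as np

def disparity_to_occupancy(disparity, fx, fy, cx, cy, baseline,
                           cam_height=0.5, min_h=0.15, max_h=2.0,
                           cell=0.1, cells=200):
    """Convert a dense stereo disparity map into a 2D occupancy grid.

    Assumes a flat ground plane at y = cam_height below the camera and a
    robot placed at the bottom-center of the grid. All thresholds are
    illustrative.
    """
    v, u = np.indices(disparity.shape)
    valid = disparity > 0
    z = fx * baseline / disparity[valid]   # depth from disparity
    x = (u[valid] - cx) * z / fx           # lateral position
    y = (v[valid] - cy) * z / fy           # vertical position (down positive)
    height = cam_height - y                # height above the ground plane

    # Keep only points in the obstacle height band, ignoring ground and canopy
    obstacle = (height > min_h) & (height < max_h)

    grid = np.zeros((cells, cells), dtype=np.uint8)
    col = (x[obstacle] / cell + cells / 2).astype(int)   # x -> columns
    row = (cells - 1 - z[obstacle] / cell).astype(int)   # z forward -> rows
    inb = (row >= 0) & (row < cells) & (col >= 0) & (col < cells)
    grid[row[inb], col[inb]] = 1           # mark occupied cells
    return grid
```

A grid like this is what the local path planner would consume when computing temporary waypoints around obstacles.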