Over two million people in the United States rely on a wheelchair to perform daily tasks. Joystick controls on motorized wheelchairs have improved the lives of many users, but are of little value to the visually impaired or to patients with restricted hand mobility. Often, these wheelchair users must rely on caretakers to assist them with their mobility, limiting their independence. Milpet is an access technology research platform centered on improving the quality of life of wheelchair users. By expanding Milpet's control interface to include speech recognition, those who cannot benefit from a joystick are given new freedoms. Given a map of its environment, Milpet localizes itself using LiDAR sensor scans and a particle filter. In addition to simple movement commands such as "turn left", "stop", and "go faster", the speech interface, together with the localization and navigation modules, enables patients to issue more complex commands. For example, the command "take me to the kitchen" instructs Milpet to drive autonomously to the specified location while avoiding walls and other obstacles. This self-driving wheelchair is a major step toward improving the quality of life of the mobility impaired who cannot benefit from a joystick.
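To make the particle filter localization step concrete, the following is a minimal sketch of one Monte Carlo localization cycle (predict, weight against the LiDAR scan, resample). It is not Milpet's actual implementation: the ray-casting helper `expected_range`, the noise parameters, and the particle count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_range(pose, bearing):
    """Hypothetical map lookup: ray-cast from `pose` along `bearing` and
    return the distance to the nearest wall. Stubbed with a constant here."""
    return 5.0

def predict(particles, v, w, dt, noise=(0.05, 0.02)):
    """Propagate each particle with the velocity motion model plus noise."""
    x, y, th = particles.T
    th = th + w * dt + rng.normal(0, noise[1], len(particles))
    x = x + v * dt * np.cos(th) + rng.normal(0, noise[0], len(particles))
    y = y + v * dt * np.sin(th) + rng.normal(0, noise[0], len(particles))
    return np.column_stack([x, y, th])

def update(particles, scan, bearings, sigma=0.2):
    """Weight particles by how well map-predicted ranges match the scan."""
    weights = np.ones(len(particles))
    for z, b in zip(scan, bearings):
        z_hat = np.array([expected_range(p, b) for p in particles])
        weights *= np.exp(-0.5 * ((z - z_hat) / sigma) ** 2)
    weights += 1e-300  # guard against an all-zero weight vector
    return weights / weights.sum()

def resample(particles, weights):
    """Low-variance (systematic) resampling."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx]

# One filter step: 500 particles, slow forward motion with a slight turn.
particles = rng.uniform([-1, -1, -np.pi], [1, 1, np.pi], (500, 3))
particles = predict(particles, v=0.5, w=0.1, dt=0.1)
w = update(particles, scan=[4.9, 5.1], bearings=[-0.5, 0.5])
particles = resample(particles, w)
print("pose estimate:", particles.mean(axis=0))
```

In practice the pose estimate (e.g., the weighted particle mean) feeds the navigation module, which plans a collision-free path to the goal named in the spoken command.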
Modern vehicles, and especially driver-assisted cars, rely heavily on advanced sensors for navigation, localization, and obstacle detection. Two of the most important are the Inertial Measurement Unit (IMU) and the Global Positioning System (GPS). The former is subject to wheel slippage and rough terrain, while the latter can be noisy and depends on good satellite signals. Adding camera sensors enables the use of visual data for navigation tasks such as lane tracking and obstacle avoidance, for localization tasks such as motion and pose estimation, and for general mapping and path planning. The approach proposed in this paper allows camera systems to work in conjunction with, or to replace, both IMU and GPS sensors. The proposed visual odometry and deep learning localization algorithms improve navigation and localization capabilities over current state-of-the-art methods. These algorithms can be used directly in today's advanced driver assistance systems and take us one step closer to full autonomy.
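As a rough illustration of the visual odometry idea (not the paper's actual pipeline), the sketch below estimates frame-to-frame camera motion with OpenCV: ORB features are matched between consecutive frames, and the relative rotation and translation are recovered from the essential matrix. The intrinsics matrix `K` and the frame file names are assumed placeholders.

```python
import cv2
import numpy as np

K = np.array([[718.856, 0.0, 607.193],   # assumed pinhole intrinsics
              [0.0, 718.856, 185.216],
              [0.0, 0.0, 1.0]])

orb = cv2.ORB_create(2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def relative_pose(img1, img2):
    """Estimate rotation R and unit-scale translation t from img1 to img2."""
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects outlier matches; E encodes the epipolar geometry.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

# Chain relative poses into a trajectory. Note that translation scale is
# ambiguous in monocular VO and would come from another sensor or a prior.
pose = np.eye(4)
prev = cv2.imread("frame0000.png", cv2.IMREAD_GRAYSCALE)  # assumed frames
for i in range(1, 10):
    cur = cv2.imread(f"frame{i:04d}.png", cv2.IMREAD_GRAYSCALE)
    R, t = relative_pose(prev, cur)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t.ravel()
    pose = pose @ np.linalg.inv(T)
    prev = cur
print("camera position:", pose[:3, 3])
```

A chained estimate like this drifts over time, which is one motivation for complementing geometric visual odometry with a learned localization model or with IMU/GPS fusion where those sensors are available.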