Volume: 32 | Article ID: art00012
VRUNet: Multi-Task Learning Model for Intent Prediction of Vulnerable Road Users
DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-109 | Published Online: January 2020
Abstract

Advanced perception and path planning are at the core of any self-driving vehicle. Autonomous vehicles need to understand the scene and the intentions of other road users for safe motion planning. For urban use cases it is very important to perceive and predict the intentions of pedestrians, cyclists, scooter riders, etc., classified as vulnerable road users (VRUs). Intent is a combination of pedestrian activities and long-term trajectories defining their future motion. In this paper we propose a multi-task learning model to predict pedestrian actions and crossing intent, and to forecast their future path from video sequences. We have trained the model on the open-source JAAD [1] naturalistic driving dataset, which is rich in behavioral annotations and real-world scenarios. Experimental results show state-of-the-art performance on the JAAD dataset and demonstrate the benefit of jointly learning and predicting actions and trajectories using 2D human pose features and scene context.
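The multi-task setup described above (a shared representation feeding separate heads for action classification, crossing intent, and trajectory forecasting) can be illustrated with a minimal NumPy sketch. All dimensions, joint counts, and horizons below are illustrative assumptions, not values from the paper, and the random weights stand in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 2D pose features for
# 18 joints over 15 observed frames, plus a scene-context vector.
T_OBS, N_JOINTS, CTX_DIM = 15, 18, 64
HIDDEN = 128
N_ACTIONS = 4   # e.g. walking, standing, looking, crossing
T_PRED = 30     # future horizon in frames

def linear(x, w, b):
    return x @ w + b

# Shared encoder weights (randomly initialized for this sketch).
in_dim = T_OBS * N_JOINTS * 2 + CTX_DIM
W_enc = rng.normal(0, 0.01, (in_dim, HIDDEN)); b_enc = np.zeros(HIDDEN)

# Three task heads branching off the shared encoder.
W_act = rng.normal(0, 0.01, (HIDDEN, N_ACTIONS));  b_act = np.zeros(N_ACTIONS)
W_int = rng.normal(0, 0.01, (HIDDEN, 1));          b_int = np.zeros(1)
W_trj = rng.normal(0, 0.01, (HIDDEN, T_PRED * 2)); b_trj = np.zeros(T_PRED * 2)

def forward(pose_seq, context):
    """pose_seq: (T_OBS, N_JOINTS, 2) pixel coords; context: (CTX_DIM,)."""
    x = np.concatenate([pose_seq.ravel(), context])
    h = np.tanh(linear(x, W_enc, b_enc))                         # shared features
    action_logits = linear(h, W_act, b_act)                      # action head
    crossing_prob = 1 / (1 + np.exp(-linear(h, W_int, b_int)))   # intent head
    future_xy = linear(h, W_trj, b_trj).reshape(T_PRED, 2)       # trajectory head
    return action_logits, crossing_prob, future_xy

logits, p_cross, traj = forward(
    rng.normal(size=(T_OBS, N_JOINTS, 2)), rng.normal(size=CTX_DIM))
print(logits.shape, p_cross.shape, traj.shape)
```

The point of sharing one encoder across heads is that pose dynamics useful for recognizing an action (e.g. starting to cross) are also predictive of the future path, which is the joint-learning benefit the abstract reports.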

Views 60
Downloads 12
  Cite this article 

Adithya Ranga, Filippo Giruzzi, Jagdish Bhanushali, Emilie Wirbel, Patrick Pérez, Tuan-Hung Vu, Xavier Perotton, "VRUNet: Multi-Task Learning Model for Intent Prediction of Vulnerable Road Users," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Autonomous Vehicles and Machines, 2020, pp. 109-1 - 109-10, https://doi.org/10.2352/ISSN.2470-1173.2020.16.AVM-109

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2020
Electronic Imaging
2470-1173
Society for Imaging Science and Technology
7003 Kilworth Lane, Springfield, VA 22151 USA