Volume: 34 | Article ID: AVM-146
Adversarial attacks on multi-task visual perception for autonomous driving (JIST-first)
DOI: 10.2352/J.ImagingSci.Technol.2021.65.6.060408 | Published Online: November 2021
Abstract

In recent years, deep neural networks (DNNs) have achieved impressive success in a variety of applications, including autonomous driving perception tasks. However, current deep neural networks are easily fooled by adversarial attacks. This vulnerability raises significant concerns, particularly in safety-critical applications, and as a result research into attacking and defending DNNs has gained significant attention. In this work, detailed adversarial attacks are applied to a diverse multi-task visual perception deep network spanning distance estimation, semantic segmentation, motion detection, and object detection. The experiments consider both white-box and black-box attacks in targeted and untargeted settings, attacking one task and inspecting the effect on all the others, and additionally examine the effect of applying a simple defense method. We conclude the paper by comparing and discussing the experimental results and proposing insights and future work. Visualizations of the attacks are available at https://drive.google.com/file/d/1NKhCL2uC_SKam3H05SqjKNDE_zgvwQS-/view?usp=sharing
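The paper itself evaluates attacks on a full multi-task perception network; as a minimal illustration of the untargeted white-box setting the abstract describes, the Fast Gradient Sign Method (FGSM) can be sketched on a toy binary logistic model. All weights, inputs, and the epsilon value below are hypothetical, chosen only to show the mechanics of perturbing an input along the sign of the loss gradient:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, eps):
    """Untargeted FGSM on a binary logistic model p = sigmoid(w . x).

    Moves the input one eps-sized step along the sign of the
    cross-entropy gradient w.r.t. x, then clips back to the valid
    [0, 1] input range.
    """
    p = sigmoid(w @ x)
    grad_x = (p - y) * w  # d(cross-entropy)/dx for logistic regression
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy example: a correctly classified input is flipped by the attack.
w = np.array([2.0, -1.0])   # hypothetical trained weights
x = np.array([0.5, 0.5])    # clean input, true label y = 1
x_adv = fgsm_attack(x, y=1, w=w, eps=0.4)
```

On this toy model the clean input is classified as the positive class (sigmoid(w @ x) > 0.5) while the perturbed input is not, mirroring the kind of prediction flip the attacks in the paper induce on real perception outputs.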

Cite this article

Ibrahim Sobh, Ahmed Hamed, Varun Ravi Kumar, Senthil Yogamani, "Adversarial attacks on multi-task visual perception for autonomous driving (JIST-first)," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Autonomous Vehicles and Machines, 2021, pp. -, https://doi.org/10.2352/J.ImagingSci.Technol.2021.65.6.060408

Copyright statement
Copyright © Society for Imaging Science and Technology 2021
Electronic Imaging
ISSN: 2470-1173
Society for Imaging Science and Technology
IS&T 7003 Kilworth Lane, Springfield, VA 22151 USA