In recent years, deep neural networks (DNNs) have achieved impressive success in various applications, including autonomous driving perception tasks. However, current deep neural networks are easily deceived by adversarial attacks. This vulnerability raises significant concerns, particularly in safety-critical applications. As a result, research into attacking and defending DNNs has gained considerable attention. In this work, detailed adversarial attacks are applied to a diverse multi-task visual perception deep network across four tasks: distance estimation, semantic segmentation, motion detection, and object detection. The experiments consider both white-box and black-box attacks in targeted and untargeted settings, attacking one task and inspecting the effect on all the others, in addition to inspecting the effect of applying a simple defense method. We conclude the paper by comparing and discussing the experimental results and proposing insights and future work. The visualizations of the attacks are available at
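As an illustration of the kind of white-box, untargeted attack described above, the following is a minimal sketch of a one-step gradient-sign (FGSM-style) perturbation. The abstract does not name the specific attack algorithms used, so FGSM is an assumption here, and the tiny logistic model stands in for the multi-task perception network purely to keep the example self-contained.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_untargeted(x, y, w, b, eps=0.1):
    """One-step FGSM sketch: nudge the input in the direction of the
    loss gradient's sign, bounded by eps, to increase the model's loss.

    x : input features in [0, 1]
    y : true binary label (0 or 1)
    w, b : weights and bias of a stand-in logistic model (hypothetical,
           replacing the real perception network for illustration)
    """
    p = sigmoid(w @ x + b)            # model's predicted probability
    grad_x = (p - y) * w              # d(cross-entropy loss)/dx
    x_adv = x + eps * np.sign(grad_x) # sign-of-gradient step
    return np.clip(x_adv, 0.0, 1.0)  # keep perturbed input in valid range

# Toy usage: the adversarial input raises the loss on the true label.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.2, 0.6, 0.4])
y = 1.0
x_adv = fgsm_untargeted(x, y, w, b, eps=0.1)
loss_clean = -np.log(sigmoid(w @ x + b))
loss_adv = -np.log(sigmoid(w @ x_adv + b))
```

In a real setting the gradient would be obtained by backpropagation through the attacked task's loss, and the same perturbed image would then be fed to the remaining tasks to measure cross-task effects.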
Ibrahim Sobh, Ahmed Hamed, Varun Ravi Kumar, and Senthil Yogamani, "Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving," Journal of Imaging Science and Technology, 2021, pp. 060408-1 - 060408-9, https://doi.org/10.2352/J.ImagingSci.Technol.2021.65.6.060408