Volume: 33 | Article ID: art00011
Attribution of Gradient Based Adversarial Attacks for Reverse Engineering of Deceptions
DOI: 10.2352/ISSN.2470-1173.2021.4.MWSF-300 | Published Online: January 2021
Abstract

Machine Learning (ML) algorithms are susceptible to adversarial attacks and deception both during training and deployment. Automatic reverse engineering of the toolchains behind these adversarial machine learning attacks will aid in recovering the tools and processes used in these attacks. In this paper, we present two techniques that support automated identification and attribution of adversarial ML attack toolchains using co-occurrence pixel statistics and Laplacian residuals. Our experiments show that the proposed techniques can identify the parameters used to generate adversarial samples. To the best of our knowledge, this is the first approach to attribute gradient-based adversarial attacks and estimate their parameters. Source code and data are available at: https://github.com/michael-goebel/ei_red
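The two feature extractors named in the abstract can be illustrated with a minimal sketch. The code below assumes a grayscale uint8 image; the function names, the 3x3 Laplacian kernel, and the co-occurrence offset are illustrative choices, not the authors' exact pipeline (see the linked repository for that).

    import numpy as np
    from scipy.ndimage import convolve

    def laplacian_residual(img):
        # High-pass residual: convolve with a 3x3 Laplacian kernel so that
        # smooth image content cancels and attack perturbations stand out.
        kernel = np.array([[0,  1, 0],
                           [1, -4, 1],
                           [0,  1, 0]], dtype=np.float64)
        return convolve(img.astype(np.float64), kernel, mode="reflect")

    def cooccurrence(img, offset=(0, 1), levels=256):
        # Co-occurrence pixel statistics: count how often value pairs occur
        # at the given spatial offset (here, horizontally adjacent pixels).
        dy, dx = offset
        h, w = img.shape
        a = img[:h - dy, :w - dx].ravel()
        b = img[dy:, dx:].ravel()
        mat = np.zeros((levels, levels), dtype=np.int64)
        np.add.at(mat, (a, b), 1)
        return mat

Either feature map could then be fed to a classifier that predicts the attack family and its parameters; the abstract does not specify the classifier, so that stage is omitted here.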

  Cite this article 

Michael Goebel, Jason Bunk, Srinjoy Chattopadhyay, Lakshmanan Nataraj, Shivkumar Chandrasekaran, B. S. Manjunath, "Attribution of Gradient Based Adversarial Attacks for Reverse Engineering of Deceptions," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Media Watermarking, Security, and Forensics, 2021, pp. 300-1–300-7, https://doi.org/10.2352/ISSN.2470-1173.2021.4.MWSF-300

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2021
Electronic Imaging
ISSN: 2470-1173
Society for Imaging Science and Technology
7003 Kilworth Lane, Springfield, VA 22151 USA