A Survey of Adversarial Attacks and Defenses for Image Data in Deep Learning

Huayu Li, Dmitry Namiot

Abstract


This article provides a detailed survey of so-called adversarial attacks and defenses. Adversarial attacks are special modifications of the input data of machine learning systems designed to make those systems work incorrectly. The article discusses traditional approaches in which the construction of adversarial examples is treated as an optimization problem: the search for the smallest possible modification of the original data that "deceives" the machine learning system. Classification systems are almost always considered as the targets of adversarial attacks. In practice, this corresponds to so-called critical systems (driverless vehicles, avionics, special applications, etc.), where attacks are obviously the most dangerous. In general, sensitivity to such attacks indicates a lack of robustness in the machine (deep) learning system, and it is precisely these robustness problems that are the main obstacle to adopting machine learning for the control of critical systems.
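To make the optimization view concrete: for a classifier f and an input x, an adversarial example is typically sought as the smallest perturbation delta (under some norm) such that f(x + delta) differs from f(x). The sketch below illustrates a single-step approximation of this search, the fast gradient sign method (FGSM); the model, loss function, and parameter values here are illustrative assumptions, not specifics taken from the surveyed article.

    import torch

    def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
        # Compute the gradient of the classification loss with respect to the input.
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)
        loss.backward()
        # Take one step of size epsilon along the gradient sign: an L-infinity
        # approximation of the "smallest deceiving perturbation" search.
        x_adv = x + epsilon * x.grad.sign()
        # Keep the perturbed image within the valid pixel range.
        return x_adv.clamp(0.0, 1.0).detach()

Here epsilon bounds the perturbation size; iterating this step with a projection back into the epsilon-ball yields the stronger multi-step attacks (e.g., PGD) that the adversarial training literature builds on.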







