Camouflage as an adversarial attack on machine learning models

Dmitry Prishletsov, Sergey Prishletsov, Dmitry Namiot

Abstract


The article is devoted to adversarial attacks on machine learning models. Such attacks are understood as the deliberate manipulation of data at different stages of the machine learning pipeline, designed either to prevent the correct operation of the model or to force it to produce a result desired by the attacker. Here, physical evasion attacks were considered, that is, attacks in which the physical objects presented to the model are themselves modified. The article discusses the use of camouflage patterns to deceive a recognition system. The experiments were carried out with a machine learning model that recognizes images of cars. Two types of camouflage were used: a classic camouflage pattern and the painted image of another car (mimicry). In practice, such manipulations can be carried out by airbrushing. The work confirmed that such attacks can be implemented successfully, obtained metrics characterizing their effectiveness, and demonstrated the possibility of countering them with adversarial training. All results are openly published, which makes it possible to use them as a software testbed for evaluating other attacks and defense methods.
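The attack itself amounts to overlaying a camouflage texture on the body of the car in an input photograph and checking whether the classifier's prediction changes. Below is a minimal sketch of this step, assuming a ResNet34 classifier fine-tuned on the 196 classes of the Stanford Car Dataset (the setup of the referenced PyTorch car classifier notebook); the file names resnet34_cars.pt, car.jpg and camo.png, as well as the paste region, are placeholders, and the digital overlay stands in for the physical airbrushing described above.

    # Sketch: patch-overlay evasion attack on a car classifier.
    # Assumptions: resnet34_cars.pt is a hypothetical checkpoint fine-tuned
    # on the Stanford Car Dataset (196 classes); car.jpg and camo.png are
    # placeholder input files.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def paste_camouflage(car, camo, box=(60, 120, 180, 200)):
        # Overlay the camouflage texture on a (hypothetical) body-panel
        # region; box is (left, upper, right, lower) in pixels.
        patched = car.copy()
        patch = camo.resize((box[2] - box[0], box[3] - box[1]))
        patched.paste(patch, box[:2])
        return patched

    model = models.resnet34()
    model.fc = torch.nn.Linear(model.fc.in_features, 196)
    model.load_state_dict(torch.load("resnet34_cars.pt", map_location="cpu"))
    model.eval()

    car = Image.open("car.jpg").convert("RGB")
    camo = Image.open("camo.png").convert("RGB")

    with torch.no_grad():
        for name, img in [("clean", car),
                          ("camouflaged", paste_camouflage(car, camo))]:
            probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)
            conf, cls = probs.max(dim=1)
            print(f"{name}: class {cls.item()} ({conf.item():.2%})")

A successful evasion shows up as a change of the predicted class, or a sharp drop in confidence, between the clean and the camouflaged run; camouflage textures of the first type can be produced with the referenced camogen generator.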
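The countermeasure examined in the article is adversarial training: the model is retrained on batches that contain both clean and camouflaged versions of the same images, so that it learns to ignore the painted pattern. The following is a minimal sketch of one such epoch, assuming the loader yields already-normalized image tensors and camo is a camouflage texture tensor preprocessed with the same normalization; apply_camo is a tensor-space analogue of the overlay above, and all names are illustrative.

    import torch

    def apply_camo(batch, camo, box=(120, 200, 60, 180)):
        # Paste the camouflage patch into a fixed region of each image;
        # box is (top, bottom, left, right) in pixels of the 224x224 input.
        top, bottom, left, right = box
        patch = torch.nn.functional.interpolate(
            camo.unsqueeze(0), size=(bottom - top, right - left))
        patched = batch.clone()
        patched[:, :, top:bottom, left:right] = patch
        return patched

    def adversarial_epoch(model, loader, camo, optimizer, device="cpu"):
        # One epoch of adversarial training: every clean batch is paired
        # with its camouflaged twin under the same (correct) labels.
        criterion = torch.nn.CrossEntropyLoss()
        model.train()
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            inputs = torch.cat([images, apply_camo(images, camo.to(device))])
            targets = torch.cat([labels, labels])
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()

Training on the doubled batch preserves accuracy on clean inputs while forcing the network to classify the camouflaged copies correctly, which is the defensive effect reported in the article.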



References


Ilyushin, Eugene, Dmitry Namiot, and Ivan Chizhov. "Attacks on machine learning systems - common problems and methods." International Journal of Open Information Technologies 10.3 (2022): 17-22. (in Russian)

Namiot, Dmitry. "Schemes of attacks on machine learning models." International Journal of Open Information Technologies 11.5 (2023): 68-86. (in Russian)

Kostyumov, Vasily. "A survey and systematization of evasion attacks in computer vision." International Journal of Open Information Technologies 10.10 (2022): 11-20. (in Russian)

Morgulis, Nir, et al. "Fooling a real car with adversarial traffic signs." arXiv preprint arXiv:1907.00374 (2019).

Nassi, Ben, et al. "Phantom of the ADAS: Phantom attacks on driver-assistance systems." Cryptology ePrint Archive (2020).

Knitting an anti-surveillance jumper https://kddandco.com/2022/11/02/knitting-an-anti-surveillance-jumper/ Retrieved: Aug, 2023

Du, Andrew, et al. "Physical adversarial attacks on an aerial imagery object detector." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2022.

Adhikari, Ajaya, et al. "Adversarial patch camouflage against aerial detection." arXiv preprint arXiv:2008.13671 (2020).

Stanford Car Dataset https://www.kaggle.com/datasets/jutrera/stanford-car-dataset-by-classes-folder Retrieved: Aug, 2023

DEEPBEAR “Pytorch car classifier” https://www.kaggle.com/code/deepbear/pytorch-car-classifier-90-accuracy Retrieved: Aug, 2023

Gael Lederrey “camogen camouflage generator” https://github.com/glederrey/camogen Retrieved: Aug, 2023

Prishlecov S.E. “Fizicheskie ataki na sistemu klassifikacii izobrazhenij posredstvom nanesenija kamufljazha” [Physical attacks on an image classification system by applying camouflage] https://github.com/sergiussrussia/resnet34_attacks Retrieved: Aug, 2023

Prishlecov D.E. “Ataka na sistemu klassifikacii izobrazhenij: mimikrija” [Attack on an image classification system: mimicry] https://gitlab.com/Pukuluka/adversarial-training Retrieved: Aug, 2023

