On Adversarial Attacks for Autonomous Vehicles

Dmitry Namiot, Vasily Kupriyanovsky, Alexey Pichugov


This article examines adversarial attacks against machine (deep) learning models used in autonomous vehicles. Artificial intelligence (machine learning) systems play a decisive role in the operation of unmanned vehicles. At the same time, all machine learning systems are susceptible to so-called adversarial attacks, in which an attacker deliberately modifies data so as to deceive the algorithms of such systems, degrade their quality of work, or induce behavior desired by the attacker. Adversarial attacks are a major problem for machine learning systems, especially those used in critical areas such as automated driving. They also pose a problem for functional testing: there exist inputs on which the system works incorrectly, works with low quality, or does not work at all. For autonomous vehicle systems, such attacks can be carried out in physical form, when real objects captured by the vehicle's sensors are modified, dummy objects are created, and so on. This article provides an overview of adversarial attacks on autonomous vehicles, focusing specifically on physical attacks.
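The evasion attacks described above can be illustrated with a minimal FGSM-style (fast gradient sign method) sketch. This is not code from the article; it is a toy example against a hand-built logistic-regression "model" with made-up weights and data, showing how an eps-bounded perturbation in the direction of the loss gradient flips a confident prediction:

```python
import numpy as np

# Toy illustration of an evasion (adversarial-example) attack:
# an FGSM-style perturbation against a tiny logistic-regression model.
# All weights and inputs here are assumptions chosen for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift x by eps in the direction that increases the loss
    (sign of the cross-entropy gradient w.r.t. the input)."""
    p = sigmoid(w @ x + b)       # predicted probability of class 1
    grad_x = (p - y_true) * w    # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])        # toy model weights (assumed)
b = 0.0
x = np.array([1.0, 0.5])         # clean input with true label 1
y = 1.0

p_clean = sigmoid(w @ x + b)             # confident "class 1" on clean input
x_adv = fgsm_perturb(x, w, b, y, eps=1.0)
p_adv = sigmoid(w @ x_adv + b)           # confidence collapses after the attack
```

Physical attacks surveyed in the article (stickers on signs, adversarial billboards, camouflage) pursue the same goal, but the perturbation must survive printing, viewing angle, and lighting rather than being applied directly to pixel values.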

Full Text:

PDF (Russian)


NIST AI 100-2 E2023 Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. https://csrc.nist.gov/pubs/ai/100/2/e2023/final Retrieved: May, 2024

Shibli, Ashfak Md, Mir Mehedi A. Pritom, and Maanak Gupta. "AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns." arXiv preprint arXiv:2402.09728 (2024).

Namiot, Dmitry. "Schemes of attacks on machine learning models." International Journal of Open Information Technologies 11.5 (2023): 68-86.

Bidzhiev, T. M., and D. E. Namiot. "Attacks on Machine Learning Models Based on the PyTorch Framework." Automation and Remote Control 3 (2024): 38-50.

Namiot, Dmitry, and Eugene Ilyushin. "Trusted Artificial Intelligence Platforms: Certification and Audit." International Journal of Open Information Technologies 12.1 (2024): 43-60.

Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "Ongoing academic and industrial projects dedicated to robust machine learning." International Journal of Open Information Technologies 9.10 (2021): 35-46. (in Russian)

Namiot, D. E., E. A. Ilyushin, and I. V. Chizhov. "Foundations for work on robust machine learning." International Journal of Open Information Technologies 9.11 (2021): 68-74. (in Russian)

Ilyushin, Eugene, Dmitry Namiot, and Ivan Chizhov. "Attacks on machine learning systems-common problems and methods." International Journal of Open Information Technologies 10.3 (2022): 17-22.

Namiot, Dmitry. "Introduction to Data Poison Attacks on Machine Learning Models." International Journal of Open Information Technologies 11.3 (2023): 58-68.

Namiot, D. E., E. A. Ilyushin, and O. G. Pilipenko. "Trusted artificial intelligence platforms." International Journal of Open Information Technologies 10.7 (2022): 119-127. (in Russian)

Song, Junzhe, and Dmitry Namiot. "On Real-Time Model Inversion Attacks Detection." International Conference on Distributed Computer and Communication Networks. Cham: Springer Nature Switzerland, 2023.

Ribeiro, Mauro, Katarina Grolinger, and Miriam A. M. Capretz. "MLaaS: Machine Learning as a Service." 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA). IEEE, 2015.

Sedar, Roshan, et al. "A comprehensive survey of V2X cybersecurity mechanisms and future research paths." IEEE Open Journal of the Communications Society 4 (2023): 325-391.

M. Uzair, "Who is liable when a driverless car crashes?", World Electric Veh. J., vol. 12, no. 2, 2021, [online] Available: https://www.mdpi.com/2032-6653/12/2/62.

P. Penmetsa, P. Sheinidashtegol, A. Musaev, E. K. Adanu and M. Hudnall, "Effects of the autonomous vehicle crashes on public perception of the technology", IATSS Res., vol. 45, no. 4, pp. 485-492, 2021.


M. Girdhar, J. Hong and J. Moore, "Cybersecurity of Autonomous Vehicles: A Systematic Literature Review of Adversarial Attacks and Defense Models," in IEEE Open Journal of Vehicular Technology, vol. 4, pp. 417-437, 2023, doi: 10.1109/OJVT.2023.3265363.

Tesla Autopilot feature was involved in 13 fatal crashes, US regulator says https://www.theguardian.com/technology/2024/apr/26/tesla-autopilot-fatal-crash Retrieved: Jun, 2024

G. Costantino and I. Matteucci, "Reversing Kia motors head unit to discover and exploit software vulnerabilities", J. Comput. Virol. Hacking Techn., vol. 19, pp. 33-49, 2022.

Costantino, Gianpiero, Marco De Vincenzi, and Ilaria Matteucci. "A vehicle firmware security vulnerability: an IVI exploitation." Journal of Computer Virology and Hacking Techniques (2024): 1-16.

Elkhail, Abdulrahman Abu, et al. "Vehicle security: A survey of security issues and vulnerabilities, malware attacks and defenses." IEEE Access 9 (2021): 162401-162437.

Pham, Minh, and Kaiqi Xiong. "A survey on security attacks and defense techniques for connected and autonomous vehicles." Computers & Security 109 (2021): 102269.

Ren, Huali, Teng Huang, and Hongyang Yan. "Adversarial examples: attacks and defenses in the physical world." International Journal of Machine Learning and Cybernetics 12.11 (2021): 3325-3336.

Kurakin, Alexey, Ian J. Goodfellow, and Samy Bengio. "Adversarial examples in the physical world." Artificial intelligence safety and security. Chapman and Hall/CRC, 2018. 99-112.

Circumvent Facial Recognition with Yarn https://hackaday.com/2023/04/16/circumvent-facial-recognition-with-yarn/ Retrieved: Jun, 2024

Nassi, Ben, et al. "Phantom of the ADAS: Phantom attacks on driver assistance systems." Cryptology ePrint Archive (2020).

"They knock on my cab — are you all right?" Tram drivers complain about the autopilot and are answered with statistics (in Russian) https://www.fontanka.ru/2024/03/14/73330787/ Retrieved: Jun, 2024

Hamdi, Mustafa Maad, et al. "A review on various security attacks in vehicular ad hoc networks." Bulletin of Electrical Engineering and Informatics 10.5 (2021): 2627-2635.

Zhou, Husheng, et al. "Deepbillboard: Systematic physical-world testing of autonomous driving systems." Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering. 2020.

Udacity Challenge. 2016. Steering angle model: Cg23. https://github.com/udacity/self-driving-car/tree/master/steering-models/community-models/cg23 (2016).

Udacity Challenge. 2016. Steering angle model: Rambo. https://github.com/udacity/self-driving-car/tree/master/steering-models/community-models/rambo (2016).

DeepBillboard https://github.com/deepbillboard/DeepBillboard Retrieved: Jun, 2024

Patel, Naman, et al. "Adaptive adversarial videos on roadside billboards: Dynamically modifying trajectories of autonomous vehicles." 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019.

9 Key Benefits of Mobile Billboard Advertising https://www.billups.com/articles/benefits-of-mobile-billboard-advertising Retrieved: Jun, 2024

Sato, Takami, et al. "Security of deep learning based lane keeping system under physical-world adversarial attack." arXiv preprint arXiv:2003.01782 (2020).

"OpenPilot: Open Source Driving Agent," https://github.com/commaai/openpilot, 2018.

https://sites.google.com/view/lane-keeping-adv-attack/ Retrieved: Jun, 2024

Woitschek, Fabian, and Georg Schneider. "Physical adversarial attacks on deep neural networks for traffic sign recognition: A feasibility study." 2021 IEEE Intelligent vehicles symposium (IV). IEEE, 2021.

Sitawarin, Chawin, et al. "Rogue signs: Deceiving traffic sign recognition with malicious ads and logos." arXiv preprint arXiv:1801.02780 (2018).

Sitawarin, Chawin, et al. "Darts: Deceiving autonomous cars with toxic signs." arXiv preprint arXiv:1802.06430 (2018).

Lenticular printing (in Russian) https://ru.wikipedia.org/wiki/%D0%9B%D0%B5%D0%BD%D1%82%D0%B8%D0%BA%D1%83%D0%BB%D1%8F%D1%80%D0%BD%D0%B0%D1%8F_%D0%BF%D0%B5%D1%87%D0%B0%D1%82%D1%8C Retrieved: Jun, 2024

Morgulis, Nir, et al. "Fooling a real car with adversarial traffic signs." arXiv preprint arXiv:1907.00374 (2019).

Han, Xingshuo, et al. "Physical backdoor attacks to lane detection systems in autonomous driving." Proceedings of the 30th ACM International Conference on Multimedia. 2022.

Chernikova, Alesia, et al. "Are self-driving cars secure? evasion attacks against deep neural networks for steering angle prediction." 2019 IEEE Security and Privacy Workshops (SPW). IEEE, 2019.

Arroyo, Miguel A., et al. "YOLO: frequently resetting cyber-physical systems for security." Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2019. Vol. 11009. SPIE, 2019.

Nguyen, Kien, et al. "Physical Adversarial Attacks for Surveillance: A Survey." IEEE Transactions on Neural Networks and Learning Systems (2023).

Prishletsov, Dmitry, Sergey Prishletsov, and Dmitry Namiot. "Camouflage as adversarial attacks on machine learning models." International Journal of Open Information Technologies 11.9 (2023): 41-49.

Zhang, Yang, et al. "CAMOU: Learning physical vehicle camouflages to adversarially attack detectors in the wild." International Conference on Learning Representations. 2018.

Song, Dawn, et al. "Physical adversarial examples for object detectors." 12th USENIX workshop on offensive technologies (WOOT 18). 2018.

Evtimov, Ivan, et al. "Robust physical-world attacks on machine learning models." arXiv preprint arXiv:1707.08945 2.3 (2017): 4.

Kumar, K. Naveen, et al. "Black-box adversarial attacks in autonomous vehicle technology." 2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE, 2020.

Cao, Yulong, et al. "Adversarial sensor attack on lidar-based perception in autonomous driving." Proceedings of the 2019 ACM SIGSAC conference on computer and communications security. 2019.

Namiot, D., and E. Ilyushin. "On Certification of Artificial Intelligence Systems." Physics of Particles and Nuclei 55.3 (2024): 343-346.

Master's program "Artificial Intelligence in Cybersecurity" (in Russian) https://cs.msu.ru/node/3732 Retrieved: Jun, 2024

Sukhomlin, Vladimir Alexandrovich, et al. "Curriculum of the discipline 'Cybersecurity'." (2022): 402-402. (in Russian)

Sukhomlin, Vladimir Alexandrovich. "Concept and main characteristics of the master's program 'Cybersecurity' at the CMC Faculty of Moscow State University." International Journal of Open Information Technologies 11.7 (2023): 143-148. (in Russian)

Kupriyanovsky, V. P., S. A. Sinyagov, D. E. Namiot, et al. "Retail trade in the digital economy." International Journal of Open Information Technologies 4.7 (2016): 1-12. EDN WCMIWN. (in Russian)

Kupriyanovsky, V. P., V. V. Alenkov, A. V. Stepanenko, et al. "Development of the transport and logistics sectors of the European Union: open BIM, Internet of Things, and cyber-physical systems." International Journal of Open Information Technologies 6.2 (2018): 54-100. EDN YNIRFG. (in Russian)

Sokolov, I. A., V. I. Drozhzhinov, A. N. Raikov, et al. "Artificial intelligence as a strategic tool for the country's economic development and the improvement of its public administration. Part 2. Prospects for the application of artificial intelligence in Russia for public administration." International Journal of Open Information Technologies 5.9 (2017): 76-101. EDN ZEQDMT. (in Russian)


ISSN: 2307-8162