Ongoing academic and industrial projects dedicated to robust machine learning

Dmitry Namiot, Eugene Ilyushin, Ivan Chizhov


With the growing use of systems based on machine learning, which today form the practical core of what is considered artificial intelligence, attention to the reliability (robustness) of such systems and solutions is growing as well. For critical applications, for example, systems that make decisions in real time, robustness is the decisive factor: it is the robustness assessment that determines whether machine learning can be used in such systems at all. This is naturally reflected in a large body of work devoted to assessing the robustness of machine learning systems, to the architecture of such systems, and to protecting machine learning systems from malicious actions that can affect their operation. At the same time, it is important to understand that robustness problems can arise both naturally, because the data distributions at the training stage and in practical use differ (during training the model sees only a part of the general population), and as a result of targeted actions (attacks on machine learning systems). Such attacks can be directed both at the data and at the models themselves.
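The second failure mode mentioned above, a targeted attack on a model, can be illustrated with a minimal sketch of the classic fast-gradient-sign method (FGSM) from the adversarial-examples literature cited below. Everything here is hypothetical: the "trained" weights, the input, and the perturbation budget are toy values chosen only to show how a small, norm-bounded change to the input can flip a classifier's decision.

```python
import numpy as np

# Toy linear classifier with fixed "trained" weights (assumed values).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.2])   # clean input; its true label is y = 1
y = 1.0

def predict(v):
    # Decision rule of the linear classifier: class 1 if w.v > 0.
    return 1.0 if w @ v > 0 else 0.0

# Gradient of the logistic loss with respect to the INPUT is
# (sigmoid(w @ x) - y) * w; for FGSM only its sign matters.
sigma = 1.0 / (1.0 + np.exp(-(w @ x)))
grad_x = (sigma - y) * w

eps = 0.2                        # L-infinity perturbation budget
x_adv = x + eps * np.sign(grad_x)  # FGSM step along the gradient sign

clean = predict(x)               # prediction on the clean input
adv = predict(x_adv)             # prediction on the perturbed input
print(clean, adv)                # the bounded perturbation flips the label
```

In this sketch the perturbed input differs from the clean one by at most 0.2 per coordinate, yet the classifier's output changes, which is exactly the phenomenon that robustness assessment and adversarial defenses address.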

Full Text: PDF (Russian)


Qayyum, Adnan, et al. "Secure and robust machine learning for healthcare: A survey." IEEE Reviews in Biomedical Engineering 14 (2020): 156-180.

C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013.

A. Shafahi, W. R. Huang, M. Najibi, O. Suciu, C. Studer, T. Dumitras, and T. Goldstein, “Poison frogs! Targeted clean-label poisoning attacks on neural networks,” in Advances in Neural Information Processing Systems, 2018, pp. 6103–6113.

X. Yuan, P. He, Q. Zhu, and X. Li, “Adversarial examples: Attacks and defenses for deep learning,” IEEE Transactions on Neural Networks and Learning Systems, 2019.

A Complete List of All (arXiv) Adversarial Example Papers Retrieved: Aug, 2021

Artificial Intelligence in Cybersecurity. (in Russian) Retrieved: Aug, 2021

Koh, Pang Wei, et al. "Wilds: A benchmark of in-the-wild distribution shifts." International Conference on Machine Learning. PMLR, 2021.

Nair, Nimisha G., Pallavi Satpathy, and Jabez Christopher. "Covariate shift: A review and analysis on classifiers." 2019 Global Conference for Advancement in Technology (GCAT). IEEE, 2019.

Major ML datasets have tens of thousands of errors Retrieved: Aug, 2021


Pei, Kexin, et al. "Towards practical verification of machine learning: The case of computer vision systems." arXiv preprint arXiv:1712.01785 (2017).

Katz, Guy, et al. "The marabou framework for verification and analysis of deep neural networks." International Conference on Computer Aided Verification. Springer, Cham, 2019.

Shafique, Muhammad, et al. "Robust machine learning systems: Challenges, current trends, perspectives, and the road ahead." IEEE Design & Test 37.2 (2020): 30-57.

The Pentagon Is Bolstering Its AI Systems—by Hacking Itself Retrieved: Aug, 2021

Poison in the Well: Securing the Shared Resources of Machine Learning. Retrieved: Aug, 2021


Guaranteeing AI Robustness Against Deception Retrieved: Aug, 2021

DARPA is pouring millions into a new AI defense program. Here are the companies leading the charge Retrieved: Aug, 2021

UT Austin Selected as Home of National AI Institute Focused on Machine Learning Retrieved: Aug, 2021

UT Austin Launches Institute to Harness the Data Revolution Retrieved: Aug, 2021

National Security Education Center Retrieved: Aug, 2021

2021 Project Descriptions: Creates next-generation leaders in Machine Learning for Scientific Applications. Retrieved: Aug, 2021

Assured Machine Learning: Robustness, Fairness, and Privacy Retrieved: Aug, 2021

Explainable Artificial Intelligence Retrieved: Aug, 2021

Advancing Machine Learning for Mission-Critical Applications Retrieved: Aug, 2021

Abdar, Moloud, et al. "A review of uncertainty quantification in deep learning: Techniques, applications and challenges." Information Fusion (2021).

Intelligence Advanced Research Projects Activity (IARPA) Retrieved: Aug, 2021

Trojans in Artificial Intelligence Retrieved: Aug, 2021

Trojans in Artificial Intelligence bibliography Retrieved: Aug, 2021

ELLIS Programs launched Retrieved: Aug, 2021

ELLIS programs Retrieved: Aug, 2021

Robust Machine Learning Retrieved: Aug, 2021

Semantic, Symbolic and Interpretable Machine Learning Retrieved: Aug, 2021

Oomen, Thomas L. "Why the EU lacks behind China in AI development–Analysis and solutions to enhance EU’s AI strategy." rue 33.1: 7543.

MIT Reliable and Robust Machine Learning Retrieved: Aug, 2021

Alexander Madry Retrieved: Aug, 2021

Center for Deployable Machine Learning (CDML) Retrieved: Aug, 2021

Madry Lab Retrieved: Aug, 2021

Robustness package Retrieved: Aug, 2021

Adversarial ML tutorial Retrieved: Aug, 2021

RobustML Retrieved: Aug, 2021

Andriushchenko, Maksym, and Matthias Hein. "Provably robust boosted decision stumps and trees against adversarial attacks." arXiv preprint arXiv:1906.03526 (2019).

Identifying and eliminating bugs in learned predictive models Retrieved: Aug, 2021

Nandy, Abhishek, and Manisha Biswas. "Google’s DeepMind and the Future of Reinforcement Learning." Reinforcement Learning. Apress, Berkeley, CA, 2018. 155-163.

Fairness & Robustness in Machine Learning Retrieved: Aug, 2021

Safe Artificial Intelligence Retrieved: Aug, 2021

Latticeflow Retrieved: Aug, 2021

Reliability Assessment of Traffic Sign Classifiers Retrieved: Aug, 2021

The Alan Turing Institute Robust machine learning Retrieved: Aug, 2021

AI roadmap Retrieved: Aug, 2021

The Alan Turing Institute Adversarial machine learning Retrieved: Aug, 2021

Malinin, Andrey, et al. "Shifts: A Dataset of Real Distributional Shift Across Multiple Large-Scale Tasks." arXiv preprint arXiv:2107.07455 (2021).

Oxford Applied and Theoretical Machine Learning Group Retrieved: Aug, 2021

Adversarial and Interpretable ML — Publications Retrieved: Aug, 2021

Allen Institute for AI Retrieved: Aug, 2021

AI2 Machine Learning Seminars Retrieved: Aug, 2021

Verified AI Retrieved: Aug, 2021

Sanjit A. Seshia research group Retrieved: Aug, 2021

Bosch AI Retrieved: Aug, 2021

Rich and Explainable Deep Learning Retrieved: Aug, 2021

Research Engineer – Robust and Explainable AI Methods Retrieved: Aug, 2021

Yandex Shift Challenge Retrieved: Aug, 2021

Adversa Retrieved: Aug, 2021

De Jimenez, Rina Elizabeth Lopez. "Pentesting on web applications using ethical-hacking." 2016 IEEE 36th Central American and Panama Convention (CONCAPAN XXXVI). IEEE, 2016.

The Road to Secure and Trusted AI Retrieved: Aug, 2021

Robust AI Retrieved: Aug, 2021

Bengio, Yoshua, Yann LeCun, and Geoffrey Hinton. "Deep learning for AI." Communications of the ACM 64.7 (2021): 58-65.

Center for Long-Term Cybersecurity University of California, Berkeley. Retrieved: Sep, 2021

Center for Long-Term Cybersecurity University of California, Berkeley, Robust ML. Retrieved: Sep, 2021

Kuprijanovskij, V. P., et al. "Optimizacija ispol'zovanija resursov v cifrovoj jekonomike" [Optimizing the use of resources in the digital economy]. International Journal of Open Information Technologies 4.12 (2016).

Kuprijanovskij, V. P., et al. "Cifrovaja jekonomika i Internet Veshhej - preodolenie silosa dannyh" [The digital economy and the Internet of Things: overcoming the data silo]. International Journal of Open Information Technologies 4.8 (2016): 36-42.

Incident Database Retrieved: Sep, 2021

Robust and Secure AI Retrieved: Sep, 2021

Artificial Intelligence Engineering Retrieved: Sep, 2021



ISSN: 2307-8162