The rationale for working on robust machine learning

Dmitry Namiot, Eugene Ilyushin, Ivan Chizhov


With the growing use of systems based on machine learning, which today are, for all practical purposes, regarded as artificial intelligence systems, attention to the reliability (robustness) of such systems and solutions is growing as well. For so-called critical applications, such as real-time decision-making systems, special-purpose systems, etc., robustness issues are decisive for the practical use of machine learning. Machine learning systems (artificial intelligence systems, which is now effectively a synonym) can be used in such areas only with a proof of robustness, that is, a determination of guaranteed performance parameters. Robustness problems arise when the characteristics of the data differ between training and testing (practical application). An additional complication is that, beyond natural causes (unbalanced samples, measurement errors, etc.), the data can be deliberately modified. These are the so-called attacks on machine learning systems. Accordingly, one cannot speak of the reliability of machine learning systems without protection against such actions. Attacks can target both the data and the models themselves.
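The deliberate modifications mentioned above can be illustrated with a minimal sketch: a gradient-sign (FGSM-style) perturbation applied to a toy linear classifier. The weights, input, and perturbation budget below are illustrative assumptions, not taken from the article or the cited works.

```python
import numpy as np

# Toy linear classifier: score = w . x + b; predict class 1 if score > 0.
# The weights and bias are illustrative assumptions.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A "clean" input that the classifier assigns to class 1.
x = np.array([2.0, 0.5, 1.0])
assert predict(x) == 1  # score = 1.6 > 0

# FGSM-style attack: shift each feature by eps in the direction that
# lowers the score, i.e. along -sign(d score / d x) = -sign(w).
eps = 1.2
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # prints: 1 0
```

A perturbation of at most eps per feature flips the prediction, even though the input has barely changed; the same mechanism, applied to the loss gradient of a neural network, underlies the adversarial examples discussed in the references below.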

Full Text:

PDF (Russian)


References

Artificial Intelligence in Cybersecurity (in Russian). Retrieved: Sep, 2021.

Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "Ongoing academic and industrial projects dedicated to robust machine learning." International Journal of Open Information Technologies 9.10 (2021): 35-46.

Qayyum, Adnan, et al. "Secure and robust machine learning for healthcare: A survey." IEEE Reviews in Biomedical Engineering 14 (2020): 156-180.

C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013.

A. Shafahi, W. R. Huang, M. Najibi, O. Suciu, C. Studer, T. Dumitras, and T. Goldstein, “Poison frogs! targeted clean-label poisoning attacks on neural networks,” in Advances in Neural Information Processing Systems, 2018, pp. 6103–6113.

X. Yuan, P. He, Q. Zhu, and X. Li, “Adversarial examples: Attacks and defenses for deep learning,” IEEE Transactions on Neural Networks and Learning Systems, 2019.

A Complete List of All (arXiv) Adversarial Example Papers. Retrieved: Sep, 2021.

Xu, H., Mannor, S.: Robustness and generalization. In: COLT, pp. 503–515 (2010)

"A Model-Based Approach for Robustness Testing" (PDF). Retrieved 2016-11-13.

IEEE Standard Glossary of Software Engineering Terminology, IEEE Std 610.12-1990

Tyler Folkman. Machine learning: introduction, monumental failure, and hope.

Foundations for AI errors. Retrieved: Sep, 2021.

Tsipras, Dimitris, et al. "From ImageNet to image classification: Contextualizing progress on benchmarks." International Conference on Machine Learning. PMLR, 2020.

Stat 260. Retrieved: Sep, 2021.

Francisco Herrera. Dataset Shift in Classification: Approaches and Problems. Retrieved: Sep, 2021.

Jacob Steinhardt. Retrieved: Sep, 2021.

DeepMind Robust and Verified Deep Learning group. Retrieved: Sep, 2021.

Madry Lab. Retrieved: Sep, 2021.

Shafique, Muhammad, et al. "Robust machine learning systems: Challenges, current trends, perspectives, and the road ahead." IEEE Design & Test 37.2 (2020): 30-57.

Yahoo Research AI. Retrieved: Sep, 2021.

Namiot, Dmitry. "Context-Aware Browsing--A Practical Approach." 2012 Sixth International Conference on Next Generation Mobile Applications, Services and Technologies. IEEE, 2012.

Namiot, Dmitry, and Manfred Sneps-Sneppe. "Proximity as a service." 2012 2nd Baltic Congress on Future Internet Communications. IEEE, 2012.

Rojo, Jordi, et al. "Machine learning applied to wi-fi fingerprinting: The experiences of the ubiqum challenge." 2019 International Conference on Indoor Positioning and Indoor Navigation (IPIN). IEEE, 2019.

Souyris, Jean, et al. "Formal verification of avionics software products." International symposium on formal methods. Springer, Berlin, Heidelberg, 2009.

DO-178C cert-kit for airborne machine learning to be researched by Intelligent Artifacts. Retrieved: Sep, 2021.

Seshia, Sanjit A., et al. "Formal specification for deep neural networks." International Symposium on Automated Technology for Verification and Analysis. Springer, Cham, 2018.

Li, Guofu, et al. "Security matters: A survey on adversarial machine learning." arXiv preprint arXiv:1810.07339 (2018).


ISSN: 2307-8162