On the reasons for the failures of machine learning projects

Dmitry Namiot, Eugene Ilyushin

Abstract


This article analyzes the errors behind failed machine learning projects and their root causes. According to academic publications, the share of failed projects is remarkably high: figures as high as 87% are cited in the literature. Machine learning systems naturally depend on data, so the simplest explanation of failures points to data problems. Given the scale of the problem, however, more detailed explanations are needed, and analyzing such errors becomes more than relevant. Drawing on a large body of analyzed works, the article presents summary data on the errors and failures of machine learning projects and examines how these causes relate to the stability (robustness) requirements placed on the systems being designed. It is shown that most of the causes amount, in essence, to a lack of stability in machine learning systems. The paper also demonstrates the importance of the transition to data-centric development and presents forecasts for the further evolution of machine learning models for critical applications.
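To make the data-problem argument concrete: one recurring cause of failure catalogued in the cited works is undetected data shift between training and production. The sketch below is a minimal, hypothetical illustration of per-feature shift monitoring with a two-sample Kolmogorov-Smirnov test; it is not the authors' method, and the feature names, significance threshold, and synthetic data are illustrative assumptions.

```python
# Minimal sketch of per-feature data-shift monitoring.
# The alpha threshold and feature names are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp


def detect_shift(train, live, feature_names, alpha=0.01):
    """Flag features whose live distribution differs from the training one."""
    shifted = []
    for i, name in enumerate(feature_names):
        # Two-sample KS test compares the empirical distributions column-wise.
        stat, p_value = ks_2samp(train[:, i], live[:, i])
        if p_value < alpha:  # distributions differ significantly
            shifted.append((name, stat, p_value))
    return shifted


# Hypothetical usage with synthetic data: feature "f1" drifts in production.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(5000, 2))
live = np.column_stack([rng.normal(0.0, 1.0, 5000),
                        rng.normal(0.5, 1.0, 5000)])  # mean shift in f1
print(detect_shift(train, live, ["f0", "f1"]))
```

A check of this kind is cheap to run alongside a deployed model; in practice the threshold and the choice of test (e.g., a population stability index for categorical features) depend on the data.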

Full Text:

PDF (Russian)

References


Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "Ongoing academic and industrial projects dedicated to robust machine learning." International Journal of Open Information Technologies 9.10 (2021): 35-46. (in Russian)

Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "The rationale for working on robust machine learning." International Journal of Open Information Technologies 9.11 (2021): 68-74. (in Russian)

Artificial Intelligence in Cybersecurity. https://cs.msu.ru/node/3732 (in Russian) Retrieved: Dec, 2022

Folkman, Tyler. "Machine learning: introduction, monumental failure, and hope." https://towardsdatascience.com/machine-learning-introduction-monumental-failure-and-hope-65a8c6098a92 Retrieved: Dec, 2022

These Are The Reasons Why More Than 95% AI and ML Projects Fail. https://medium.com/vsinghbisen/these-are-the-reasons-why-more-than-95-ai-and-ml-projects-fail-cd97f4484ecc Retrieved: Dec, 2022

Ermakova, Tatiana, et al. "Beyond the Hype: Why Do Data-Driven Projects Fail?" Proceedings of the 54th Hawaii International Conference on System Sciences. 2021.

The machine learning reproducibility crisis. https://petewarden.com/2018/03/19/the-machine-learning-reproducibility-crisis Retrieved: Dec, 2022

Papers without code http://paperswithoutcode.com/ Retrieved: Dec, 2022

51 things that can go wrong in a real-world ML project. https://towardsdatascience.com/51-things-that-can-go-wrong-in-a-real-world-ml-project-c36678065a75 Retrieved: Dec, 2022

Sambasivan, Nithya, et al. "'Everyone wants to do the model work, not the data work': Data Cascades in High-Stakes AI." Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021.

Northcutt, Curtis G., Anish Athalye, and Jonas Mueller. "Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks." arXiv preprint arXiv:2103.14749 (2021).

Northcutt, Curtis G., Lu Jiang, and Isaac L. Chuang. "Confident learning: Estimating uncertainty in dataset labels." arXiv preprint arXiv:1911.00068 (2019).

Data Cascades: Why we need feedback channels throughout the machine learning lifecycle https://gradientflow.com/data-cascades-why-we-need-feedback-channels-throughout-the-machine-learning-lifecycle

Nushi, Besmira, et al. "On human intellect and machine failures: Troubleshooting integrative machine learning systems." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 31. No. 1. 2017.

Tomsett, Richard, et al. "Why the failure? How adversarial examples can provide insights for interpretable machine learning." 2018 21st International Conference on Information Fusion (FUSION). IEEE, 2018.

A.I. Is Solving the Wrong Problem. https://onezero.medium.com/a-i-is-solving-the-wrong-problem-253b636770cd Retrieved: Dec, 2022

Bengio, Yoshua, Yann LeCun, and Geoffrey Hinton. "Deep learning for AI." Communications of the ACM 64.7 (2021): 58-65.

Pitropakis, Nikolaos, et al. "A taxonomy and survey of attacks against machine learning." Computer Science Review 34 (2019): 100199.

A Complete List of All (arXiv) Adversarial Example Papers. https://nicholas.carlini.com/writing/2019/all-adversarial-example-papers.html Retrieved: Dec, 2022

Deniz, Oscar, et al. "Robustness to adversarial examples can be improved with overfitting." International Journal of Machine Learning and Cybernetics 11.4 (2020): 935-944.

Rice, Leslie, Eric Wong, and Zico Kolter. "Overfitting in adversarially robust deep learning." International Conference on Machine Learning. PMLR, 2020.

Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "Artificial intelligence and cybersecurity." International Journal of Open Information Technologies 10.9 (2022): 135-147.

Gunning, David, et al. "XAI—Explainable artificial intelligence." Science Robotics 4.37 (2019).

Hamon, Ronan, Henrik Junklewitz, and Ignacio Sanchez. "Robustness and explainability of artificial intelligence." Publications Office of the European Union (2020).

Amrani, Moussa, Levi Lúcio, and Adrien Bibal. "ML + FV = $\heartsuit$? A Survey on the Application of Machine Learning to Formal Verification." arXiv preprint arXiv:1806.03600 (2018).

Hynes, Nick, D. Sculley, and Michael Terry. "The data linter: Lightweight, automated sanity checking for ML data sets." NIPS MLSys Workshop. 2017.

Data Linter https://github.com/brain-research/data-linter Retrieved: Dec, 2022

Data-linter https://pypi.org/project/data-linter/ Retrieved: Dec, 2022

Ng, Andrew. "From Model-centric to Data-centric AI." (2021).

Namiot, Dmitry, and Eugene Ilyushin. "On the robustness and security of Artificial Intelligence systems." International Journal of Open Information Technologies 10.9 (2022): 126-134.

Bernardi, Lucas, Themistoklis Mavridis, and Pablo Estevez. "150 successful machine learning models: 6 lessons learned at Booking.com." Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019.

Yi, Jeonghee, et al. "Predictive model performance: Offline and online evaluations." Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining. 2013.

Namiot, Dmitry, et al. "On the applicability and limitations of formal verification of machine learning systems." (2021).

Black swan theory https://en.wikipedia.org/wiki/Black_swan_theory Retrieved: Dec, 2021

Geirhos, Robert, et al. "Shortcut learning in deep neural networks." Nature Machine Intelligence 2.11 (2020): 665-673.

Kaufman, Shachar, et al. "Leakage in data mining: Formulation, detection, and avoidance." ACM Transactions on Knowledge Discovery from Data (TKDD) 6.4 (2012): 1-21.

Namiot, Dmitry, and Eugene Ilyushin. "Data shift monitoring in machine learning models." International Journal of Open Information Technologies 10.12 (2022): 84-93.

WFCES 2022 https://ide-rus.ru/wfces2022 Retrieved: Dec, 2022





ISSN: 2307-8162