On Trusted AI Platforms

Dmitry Namiot, Eugene Ilyushin, Oleg Pilipenko

Abstract


The development and use of artificial intelligence (machine learning) systems in critical domains such as avionics and autonomous driving inevitably raises the question of the reliability of the software involved. Trusted computing systems have existed for a long time: their purpose is to allow only designated applications to run and to guarantee that no one interferes with their operation. Trust here means confidence that the approved applications behave exactly as they did during testing. For machine learning, however, this is not enough. An application may run as intended, with no interference at all, and yet its results cannot be trusted, simply because the data have changed. This problem follows from a point fundamental to all machine learning systems: the data encountered at the testing (operation) stage may differ from the data on which the system was trained. A machine learning system can therefore fail without any targeted action against it, simply because it encounters operational data for which the generalization achieved during training does not hold. In addition, there are attacks, understood as deliberate manipulations of elements of the machine learning pipeline (the training data, the model itself, the test data) intended either to force a desired behavior from the system or to prevent it from working correctly. Today this problem, usually discussed under the heading of the robustness of machine learning systems, is the main obstacle to the use of machine learning in critical applications.
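The distribution-shift failure mode described above can be sketched with a simple two-sample Kolmogorov-Smirnov check on a single feature, in the spirit of the drift-detection literature cited in the references. This is an illustrative sketch, not a method from the paper; the function name, threshold, and synthetic data are assumptions made for the example.

```python
import numpy as np
from scipy import stats

def detect_drift(train_feature, live_feature, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test: flags drift when the
    operational (live) distribution of a feature differs from the
    training-time distribution at significance level alpha."""
    statistic, p_value = stats.ks_2samp(train_feature, live_feature)
    return p_value < alpha  # True -> distributions likely differ

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)    # feature as seen during training
same = rng.normal(0.0, 1.0, 5000)     # operational data, unchanged
shifted = rng.normal(0.7, 1.0, 5000)  # operational data after drift

print(detect_drift(train, same))
print(detect_drift(train, shifted))  # drift: the mean has moved by 0.7
```

A per-feature test like this catches only marginal shifts; in practice drift monitors also track model confidence or multivariate statistics, which is why the problem is treated as a property of the whole ML pipeline rather than of the code alone.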

Full Text:

PDF (Russian)

References


Trusted Computing https://en.wikipedia.org/wiki/Trusted_Computing

Confidential computing https://confidentialcomputing.io/

Trusted computing (Doveritel'nye vychislenija) [in Russian] https://intuit.ru/studies/courses/955/285/lecture/7166

Markov, A. S., and A. A. Fadin. "Systematics of vulnerabilities and security defects in software resources" [in Russian]. Zashhita informacii. Insajd 3 (2013): 56-61.

Namiot, D. E., E. A. Il'jushin, and I. V. Chizhov. "Foundations for works on robust machine learning" [in Russian]. International Journal of Open Information Technologies 9.11 (2021): 68-74.

Gama, Joao, et al. "Learning with drift detection." Brazilian symposium on artificial intelligence. Springer, Berlin, Heidelberg, 2004.

Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. "Adversarial machine learning at scale." arXiv preprint arXiv:1611.01236 (2016).

Robust and Verified Deep Learning group https://deepmindsafetyresearch.medium.com/towards-robust-and-verified-ai-specification-testing-robust-training-and-formal-verification-69bd1bc48bda

Madry Lab https://people.csail.mit.edu/madry/6.S979/files/lecture_4.pdf Retrieved: May, 2022

Principles for evaluation of AI/ML model performance and robustness (local PDF copy; original link unavailable) Retrieved: May, 2022

Raychowdhury, A., and M. Swaminathan (Georgia Tech). "Machine Learning for Trusted Platform Design." http://publish.illinois.edu/caeml-industry/files/2018/07/2A2-Machine-Learning-for-Trusted-Platform-Design-1.pdf Retrieved: May, 2022

Creating a trusted platform for embedded security-critical applications https://militaryembedded.com/avionics/software/creating-a-trusted-platform-for-embedded-security-critical-applications Retrieved: May, 2022

How do you teach AI the value of trust? https://assets.ey.com/content/dam/ey-sites/ey-com/en_gl/topics/digital/ey-how-do-you-teach-ai-the-value-of-trust.pdf Retrieved: May, 2022

Lavin, Alexander, et al. "Technology readiness levels for machine learning systems." arXiv preprint arXiv:2101.03989 (2021).

ALTAI https://futurium.ec.europa.eu/en/european-ai-alliance/pages/altai-assessment-list-trustworthy-artificial-intelligence Retrieved: May, 2022

Ethics guidelines for trustworthy AI https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai Retrieved: May, 2022

STAR https://star-ai.eu/ Retrieved: May, 2022

Trusted Artificial Intelligence in Manufacturing https://library.oapen.org/handle/20.500.12657/52612 Retrieved: May, 2022

Datarobot https://www.datarobot.com/platform/trusted-ai/ Retrieved: May, 2022

He, Xin, Kaiyong Zhao, and Xiaowen Chu. "AutoML: A survey of the state-of-the-art." Knowledge-Based Systems 212 (2021): 106622.

ONNX https://onnx.ai/ Retrieved: May, 2022

AI Report https://aireport.ru/ Retrieved: May, 2022

Unified software platform for the development of application-oriented software systems for automatic object recognition based on neural network approaches [in Russian] https://www.gosniias.ru/pages/d/platforma-fh.pdf Retrieved: May, 2022

NSCAI report https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf Retrieved: May, 2022

Trusted AI https://research.ibm.com/teams/trusted-ai Retrieved: May, 2022

Tools for trustworthy AI https://www.oecd-ilibrary.org/content/paper/008232ec-en Retrieved: May, 2022

Namiot, D. E., E. A. Il'jushin, and I. V. Chizhov. "Current academic and industrial projects dedicated to robust machine learning" [in Russian]. International Journal of Open Information Technologies 9.10 (2021): 35-46.

Holistic Evaluation of Adversarial Defenses | GARD Project https://www.gardproject.org/ Retrieved: May, 2022

Artificial Intelligence and avionics software per DO-178C https://afuzion.com/artificial-intelligence-and-avionics-software-per-do-178c/ Retrieved: May, 2022

Il'jushin, E. A., D. E. Namiot, and I. V. Chizhov. "Attacks on machine learning systems: common problems and methods" [in Russian]. International Journal of Open Information Technologies 10.3 (2022): 17-22.

Artificial Intelligence in Cybersecurity. http://master.cmc.msu.ru/?q=ru/node/3496 (in Russian) Retrieved: May, 2022.

AI Tradeoff: Accuracy or Robustness? https://www.eetimes.com/ai-tradeoff-accuracy-or-robustness/ Retrieved: May, 2022.


ISSN: 2307-8162