Review and comparative analysis of attack and defense algorithms on graph-based ANN architectures

Dovlet Kirzhinov, Eugene Ilyushin

Abstract


Graphs are all around us: objects in the real world are often defined by their relationships to other objects, and a set of objects together with the relationships between them is naturally expressed as a graph. Because this representation captures the structure of data generated by many artificial and natural processes, training neural networks on graph data is a powerful tool. At the same time, the spectrum of attacks on GNN (Graph Neural Network) architectures is very wide; for each attack method, effective defense techniques must be developed, and the attacks themselves must be analyzed in terms of computational complexity to judge their applicability to the large graphs encountered in real-world use cases. This paper is a survey of the security of graph neural network architectures: it reviews attack algorithms and the defenses that improve robustness against them, classifies these methods according to several criteria, and reviews the existing literature on the topic.
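
To make the discussion concrete, the following minimal sketch (ours, for illustration only; not taken from the paper or from any cited work) shows the flavor of the structure attacks studied in this literature: a greedy edge-flip attack against a fixed two-layer GCN, in plain NumPy. The toy graph, the random weights, and the exhaustive greedy scoring are all simplifying assumptions; practical attacks such as Nettack (Zügner et al., 2018, cited below) use incremental scoring and unnoticeability constraints to scale beyond toy sizes.

# Illustrative sketch only: greedy edge-flip structure attack on a toy
# two-layer GCN. The graph, weights, and scoring heuristic are all toy
# assumptions chosen for readability, not the method of any cited paper.
import numpy as np

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN: softmax(A_norm @ relu(A_norm @ X @ W1) @ W2),
    where A_norm = D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt
    H = np.maximum(A_norm @ X @ W1, 0.0)              # ReLU hidden layer
    logits = A_norm @ H @ W2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)           # row-wise softmax

def greedy_edge_flip_attack(A, X, W1, W2, target, label, budget=2):
    """Flip up to `budget` edges, each time choosing the flip that most
    reduces the target node's probability of its true class `label`."""
    A = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        base = gcn_forward(A, X, W1, W2)[target, label]
        best_drop, best_edge = 0.0, None
        for u in range(n):
            for v in range(u + 1, n):
                A[u, v] = A[v, u] = 1 - A[u, v]       # tentatively flip (u, v)
                p = gcn_forward(A, X, W1, W2)[target, label]
                A[u, v] = A[v, u] = 1 - A[u, v]       # undo the flip
                if base - p > best_drop:
                    best_drop, best_edge = base - p, (u, v)
        if best_edge is None:                         # no flip lowers the score
            break
        u, v = best_edge
        A[u, v] = A[v, u] = 1 - A[u, v]               # commit the best flip
    return A

# Toy usage: random 6-node graph, random features and (untrained) weights;
# `label` stands in for the target node's true class.
rng = np.random.default_rng(0)
n, f, h, c = 6, 4, 8, 2
A = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
A = A + A.T                                           # symmetric, no self-loops
X = rng.random((n, f))
W1, W2 = rng.random((f, h)) - 0.5, rng.random((h, c)) - 0.5
A_adv = greedy_edge_flip_attack(A, X, W1, W2, target=0, label=1)

On the defense side, a similarly minimal sketch in the spirit of the preprocessing defense of Wu et al. (arXiv:1903.01610, cited below): edges whose endpoints have low Jaccard similarity of binary features are pruned before training, on the heuristic that adversarially inserted edges tend to connect dissimilar nodes. The threshold value and the binarization of features are our assumptions.

def jaccard_prune(A, X_bin, threshold=0.1):
    """Return a copy of A with edges between feature-dissimilar nodes removed."""
    A = A.copy()
    for u, v in zip(*np.nonzero(np.triu(A, 1))):
        inter = np.minimum(X_bin[u], X_bin[v]).sum()
        union = np.maximum(X_bin[u], X_bin[v]).sum()
        sim = inter / union if union > 0 else 0.0
        if sim < threshold:
            A[u, v] = A[v, u] = 0.0                   # drop suspicious edge
    return A

A_clean = jaccard_prune(A_adv, (X > 0.5).astype(float))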


References


D.E. Namiot, E.A. Ilyushin, and I.V. Chizhov. “Attacks on Machine Learning Systems - Common Problems and Methods”. In: International Journal of Open Information Technologies 10.3 (2022), pp. 17–22.

Wei Jin et al. “Adversarial Attacks and Defenses on Graphs”. In: SIGKDD Explor. Newsl. (2021), pp. 19–34.

Dan Hendrycks et al. “Unsolved problems in ml safety”. In: arXiv preprint arXiv:2109.13916 (2021).

D.E. Namiot, E.A. Ilyushin, and I.V. Chizhov. “Ongoing academic and industrial projects dedicated to robust machine learning”. In: International Journal of Open Information Technologies 9.10 (2021), pp. 35–46.

Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. “Adversarial attacks on neural networks for graph data”. In: Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. 2018, pp. 2847–2856.

Shen Wang et al. “Adversarial defense framework for graph neural network”. In: arXiv preprint arXiv:1905.03679 (2019).

Lichao Sun et al. “Adversarial attack and defense on graph data: A survey”. In: IEEE Transactions on Knowledge and Data Engineering (2022).

Liang Chen et al. “A survey of adversarial learning on graphs”. In: arXiv preprint arXiv:2003.05730 (2020).

Hanjun Dai et al. “Adversarial attack on graph structured data”. In: International conference on machine learning. PMLR. 2018, pp. 1115–1124.

Kaidi Xu et al. “Topology attack and defense for graph neural networks: An optimization perspective”. In: arXiv preprint arXiv:1906.04214 (2019).

Daniel Zügner and Stephan Günnemann. “Adversarial Attacks on Graph Neural Networks via Meta Learning”. In: arXiv preprint arXiv:1902.08412 (2019).

Xiaoyun Wang et al. “Attack graph convolutional networks by adding fake nodes”. In: arXiv preprint arXiv:1810.10751 (2018).

Jinyin Chen et al. “Fast gradient attack on network embedding”. In: arXiv preprint arXiv:1809.02797 (2018).

Marcin Waniek et al. “Attack tolerance of link prediction algorithms: How to hide your relations in a social network”. In: arXiv preprint arXiv:1809.00152 (2018).

Jinyin Chen et al. “Link prediction adversarial attack”. In: arXiv preprint arXiv:1810.01110 (2018).

Jinyin Chen et al. “Time-aware gradient attack on dynamic network link prediction”. In: IEEE Transactions on Knowledge and Data Engineering (2021).

Liang Chen et al. “Data poisoning attacks on neighborhood-based recommender systems”. In: Transactions on Emerging Telecommunications Technologies 32.6 (2021), e3872.

Huijun Wu et al. “Adversarial examples on graph data: Deep insights into attack and defense”. In: arXiv preprint arXiv:1903.01610 (2019).

Dingyuan Zhu et al. “Robust graph convolutional networks against adversarial attacks”. In: Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining. 2019, pp. 1399–1407.

Ke Sun et al. “Virtual adversarial training on graph convolutional networks in node classification”. In: Pattern Recognition and Computer Vision: Second Chinese Conference, PRCV 2019, Xi’an, China, November 8–11, 2019, Proceedings, Part I. Springer. 2019, pp. 431–443.

Fuli Feng et al. “Graph adversarial training: Dynamically regularizing based on graph structure”. In: IEEE Transactions on Knowledge and Data Engineering 33.6 (2019), pp. 2493–2504.

Jinyin Chen et al. “Can adversarial network attack be defended?” In: arXiv preprint arXiv:1903.05994 (2019).

Daniel Zügner and Stephan Günnemann. “Certifiable robustness and robust training for graph convolutional networks”. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019, pp. 246–256.

Felix Mujkanovic et al. “Are Defenses for Graph Neural Networks Robust?” In: Advances in Neural Information Processing Systems 35 (2022), pp. 8954–8968.

Stephan Günnemann. “Graph neural networks: Adversarial robustness”. In: Graph Neural Networks: Foundations, Frontiers, and Applications (2022), pp. 149–176.

Wei Jin et al. “Graph structure learning for robust graph neural networks”. In: Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining. 2020, pp. 66–74.


