Journal of Frontiers of Computer Science and Technology ›› 2024, Vol. 18 ›› Issue (8): 1935-1959.DOI: 10.3778/j.issn.1673-9418.2311117

• Frontiers·Surveys •

Advances of Adversarial Attacks and Robustness Evaluation for Graph Neural Networks

WU Tao, CAO Xinwen, XIAN Xingping, YUAN Lin, ZHANG Shu, CUI Canyixing, TIAN Kan   

  1. School of Cyber Security and Information Law, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
    2. Engineering Laboratory of Network and Information Security, Chongqing 400065, China
    3. Joint Laboratory of Intelligent Museum, Chongqing University of Posts and Telecommunications-Chongqing China Three Gorges Museum, Chongqing 400065, China
  • Online: 2024-08-01  Published: 2024-07-29

Abstract: In recent years, graph neural networks (GNNs) have gradually become an important research direction in artificial intelligence. However, the adversarial vulnerability of GNNs poses severe challenges to their practical application. To provide a comprehensive picture of research on adversarial attacks and robustness evaluation for GNNs, this paper reviews and discusses the related state-of-the-art advances. It first introduces the research background of adversarial attacks on GNNs, gives a formal definition of such attacks, and lays out the basic concepts and the research framework for adversarial attacks and robustness evaluation of GNNs. It then surveys the concrete methods proposed in this field, categorizing the leading ones by adversarial attack type and by the range of attack targets, and analyzes their working mechanisms, principles, and respective strengths and weaknesses. Because robustness evaluation based on adversarial attacks depends on the choice of attack method and the degree of adversarial perturbation, it yields only indirect, local assessments that cannot fully capture the essential robustness of a model; this paper therefore focuses on direct robustness evaluation metrics. On this basis, to support the design and evaluation of adversarial attack methods and robust GNN models, representative attack methods are compared experimentally in terms of ease of implementation, accuracy, and execution time. Finally, remaining challenges and future research directions are discussed. Overall, current research on the adversarial robustness of GNNs is driven largely by repeated experiments and lacks a guiding theoretical framework; ensuring the trustworthiness of GNN-based intelligent systems still requires further systematic foundational research.
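The survey's exact formal definition is not reproduced on this page; the formulation below is the one commonly used in this literature, written in our own notation, which frames a graph adversarial attack as a budget-constrained (bilevel) optimization over perturbed graphs:

```latex
% Common formulation of a graph adversarial attack (illustrative notation):
% given a graph G = (A, X) and a perturbation set \Phi(G) bounded by budget \Delta,
% the attacker maximizes the loss of model f_{\theta^*} on target nodes V_t:
\max_{G' = (A', X') \in \Phi(G)} \; \sum_{i \in V_t} \mathcal{L}\big(f_{\theta^*}(G')_i,\; y_i\big)
\quad \text{s.t.} \quad \|A' - A\|_0 + \|X' - X\|_0 \le \Delta
% Poisoning attack: the model is (re)trained on the perturbed graph,
%   \theta^* = \arg\min_{\theta} \mathcal{L}_{\mathrm{train}}\big(f_{\theta}(G')\big)
% Evasion attack: the model is trained on the clean graph,
%   \theta^* = \arg\min_{\theta} \mathcal{L}_{\mathrm{train}}\big(f_{\theta}(G)\big)
```

The budget constraint and the choice of \Phi(G) (structure flips, feature edits, or both) are what distinguish many of the attack families the survey categorizes.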

Key words: graph neural network, adversarial vulnerability, adversarial attacks, robustness evaluation
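To make the attack setting concrete, here is a minimal, self-contained toy sketch (our own illustrative code, not any specific method from the survey): a linearized two-layer GCN surrogate and greedy single-edge flips that try to reduce a target node's classification margin, in the spirit of surrogate-based targeted structure attacks such as Nettack. All names, the random weights, and the toy graph are assumptions for illustration only.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def surrogate_logits(A, X, W):
    # Linearized two-layer GCN surrogate: Â² X W (nonlinearities dropped)
    A_norm = normalize_adj(A)
    return A_norm @ A_norm @ X @ W

def greedy_structure_attack(A, X, W, target, true_label, budget):
    """Greedily flip the edge incident to `target` that most lowers
    the target node's classification margin, up to `budget` flips."""
    A = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        best_margin, best_v = None, None
        for v in range(n):
            if v == target:
                continue
            # Tentatively flip the (target, v) edge, keeping A symmetric
            A[target, v] = A[v, target] = 1 - A[target, v]
            logits = surrogate_logits(A, X, W)[target]
            margin = logits[true_label] - np.max(np.delete(logits, true_label))
            # Undo the tentative flip
            A[target, v] = A[v, target] = 1 - A[target, v]
            if best_margin is None or margin < best_margin:
                best_margin, best_v = margin, v
        # Commit the most damaging flip of this round
        A[target, best_v] = A[best_v, target] = 1 - A[target, best_v]
    return A

# Toy undirected graph with random features and random surrogate weights
rng = np.random.default_rng(0)
n, f, c = 8, 5, 3
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T          # symmetric, no self-loops
X = rng.random((n, f))
W = rng.standard_normal((f, c))
A_pert = greedy_structure_attack(A, X, W, target=0, true_label=1, budget=2)
print("edges flipped:", int(np.abs(A_pert - A).sum()) // 2)
```

The perturbed adjacency stays symmetric and binary, and the number of changed edges never exceeds the budget; a real attack would add the unnoticeability constraints (e.g., degree-distribution tests) that the surveyed methods discuss.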
