Journal of Frontiers of Computer Science and Technology ›› 2023, Vol. 17 ›› Issue (8): 1949-1960.DOI: 10.3778/j.issn.1673-9418.2204109

• Artificial Intelligence·Pattern Recognition •

Graph Neural Network Defense Combined with Contrastive Learning

CHEN Na, HUANG Jincheng, LI Ping   

  1. College of Computer Science, Southwest Petroleum University, Chengdu 610500, China
  • Online: 2023-08-01  Published: 2023-08-01

Abstract: Although graph neural networks have achieved strong performance in graph representation learning, recent studies have shown that they are vulnerable to adversarial attacks on the graph structure: adding carefully designed perturbations to the graph structure causes the performance of a graph neural network to drop sharply. Although mainstream graph structure denoising methods can effectively resist such attacks, the degree to which an input graph has been attacked is uncertain, so these methods tend to produce many misidentifications when the input graph is unattacked or only weakly attacked, which in turn harms the prediction results of the graph neural network. To alleviate this problem, this paper proposes a graph neural network defense method combined with contrastive learning (CLD-GNN). First, building on feature-similarity denoising and exploiting the observation that the endpoints of attacked edges tend to have inconsistent labels, the label propagation algorithm is used to obtain pseudo-labels for unlabeled nodes, and edges whose endpoints have inconsistent pseudo-labels are removed as likely perturbations, yielding a purified graph. Then, graph convolution is performed on the purified graph and the input graph respectively. Finally, contrastive learning is applied to align the predicted label information on the two graphs and to refine the feature representations of the purified graph's nodes. Defense experiments are conducted on three benchmark datasets and two attack scenarios for graph adversarial attacks. Experimental results show that CLD-GNN not only alleviates the problem of graph denoising methods harming node classification performance, but also exhibits excellent defense capability under strong attack scenarios.
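The graph purification stage described in the abstract can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the function name `purify_graph` and the hyperparameters `sim_thresh` and `lp_iters` are hypothetical, and the paper's contrastive-learning alignment step is omitted. The sketch assumes dense adjacency and feature matrices, with unlabeled nodes marked by label `-1`.

```python
import numpy as np

def cosine_sim(x, y):
    # Cosine similarity between two feature vectors (epsilon guards zero norms).
    return x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)

def purify_graph(adj, feats, labels, n_classes, sim_thresh=0.1, lp_iters=20):
    """Two-stage edge pruning sketched from the abstract.

    labels: length-n array of class ids for labeled nodes, -1 for unlabeled.
    Returns the purified adjacency matrix and the pseudo-labels.
    """
    n = adj.shape[0]
    adj = adj.copy()

    # Stage 1: feature-similarity denoising -- drop edges whose endpoint
    # features are dissimilar (the threshold is an assumed hyperparameter).
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i, j] and cosine_sim(feats[i], feats[j]) < sim_thresh:
                adj[i, j] = adj[j, i] = 0

    # Stage 2: label propagation to obtain pseudo-labels for unlabeled nodes.
    Y = np.zeros((n, n_classes))
    for i, y in enumerate(labels):
        if y >= 0:
            Y[i, y] = 1.0
    P = adj / (adj.sum(1, keepdims=True) + 1e-12)  # row-normalized transitions
    F = Y.copy()
    for _ in range(lp_iters):
        F = P @ F
        F[labels >= 0] = Y[labels >= 0]  # clamp labeled nodes to their labels
    pseudo = F.argmax(1)

    # Remove remaining edges whose endpoints get inconsistent pseudo-labels,
    # since attacked edges tend to connect nodes of different classes.
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i, j] and pseudo[i] != pseudo[j]:
                adj[i, j] = adj[j, i] = 0
    return adj, pseudo
```

On a toy graph of two triangles joined by one cross-class edge, the cross edge survives the similarity filter but is removed by the pseudo-label consistency check, matching the intuition in the abstract.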

Key words: graph neural network, graph structure adversarial attacks, graph neural network defense
