Journal of Frontiers of Computer Science and Technology ›› 2024, Vol. 18 ›› Issue (8): 1935-1959. DOI: 10.3778/j.issn.1673-9418.2311117
Advances of Adversarial Attacks and Robustness Evaluation for Graph Neural Networks
WU Tao, CAO Xinwen, XIAN Xingping, YUAN Lin, ZHANG Shu, CUI Canyixing, TIAN Kan
Online: 2024-08-01
Published: 2024-07-29
Abstract: In recent years, graph neural networks (GNNs) have become an important research direction in artificial intelligence. However, the adversarial vulnerability of GNNs poses severe challenges to their practical application. To build a comprehensive picture of research on adversarial attacks against GNNs and on robustness evaluation, this survey reviews, organizes, and discusses the frontier progress in the field. It introduces the research background of adversarial attacks on GNNs, gives a formal definition of such attacks, and describes the research framework and basic concepts of GNN adversarial attacks and robustness evaluation. It then summarizes the concrete methods proposed in the field, classifies the frontier methods in detail by adversarial attack type and attack target scope, and analyzes their working mechanisms, principles, strengths, and weaknesses. Because attack-based robustness evaluation depends on the choice of attack method and the degree of adversarial perturbation, it yields only indirect, partial assessments and can hardly capture the intrinsic robustness of a model; this survey therefore focuses on reviewing and analyzing direct robustness evaluation metrics. On this basis, to support the design and evaluation of GNN adversarial attack methods and robust models, representative attack methods are compared experimentally in terms of ease of implementation, accuracy, and execution time. Finally, remaining challenges and future research directions are discussed. Overall, current research on the adversarial robustness of GNNs is dominated by trial-and-error experimentation and lacks a guiding theoretical framework; further systematic foundational research is still needed to guarantee the trustworthiness of GNN-based deep intelligent systems.
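The formal definition summarized in the abstract is not reproduced on this page. A common formulation in the literature, sketched here with assumed notation (G = (A, X) for adjacency and node features, f_θ a GNN classifier, T the set of target nodes, V_L the labeled nodes, Δ the perturbation budget), casts the attack as a constrained, possibly bilevel, optimization:

```latex
% Sketch of the usual graph adversarial attack formulation (notation assumed).
\max_{\hat{G}=(\hat{A},\hat{X})} \sum_{v\in T} \ell\!\left(f_{\theta^{*}}(\hat{G})_{v},\, y_{v}\right)
\quad \text{s.t.} \quad \lVert \hat{A}-A \rVert_{0} + \lVert \hat{X}-X \rVert_{0} \le \Delta ,
```

where for evasion (test-time) attacks θ* is trained on the clean graph G, while for poisoning (training-time) attacks θ* = argmin_θ Σ_{v∈V_L} ℓ(f_θ(Ĝ)_v, y_v), which makes the problem bilevel.

To make the kind of method compared in the survey's experiments concrete, the following is a minimal, illustrative sketch of a gradient-based structure attack in the FGA style: greedy edge flips guided by the gradient of the target's loss with respect to the adjacency matrix. It assumes PyTorch, a dense symmetric 0/1 float adjacency matrix, and fixed pretrained GCN weights; it is not the reference implementation of any surveyed method.

```python
# Minimal, illustrative sketch of an FGA-style gradient attack on a 2-layer GCN.
# Assumptions: PyTorch; dense float adjacency A (n x n, symmetric, 0/1 entries);
# pretrained weights W1, W2 are held fixed. Hypothetical helper names throughout.
import torch
import torch.nn.functional as F

def gcn_logits(A, X, W1, W2):
    """Two-layer GCN with symmetric normalization: S ReLU(S X W1) W2."""
    A_hat = A + torch.eye(A.size(0))             # add self-loops
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)      # degrees >= 1 after self-loops
    S = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return S @ torch.relu(S @ X @ W1) @ W2

def greedy_edge_flip_attack(A, X, W1, W2, target, label, budget=5):
    """Flip, one at a time, the edges whose gradient most raises the target loss."""
    A = A.clone()
    for _ in range(budget):
        A_var = A.clone().requires_grad_(True)
        loss = F.cross_entropy(gcn_logits(A_var, X, W1, W2)[target:target + 1],
                               torch.tensor([label]))
        grad = torch.autograd.grad(loss, A_var)[0]
        # Flipping 0->1 raises the loss if grad > 0; flipping 1->0 if grad < 0.
        score = grad * (1 - 2 * A)
        score.fill_diagonal_(float("-inf"))      # never touch self-loops
        i, j = divmod(int(torch.argmax(score)), A.size(0))
        A[i, j] = A[j, i] = 1 - A[i, j]          # symmetric 0/1 flip
    return A
```

Each iteration retains the single symmetric flip whose first-order effect on the target's loss is largest; the methods surveyed in the paper differ mainly in how this greedy first-order choice is replaced (meta-gradients, reinforcement learning, query-based estimates, and so on) and in what perturbations the budget constrains.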
WU Tao, CAO Xinwen, XIAN Xingping, YUAN Lin, ZHANG Shu, CUI Canyixing, TIAN Kan. Advances of Adversarial Attacks and Robustness Evaluation for Graph Neural Networks[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(8): 1935-1959.