
Journal of Frontiers of Computer Science and Technology ›› 2025, Vol. 19 ›› Issue (9): 2302-2318. DOI: 10.3778/j.issn.1673-9418.2502028
Anggeluma, WANG Siriguleng, SI Qintu
Online: 2025-09-01
Published: 2025-09-01
Abstract: Knowledge graphs have been widely applied across numerous fields and have significantly advanced artificial-intelligence-related tasks. In practical applications, however, knowledge graphs still suffer from incompleteness, which severely limits their effectiveness in downstream tasks. Knowledge graph completion addresses this problem by predicting the missing links in a knowledge graph. This survey systematically reviews the research background of knowledge graphs and their completion techniques and clarifies their key role in fields such as artificial intelligence and natural language processing. According to the source of the information used, existing completion methods are divided into structure-based methods, text-based methods, and methods that fuse structural and textual information; representative work in each category is introduced, their strengths, weaknesses, and applicable scenarios are compared and summarized, and the development trajectory and evolution of current techniques are traced. The survey also follows progress in multilingual knowledge graph completion, discusses key techniques such as cross-lingual entity alignment, and emphasizes the importance of cross-lingual knowledge sharing and unified modeling. Finally, the challenges that knowledge graph completion faces in knowledge fusion and knowledge mining are analyzed, and possible future research directions are outlined.
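To make the link-prediction task concrete, the following is a minimal sketch, assuming randomly initialized embeddings and toy entity and relation names, of how a structure-based model in the TransE family scores candidate links: each entity and relation is embedded as a vector, and a triple (h, r, t) is judged plausible when h + r lies close to t. This is an illustration of the general technique only, not the implementation of any specific method discussed in the survey.

```python
# Illustrative TransE-style link scoring; embeddings are random stand-ins
# for trained ones, and all names/dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)
entities = ["Beijing", "China", "Paris", "France"]
relations = ["capital_of"]
dim = 16

# Randomly initialized embeddings stand in for trained ones.
ent_emb = {e: rng.normal(size=dim) for e in entities}
rel_emb = {r: rng.normal(size=dim) for r in relations}

def score(h: str, r: str, t: str) -> float:
    """TransE-style plausibility: negative L2 distance of (h + r) from t."""
    return -float(np.linalg.norm(ent_emb[h] + rel_emb[r] - ent_emb[t]))

def predict_tail(h: str, r: str) -> list[tuple[str, float]]:
    """Rank every entity as a candidate tail for the query (h, r, ?)."""
    return sorted(((t, score(h, r, t)) for t in entities),
                  key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    # With trained embeddings, "China" would be ranked first for this query.
    for tail, s in predict_tail("Beijing", "capital_of"):
        print(f"{tail:8s} {s:.3f}")
```

Text-based and fusion methods replace or augment these vector scores with signals from entity descriptions and pre-trained language models, but the ranking-over-candidates formulation of the task stays the same.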
Anggeluma, WANG Siriguleng, SI Qintu. Overview of Research on Knowledge Graph Completion[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(9): 2302-2318.