[1] RAJABI E, ETMINANI K. Knowledge-graph-based explainable AI: a systematic review[J]. Journal of Information Science, 2024, 50(4): 1019-1029.
[2] YANG L Y, CHEN H Y, LI Z, et al. Give us the facts: enhancing large language models with knowledge graphs for fact-aware language modeling[J]. IEEE Transactions on Knowledge and Data Engineering, 2024, 36(7): 3091-3110.
[3] 张鹤译, 王鑫, 韩立帆, 等. 大语言模型融合知识图谱的问答系统研究[J]. 计算机科学与探索, 2023, 17(10): 2377-2388.
ZHANG H Y, WANG X, HAN L F, et al. Research on question answering system on joint of knowledge graph and large language models[J]. Journal of Frontiers of Computer Science and Technology, 2023, 17(10): 2377-2388.
[4] ZHANG Y C, CHEN Z, GUO L B, et al. Making large language models perform better in knowledge graph completion[C]//Proceedings of the 32nd ACM International Conference on Multimedia. New York: ACM, 2024: 233-242.
[5] CHEN Z, ZHANG Y C, FANG Y, et al. Knowledge graphs meet multi-modal learning: a comprehensive survey[EB/OL]. [2024-10-20]. https://arxiv.org/abs/2402.05391.
[6] BORDES A, USUNIER N, GARCIA-DURÁN A, et al. Translating embeddings for modeling multi-relational data[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems, 2013: 2787-2795.
[7] TROUILLON T, WELBL J, RIEDEL S, et al. Complex embeddings for simple link prediction[C]//Proceedings of the 33rd International Conference on Machine Learning, 2016: 2071-2080.
[8] DETTMERS T, MINERVINI P, STENETORP P, et al. Convolutional 2D knowledge graph embeddings[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2018, 32(1): 1811-1818.
[9] CHEN S J, YANG B, ZHAO C X. Simple and effective meta relational learning for few-shot knowledge graph completion[J]. Optimization and Engineering, 2025, 26(2): 869-889.
[10] XIE S R, LIU R S, WANG X Z, et al. Hierarchical knowledge-enhancement framework for multi-hop knowledge graph reasoning[J]. Neurocomputing, 2024, 588: 127673.
[11] DAS R, DHULIAWALA S, ZAHEER M, et al. Go for a walk and arrive at the answer: reasoning over paths in knowledge bases using reinforcement learning[EB/OL]. [2024-10-20]. https://arxiv.org/abs/1711.05851.
[12] LIN X V, SOCHER R, XIONG C M. Multi-hop knowledge graph reasoning with reward shaping[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2018: 3243-3253.
[13] WANG Z K, LI L J, ZENG D. SRGCN: graph-based multi-hop reasoning on knowledge graphs[J]. Neurocomputing, 2021, 454: 280-290.
[14] LEI D R, JIANG G R, GU X T, et al. Learning collaborative agents with rule guidance for knowledge graph reasoning[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2020: 8541-8547.
[15] WANG Y S, LIU Y F, ZHANG H H, et al. Leveraging lexical semantic information for learning concept-based multiple embedding representations for knowledge graph completion[C]//Proceedings of the 3rd International Joint Conference on Web and Big Data. Cham: Springer, 2019: 382-397.
[16] YANG B S, YIH W T, HE X D, et al. Embedding entities and relations for learning and inference in knowledge bases[EB/OL]. [2024-10-21]. https://arxiv.org/abs/1412.6575.
[17] LAO N, MITCHELL T, COHEN W. Random walk inference and learning in a large scale knowledge base[C]//Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2011: 529-539.
[18] YANG F, YANG Z, COHEN W W. Differentiable learning of logical rules for knowledge base reasoning[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017: 2316-2325.
[19] XIONG W H, HOANG T, WANG W Y. DeepPath: a reinforcement learning method for knowledge graph reasoning[C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2017: 564-573.
[20] LI S Y, WANG H, PAN R, et al. MemoryPath: a deep reinforcement learning framework for incorporating memory component into knowledge graph reasoning[J]. Neurocomputing, 2021, 419: 273-286.
[21] WANG Q, HAO Y S, CAO J. ADRL: an attention-based deep reinforcement learning framework for knowledge graph reasoning[J]. Knowledge-Based Systems, 2020, 197: 105910.
[22] WAN G J, PAN S R, GONG C, et al. Reasoning like human: hierarchical reinforcement learning for knowledge graph reasoning[C]//Proceedings of the 29th International Joint Conference on Artificial Intelligence. Palo Alto: AAAI, 2020: 1926-1932.
[23] LIN Q K, LIU J, PAN Y D, et al. Rule-enhanced iterative complementation for knowledge graph reasoning[J]. Information Sciences, 2021, 575: 66-79.
[24] PARK J, WOO S, LEE J Y, et al. BAM: bottleneck attention module[EB/OL]. [2024-10-21]. https://arxiv.org/abs/1807.06514.
[25] WU C F, LIU J L, WANG X J, et al. Differential networks for visual question answering[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33(1): 8997-9004.
[26] CHEN S X, LIU X D, GAO J F, et al. HittER: hierarchical transformers for knowledge graph embeddings[EB/OL]. [2024-10-21]. https://arxiv.org/abs/2008.12813.
[27] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017: 5998-6008.
[28] SRIVASTAVA N, HINTON G E, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15(1): 1929-1958.