Journal of Frontiers of Computer Science and Technology ›› 2023, Vol. 17 ›› Issue (11): 2734-2742. DOI: 10.3778/j.issn.1673-9418.2208071

• Artificial Intelligence · Pattern Recognition •

Link Prediction in Knowledge Hypergraph Combining Attention and Convolution Network

PANG Jun, XU Hao, QIN Hongchao, LIN Xiaoli, LIU Xiaoqi, WANG Guoren   

1. School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430070, China
    2. Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan 430070, China
    3. School of Computer, Beijing Institute of Technology, Beijing 100081, China
• Online: 2023-11-01   Published: 2023-11-01

Abstract: A knowledge hypergraph (KHG) is a knowledge graph with a hypergraph structure. KHG link prediction aims to predict missing relations from the known entities and relations. However, HypE, the state-of-the-art embedding-based KHG link prediction method, considers positional information when embedding entities but ignores the differing contributions of entities when embedding relations, and its entity convolution vectors do not carry sufficient information. Taking entity contributions into account in relation embeddings and enriching the information content of entity embeddings can greatly improve the predictive ability of the model. Therefore, a link prediction method combining attention and convolution network (LPACN) is proposed. An improved attention mechanism fuses entity attention information into the relation embeddings, and the number of neighboring entities within the same tuple is integrated into the convolutional network, further enriching the information content of the entity convolution vectors. To address the vanishing gradient problem of LPACN, an improved residual network (ResidualNet) is integrated into LPACN and a multilayer perceptron (MLP) is introduced to enhance the nonlinear learning ability of the model, yielding the improved algorithm LPACN+. Extensive experiments on real-world datasets validate that LPACN outperforms the baseline methods.

Key words: knowledge hypergraph, link prediction, embedding model, attention mechanism
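
To make the mechanisms named in the abstract more concrete, the following is a minimal, hypothetical PyTorch sketch of how entity attention could be fused into a relation embedding, how a neighbor-count feature could be appended to per-position entity convolutions, and how a residual connection plus an MLP could form an LPACN+-style head before scoring a tuple. All layer shapes, the fusion formula, and the name LPACNSketch are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative sketch of the ideas described in the abstract:
# (1) attention over the entities of a tuple, fused into the relation embedding;
# (2) positional 1D convolution over entity embeddings, with the number of
#     neighboring entities in the tuple appended as an extra feature;
# (3) a residual connection plus an MLP before scoring (LPACN+-style head).
# Layer sizes and fusion details are assumptions, not the published model.
import torch
import torch.nn as nn


class LPACNSketch(nn.Module):
    def __init__(self, n_entities, n_relations, dim=64, max_arity=6, n_filters=8, kernel=3):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        # one convolution filter bank per entity position, HypE-style
        self.pos_conv = nn.ModuleList(
            [nn.Conv1d(1, n_filters, kernel, padding=kernel // 2) for _ in range(max_arity)]
        )
        self.attn = nn.Linear(2 * dim, 1)                 # scores each (relation, entity) pair
        self.proj = nn.Linear(n_filters * dim + 1, dim)   # +1 for the neighbor-count feature
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, rel_idx, ent_idx):
        # rel_idx: (batch,), ent_idx: (batch, arity)
        r = self.rel(rel_idx)                             # (B, d)
        es = self.ent(ent_idx)                            # (B, k, d)
        B, k, d = es.shape

        # (1) attention: weight each entity's contribution and fuse into the relation
        scores = self.attn(torch.cat([r.unsqueeze(1).expand(-1, k, -1), es], dim=-1))
        alpha = torch.softmax(scores, dim=1)              # (B, k, 1)
        r_fused = r + (alpha * es).sum(dim=1)             # relation embedding aware of entity contributions

        # (2) positional convolution, plus the count of neighboring entities (k - 1)
        neigh = torch.full((B, 1), float(k - 1), device=es.device)
        ent_vecs = []
        for i in range(k):
            c = self.pos_conv[i](es[:, i, :].unsqueeze(1)).flatten(1)   # (B, n_filters*d)
            ent_vecs.append(self.proj(torch.cat([c, neigh], dim=-1)))   # (B, d)
        e_conv = torch.stack(ent_vecs, dim=1)             # (B, k, d)

        # (3) residual connection + MLP, then a simple multilinear plausibility score
        h = e_conv + self.mlp(e_conv)
        return (r_fused.unsqueeze(1) * h).sum(dim=-1).sum(dim=-1)       # (B,)


# toy usage: score a batch of two 3-ary tuples
model = LPACNSketch(n_entities=100, n_relations=10)
print(model(torch.tensor([1, 2]), torch.tensor([[3, 4, 5], [6, 7, 8]])))
```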