Journal of Frontiers of Computer Science and Technology, 2023, Vol. 17, Issue (2): 489-498. DOI: 10.3778/j.issn.1673-9418.2105008

• Big Data Technology •


Integrating Time Context and Feature-Level Information for Recommendation Algorithm

SHEN Yifeng, JIN Chenxi, WANG Yao, ZHANG Jiaxiang, LU Xianling   

  1. Key Laboratory for Advanced Process Control for Light Industry of the Ministry of Education, Jiangnan University, Wuxi, Jiangsu 214122, China
    2. School of Internet of Things Engineering, Jiangnan University, Wuxi, Jiangsu 214122, China
  • Online: 2023-02-01  Published: 2023-02-01


Abstract: Sequential recommendation models based on the self-attention mechanism ignore various kinds of auxiliary information and therefore cannot use it to capture multi-level sequential relationship patterns. To address this problem, a recommendation algorithm integrating time context and feature-level information (ITFR) is proposed. First, the item representation is concatenated with each of its attribute representations and fed into an attention network; after attention weighting, an attribute-based item representation is obtained. Then, ITFR applies a time-interval-aware self-attention block and an item-attribute-based self-attention block to capture, respectively, the relationship patterns between items and the time intervals of the interaction sequence, and the implicit relationships between items and attributes. Finally, the output representations of the two self-attention blocks are concatenated, and the joint output representation is fed into a fully connected layer to recommend the next item. Experiments are conducted on two public datasets, using hit rate (HR) and normalized discounted cumulative gain (NDCG) as evaluation metrics. On the Beauty dataset, HR@10 and NDCG@10 improve by 4.6% and 5.1% over the best baseline, respectively; on the MovieLens-1M dataset, HR@10 and NDCG@10 improve by 1.7% and 1.5%, respectively. The results show that incorporating auxiliary information to enhance sequence representations can improve recommendation performance.
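The pipeline described above (attribute-aware fusion, a time-interval-aware self-attention block, an item-attribute self-attention block, and a fully connected prediction layer) can be illustrated with a minimal PyTorch sketch. All dimensions and hyperparameters, the use of standard TransformerEncoderLayer modules in place of the paper's custom self-attention blocks, the additive handling of time-interval embeddings, and the last-position sequence summary are assumptions made for illustration only; they are not the authors' implementation.

# Minimal, illustrative sketch of the ITFR pipeline described in the abstract.
# Layer choices and shapes are assumptions; the published model may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeFusion(nn.Module):
    """Fuse an item embedding with its attribute embeddings via an attention network."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)  # scores each (item, attribute) pair

    def forward(self, item_emb, attr_embs):
        # item_emb: (B, L, D); attr_embs: (B, L, A, D)
        item_exp = item_emb.unsqueeze(2).expand_as(attr_embs)
        pair = torch.cat([item_exp, attr_embs], dim=-1)      # concatenated (item, attribute) pairs
        weights = F.softmax(self.score(pair), dim=2)          # attention weights over attributes
        return (weights * attr_embs).sum(dim=2)               # attribute-based item representation

class ITFRSketch(nn.Module):
    def __init__(self, num_items, num_attrs, dim=64, max_interval=256, n_heads=2):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim, padding_idx=0)
        self.attr_emb = nn.Embedding(num_attrs, dim, padding_idx=0)
        self.interval_emb = nn.Embedding(max_interval + 1, dim)  # bucketed time intervals
        self.fusion = AttributeFusion(dim)
        self.time_block = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.attr_block = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.fc = nn.Linear(2 * dim, num_items)                  # joint representation -> next-item scores

    def forward(self, items, attrs, intervals):
        # items: (B, L) item ids; attrs: (B, L, A) attribute ids;
        # intervals: (B, L) bucketed gaps between consecutive interactions
        item_e = self.item_emb(items)
        attr_based = self.fusion(item_e, self.attr_emb(attrs))
        # Time-interval awareness is approximated here by adding interval embeddings
        # before self-attention; the paper builds the intervals into the attention itself.
        time_out = self.time_block(item_e + self.interval_emb(intervals))
        attr_out = self.attr_block(attr_based)
        joint = torch.cat([time_out[:, -1], attr_out[:, -1]], dim=-1)  # last position as sequence summary
        return self.fc(joint)                                          # logits over the item vocabulary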

Key words: recommendation algorithm, sequential recommendation, self-attention mechanism, time information