Journal of Frontiers of Computer Science and Technology ›› 2022, Vol. 16 ›› Issue (7): 1603-1610.DOI: 10.3778/j.issn.1673-9418.2104038

• Artificial Intelligence •

Long Text Generation Adversarial Network Model with Self-Attention Mechanism

XIA Hongbin1,2, XIAO Yifei1,*, LIU Yuan1,2

  1. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu 214122, China
    2. Jiangsu Key Laboratory of Media Design and Software Technology, Wuxi, Jiangsu 214122, China
  • Received: 2021-04-12 Revised: 2021-07-08 Online: 2022-07-01 Published: 2021-07-20
  • Supported by:
    the National Natural Science Foundation of China(61972182)

  • About the authors:
    XIA Hongbin, born in 1972 in Wuxi, Jiangsu, Ph.D., associate professor, member of CCF. His research interests include personalized recommendation, natural language processing and network optimization.
    XIAO Yifei, born in 1996 in Huangshi, Hubei, M.S. candidate. His research interest is natural language processing.
    LIU Yuan, born in 1967 in Wuxi, Jiangsu, professor, senior member of CCF. His research interests include network security and social networks.

Abstract:

In recent years, communication between humans and computers has become inseparable from daily life, so natural language processing, as the technology for human-machine interaction, has attracted increasing attention from researchers. Text generation is one of the common tasks in natural language processing, and generative adversarial networks (GAN) are now widely used in this field with excellent performance. To address the sparsity of the scalar guidance signal in the discriminator of traditional generative adversarial networks and the limitation of learning only local semantic information of the text, a model combining the multi-head self-attention mechanism with LeakGAN (SALGAN) is proposed. Firstly, a CNN model incorporating the multi-head self-attention mechanism is used as the feature extractor, strengthening its feature extraction ability. Secondly, the features extracted by the discriminator are sent to the generator as step-by-step guidance signals that steer text generation, making the generated text closer to the reference text. Finally, the generator passes the generated text to the discriminator, which judges whether it is real, confirming whether the text meets the standards of human language. Experiments are carried out on two real datasets, COCO image captions and EMNLP2017 news, and the BLEU metric is used for evaluation. The experimental results show that, after the multi-head self-attention mechanism is integrated into the CNN model, the extracted features contain global semantic information, and the feature extraction performance of the CNN model is significantly improved.
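As a rough illustration of the feature extractor described in the abstract, the sketch below implements scaled dot-product multi-head self-attention over token embeddings, followed by the max-over-time pooling typical of CNN text feature extractors. This is a minimal stdlib-only sketch with untrained projections (each head attends over a slice of the embedding); it is not the paper's implementation, and all function names are illustrative.

```python
import math

def softmax(row):
    """Numerically stable softmax over one list of scores."""
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def attend(X, d_k):
    # Scaled dot-product self-attention for one head. X (seq_len x d_k)
    # is used directly as queries, keys and values; a trained model would
    # apply learned linear projections first.
    scores = [[sum(q * k for q, k in zip(qr, kr)) / math.sqrt(d_k) for kr in X]
              for qr in X]
    weights = [softmax(row) for row in scores]  # each row sums to 1
    # weighted sum of value rows -> attended representation per position
    return [[sum(w * v for w, v in zip(wr, col)) for col in zip(*X)]
            for wr in weights]

def multi_head_self_attention(X, num_heads):
    """X: seq_len x d_model token embeddings; returns seq_len x d_model."""
    d_model = len(X[0])
    assert d_model % num_heads == 0
    d_k = d_model // num_heads
    out = [[] for _ in X]
    for h in range(num_heads):
        sub = [row[h * d_k:(h + 1) * d_k] for row in X]  # this head's slice
        for i, attended_row in enumerate(attend(sub, d_k)):
            out[i].extend(attended_row)                   # concatenate heads
    return out

def max_over_time(H):
    # CNN-style max pooling across the sequence dimension -> one feature vector
    return [max(col) for col in zip(*H)]
```

Because every attention position sees every other position, the pooled vector reflects global context rather than only the local windows a convolutional filter covers, which is the intuition behind combining the two in the proposed extractor.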

Key words: generative adversarial networks (GAN), multi-head self-attention mechanism, text generation, deep learning, gated recurrent unit (GRU), natural language processing
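Since BLEU is the metric reported in the abstract, the following is a minimal sketch of sentence-level BLEU: clipped n-gram precisions combined via a geometric mean with a brevity penalty. Real evaluations normally use a standard implementation with smoothing for short sentences, which this sketch deliberately omits.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, references, max_n=4):
    """candidate: token list; references: list of token lists."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        if not cand:                     # candidate shorter than n tokens
            precisions.append(0.0)
            continue
        # per-n-gram maximum count over all references (clipping bound)
        max_ref = Counter()
        for ref in references:
            for g, c in ngrams(ref, n).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand.items())
        precisions.append(clipped / sum(cand.values()))
    if min(precisions) == 0.0:           # unsmoothed: any zero precision -> 0
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # brevity penalty against the closest reference length
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(log_avg)
```

A candidate identical to its reference scores 1.0; shorter or divergent candidates are penalized by the brevity penalty and the clipped precisions respectively.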

