Journal of Frontiers of Computer Science and Technology ›› 2022, Vol. 16 ›› Issue (3): 621-636. DOI: 10.3778/j.issn.1673-9418.2109014
• Artificial Intelligence •
Abstractive Text Summarization Model with Coherence Reinforcement and No Ground Truth Dependency
CHEN Gongchi1, RONG Huan1,+, MA Tinghuai2
Received: 2021-09-06
Revised: 2021-11-22
Online: 2022-03-01
Published: 2021-11-30
About author: CHEN Gongchi, born in 2000 in Zigong, Sichuan. His research interests include natural language processing, text summarization, etc.
Corresponding author: RONG Huan, E-mail: ronghuan@nuist.edu.cn
CHEN Gongchi, RONG Huan, MA Tinghuai. Abstractive Text Summarization Model with Coherence Reinforcement and No Ground Truth Dependency[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(3): 621-636.
URL: http://fcst.ceaj.org/EN/10.3778/j.issn.1673-9418.2109014
Dataset | Docs (train) | Docs (valid) | Docs (test) | Avg. doc length (words) | Avg. doc length (sentences) | Avg. summary length (words) | Avg. summary length (sentences) | Novel bigrams in "gold" summary/%
---|---|---|---|---|---|---|---|---
CNN | 90 266 | 1 220 | 1 093 | 760.50 | 33.98 | 45.70 | 3.59 | 52.90 |
Daily Mail | 196 961 | 12 148 | 10 397 | 653.33 | 29.33 | 54.65 | 3.86 | 52.16 |
XSum | 204 045 | 11 332 | 11 334 | 54.70 | 19.77 | 23.26 | 1.00 | 83.31 |
Table 1 Statistical information of CNN/Daily Mail and XSum datasets
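The last column of Table 1 measures abstractiveness: the share of bigrams in the "gold" summary that never appear in the source document (XSum's 83.31% marks it as far more abstractive than CNN/Daily Mail). A minimal sketch of this statistic, assuming plain whitespace tokenization since the page does not state the exact preprocessing:

```python
# Rough sketch of the "novel bigram" statistic in Table 1: the share of
# summary n-grams that never appear in the source document. Whitespace
# tokenization is an assumption, not the paper's stated preprocessing.

def ngram_set(tokens, n):
    """All n-grams of a token list, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_ratio(document: str, summary: str, n: int = 2) -> float:
    """Percentage of summary n-grams that are absent from the document."""
    doc_grams = ngram_set(document.lower().split(), n)
    sum_grams = ngram_set(summary.lower().split(), n)
    if not sum_grams:
        return 0.0
    novel = sum(1 for g in sum_grams if g not in doc_grams)
    return 100.0 * novel / len(sum_grams)

# Toy example: most summary bigrams are new, so the ratio is high.
doc = "the cat sat on the mat while the dog slept on the rug"
summ = "a cat rested on a mat"
print(f"novel bigrams: {novel_ngram_ratio(doc, summ):.2f}%")
```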
Combination | Pseudo-summary extraction | Module A | Module B (encoding) | Module B (encoding + key sentence extraction) | Module B (pre-training + encoding + key sentence extraction) | Module C | Coherence reinforcement (coherence gain) | Coherence reinforcement (content gain)
---|---|---|---|---|---|---|---|---
1 | √ | | √ | | | √ | |
2 | √ | √ | √ | | | √ | |
3 | √ | √ | | √ | | √ | |
4 | √ | √ | | | √ | √ | √ |
5 | √ | √ | | | √ | √ | | √
6 | √ | √ | | | √ | √ | √ | √
Table 2 Ablation combinations of ATS_CG corresponding to Fig.1
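Combinations 4 to 6 differ only in which reinforcement signal is enabled: coherence gain, content gain, or both. As a hedged sketch only (the gain definitions and the mixing weight `alpha` below are illustrative placeholders, not the paper's formulas), a self-critical training step that toggles the two gains might look like this:

```python
# Hedged sketch of the coherence-reinforcement step for combinations 4-6.
# `coherence_gain` and `content_gain` are per-sample rewards computed
# elsewhere; their definitions and the weight `alpha` are assumptions.
import torch

def mixed_reward(coherence_gain: torch.Tensor, content_gain: torch.Tensor,
                 use_coherence: bool, use_content: bool,
                 alpha: float = 0.5) -> torch.Tensor:
    """Combination 4: coherence only; 5: content only; 6: both."""
    reward = torch.zeros_like(coherence_gain)
    if use_coherence:
        reward = reward + alpha * coherence_gain
    if use_content:
        reward = reward + (1.0 - alpha) * content_gain
    return reward

def self_critical_loss(token_log_probs: torch.Tensor,
                       sampled_reward: torch.Tensor,
                       greedy_reward: torch.Tensor) -> torch.Tensor:
    """Self-critical policy gradient: the greedy decode serves as the
    baseline, so samples that beat greedy decoding are reinforced."""
    advantage = (sampled_reward - greedy_reward).detach()
    return -(advantage * token_log_probs.sum(dim=-1)).mean()
```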
Combination | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-AVG | METEOR
---|---|---|---|---|---
1 | 36.85 | 16.08 | 34.87 | 29.27 | 18.62 |
2 | 37.78 | 17.82 | 36.05 | 30.55 | 18.70 |
3 | 39.74 | 18.27 | 37.85 | 31.95 | 18.92 |
4 | 41.64 | 19.13 | 39.33 | 33.37 | 19.70 |
5 | 42.01 | 19.28 | 39.71 | 33.67 | 19.48 |
6 | 43.29 | 20.55 | 40.13 | 34.66 | 20.51 |
Table 3 Evaluation results of ablation combinations on CNN/Daily Mail dataset (unit: %)
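For reference, ROUGE-AVG in Tables 3 to 6 is the arithmetic mean of ROUGE-1/2/L (combination 1: (36.85 + 16.08 + 34.87)/3 = 29.27). A minimal scoring sketch using the `rouge-score` package; the reference and generated strings are made-up examples:

```python
# Minimal ROUGE scoring sketch with the `rouge-score` package
# (pip install rouge-score). The two strings are toy examples.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)

reference = "police arrest suspect after downtown robbery"
generated = "a suspect was arrested after a robbery downtown"

scores = scorer.score(reference, generated)
r1 = scores["rouge1"].fmeasure
r2 = scores["rouge2"].fmeasure
rl = scores["rougeL"].fmeasure
print(f"ROUGE-1 {r1:.4f}  ROUGE-2 {r2:.4f}  ROUGE-L {rl:.4f}")
# ROUGE-AVG as the tables appear to use it: mean of the three F1 scores.
print(f"ROUGE-AVG {100 * (r1 + r2 + rl) / 3:.2f}")
```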
Combination | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-AVG | METEOR
---|---|---|---|---|---
1 | 35.46 | 14.30 | 30.71 | 26.82 | 16.65 |
2 | 37.14 | 17.81 | 31.65 | 28.87 | 16.94 |
3 | 38.92 | 16.82 | 32.71 | 29.48 | 17.05 |
4 | 39.53 | 18.65 | 33.38 | 30.52 | 18.37 |
5 | 39.87 | 18.28 | 33.90 | 30.68 | 18.46 |
6 | 41.97 | 18.23 | 33.84 | 31.35 | 18.86 |
Table 4 Evaluation results of ablation combinations on XSum dataset (unit: %)
Summary generation type | Model | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-AVG | METEOR
---|---|---|---|---|---|---
Extractive | MMS_Text | 37.57 | 15.72 | 34.42 | 29.24 | 16.97
Extractive | SummaRuNNer | 38.60 | 15.20 | 34.30 | 29.37 | 16.75
Extractive | Refresh | 39.27 | 17.20 | 35.60 | 30.69 | 17.38
Extractive | HSSAS | 41.30 | 16.80 | 36.60 | 31.57 | 18.27
Abstractive (supervised) | Pointer-Generator + Coverage | 38.53 | 16.28 | 35.38 | 30.06 | 17.70
Abstractive (supervised) | Bottom-Up | 40.22 | 17.68 | 37.34 | 31.75 | 18.38
Abstractive (supervised) | DCA | 40.69 | 18.47 | 36.92 | 32.03 | 18.55
Abstractive (supervised) | BERTSUMEXTABS | 41.13 | 18.60 | 38.18 | 32.64 | 18.91
Abstractive (supervised) | PEGASUSBASE | 40.79 | 17.81 | 37.93 | 32.18 | 18.63
Abstractive (unsupervised) | ATS_CG4 (combination 4) | 41.64 | 19.13 | 39.33 | 33.37 | 19.70
Abstractive (unsupervised) | ATS_CG5 (combination 5) | 42.01 | 19.28 | 39.71 | 33.67 | 19.48
Abstractive (unsupervised) | ATS_CG6 (combination 6) | 43.29 | 20.55 | 40.13 | 34.66 | 20.51
Table 5 Evaluation results of generated summaries on CNN/Daily Mail dataset (unit: %)
Supervision | Model | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-AVG | METEOR
---|---|---|---|---|---|---
Supervised | Pointer-Generator + Coverage | 27.10 | 7.02 | 20.72 | 18.28 | 10.31
Supervised | Bottom-Up | 29.02 | 8.45 | 22.96 | 20.14 | 11.09
Supervised | BERTSUMEXTABS | 37.81 | 15.50 | 30.27 | 27.86 | 14.14
Supervised | PEGASUSBASE | 38.79 | 15.58 | 30.70 | 28.36 | 14.82
Unsupervised | ATS_CG4 (combination 4) | 39.53 | 18.65 | 33.38 | 30.52 | 18.37
Unsupervised | ATS_CG5 (combination 5) | 39.87 | 18.28 | 33.90 | 30.68 | 18.46
Unsupervised | ATS_CG6 (combination 6) | 41.97 | 18.23 | 33.84 | 31.35 | 18.86
Table 6 Evaluation results of generated summaries on XSum dataset (unit: %)
Supervision | Model | Sentence coherence ↑ | Low content redundancy ↑ | Coverage of important content ↑
---|---|---|---|---
Supervised | Pointer-Generator + Coverage | 3.51 | 2.88 | 2.97
Supervised | Bottom-Up | 3.45 | 3.17 | 2.85
Supervised | DCA | 3.23 | 3.06 | 3.08
Supervised | BERTSUMEXTABS | 3.43 | 3.01 | 2.85
Supervised | PEGASUSBASE | 3.29 | 2.95 | 3.01
Unsupervised | ATS_CG6 (combination 6) | 3.92 | 3.29 | 3.13
Table 7 Manual evaluation results of summary quality on CNN/Daily Mail dataset
Supervision | Model | N-1/%↑ | N-2/%↑ | N-3/%↑ | N-4/%↑ | N-5/%↑ | Summary perplexity ↓
---|---|---|---|---|---|---|---
Supervised | Pointer-Generator + Coverage | 3.57 | 5.21 | 8.96 | 17.25 | 43.27 | 22.61
Supervised | Bottom-Up | 4.28 | 8.72 | 7.54 | 20.03 | 49.37 | 24.31
Supervised | DCA | 4.21 | 8.96 | 9.43 | 18.96 | 51.23 | 28.37
Supervised | BERTSUMEXTABS | 4.82 | 16.07 | 9.27 | 22.41 | 60.01 | 18.62
Supervised | PEGASUSBASE | 4.42 | 17.81 | 19.77 | 27.33 | 79.12 | 17.83
Unsupervised | ATS_CG6 (combination 6) | 5.27 | 19.32 | 23.42 | 39.27 | 82.31 | 15.28
Table 8 Results of N-gram novelty and perplexity on CNN/Daily Mail dataset
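N-k in Table 8 is the share of summary k-grams not found in the source document (higher means more abstractive), computable with the `novel_ngram_ratio` helper sketched after Table 1. Perplexity is the exponential of the average per-token negative log-likelihood under a language model; the GPT-2 scorer below is an illustrative assumption, since the page does not identify the language model used in the paper:

```python
# Hedged sketch of summary perplexity. GPT-2 via Hugging Face
# `transformers` is an assumed scorer, not necessarily the paper's model.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """exp(mean per-token cross-entropy) of `text` under the LM."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels makes the model return the mean token NLL,
        # with the label shift handled internally.
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

print(f"{perplexity('The suspect was arrested downtown.'):.2f}")
```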