[1] 朱永清, 赵鹏, 赵菲菲, 等. 基于深度学习的生成式文本摘要技术综述[J]. 计算机工程, 2021, 47(11): 11-21.
ZHU Y Q, ZHAO P, ZHAO F F, et al. Survey on abstractive text summarization technologies based on deep learning[J]. Computer Engineering, 2021, 47(11): 11-21.
[2] 车万翔, 窦志成, 冯岩松, 等. 大模型时代的自然语言处理: 挑战、机遇与发展[J]. 中国科学: 信息科学, 2023, 53(9): 1645-1687.
CHE W X, DOU Z C, FENG Y S, et al. Towards a comprehensive understanding of the impact of large language models on natural language processing: challenges, opportunities and future directions[J]. Scientia Sinica (Informationis), 2023, 53(9): 1645-1687.
[3] HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[EB/OL]. [2024-04-19]. https://arxiv.org/abs/1503.02531.
[4] 全安坤, 李红莲, 张乐, 等. 融合内容和图片特征的中文摘要生成方法研究[J]. 数据分析与知识发现, 2024, 8(3): 110-119.
QUAN A K, LI H L, ZHANG L, et al. Generating Chinese abstracts with content and image features[J]. Data Analysis and Knowledge Discovery, 2024, 8(3): 110-119.
[5] 王宗辉, 李宝安, 吕学强, 等. BETES: 一种中文长文档抽取式摘要方法[J]. 小型微型计算机系统, 2022, 43(1): 42-49.
WANG Z H, LI B A, LV X Q, et al. BETES: method of extractive summarization for Chinese long documents[J]. Journal of Chinese Computer Systems, 2022, 43(1): 42-49.
[6] ZHANG J, ZHAO Y, SALEH M, et al. PEGASUS: pre-training with extracted gap-sentences for abstractive summarization[C]//Proceedings of the 37th International Conference on Machine Learning, 2020: 11328-11339.
[7] 徐尔卓. 基于深度学习的中文文本摘要方法研究[D]. 南宁: 广西民族大学, 2023.
XU E Z. Research on Chinese text summary method based on deep learning[D]. Nanning: Guangxi Minzu University, 2023.
[8] 樊琦. 基于图神经网络的生成式摘要研究与应用[D]. 兰州: 西北师范大学, 2022.
FAN Q. Research and application of abstractive summarization based on graph neural network[D]. Lanzhou: Northwest Normal University, 2022.
[9] 李健智, 王红玲, 王中卿. 基于场景与对话结构的摘要生成研究[J]. 计算机工程, 2023, 49(4): 303-311.
LI J Z, WANG H L, WANG Z Q. Research on summarization generation based on scene and dialogue structure[J]. Computer Engineering, 2023, 49(4): 303-311.
[10] 张贤明. 基于中文科技论文摘要的生成模型关键技术研究[D]. 济南: 齐鲁工业大学, 2023.
ZHANG X M. Research on key technologies of generative model based on abstracts of Chinese scientific papers[D]. Jinan: Qilu University of Technology, 2023.
[11] 张琪, 范永胜. 基于改进T5 PEGASUS模型的新闻文本摘要生成[J]. 电子科技, 2023, 36(12): 72-78.
ZHANG Q, FAN Y S. Research on generating news text summarization based on improved T5 PEGASUS model[J]. Electronic Science and Technology, 2023, 36(12): 72-78.
[12] 张乐, 冷基栋, 吕学强, 等. RLCPAR: 一种基于强化学习的中文专利摘要改写模型[J]. 数据分析与知识发现, 2021, 5(7): 59-69.
ZHANG L, LENG J D, LV X Q, et al. RLCPAR: a rewriting model for Chinese patent abstracts based on reinforcement learning[J]. Data Analysis and Knowledge Discovery, 2021, 5(7): 59-69.
[13] 崔少国, 王奥迪, 杜兴. 融合流注意力机制的中文摘要生成方法[J]. 小型微型计算机系统, 2023, 44(12): 2685-2691.
CUI S G, WANG A D, DU X. Chinese abstract generation method incorporating flow attention mechanism[J]. Journal of Chinese Computer Systems, 2023, 44(12): 2685-2691.
[14] 李宝安, 佘鑫鹏, 常振宁, 等. 中文新闻文本多文档摘要生成[J]. 计算机工程与设计, 2023, 44(9): 2867-2873.
LI B A, SHE X P, CHANG Z N, et al. Multi-document summary generation of Chinese news text[J]. Computer Engineering and Design, 2023, 44(9): 2867-2873.
[15] BUCILA C, CARUANA R, NICULESCU-MIZIL A. Model compression[C]//Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2006: 535-541.
[16] ROMERO A, BALLAS N, KAHOU S E, et al. FitNets: hints for thin deep nets[EB/OL]. [2024-04-19]. https://arxiv.org/abs/1412.6550.
[17] YANG Z, SHOU L J, GONG M, et al. Model compression with multi-task knowledge distillation for web-scale question answering system[EB/OL]. [2024-04-19]. https://arxiv.org/abs/1904.09636.
[18] GORDON M A, DUH K. Explaining sequence-level knowledge distillation as data-augmentation for neural machine translation[EB/OL]. [2024-04-19]. https://arxiv.org/abs/1912.03334.
[19] HU M H, PENG Y X, WEI F R, et al. Attention-guided answer distillation for machine reading comprehension[EB/OL]. [2024-04-22]. https://arxiv.org/abs/1808.07644.
[20] 王奎芳, 吕璐成, 孙文君, 等. 基于大模型知识蒸馏的专利技术功效词自动抽取方法研究: 以车联网V2X领域为例[J]. 数据分析与知识发现, 2024, 8(S1): 144-156.
WANG K F, LYU L C, SUN W J, et al. Automatic extraction of patent technical effect words based on large model knowledge distillation: a case study of V2X in the Internet of Vehicles[J]. Data Analysis and Knowledge Discovery, 2024, 8(S1): 144-156.
[21] 汪珶. 基于知识蒸馏改进双路BERT的经济类文本情感分析[J]. 山西师范大学学报(自然科学版), 2024, 38(1): 39-44.
WANG D. Sentiment analysis of economic texts based on two-path BERT improved by knowledge distillation[J]. Journal of Shanxi Normal University (Natural Science Edition), 2024, 38(1): 39-44.
[22] TSIRMPAS D, GKIONIS I, PAPADOPOULOS G T, et al. Neural natural language processing for long texts: a survey on classification and summarization[EB/OL]. [2024-04-22]. https://arxiv.org/abs/2305.16259.
[23] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. Journal of Machine Learning Research, 2020, 21(140): 1-67.
[24] ZAHEER M, GURUGANESH G, DUBEY K A, et al. Big Bird: transformers for longer sequences[C]//Advances in Neural Information Processing Systems 33, 2020: 17283-17297.
[25] COMPAORE I F, KAFANDO R, SABANE A, et al. AI-driven generation of news summaries leveraging GPT and Pegasus summarizer for efficient information extraction[C]//Proceedings of the 6th Computer Science Research Days, 2024.
[26] 马畅, 田永红, 郑晓莉, 等. 基于知识蒸馏的神经机器翻译综述[J]. 计算机科学与探索, 2024, 18(7): 1725-1747.
MA C, TIAN Y H, ZHENG X L, et al. Survey of neural machine translation based on knowledge distillation[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(7): 1725-1747.
[27] ZENG A H, LIU X, DU Z X, et al. GLM-130B: an open bilingual pre-trained model[C]//Proceedings of the 11th International Conference on Learning Representations, 2023.
[28] GU J T, LU Z D, LI H, et al. Incorporating copying mechanism in sequence-to-sequence learning[EB/OL]. [2024-04-22]. https://arxiv.org/abs/1603.06393.
[29] 周耀威. 基于对比学习的中文文本自动摘要方法研究[D]. 桂林: 桂林电子科技大学, 2023.
ZHOU Y W. Research on Chinese text automatic summarization method based on contrastive learning[D]. Guilin: Guilin University of Electronic Technology, 2023.
[30] LIN C Y, HOVY E. Manual and automatic evaluation of summaries[C]//Proceedings of the 2002 Workshop on Automatic Summarization, 2002: 45-51.