
Journal of Frontiers of Computer Science and Technology, 2025, 19(6): 1395-1413. DOI: 10.3778/j.issn.1673-9418.2410054
• Frontiers·Surveys •
Survey of NLP Data Augmentation Methods Based on Large Language Models
XU Delong, LIN Min, WANG Yurong, ZHANG Shujun
Online: 2025-06-01
Published: 2025-05-29
XU Delong, LIN Min, WANG Yurong, ZHANG Shujun. Survey of NLP Data Augmentation Methods Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(6): 1395-1413.
URL: http://fcst.ceaj.org/EN/10.3778/j.issn.1673-9418.2410054
Related Articles

[1] XU Guangyuan, ZHANG Yaqiang, SHI Hongzhi. Review of Fault-Tolerant Technologies for Large-Scale DNN Training Scenarios[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(7): 1771-1788.
[2] CHEN Xu, ZHANG Qi, WANG Shuyang, JING Yongjun. Adaptive Product Space Discrete Dynamic Graph Link Prediction Model[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(7): 1820-1831.
[3] XIA Jianglan, LI Yanling, GE Fengpei. Survey of Entity Relation Extraction Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(7): 1681-1698.
[4] ZHOU Kaijun, LIAO Ting, TAN Ping, SHI Changfa. Review of Research on Image Compression Techniques[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(7): 1699-1728.
[5] ZHANG Xin, SUN Jingchao. Review of False Information Detection Frameworks Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(6): 1414-1436.
[6] LI Yunfei, WEI Xia, CAI Xin, LYU Mingyu, LUO Xianghan. TCTP-YOLO: Typical Obstacles and Traffic Sign Detection Methods for Blind Pedestrians[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(6): 1540-1552.
[7] LYU Fu, ZHENG Yu, QI Guangyao, LI Haoran. Lightweight SAR Image Ship Oblique Frame Detection Algorithm Based on Polar Coordinate Encoding and Decoding[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(6): 1564-1579.
[8] ZHOU Nan, DONG Yongquan, YAN Linke, JIN Jiayong, HE Bugui. Research on Exercise Recommendation Fusing Student Knowledge State and Chaotic Firefly Algorithm[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(6): 1620-1631.
[9] ZHU Jiayin, LI Yang, LI Ming, MA Jingang. Review of Application of Deep Learning in Cervical Cell Segmentation[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(6): 1476-1493.
[10] LIANG Jiexin, FENG Yue, LI Jianzhong, CHEN Tao, LIN Zhuosheng, HE Ying, WANG Songbai. Survey on Intelligent Identification of Constitution in Traditional Chinese Medicine[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(6): 1455-1475.
[11] LI Shaobo, WANG Xiaoqiang, GUO Libiao, HONG Ying, WANG Zhiguo. Review of Deep Learning Applications in Unmanned Aerial Vehicle Remote Sensing Images of Grass Plants[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(5): 1157-1176.
[12] LI Guowei, LIU Jing, CAO Hui, JIANG Liang. Research Review of Deep Learning in Colon Polyp Image Segmentation[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(5): 1198-1216.
[13] YANG Zhiyong, GUO Jieru, GUO Zihang, ZHANG Ruixiang, ZHOU Yu. Review of Research on Trajectory Prediction of Road Pedestrian Behavior[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(5): 1177-1197.
[14] LI Juhao, SHI Lei, DING Meng, LEI Yongsheng, ZHAO Dongyue, CHEN Long. Social Media Text Stance Detection Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(5): 1302-1312.
[15] WANG Ning, ZHI Min. Review of One-Stage Universal Object Detection Algorithms in Deep Learning[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(5): 1115-1140.