
Journal of Frontiers of Computer Science and Technology ›› 2025, Vol. 19 ›› Issue (6): 1414-1436. DOI: 10.3778/j.issn.1673-9418.2411001
• Frontiers·Surveys •
Review of False Information Detection Frameworks Based on Large Language Models
ZHANG Xin, SUN Jingchao
Online: 2025-06-01
Published: 2025-05-29
ZHANG Xin, SUN Jingchao. Review of False Information Detection Frameworks Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(6): 1414-1436.
URL: http://fcst.ceaj.org/EN/10.3778/j.issn.1673-9418.2411001
[1] XIA Jianglan, LI Yanling, GE Fengpei. Survey of Entity Relation Extraction Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(7): 1681-1698.
[2] XU Delong, LIN Min, WANG Yurong, ZHANG Shujun. Survey of NLP Data Augmentation Methods Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(6): 1395-1413.
[3] LI Juhao, SHI Lei, DING Meng, LEI Yongsheng, ZHAO Dongyue, CHEN Long. Social Media Text Stance Detection Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(5): 1302-1312.
[4] CHANG Baofa, CHE Chao, LIANG Yan. Research on Recommendation Model Based on Multi-round Dialogue of Large Language Model[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(2): 385-395.
[5] YU Fengrui, DU Yanhui. Research on Generative Techniques for Identifying and Extracting Tactics, Techniques and Procedures[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(1): 118-131.
[6] XU Lei, HU Yahao, CHEN Man, CHEN Jun, PAN Zhisong. Hate Speech Detection Method Integrating Prefix Tuning and Prompt Learning[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(1): 97-106.
[7] LI Boxin. Method of Retrieval-Augmented Large Language Models with Stable Outputs for Private Question-Answering Systems[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(1): 132-140.
[8] WANG Yong, QIN Jiajun, HUANG Yourui, DENG Jiangzhou. Design of University Research Management Question Answering System Integrating Knowledge Graph and Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(1): 107-117.
[9] JIANG Yuqi, HOU Zhiwen, WANG Yifan, ZHAI Hanming, BU Fanliang. Research on Processing and Application of Imbalanced Textual Data on Social Platforms[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(9): 2370-2383.
[10] ZHAO Honglei, TANG Huanling, ZHANG Yu, SUN Xueyuan, LU Mingyu. Named Entity Recognition Model Based on k-best Viterbi Decoupling Knowledge Distillation[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(3): 780-794.
[11] SUN Xiujuan, SUN Fuzhen, LI Pengcheng, WANG Aofei, WANG Shaoqing. Fusion of Masked Autoencoder for Adaptive Augmentation Sequential Recommendation[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(12): 3324-3334.
[12] JIANG Wentao, LIU Yuwei, ZHANG Shengchong. Image Data Augmentation Method for Random Channel Perturbation[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(11): 2980-2995.
[13] TAN Lijun, HU Yanli, CAO Jianwei, TAN Zhen. Document-Level Event Detection Method Based on Information Aggregation and Data Augmentation[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(11): 3015-3026.
[14] SANG Chenyang, MA Tinghuai, XIE Xintong, SUN Shengjie, HUANG Rui. Multi-stage Reasoning Method for Emotional Support Dialogue Generation Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(11): 2925-2939.
[15] LIU Hebing, ZHANG Demeng, XIONG Shufeng, MA Xinming, XI Lei. Named Entity Recognition of Wheat Diseases and Pests Fusing ALBERT and Rules[J]. Journal of Frontiers of Computer Science and Technology, 2023, 17(6): 1395-1404.