
Journal of Frontiers of Computer Science and Technology ›› 2025, Vol. 19 ›› Issue (5): 1141-1156. DOI: 10.3778/j.issn.1673-9418.2407021
• Frontiers·Surveys •
LIU Hualing, ZHANG Zilong, PENG Hongshuai
Online: 2025-05-01
Published: 2025-04-28
LIU Hualing, ZHANG Zilong, PENG Hongshuai. Review of Enhancement Research for Closed-Source Large Language Model[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(5): 1141-1156.
URL: http://fcst.ceaj.org/EN/10.3778/j.issn.1673-9418.2407021
Related Articles
[1] ZHOU Hanwen, DENG Zhaohong, ZHANG Wei. Global and Cross-Semantic Aggregation for Multi-level Enzyme Function Prediction[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(6): 1588-1597.
[2] XU Delong, LIN Min, WANG Yurong, ZHANG Shujun. Survey of NLP Data Augmentation Methods Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(6): 1395-1413.
[3] ZHANG Xin, SUN Jingchao. Review of False Information Detection Frameworks Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(6): 1414-1436.
[4] HE Jing, SHEN Yang, XIE Runfeng. Research on Categorical Recognition and Optimization of Hallucination Phenomenon in Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(5): 1295-1301.
[5] LI Juhao, SHI Lei, DING Meng, LEI Yongsheng, ZHAO Dongyue, CHEN Long. Social Media Text Stance Detection Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(5): 1302-1312.
[6] WANG Xiaoyu, LI Xin, HU Mianning, XUE Di. CIL-LLM: Incremental Learning Framework Based on Large Language Models for Category Classification[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(2): 374-384.
[7] FENG Tuoyu, WANG Gangliang, QIAO Zijian, LI Weiping, ZHANG Yusong, GUO Qinglang. SbSER: Step-by-Step Enhanced Reasoning Framework for Large Language Model with External Subgraph Generation[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(2): 367-373.
[8] XU Fengru, LI Bohan, XU Shuai. Research Progress on Sequence Recommendation Based on Deep Learning and Large Language Model[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(2): 344-366.
[9] CHANG Baofa, CHE Chao, LIANG Yan. Research on Recommendation Model Based on Multi-round Dialogue of Large Language Model[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(2): 385-395.
[10] YUE Qi, ZHANG Chenkang. Survey on Applications of AIGC in Multimodal Scenarios[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(1): 79-96.
[11] LI Boxin. Method of Retrieval-Augmented Large Language Models with Stable Outputs for Private Question-Answering Systems[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(1): 132-140.
[12] WANG Yong, QIN Jiajun, HUANG Yourui, DENG Jiangzhou. Design of University Research Management Question Answering System Integrating Knowledge Graph and Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(1): 107-117.
[13] YU Fengrui, DU Yanhui. Research on Generative Techniques for Identifying and Extracting Tactics, Techniques and Procedures[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(1): 118-131.
[14] XU Lei, HU Yahao, CHEN Man, CHEN Jun, PAN Zhisong. Hate Speech Detection Method Integrating Prefix Tuning and Prompt Learning[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(1): 97-106.
[15] XIANG Xiaowei, SHEN Yanguang, HU Minghao, YAN Tianwei, LUO Wei, LUO Zhunchen. Research on Science and Technology Policy and Regulation Q&A System Driven by Large Models[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(9): 2349-2360.