Journal of Frontiers of Computer Science and Technology ›› 2025, Vol. 19 ›› Issue (5): 1141-1156.DOI: 10.3778/j.issn.1673-9418.2407021

• Frontiers·Surveys •

Review of Enhancement Research for Closed-Source Large Language Models

LIU Hualing, ZHANG Zilong, PENG Hongshuai

  1. School of Statistics and Information, Shanghai University of International Business and Economics, Shanghai 201620, China
  • Online: 2025-05-01 Published: 2025-04-28


Abstract: With the rapid development of large language models in natural language processing, enhancing the performance of closed-source large language models, represented by the GPT family, has become a significant challenge. Because the parameter weights inside these models are inaccessible, traditional training methods such as fine-tuning cannot be applied to them, making further optimization difficult. Meanwhile, closed-source large language models have been widely deployed in downstream real-world tasks, so investigating how to enhance their performance is of great importance. This paper focuses on enhancement techniques for closed-source large language models and analyzes three of them: prompt engineering, retrieval augmented generation, and agents, further subdividing each according to the technical characteristics and modular architectures of the different methods. The core idea, main methods, and application effects of each technique are introduced in detail, and the strengths and limitations of the different enhancement methods are examined with respect to reasoning ability, generation credibility, and task adaptability. In addition, this paper discusses the combined application of the three techniques and, drawing on specific cases, highlights the great potential of such combinations for improving model performance. Finally, the paper summarizes the current state and open problems of existing techniques and looks ahead to the future development of enhancement techniques for closed-source large language models.

Key words: closed-source model, large language model, prompt engineering, retrieval augmented generation, agent
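Because a closed-source model exposes only a text-in/text-out interface, the techniques surveyed here all operate on the model's input rather than its weights. The following is a minimal sketch, not from the paper, of how retrieval augmented generation combines with prompt engineering in that setting: a toy word-overlap retriever selects evidence, which is then injected into the prompt. The function `call_closed_model` is a hypothetical stand-in for a real closed-source API endpoint (e.g. a GPT-family service).

```python
def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, contexts: list[str]) -> str:
    """Prompt-engineering step: inject retrieved evidence into the prompt."""
    ctx = "\n".join(f"- {c}" for c in contexts)
    return (f"Answer the question using only the context below.\n"
            f"Context:\n{ctx}\nQuestion: {query}")

def call_closed_model(prompt: str) -> str:
    """Hypothetical placeholder for a closed-source LLM API call;
    in practice this would be an HTTP request to a hosted model."""
    return f"[model response grounded in {len(prompt)} chars of prompt]"

corpus = ["RAG grounds generation in retrieved documents.",
          "Fine-tuning updates model weights directly."]
query = "How does RAG ground generation"
contexts = retrieve(query, corpus)
answer = call_closed_model(build_prompt(query, contexts))
```

Since only the prompt is modified, the same pattern works for any model behind an API, which is precisely why these input-side techniques matter when fine-tuning is unavailable.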
