[1] 刘伟, 范旭. 基于社会主要矛盾判断的科技政策跃迁及创新[J]. 中国科技论坛, 2023(5): 29-36.
LIU W, FAN X. Transition and innovation of science and technology policy based on the judgment of social principal contradiction in China[J]. Forum on Science and Technology in China, 2023(5): 29-36.
[2] ZHU Y, YUAN H, WANG S, et al. Large language models for information retrieval: a survey[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2308.07107.
[3] LOUIS A, VAN DIJCK G, SPANAKIS G. Interpretable long-form legal question answering with retrieval-augmented large language models[C]//Proceedings of the 2024 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2024: 22266-22275.
[4] 胡志强, 李朋骏, 王金龙, 等. 基于ChatGPT增强和监督对比学习的政策工具归类研究[J]. 计算机工程与应用, 2024, 60(7): 292-305.
HU Z Q, LI P J, WANG J L, et al. Research on policy tools classification based on ChatGPT augmentation and supervised contrastive learning[J]. Computer Engineering and Applications, 2024, 60(7): 292-305.
[5] 李辉, 曾文, 吴晨生, 等. 中文科技政策数据分析方法研究——以新能源汽车领域科技政策为例[J]. 现代情报, 2018, 38(6): 68-72.
LI H, ZENG W, WU C S, et al. Data analysis method of Chinese science and technology policy—a case study of S&T policy in the new energy vehicle field[J]. Journal of Modern Information, 2018, 38(6): 68-72.
[6] 李牧南, 王良, 赖华鹏. 基于深度学习的我国科技政策属性识别[J]. 科研管理, 2024, 45(2): 1-11.
LI M N, WANG L, LAI H P. Identification of China's S&T policy properties based on deep learning[J]. Science Research Management, 2024, 45(2): 1-11.
[7] 郑新曼, 董瑜. 政策文本量化研究的综述与展望[J]. 现代情报, 2021, 41(2): 168-177.
ZHENG X M, DONG Y. Review and prospects of quantitative research on policy texts[J]. Journal of Modern Information, 2021, 41(2): 168-177.
[8] LAI J, GAN W, WU J, et al. Large language models in law: a survey[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2312.03718.
[9] HADI M U, QURESHI R, SHAH A, et al. A survey on large language models: applications, challenges, limitations, and practical usage[EB/OL]. [2024-04-23]. https://www.techrxiv.org/doi/full/10.36227/techrxiv.23589741.v1.
[10] WANG Y, KORDI Y, MISHRA S, et al. Self-Instruct: aligning language models with self-generated instructions[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 13484-13508.
[11] ZHANG X, YANG Q. Self-QA: unsupervised knowledge guided language model alignment[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2305.11952.
[12] SUN Y, WANG S, LI Y, et al. ERNIE 2.0: a continual pre-training framework for language understanding[C]//Proceedings of the 2020 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2020: 8968-8975.
[13] ZENG W, REN X, SU T, et al. PanGu-α: large-scale auto-regressive pretrained Chinese language models with auto-parallel computation[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2104.12369.
[14] ZHOU Z, SHI J X, SONG P X, et al. LawGPT: a Chinese legal knowledge-enhanced large language model[EB/OL]. [2024-07-06]. https://arxiv.org/abs/2406.04614.
[15] 张鹤译, 王鑫, 韩立帆, 等. 大语言模型融合知识图谱的问答系统研究[J]. 计算机科学与探索, 2023, 17(10): 2377-2388.
ZHANG H Y, WANG X, HAN L F, et al. Research on question answering systems combining knowledge graphs and large language models[J]. Journal of Frontiers of Computer Science and Technology, 2023, 17(10): 2377-2388.
[16] DING N, HU S, ZHAO W, et al. OpenPrompt: an open-source framework for prompt-learning[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2111.01998.
[17] HU E J, SHEN Y, WALLIS P, et al. LoRA: low-rank adaptation of large language models[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2106.09685.
[18] LIN C Y. ROUGE: a package for automatic evaluation of summaries[C]//Proceedings of the 2004 Workshop on Text Summarization Branches Out. Stroudsburg: ACL, 2004: 74-81.
[19] ZHANG T, KISHORE V, WU F, et al. BERTScore: evaluating text generation with BERT[EB/OL]. [2024-04-23]. https://arxiv.org/abs/1904.09675.
[20] DATTA A, FREDRIKSON M, LEINO K, et al. Exploring conceptual soundness with TruLens[C]//Advances in Neural Information Processing Systems 34, Virtual Event, Dec 6-14, 2021: 302-307.
[21] ACHIAM J, ADLER S, AGARWAL S, et al. GPT-4 technical report[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2303.08774.
[22] CUI J, LI Z, YAN Y, et al. ChatLaw: open-source legal large language model with integrated external knowledge bases[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2306.16092.
[23] BAI J, BAI S, CHU Y, et al. Qwen technical report[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2309.16609.
[24] TOPSAKAL O, AKINCI T C. Creating large language model applications utilizing LangChain: a primer on developing LLM apps fast[C]//Proceedings of the 2023 International Conference on Applied Engineering and Natural Sciences. Konya: All Sciences Academy, 2023: 1050-1056.
[25] LIU X, ZHENG Y, DU Z, et al. GPT understands, too[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2103.10385.
[26] LIU X, JI K, FU Y, et al. P-tuning v2: prompt tuning can be comparable to fine-tuning universally across scales and tasks[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2110.07602.
[27] OUYANG L, WU J, JIANG X, et al. Training language models to follow instructions with human feedback[C]//Advances in Neural Information Processing Systems 35, New Orleans, Nov 28-Dec 9, 2022: 27730-27744.