[1] OpenAI. ChatGPT: optimizing language models for dialogue[R]. 2022.
[2] TOUVRON H, LAVRIL T, IZACARD G, et al. LLaMA: open and efficient foundation language models[EB/OL]. (2023-02-27)[2024-05-08]. https://arxiv.org/abs/2302.13971.
[3] YANG A Y, XIAO B, WANG B N, et al. Baichuan 2: open large-scale language models[EB/OL]. (2023-09-20)[2024-05-08]. https://arxiv.org/abs/2309.10305.
[4] 王子怡, 王鑫, 张岱岩, 等. 中医药网络药理学:《指南》引领下的新时代发展[J]. 中国中药杂志, 2022, 47(1): 7-17.
WANG Z Y, WANG X, ZHANG D Y, et al. Traditional Chinese medicine network pharmacology: development in new era under guidance of network pharmacology evaluation method guidance[J]. China Journal of Chinese Materia Medica, 2022, 47(1): 7-17.
[5] 任艳, 邓燕君, 马焓彬, 等. 网络药理学在中药领域的研究进展及面临的挑战[J]. 中草药, 2020, 51(18): 4789-4797.
REN Y, DENG Y J, MA H B, et al. Research progress and challenges of network pharmacology in field of traditional Chinese medicine[J]. Chinese Traditional and Herbal Drugs, 2020, 51(18): 4789-4797.
[6] LE SCAO T, FAN A, AKIKI C, et al. BLOOM: a 176B-parameter open-access multilingual language model[EB/OL]. (2023-01-27)[2024-05-08]. https://arxiv.org/abs/2211.05100.
[7] TAORI R, GULRAJANI I, ZHANG T Y, et al. Alpaca: a strong, replicable instruction-following model[EB/OL]. (2023-03-13)[2024-05-08]. https://crfm.stanford.edu/2023/03/13/alpaca.html.
[8] CHIANG W L, LI Z H, LIN Z, et al. Vicuna: an open-source chatbot impressing GPT-4 with 90% ChatGPT quality[EB/OL]. (2023-03-30)[2024-05-08]. https://lmsys.org/blog/2023-03-30-vicuna.
[9] CUI Y M, YANG Z Q, YAO X. Efficient and effective text encoding for Chinese LLaMA and Alpaca[EB/OL]. (2024-02-23)[2024-05-08]. https://arxiv.org/abs/2304.08177.
[10] GAN R Y, WU Z W, SUN R L, et al. Ziya2: data-centric learning is all LLMs need[EB/OL]. (2024-04-04)[2024-05-08]. https://arxiv.org/abs/2311.03301.
[11] HAN T Y, ADAMS L C, PAPAIOANNOU J M, et al. MedAlpaca: an open-source collection of medical conversational AI models and training data[EB/OL]. (2023-10-04)[2024-05-08]. https://arxiv.org/abs/2304.08247.
[12] SINGHAL K, TU T, GOTTWEIS J, et al. Towards expert-level medical question answering with large language models[EB/OL]. (2023-05-16)[2024-05-08]. https://arxiv.org/abs/2305.09617.
[13] WANG H C, LIU C, XI N W, et al. HuaTuo: tuning LLaMA model with Chinese medical knowledge[EB/OL]. (2023-04-14)[2024-05-08]. https://arxiv.org/abs/2304.06975.
[14] RAMAMURTHY R, AMMANABROLU P, BRANTLEY K, et al. Is reinforcement learning (not) for natural language processing: benchmarks, baselines, and building blocks for natural language policy optimization[C]//Proceedings of the 11th International Conference on Learning Representations, Kigali, May 1-5, 2023: 1-61.
[15] LIU J L, WANG Z M, YE Q C, et al. Qilin-Med-VL: towards Chinese large vision-language model for general healthcare[EB/OL]. (2023-11-01)[2024-05-08]. https://arxiv.org/abs/2310.17956.
[16] 崔唐明, 孙美玲, 孙华君, 等. 基于PICO模型的中医药循证指南知识图谱构建与智能问答系统研究[J]. 中国数字医学, 2024, 19(5): 20-27.
CUI T M, SUN M L, SUN H J, et al. Construction of a knowledge graph and intelligent Q&A system for evidence-based TCM guidelines based on the PICO model[J]. China Digital Medicine, 2024, 19(5): 20-27.
[17] 王嘉欣, 姚鉴玲, 马嘉慕, 等. 基于AHP-SOM聚类-TOPSIS和中医传承辅助平台研究中医药治疗围绝经期抑郁症组方规律[J]. 中草药, 2022, 53(22): 7153-7163.
WANG J X, YAO J L, MA J M, et al. Analysis on prescription rules of traditional Chinese medicine in treatment of perimenopausal depression based on AHP-SOM-TOPSIS algorithm and traditional Chinese medicine inheritance platform system[J]. Chinese Traditional and Herbal Drugs, 2022, 53(22): 7153-7163.
[18] 李德琳, 魏本征, 张诏, 等. 基于FP-growth算法的中医抗病毒方剂配伍规律探索[J]. 中华中医药学刊, 2018, 36(3): 663-668.
LI D L, WEI B Z, ZHANG Z, et al. FP-growth algorithm-based exploratory study on compatibility law of Chinese traditional anti-virus medicine prescription[J]. Chinese Archives of Traditional Chinese Medicine, 2018, 36(3): 663-668.
[19] 单雨濛, 张科, 胡文军, 等. 基于超图的中药方剂超网络中药材群组信息挖掘[J]. 中草药, 2024, 55(11): 3816-3824.
SHAN Y M, ZHANG K, HU W J, et al. Mining of group information of medicinal materials in traditional Chinese medicine prescriptions hypernetwork based on hypergraph[J]. Chinese Traditional and Herbal Drugs, 2024, 55(11): 3816-3824.
[20] 2020MEAI. TCMLLM[EB/OL]. [2024-05-08]. https://github.com/2020MEAI/TCMLLM.
[21] LEWIS P, PEREZ E, PIKTUS A, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks[C]//Advances in Neural Information Processing Systems 33, Dec 6-12, 2020. Cambridge: MIT Press, 2020: 9459-9474.
[22] GUU K, LEE K, TUNG Z, et al. REALM: retrieval-augmented language model pre-training[C]//Proceedings of the 37th International Conference on Machine Learning, Jul 13-18, 2020. New York: ACM, 2020: 3929-3938.
[23] SARTHI P, ABDULLAH S, TULI A, et al. RAPTOR: recursive abstractive processing for tree-organized retrieval[C]//Proceedings of the 12th International Conference on Learning Representations, Vienna, May 7-11, 2024: 1-23.
[24] EDGE D, TRINH H, CHENG N, et al. From local to global: a graph RAG approach to query-focused summarization[EB/OL]. (2024-04-24)[2024-05-08]. https://arxiv.org/abs/2404.16130.
[25] SHUMAILOV I, SHUMAYLOV Z, ZHAO Y R, et al. The curse of recursion: training on generated data makes models forget[EB/OL]. (2024-04-14)[2024-05-08]. https://arxiv.org/abs/2305.17493.
[26] GUDIBANDE A, WALLACE E, SNELL C, et al. The false promise of imitating proprietary LLMs[EB/OL]. (2023-05-25)[2024-05-08]. https://arxiv.org/abs/2305.15717.
[27] GAO Y F, XIONG Y, GAO X Y, et al. Retrieval-augmented generation for large language models: a survey[EB/OL]. (2024-03-27)[2024-05-08]. https://arxiv.org/abs/2312.10997.
[28] ZHANG H B, CHEN J Y, JIANG F, et al. HuatuoGPT, towards taming language model to be a doctor[C]//Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, Dec 6-10, 2023. Stroudsburg: ACL, 2023: 10859-10885.
[29] YANG S H, ZHAO H J, ZHU S B, et al. Zhongjing: enhancing the Chinese medical capabilities of large language model through expert feedback and real-world multi-turn dialogue[C]//Proceedings of the 38th AAAI Conference on Artificial Intelligence, Vancouver, Feb 20-27, 2024. Menlo Park: AAAI, 2024: 19368-19376.
[30] ES S, JAMES J, ESPINOSA-ANKE L, et al. RAGAs: automated evaluation of retrieval augmented generation[C]//Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, Dubrovnik, May 2-6, 2023. Stroudsburg: ACL, 2023: 150-158.