Journal of Frontiers of Computer Science and Technology ›› 2024, Vol. 18 ›› Issue (10): 2616-2629. DOI: 10.3778/j.issn.1673-9418.2407082

• Special Issue on Constructions and Applications of Large Language Models in Specific Domains •

Knowledge Augmentation on Traditional Chinese Medicine Language Model

JI Xiangyu, WANG Xin, ZHANG Heyi, MENG Zhaopeng, ZHANG Junhua, ZHUANG Pengwei, JIA Yongzhe, XU Dawei   

1. College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
    2. Tianjin University of Traditional Chinese Medicine, Tianjin 300193, China
    3. National Clinical Research Center for Chinese Medicine Acupuncture and Moxibustion, First Teaching Hospital of Tianjin University of Traditional Chinese Medicine, Tianjin 300193, China
4. Tiandazhitu (Tianjin) Technology Co., Ltd., Tianjin 300192, China
• Online: 2024-10-01  Published: 2024-09-29


Abstract: Recently, large language models (LLMs) have achieved remarkable results in many fields. However, owing to their lack of specialized knowledge and the conceptual gap between modern medicine and traditional Chinese medicine (TCM), deploying LLMs in the TCM domain remains a challenge, and existing knowledge augmentation methods fail to preserve the inherent structure of TCM prescriptions. To address these problems, a new knowledge augmentation method is proposed, consisting of three parts: model training, knowledge graph construction, and knowledge augmentation. In the training phase, a TCM language model is obtained by two-stage training of a base LLM on a TCM corpus, combining pre-training and fine-tuning. In the knowledge graph construction phase, a prescription knowledge graph is built from a cleaned dataset of nearly 100,000 classical TCM prescriptions, including prescriptions drawn from ancient books. In the knowledge augmentation phase, outputs are generated by computing over the knowledge graph, combining the retrieved domain knowledge with the graph schema and structure, so that the structural properties of TCM prescriptions are preserved. For the prescription optimization task, a set of task-specific evaluation criteria, comprising both objective and subjective indicators, is proposed to assess model performance. Experiments show that the proposed method substantially outperforms baseline models on both subjective and objective metrics, improving BLEU-1 by up to 0.09 and ROUGE-1 by up to 0.21. An ablation study shows that knowledge augmentation is crucial to performance on this task: without it, the model's BLEU-1 drops by about 37% relative to the knowledge-augmented model.
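To make the knowledge augmentation phase concrete, the following is a minimal, self-contained Python sketch of the general idea the abstract describes: retrieving a prescription's structured subgraph from a knowledge graph and serializing it, schema and all, into the model prompt. The graph contents, schema, role names, and all function names below are illustrative assumptions, not the authors' actual data, code, or graph computation.

from typing import Dict, List

# Toy prescription knowledge graph: prescription -> role -> herbs.
# The monarch/minister/assistant/envoy roles stand in for the kind of
# prescription structure the method aims to preserve (hypothetical schema).
PRESCRIPTION_KG: Dict[str, Dict[str, List[str]]] = {
    "Mahuang Decoction": {
        "monarch": ["Mahuang (Ephedra)"],
        "minister": ["Guizhi (Cinnamon Twig)"],
        "assistant": ["Xingren (Apricot Kernel)"],
        "envoy": ["Gancao (Licorice)"],
    },
}

def retrieve_subgraph(query: str) -> Dict[str, List[str]]:
    """Return the role-structured herb sets for a prescription mentioned in
    the query; a substring match stands in for real graph retrieval."""
    for name, roles in PRESCRIPTION_KG.items():
        if name.lower() in query.lower():
            return roles
    return {}

def build_augmented_prompt(query: str) -> str:
    """Serialize the retrieved subgraph according to its schema and prepend
    it to the user query, so the LLM sees the prescription structure."""
    roles = retrieve_subgraph(query)
    lines = [f"{role}: {', '.join(herbs)}" for role, herbs in roles.items()]
    context = "\n".join(lines) if lines else "(no matching prescription)"
    return f"Known prescription structure:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(build_augmented_prompt(
        "How could Mahuang Decoction be optimized for a patient "
        "with a weak constitution?"
    ))

In the paper's actual pipeline, the retrieval and prompt-construction steps would operate over the graph built from the roughly 100,000 prescriptions, and the prompt would be answered by the two-stage-trained TCM language model rather than printed.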

Key words: large language model (LLM), traditional Chinese medicine, prescription optimization, retrieval-augmented generation
