[1] 陈子睿, 王鑫, 王林, 等. 开放领域知识图谱问答研究综述[J]. 计算机科学与探索, 2021, 15(10): 1843-1869.
CHEN Z R, WANG X, WANG L, et al. Survey of open-domain knowledge graph question answering[J]. Journal of Frontiers of Computer Science and Technology, 2021, 15(10): 1843-1869.
[2] 萨日娜, 李艳玲, 林民. 知识图谱推理问答研究综述[J]. 计算机科学与探索, 2022, 16(8): 1727-1741.
SARINA, LI Y L, LIN M. Survey of question answering based on knowledge graph reasoning[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(8): 1727-1741.
[3] GUU K, LEE K, TUNG Z, et al. Retrieval augmented language model pre-training[C]//Proceedings of the 37th International Conference on Machine Learning, Vienna, Jul 12-18, 2020. New York: PMLR, 2020: 3929-3938.
[4] CHOWDHERY A, NARANG S, DEVLIN J, et al. PaLM: scaling language modeling with pathways[J]. arXiv:2204.02311, 2022.
[5] WEI J, TAY Y, BOMMASANI R, et al. Emergent abilities of large language models[J]. arXiv:2206.07682, 2022.
[6] OUYANG L, WU J, JIANG X, et al. Training language models to follow instructions with human feedback[C]//Advances in Neural Information Processing Systems 35, 2022: 27730-27744.
[7] OPENAI. GPT-4 technical report[R]. arXiv:2303.08774, 2023.
[8] WANG Y, KORDI Y, MISHRA S, et al. Self-Instruct: aligning language models with self-generated instructions[J]. arXiv:2212.10560, 2022.
[9] MAYNEZ J, NARAYAN S, BOHNET B, et al. On faithfulness and factuality in abstractive summarization[J]. arXiv:2005.00661, 2020.
[10] TONEVA M, SORDONI A, COMBES R T, et al. An empirical study of example forgetting during deep neural network learning[J]. arXiv:1812.05159, 2018.
[11] DU Z, QIAN Y, LIU X, et al. GLM: general language model pretraining with autoregressive blank infilling[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, May 22-27, 2022. Stroudsburg: ACL, 2022: 320-335.
[12] LIU X, ZHENG Y, DU Z, et al. GPT understands, too[J]. arXiv:2103.10385, 2021.
[13] LIU X, JI K, FU Y, et al. P-tuning v2: prompt tuning can be comparable to fine-tuning universally across scales and tasks[J]. arXiv:2110.07602, 2021.
[14] WU C, ZHANG X, ZHANG Y, et al. PMC-LLaMA: further finetuning LLaMA on medical papers[J]. arXiv:2304.14454, 2023.
[15] SINGHAL K, AZIZI S, TU T, et al. Large language models encode clinical knowledge[J]. arXiv:2212.13138, 2022.
[16] LI Y, LI Z, ZHANG K, et al. ChatDoctor: a medical chat model fine-tuned on LLaMA model using medical domain knowledge[J]. arXiv:2303.14070, 2023.
[17] WANG H, LIU C, XI N, et al. HuaTuo: tuning LLaMA model with Chinese medical knowledge[J]. arXiv:2304.06975, 2023.
[18] XIONG H, WANG S, ZHU Y, et al. DoctorGLM: fine-tuning your Chinese doctor is not a herculean task[J]. arXiv:2304.01097, 2023.
[19] ZENG G, YANG W, JU Z, et al. MedDialog: large-scale medical dialogue datasets[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Nov 16-20, 2020. Stroudsburg: ACL, 2020: 9241-9250.
[20] ZHANG N, JIA Q, YIN K, et al. Conceptualized representation learning for Chinese biomedical text mining[J]. arXiv:2008.10813, 2020.
[21] BASALDELLA M, LIU F, SHAREGHI E, et al. COMETA: a corpus for medical entity linking in the social media[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Nov 16-20, 2020. Stroudsburg: ACL, 2020: 3122-3137.
[22] CUI Y, LIU T, CHE W, et al. A span-extraction dataset for Chinese machine reading comprehension[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China, Nov 3-7, 2019. Stroudsburg: ACL, 2019: 5883-5889.
[23] XIONG C, SU M. IARNN-based semantic-containing double-level embedding Bi-LSTM for question-and-answer matching[J]. Computational Intelligence and Neuroscience, 2019. DOI: 10.1155/2019/6074840.
[24] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv:1810.04805, 2018.