[1] 卢经纬, 郭超, 戴星原, 等. 问答ChatGPT之后:超大预训练模型的机遇和挑战[J]. 自动化学报, 2023, 49(4): 705-717.
LU J W, GUO C, DAI X Y, et al. The ChatGPT after: opportunities and challenges of very large scale pre-trained models[J]. Acta Automatica Sinica, 2023, 49(4): 705-717.
[2] 桑基韬, 于剑. 从ChatGPT看AI未来趋势和挑战[J]. 计算机研究与发展, 2023, 60(6): 1191-1201.
SANG J T, YU J. ChatGPT: a glimpse into AI's future[J]. Journal of Computer Research and Development, 2023, 60(6): 1191-1201.
[3] 陈舒梦. 大语言模型在外语教学中的应用研究[J]. 长春师范大学学报, 2023, 42(11): 170-173.
CHEN S M. Research on the application of large language models in foreign language teaching[J]. Journal of Changchun Normal University, 2023, 42(11): 170-173.
[4] 杨涛, 王欣宇, 朱垚, 等. 大语言模型驱动的中医智能诊疗研究思路与方法[J]. 南京中医药大学学报, 2023, 39(10): 967-971.
YANG T, WANG X Y, ZHU Y, et al. Research ideas and methods of intelligent diagnosis and treatment of traditional Chinese medicine driven by large language model[J]. Journal of Nanjing University of Traditional Chinese Medicine, 2023, 39(10): 967-971.
[5] 杨波, 孙晓虎, 党佳怡, 等. 面向医疗问答系统的大语言模型命名实体识别方法[J]. 计算机科学与探索, 2023, 17(10): 2389-2402.
YANG B, SUN X H, DANG J Y, et al. Named entity recognition method of large language model for medical question answering system[J]. Journal of Frontiers of Computer Science and Technology, 2023, 17(10): 2389-2402.
[6] 王昀, 胡珉, 塔娜, 等. 大语言模型及其在政务领域的应用[J]. 清华大学学报(自然科学版), 2024, 64(4): 649-658.
WANG Y, HU M, TA N, et al. Large language models and their application in government affairs[J]. Journal of Tsinghua University(Science and Technology), 2024, 64(4): 649-658.
[7] 徐月梅, 胡玲, 赵佳艺, 等. 大语言模型的技术应用前景与风险挑战[J]. 计算机应用, 2024, 44(6): 1655-1662.
XU Y M, HU L, ZHAO J Y, et al. Technology application prospects and risk challenges of large language model[J]. Journal of Computer Applications, 2024, 44(6): 1655-1662.
[8] 祁鹏年, 廖雨伦, 覃飙. 基于深度学习的中文命名实体识别研究综述[J]. 小型微型计算机系统, 2023, 44(9): 1857-1868.
QI P N, LIAO Y L, QIN B. Survey on deep learning for Chinese named entity recognition[J]. Journal of Chinese Computer Systems, 2023, 44(9): 1857-1868.
[9] XUE M G, YU B, LIU T, et al. Porous lattice transformer encoder for Chinese NER[C]//Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Dec 8-13, 2020: 3831-3841.
[10] CAO P, CHEN Y, LIU K, et al. Adversarial transfer learning for Chinese named entity recognition with self-attention mechanism[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2018: 182-192.
[11] LI J, FEI H, LIU J, et al. Unified named entity recognition as word-word relation classification[C]//Proceedings of the 2022 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2022: 10965-10973.
[12] QI P, QIN B. SSMI: semantic similarity and mutual information maximization based enhancement for Chinese NER[C]//Proceedings of the 2023 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2023: 13474-13482.
[13] FRITZLER A, LOGACHEVA V, KRETOV M. Few-shot classification in named entity recognition task[C]//Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing. New York: ACM, 2019: 993-1000.
[14] WANG P, XU R, LIU T, et al. An enhanced span-based decomposition method for few-shot sequence labeling[EB/OL]. [2024-01-14]. https://arxiv.org/abs/2109.13023.
[15] MA R, ZHOU X, GUI T, et al. Template-free prompt tuning for few-shot NER[EB/OL]. [2024-01-14]. https://arxiv.org/abs/2109.13532.
[16] 刘蓓, 许卓明, 陶皖, 等. 少样本关系抽取研究综述[J]. 计算机工程与应用, 2023, 59(15): 27-37.
LIU B, XU Z M, TAO W, et al. Survey on few-shot relation extraction[J]. Computer Engineering and Applications, 2023, 59(15): 27-37.
[17] NAYAK T, NG H T. Effective modeling of encoder-decoder architecture for joint entity and relation extraction[C]//Proceedings of the 2020 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2020: 8528-8535.
[18] XUE F, SUN A, ZHANG H, et al. GDPNet: refining latent multi-view graph for relation extraction[C]//Proceedings of the 2021 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2021: 14194-14202.
[19] YANG S, ZHANG Y, NIU G, et al. Entity concept-enhanced few-shot relation extraction[EB/OL]. [2024-01-14]. https://arxiv.org/abs/2106.02401.
[20] XIE Y, XU H, LI J, et al. Heterogeneous graph neural networks for noisy few-shot relation classification[J]. Knowledge-Based Systems, 2020, 194: 105548.
[21] ZHANG P, LU W. Better few-shot relation extraction with label prompt dropout[EB/OL]. [2024-01-14]. https://arxiv.org/abs/2210.13733.
[22] HE K, HUANG Y, MAO R, et al. Virtual prompt pre-training for prototype-based few-shot relation extraction[J]. Expert Systems with Applications, 2023, 213: 118927.
[23] 刘涛, 蒋国权, 刘姗姗, 等. 低资源场景事件抽取研究综述[J]. 计算机科学, 2024, 51(2): 217-237.
LIU T, JIANG G Q, LIU S S, et al. Survey of event extraction in low-resource scenarios[J]. Computer Science, 2024, 51(2): 217-237.
[24] 李培峰, 周国栋, 朱巧明. 基于语义的中文事件触发词抽取联合模型[J]. 软件学报, 2016, 27(2): 280-294.
LI P F, ZHOU G D, ZHU Q M. Semantics-based joint model of Chinese event trigger extraction[J]. Journal of Software, 2016, 27(2): 280-294.
[25] 仲伟峰, 杨航, 陈玉博, 等. 基于联合标注和全局推理的篇章级事件抽取[J]. 中文信息学报, 2019, 33(9): 88-95.
ZHONG W F, YANG H, CHEN Y B, et al. Document-level event extraction based on joint labeling and global reasoning[J]. Journal of Chinese Information Processing, 2019, 33(9): 88-95.
[26] 朱培培, 王中卿, 李寿山, 等. 基于篇章信息和Bi-GRU的中文事件检测[J]. 计算机科学, 2020, 47(12): 233-238.
ZHU P P, WANG Z Q, LI S S, et al. Chinese event detection based on document information and Bi-GRU[J]. Computer Science, 2020, 47(12): 233-238.
[27] LIU X, HUANG H, SHI G, et al. Dynamic prefix-tuning for generative template-based event extraction[EB/OL]. [2024-01-14]. https://arxiv.org/abs/2205.06166.
[28] SNELL J, SWERSKY K, ZEMEL R. Prototypical networks for few-shot learning[C]//Advances in Neural Information Processing Systems 30, Long Beach, Dec 4-9, 2017: 4077-4087.