
Journal of Frontiers of Computer Science and Technology ›› 2025, Vol. 19 ›› Issue (7): 1681-1698. DOI: 10.3778/j.issn.1673-9418.2409086
• Frontiers·Surveys •
Survey of Entity Relation Extraction Based on Large Language Models
XIA Jianglan, LI Yanling, GE Fengpei
Online: 2025-07-01
Published: 2025-06-30
XIA Jianglan, LI Yanling, GE Fengpei. Survey of Entity Relation Extraction Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(7): 1681-1698.
URL: http://fcst.ceaj.org/EN/10.3778/j.issn.1673-9418.2409086
Related Articles
[1] TIAN Chongteng, LIU Jing, WANG Xiaoyan, LI Ming. Review of Application of Large Language Models GPT in Medical Text[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(8): 2043-2056.
[2] WANG Jintao, MENG Qixiang, GAO Zhilin, BU Fanliang. Research on Case Information Element Extraction Method Based on Instruction Fine-Tuning of Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(8): 2161-2173.
[3] XU Delong, LIN Min, WANG Yurong, ZHANG Shujun. Survey of NLP Data Augmentation Methods Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(6): 1395-1413.
[4] ZHANG Xin, SUN Jingchao. Review of False Information Detection Frameworks Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(6): 1414-1436.
[5] LI Juhao, SHI Lei, DING Meng, LEI Yongsheng, ZHAO Dongyue, CHEN Long. Social Media Text Stance Detection Based on Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(5): 1302-1312.
[6] CHANG Baofa, CHE Chao, LIANG Yan. Research on Recommendation Model Based on Multi-round Dialogue of Large Language Model[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(2): 385-395.
[7] XU Lei, HU Yahao, CHEN Man, CHEN Jun, PAN Zhisong. Hate Speech Detection Method Integrating Prefix Tuning and Prompt Learning[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(1): 97-106.
[8] LI Boxin. Method of Retrieval-Augmented Large Language Models with Stable Outputs for Private Question-Answering Systems[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(1): 132-140.
[9] WANG Yong, QIN Jiajun, HUANG Yourui, DENG Jiangzhou. Design of University Research Management Question Answering System Integrating Knowledge Graph and Large Language Models[J]. Journal of Frontiers of Computer Science and Technology, 2025, 19(1): 107-117.
[10] LYU Haixiao, LI Yihong, ZHOU Xiaoyi. Few-Shot Named Entity Recognition with Prefix-Tuning[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(8): 2180-2189.
[11] CHEN Zhongyong, HUANG Yongsheng, ZHANG Min, JIANG Ming. Study on Entity Extraction Method for Pharmaceutical Instructions Based on Pretrained Models[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(7): 1911-1922.
[12] QIU Yunfei, XING Haoran, YU Zhilong, ZHANG Wenwen. Nested Named Entity Recognition Combining Multi-modal and Multi-span Features[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(6): 1613-1626.
[13] ZHAO Honglei, TANG Huanling, ZHANG Yu, SUN Xueyuan, LU Mingyu. Named Entity Recognition Model Based on k-best Viterbi Decoupling Knowledge Distillation[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(3): 780-794.
[14] TANG Ruixue, QIN Yongbin, CHEN Yanping. Named Entity Recognition Based on Multi-scale Attention[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(2): 506-515.
[15] CHANG Yu, WANG Gang, ZHU Peng, KONG Lingfei, HE Jingheng. Survey of Research on Construction Method of Industry Internet Security Knowledge Graph[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(2): 279-300.