Journal of Frontiers of Computer Science and Technology ›› 2022, Vol. 16 ›› Issue (9): 1954-1968. DOI: 10.3778/j.issn.1673-9418.2112109
• Surveys and Frontiers •
LI Dongmei1,2, LUO Sisi1,2, ZHANG Xiaoping3,+, XU Fu1,2

Received: 2021-12-29
Revised: 2022-04-29
Online: 2022-09-01
Published: 2022-09-15

About author: LI Dongmei, born in 1972, Ph.D., professor. Her research interests include natural language processing and knowledge graphs.
Corresponding author: + E-mail: xiao_ping_zhang@139.com
LI Dongmei, LUO Sisi, ZHANG Xiaoping, XU Fu. Review on Named Entity Recognition[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(9): 1954-1968.
URL: http://fcst.ceaj.org/EN/10.3778/j.issn.1673-9418.2112109
Dataset | Year | Language | Corpus source | Entity types
---|---|---|---|---
MUC-6 / MUC-7 | 1996 / 1997 | English | News | 7 types: person, location, organization, date, time, money, percentage
1998 People's Daily | 1998 | Chinese | People's Daily | 3 types: person, location, organization
CoNLL2002-2003 | 2002—2003 | English, German | News | 4 types: person, location, organization, miscellaneous
GENIA | 2004 | English | Biological and clinical texts | 36 fine-grained entity types
ACE2004-2005 | 2004—2005 | English, Arabic, Chinese | News, blogs | 7 types: person, location, organization, geo-political entity, facility, vehicle, weapon
MSRA | 2006 | Chinese | News | 3 types: person, location, organization
OntoNotes5.0 | 2013 | English, Arabic, Chinese | News, blogs, religious texts | 18 types: person, location, organization, geo-political entity, facility, product, event, etc.
BosonNLP NER | 2014 | Chinese | Web text | 6 types: person, location, organization, time, company, product
NCBI Disease | 2014 | English | PubMed abstracts | 790 fine-grained entity types
BC5CDR | 2015 | English | PubMed abstracts | 3 types: chemical, disease, chemical-disease interaction
CCKS2017 | 2017 | Chinese | Electronic medical records | 5 types: symptoms and signs, examinations and tests, disease diagnosis, treatment, body part
Weibo NER | 2018 | Chinese | Blogs | 4 types: person, location, organization, geo-political entity
Chinese resume | 2018 | Chinese | Resumes | 8 types: person, location, organization, country, educational institution, profession, ethnicity, job title
CCKS2018 | 2018 | Chinese | Electronic medical records | 5 types: anatomical site, symptom description, independent symptom, drug, surgery
CCKS2019-2020 | 2019—2020 | Chinese | Electronic medical records | 6 types: disease and diagnosis, examination, laboratory test, drug, surgery, anatomical site
CLUENER2020 | 2020 | Chinese | News | 10 types: person, location, organization, company, government, etc.
Few-NERD | 2021 | English | Wikipedia | 8 coarse-grained and 66 fine-grained types
Table 1 Summary of NER datasets
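Most of the datasets above distribute their annotations in a CoNLL-style column format with BIO tags. As a minimal illustration (the two-column "token TAG" layout and the ORG/PER tags below are simplifying assumptions, not the exact format of any single dataset), the following sketch parses such a file into sentences and extracts typed entity spans:

```python
def read_bio(lines):
    """Parse CoNLL-style 'token TAG' lines into (tokens, tags) sentences;
    blank lines separate sentences."""
    sentences, tokens, tags = [], [], []
    for line in lines:
        line = line.strip()
        if not line:
            if tokens:
                sentences.append((tokens, tags))
                tokens, tags = [], []
            continue
        tok, tag = line.split()
        tokens.append(tok)
        tags.append(tag)
    if tokens:
        sentences.append((tokens, tags))
    return sentences

def bio_spans(tags):
    """Extract (start, end, type) entity spans from a BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):            # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O" or (
                tag.startswith("I-") and tag[2:] != etype):
            if start is not None:
                spans.append((start, i, etype))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is None:  # tolerate stray I- tags
            start, etype = i, tag[2:]
    return spans
```

The span extractor is also what an evaluation script needs: the F1 scores quoted in the tables of this survey are computed over exactly such (start, end, type) triples.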
Method | Year | Language | Dataset | F1/% | Key techniques
---|---|---|---|---|---
Krupka | 1995 | English | MUC-6 | 96.42 | Rules for person and location names
Shaalan et al. | 2009 | Arabic | ACE plus government-provided and web-collected data | 92.26 | Gazetteer + rules
Zhang Xiaoheng et al. | 1997 | Chinese | Hong Kong Polytechnic University tri-regional modern Chinese corpus | 97.30 | Rules for organization names
Wang Ning et al. | 2002 | Chinese | Internet financial news corpus | 89.13 | Rules for company names
Table 2 Summary of mainstream NER methods based on rules and dictionaries
Model | Principle | Advantages | Disadvantages
---|---|---|---
HMM | Models transition and emission probabilities directly, estimating co-occurrence probabilities | Low time complexity | Accuracy slightly lower than MEM
MEM | Builds a joint model over transition and emission probabilities, estimating conditional probabilities | Higher accuracy than HMM | High time complexity
SVM | Linear classifier that maximizes the margin in feature space | Replaces explicit nonlinear mapping to a high-dimensional space with inner-product kernel functions | Weak on large-scale training samples and multi-class problems
CRF | Estimates globally normalized probabilities, considering the global distribution of the data rather than normalizing only locally | Accounts for the global data distribution and resolves the label bias problem | High time complexity
Table 3 Comparison of NER methods for supervised machine learning
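As a concrete illustration of the HMM row above, the sketch below decodes the most likely tag sequence with the Viterbi algorithm. The two-tag PER/O tag set and all probability tables are toy values invented for illustration; a real tagger would estimate them from a labeled corpus:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden tag sequence for an observation sequence,
    computed in log space to avoid numerical underflow."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s].get(obs[0], 1e-9))
          for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            best_prev = max(states,
                            key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = (V[t - 1][best_prev] + math.log(trans_p[best_prev][s])
                       + math.log(emit_p[s].get(obs[t], 1e-9)))
            back[t][s] = best_prev
    # trace back from the best final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

states = ["PER", "O"]
start_p = {"PER": 0.3, "O": 0.7}
trans_p = {"PER": {"PER": 0.4, "O": 0.6}, "O": {"PER": 0.2, "O": 0.8}}
emit_p = {"PER": {"john": 0.8, "runs": 0.05},
          "O":   {"john": 0.05, "runs": 0.8}}
print(viterbi(["john", "runs"], states, start_p, trans_p, emit_p))  # → ['PER', 'O']
```

A CRF tagger uses the same Viterbi decoding, but replaces the locally normalized transition/emission probabilities with globally normalized feature scores, which is what removes the label bias noted in the table.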
Approach | Implementation | Domain generalization | Advantages | Disadvantages | Improvement directions
---|---|---|---|---|---
Supervised learning | Classification | Weakest | Makes full use of prior knowledge; can target a specific domain | Needs large amounts of labeled data; poor portability | Add features; add labeled corpora
Semi-supervised learning | Classification | Stronger | Needs only a small corpus | Needs extensive analysis and post-processing | Extend patterns; reduce noise
Unsupervised learning | Clustering | Strongest | Needs no labeled corpus; applicable to large-scale unlabeled corpora | Clustering thresholds must be fixed in advance; lower performance | Extend features; improve clustering
Table 4 Comparison of supervised, semi-supervised and unsupervised NER methods
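The semi-supervised row above usually means bootstrapping: start from a small seed lexicon, find the contexts in which seed entities occur, and promote new tokens that share those contexts, repeating for a few rounds. A minimal sketch, where the seed entity, the previous-word context heuristic, and the fixed round count are all simplifying assumptions (real systems score candidate patterns and filter noise):

```python
def bootstrap(sentences, seeds, rounds=2):
    """Iteratively grow an entity lexicon: any token appearing in a context
    (here, the previous word) already seen around a known entity is promoted."""
    lexicon = set(seeds)
    contexts = set()
    for _ in range(rounds):
        # 1) collect contexts of currently known entities
        for sent in sentences:
            for i, tok in enumerate(sent):
                if tok in lexicon and i > 0:
                    contexts.add(sent[i - 1])
        # 2) promote tokens that occur in a known context
        for sent in sentences:
            for i, tok in enumerate(sent):
                if i > 0 and sent[i - 1] in contexts:
                    lexicon.add(tok)
    return lexicon

sents = [["president", "Smith", "spoke"],
         ["president", "Jones", "left"]]
print(sorted(bootstrap(sents, {"Smith"})))  # → ['Jones', 'Smith']
```

The "extensive analysis and post-processing" disadvantage in the table corresponds to the pattern-filtering step omitted here: without it, a few bad contexts quickly flood the lexicon with noise (semantic drift).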
Method | Year | Language | Dataset | F1/% | Key techniques
---|---|---|---|---|---
Borthwick et al. | 1998 | English | MUC-7 | 92.05 | MEM, knowledge base
Bikel et al. | 1999 | English | MUC-6 | 94.92 | HMM
Zhou et al. | 2002 | English | MUC-6, MUC-7 | 96.60, 94.10 | HMM, feature augmentation
Isozaki et al. | 2002 | English | General dataset | 90.03 | SVM
Bender et al. | 2003 | English | CoNLL2003 | 89.26 | MEM
McCallum et al. | 2003 | English | CoNLL2003 | 84.04 | Single-layer CRF
Zhang Huaping et al. | 2004 | Chinese | People's Daily | 95.40 (person names) | HMM, role tagging
Yu Hongkui et al. | 2006 | Chinese | People's Daily | 91.20 (average) | Cascaded HMM
Krishnan et al. | 2006 | English | CoNLL2003 | 87.24 | Two-stage CRF
Nadeau et al. | 2006 | English | MUC-7 | 69.33 | Unsupervised learning, gazetteer
Zhang Yuejie et al. | 2008 | Chinese | SIGHAN 2008 | 87.92 | MEM, rules
Li Lishuang et al. | 2007 | Chinese | People's Daily | 90.12 | SVM
Chen Xiao et al. | 2008 | Chinese | People's Daily | 84.18 | SVM
Feng Yuanyong et al. | 2008 | Chinese | 863 simplified-Chinese NER evaluation set | 88.76 | Single-layer CRF, tail-character features
Teixeira et al. | 2011 | English | HAREM dataset | 68.00 | Bootstrapping, CRF
Huang Shilin et al. | 2013 | Chinese | News and web documents | 78.20 | Bootstrapping, rules and dictionaries, CRF
Yan Yang et al. | 2014 | Chinese | 65 clinical electronic medical records | 97.02 | Cascaded CRF
Thenmalar et al. | 2015 | English | CoNLL2003 | 82.57 | Bootstrapping
Long et al. | 2014 | Chinese | Physician-related texts | 94.75 (average) | Bootstrapping
Wang Lulu et al. | 2018 | Uyghur | Dataset from the Xinjiang Multilingual Information Technology Laboratory | 87.43 | Semi-supervised learning, CRF
Han et al. | 2016 | English | BioCreative dataset | 81.40 | Clustering, active learning
Table 5 Summary of mainstream NER models for statistical machine learning
Approach | Representative models | Advantages | Disadvantages
---|---|---|---
Supervised deep learning | CNN | Parallel data processing; handles high-dimensional data with ease | Cannot capture sequence information well
 | RNN | Overcomes CNN's inability to remember context | Suffers from vanishing and exploding gradients
 | LSTM | Introduces input, output and forget gates, addressing RNN's long-term dependency problem and learning long-range dependencies effectively | Gradient problems not fully solved
 | GNN | Processes graph-structured data, mining relations between entities more efficiently | Poor model flexibility and extensibility
Distantly supervised deep learning | — | Partially removes the need for large-scale labeled data | Produces incomplete and noisy annotations
Transformer-based methods | BERT family | Pre-trains a bidirectional Transformer with a masked language model to produce deep bidirectional language representations | Needs large amounts of GPU resources and training data
Prompt learning | — | Reformulates the language model's input, narrowing the gap between pre-training and downstream tasks; performs well in low-resource settings | High computational complexity; prompts must be designed manually
Table 6 Comparison of NER methods for deep learning
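The prompt-learning row above (e.g., the multi-template approach of Cui et al. in Table 7) casts NER as scoring filled-in templates such as "<span> is a person entity" with a pre-trained generative model. The template wording, span-length limit, and type names below are illustrative assumptions; a real system would rank the rendered prompts with BART or a similar model rather than merely enumerate them:

```python
def build_templates(sentence_tokens, max_span_len=2,
                    types=("person", "location", "organization")):
    """Enumerate candidate spans and render one NER prompt per (span, type);
    a negative template lets the model reject a span entirely."""
    prompts = []
    n = len(sentence_tokens)
    for i in range(n):
        for j in range(i + 1, min(i + max_span_len, n) + 1):
            span = " ".join(sentence_tokens[i:j])
            for t in types:
                prompts.append((span, t, f"{span} is a {t} entity"))
            prompts.append((span, None, f"{span} is not a named entity"))
    return prompts

prompts = build_templates(["Obama", "visited", "Paris"])
print(len(prompts))  # → 20  (5 candidate spans × (3 types + 1 negative))
```

This enumeration is also where the "high computational complexity" disadvantage in the table comes from: the number of prompts to score grows with sentence length, span length, and type inventory.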
Method | Year | Language | Dataset | F1/% | Key techniques
---|---|---|---|---|---
Collobert et al. | 2011 | English | RCV1 | 89.59 | CNN-CRF, gazetteer
Huang et al. | 2015 | English | CoNLL2003 | 90.10 | Bi-LSTM-CRF
Strubell et al. | 2017 | English | CoNLL2003, OntoNotes5.0 | 90.65, 86.84 | ID-CNNs, RUG
Cetoli et al. | 2018 | English | OntoNotes5.0 | 83.60 | Bi-LSTM-GCN-CRF
Gregoric et al. | 2018 | English | CoNLL2003 | 91.48 | Multiple independent bidirectional LSTM units, softmax
Zhang et al. | 2018 | Chinese | OntoNotes4.0, MSRA, Weibo NER, Chinese resume | 73.88, 93.18, 58.79, 94.46 | Lattice-LSTM
Yang et al. | 2020 | Chinese | E-commerce-NER, NEWS NER | 61.45, 79.22 | Distant supervision, Bi-LSTM-MLPCRF
Gui et al. | 2019 | Chinese | OntoNotes4.0, MSRA, Weibo NER, Chinese resume | 74.45, 93.71, 59.92, 95.11 | Lexicon rethinking, CNN
Liu et al. | 2019 | Chinese | OntoNotes4.0, MSRA, Weibo NER, Chinese resume | 74.43, 93.74, 59.84, 95.21 | WC-LSTM
Ding et al. | 2019 | Chinese | OntoNotes4.0, MSRA, Weibo NER, E-commerce-NER | 76.00, 94.40, 59.50, 75.20 | GGNN-LSTM-CRF
Gui et al. | 2019 | Chinese | OntoNotes4.0, MSRA, Weibo NER, Chinese resume | 74.89, 93.46, 60.21, 95.37 | Lexicon-based GCN
Wu et al. | 2019 | Chinese | Bakeoff-3, Bakeoff-4 | 89.42, 90.18 | CNN-LSTM-CRF, joint training
Peng et al. | 2019 | English | CoNLL2003, CoNLL2002, Twitter | 82.94, 75.85, 59.36 | Positive-unlabeled learning, distant supervision
Tang et al. | 2020 | Chinese | OntoNotes4.0, MSRA, Weibo NER, Chinese resume | 75.87, 94.40, 63.63, 95.53 | Word-character representation, GCN
Ma et al. | 2020 | Chinese | OntoNotes4.0, MSRA, Weibo NER, Chinese resume | 82.81, 95.42, 70.50, 96.11 | SoftLexicon, Bi-LSTM
Li Ni et al. | 2020 | Chinese | MSRA | 94.42 | BERT-IDCNN-CRF
Li et al. | 2020 | Chinese | CCKS2017, CCKS2018 | 91.60, 89.56 | BERT-Bi-LSTM-CRF
Kong et al. | 2021 | Chinese | CCKS2017, CCKS2019 | 90.49, 85.13 | Multi-level CNN, attention mechanism
Li et al. | 2021 | English | CoNLL2003 | 92.53 | Modularized interaction network, RNN-BiLSTM-CRF
Xu et al. | 2021 | English | ACE2004, ACE2005, GENIA | 86.30, 85.40, 79.60 | Multi-head self-attention, BERT
Naseem et al. | 2021 | English | NCBI Disease, BC5CDR | 97.18, 97.78 | BioALBERT
Yang et al. | 2021 | English | ACE2004, ACE2005, GENIA | 87.88, 87.04, 79.08 | Hierarchical Transformer
Wu et al. | 2021 | Chinese | CCKS2017, CCKS2019 | 93.26, 82.87 | RoBERTa, radical-level features
Cui et al. | 2021 | English | CoNLL2003 | 92.55 | BART, multi-template
Chen et al. | 2021 | English | CoNLL2003 | 93.90 | Prompt-guided attention
Ma et al. | 2021 | English | CoNLL2003, OntoNotes5.0 | 74.80, 72.99 | Template-free prompt tuning
Table 7 Summary of mainstream NER models for deep learning
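The F1 values reported throughout Tables 2-7 are entity-level scores: a prediction counts as correct only when both the span boundaries and the entity type match the gold annotation exactly. A small sketch of that metric over (start, end, type) spans:

```python
def entity_f1(gold_spans, pred_spans):
    """Micro precision/recall/F1 over sets of (start, end, type) spans,
    requiring an exact boundary and type match."""
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)                      # exact-match true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [(0, 2, "PER"), (5, 6, "LOC")]
pred = [(0, 2, "PER"), (5, 6, "ORG")]          # second span has wrong type
print(entity_f1(gold, pred))  # → (0.5, 0.5, 0.5)
```

Because a boundary or type mismatch yields zero credit, entity-level F1 is stricter than token-level accuracy, which is worth remembering when comparing numbers across the tables.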
[1] GRISHMAN R, SUNDHEIM B. Message understanding conference-6: a brief history[C]// Proceedings of the 16th International Conference on Computational Linguistics, Copenhagen, Aug 5-9, 1996. Stroudsburg: ACL, 1996: 466-471.
[2] MERCHANT R, OKUROWSKI M E, CHINCHOR N. The multilingual entity task (MET) overview[C]// Proceedings of the Tipster Text Program Phase II, Vienna, May 6-8, 1996. Stroudsburg: ACL, 1996: 445-447.
[3] SANG E F T K, DE MEULDER F. Introduction to the CoNLL-2003 shared task: language-independent named entity recognition[C]// Proceedings of the 7th Conference on Natural Language Learning, Held in Cooperation with HLT-NAACL 2003, Edmonton, May 31-Jun 1, 2003. Stroudsburg: ACL, 2003: 142-147.
[4] DODDINGTON G R, MITCHELL A, PRZYBOCKI M A, et al. The automatic content extraction (ACE) program - tasks, data, and evaluation[C]// Proceedings of the 4th International Conference on Language Resources and Evaluation, Lisbon, May 26-28, 2004. Stroudsburg: ACL, 2004: 837-840.
[5] SUN Z, WANG H L. Overview on the advance of the research on named entity recognition[J]. New Technology of Library and Information Service, 2010(6): 42-47. (in Chinese)
[6] LI J, SUN A X, HAN J L, et al. A survey on deep learning for named entity recognition[J]. IEEE Transactions on Knowledge and Data Engineering, 2022, 34(1): 50-70.
[7] LI M, LI Y L, LIN M. Review of transfer learning for named entity recognition[J]. Journal of Frontiers of Computer Science and Technology, 2021, 15(2): 206-218. (in Chinese)
[8] ZHAO S, LUO R, CAI Z P. Survey of Chinese named entity recognition[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(2): 296-304. (in Chinese)
[9] RAU L F. Extracting company names from text[C]// Proceedings of the 7th IEEE Conference on Artificial Intelligence Application, Miami, Feb 24, 1991. Washington: IEEE Computer Society, 1991: 29-32.
[10] KRUPKA G R. SRA: description of the SRA system as used for MUC-6[C]// Proceedings of the 6th Conference on Message Understanding, Columbia, Nov 6-8, 1995. Stroudsburg: ACL, 1995: 221-235.
[11] SHAALAN K, RAZA H. NERA: named entity recognition for Arabic[J]. Journal of the American Society for Information Science and Technology, 2009, 60(8): 1652-1663.
[12] ZHANG X H, WANG L L. Identification and analysis of Chinese organization and institution names[J]. Journal of Chinese Information Processing, 1997(4): 22-33. (in Chinese)
[13] WANG N, GE R F, YUAN C F, et al. Company name identification in Chinese financial domain[J]. Journal of Chinese Information Processing, 2002, 16(2): 1-6. (in Chinese)
[14] BIKEL D M, SCHWARTZ R, WEISCHEDEL R M. An algorithm that learns what's in a name[J]. Machine Learning, 1999, 34: 211-231.
[15] ZHOU G D, SU J. Named entity recognition using an HMM-based chunk tagger[C]// Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, Philadelphia, Jul 6-12, 2002. Stroudsburg: ACL, 2002: 473-480.
[16] ZHANG H P, LIU Q. Automatic recognition of Chinese personal name based on role tagging[J]. Chinese Journal of Computers, 2004, 27(1): 85-91. (in Chinese)
[17] YU H K, ZHANG H P, LIU Q, et al. Chinese named entity identification using cascaded hidden Markov model[J]. Journal on Communications, 2006, 27(2): 87-94. (in Chinese)
[18] BORTHWICK A, STERLING J, AGICHTEIN E, et al. NYU: description of the MENE named entity system as used in MUC-7[C]// Proceedings of the 7th Message Understanding Conference, Virginia, Apr 29-May 1, 1998. Stroudsburg: ACL, 1998: 1-7.
[19] BENDER O, OCH F J, NEY H. Maximum entropy models for named entity recognition[C]// Proceedings of the 7th Conference on Natural Language Learning, Edmonton, May 31-Jun 1, 2003. Stroudsburg: ACL, 2003: 148-151.
[20] ZHOU Y Q, GUO Y K, HUANG X J, et al. Chinese and English base NP recognition based on a maximum entropy model[J]. Journal of Computer Research and Development, 2003, 40(3): 440-446. (in Chinese)
[21] ZHANG Y J, XU Z T, XUE X Y. Fusion of multiple features for Chinese named entity recognition based on maximum entropy model[J]. Journal of Computer Research and Development, 2008, 45(6): 1004-1010. (in Chinese)
[22] ISOZAKI H, KAZAWA H. Efficient support vector classifiers for named entity recognition[C]// Proceedings of the 19th International Conference on Computational Linguistics, Taipei, China, Aug 24-Sep 1, 2002. Stroudsburg: ACL, 2002: 1-7.
[23] TAKEUCHI K, COLLIER N. Use of support vector machines in extended named entity recognition[C]// Proceedings of the 6th Conference on Natural Language Learning, Taipei, China, Aug 24-Sep 1, 2002. Stroudsburg: ACL, 2002: 184-190.
[24] LI L S, HUANG D G, CHEN C R, et al. Identification of location names from Chinese texts based on support vector machine[J]. Journal of Dalian University of Technology, 2007, 47(3): 433-438. (in Chinese)
[25] CHEN X, LIU H, CHEN Y Q. Chinese organization names recognition based on SVM[J]. Application Research of Computers, 2008, 25(2): 362-364. (in Chinese)
[26] MCCALLUM A, LI W. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons[C]// Proceedings of the 7th Conference on Natural Language Learning, Edmonton, May 31-Jun 1, 2003. Stroudsburg: ACL, 2003: 188-191.
[27] KRISHNAN V, MANNING C D. An effective two-stage model for exploiting non-local dependencies in named entity recognition[C]// Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Sydney, Jul 17-21, 2006. Stroudsburg: ACL, 2006: 1121-1128.
[28] FENG Y Y, SUN L, ZHANG D K, et al. Study on the Chinese named entity recognition using small scale character tail hints[J]. Acta Electronica Sinica, 2008, 36(9): 1833-1838. (in Chinese)
[29] YAN Y, WEN D W, WANG Y J, et al. Named entity recognition in Chinese medical records based on cascaded conditional random field[J]. Journal of Jilin University (Engineering and Technology Edition), 2014, 44(6): 1843-1848. (in Chinese)
[30] TEIXEIRA J, SARMENTO L, OLIVEIRA E C. A bootstrapping approach for training a NER with conditional random fields[C]// LNCS 7026: Proceedings of the 15th Portuguese Conference on Artificial Intelligence, Lisbon, Oct 10-13, 2011. Berlin, Heidelberg: Springer, 2011: 664-678.
[31] THENMALAR S, BALAJI J, GEETHA T. Semi-supervised bootstrapping approach for named entity recognition[J]. arXiv:1511.06833, 2015.
[32] HUANG S L, ZHENG X L, CHEN D R. A semi-supervised learning method for product named entity recognition[J]. Journal of Beijing University of Posts and Telecommunications, 2013, 36(2): 20-23. (in Chinese)
[33] LONG L Y, YAN J Z, FANG L Y, et al. The identification of Chinese named entity in the field of medicine based on bootstrapping method[C]// Proceedings of the 2014 International Conference on Multisensor Fusion and Information Integration for Intelligent Systems, Beijing, Sep 28-29, 2014. Piscataway: IEEE, 2014: 1-6.
[34] WANG L L, AISHAN W, MAIHEMUTI M, et al. A semi-supervised approach to Uyghur named entity recognition based on CRF[J]. Journal of Chinese Information Processing, 2018, 32(11): 16-26. (in Chinese)
[35] ETZIONI O, CAFARELLA M, DOWNEY D, et al. Unsupervised named-entity extraction from the web: an experimental study[J]. Artificial Intelligence, 2005, 165(1): 91-134.
[36] NADEAU D, TURNEY P D, MATWIN S. Unsupervised named-entity recognition: generating gazetteers and resolving ambiguity[C]// LNCS 4013: Proceedings of the 19th Conference of the Canadian Society for Computational Studies of Intelligence, Quebec, Jun 7-9, 2006. Berlin, Heidelberg: Springer, 2006: 266-277.
[37] HAN X, KWOH C K, KIM J J. Clustering based active learning for biomedical named entity recognition[C]// Proceedings of the 2016 International Joint Conference on Neural Networks, Vancouver, Jul 24-29, 2016. Piscataway: IEEE, 2016: 1253-1260.
[38] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Jun 2-7, 2019. Stroudsburg: ACL, 2019: 4171-4186.
[39] COLLOBERT R, WESTON J, BOTTOU L, et al. Natural language processing (almost) from scratch[J]. Journal of Machine Learning Research, 2011, 12: 2493-2537.
[40] YAO L, LIU H, LIU Y, et al. Biomedical named entity recognition based on deep neutral network[J]. International Journal of Hybrid Information Technology, 2015, 8(8): 279-288.
[41] STRUBELL E, VERGA P, BELANGER D, et al. Fast and accurate entity recognition with iterated dilated convolutions[C]// Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Sep 9-11, 2017. Stroudsburg: ACL, 2017: 2670-2680.
[42] WU Y H, JIANG M, LEI J B, et al. Named entity recognition in Chinese clinical text using deep neural network[C]// Proceedings of the 15th World Congress on Health and Biomedical Informatics, São Paulo, Aug 19-23, 2015: 624-628.
[43] WU F Z, LIU J X, WU C H, et al. Neural Chinese named entity recognition via CNN-LSTM-CRF and joint training with word segmentation[C]// Proceedings of the World Wide Web Conference, San Francisco, May 13-17, 2019. New York: ACM, 2019: 3342-3348.
[44] KONG J, ZHANG L X, JIANG M, et al. Incorporating multi-level CNN and attention mechanism for Chinese clinical named entity recognition[J]. Journal of Biomedical Informatics, 2021, 116: 103737-103743.
[45] GUI T, MA R T, ZHANG Q, et al. CNN-based Chinese NER with lexicon rethinking[C]// Proceedings of the 28th International Joint Conference on Artificial Intelligence, Macao, China, Aug 10-16, 2019: 4982-4988.
[46] HUANG Z H, XU W, YU K. Bidirectional LSTM-CRF models for sequence tagging[J]. arXiv:1508.01991, 2015.
[47] GREGORIC A Z, BACHRACH Y, COOPE S. Named entity recognition with parallel recurrent neural networks[C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Jul 15-20, 2018. Stroudsburg: ACL, 2018: 69-74.
[48] LI F, WANG Z, HUI S C, et al. Modularized interaction network for named entity recognition[C]// Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Aug 1-6, 2021. Stroudsburg: ACL, 2021: 200-209.
[49] XU Y X, HUANG H Y, FENG C, et al. A supervised multi-head self-attention network for nested named entity recognition[C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence, the 33rd Conference on Innovative Applications of Artificial Intelligence, the 11th Symposium on Educational Advances in Artificial Intelligence, Feb 2-9, 2021. Menlo Park: AAAI, 2021: 14185-14193.
[50] ZHANG Y, YANG J. Chinese NER using lattice LSTM[C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Jul 15-20, 2018. Stroudsburg: ACL, 2018: 1554-1564.
[51] LIU W, XU T G, XU Q H, et al. An encoding strategy based word-character LSTM for Chinese NER[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Jun 2-7, 2019. Stroudsburg: ACL, 2019: 2379-2389.
[52] MA R T, PENG M L, ZHANG Q, et al. Simplify the usage of lexicon in Chinese NER[C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Jul 5-10, 2020. Stroudsburg: ACL, 2020: 5951-5960.
[53] CETOLI A, BRAGAGLIA S, O'HARNEY A D, et al. Graph convolutional networks for named entity recognition[C]// Proceedings of the 16th International Workshop on Treebanks and Linguistic Theories, Prague, Jan 23-24, 2018. Stroudsburg: ACL, 2018: 37-45.
[54] DING R X, XIE P J, ZHANG X Y, et al. A neural multi-digraph model for Chinese NER with gazetteers[C]// Proceedings of the 57th Conference of the Association for Computational Linguistics, Florence, Jul 28-Aug 2, 2019. Stroudsburg: ACL, 2019: 1462-1467.
[55] GUI T, ZOU Y C, ZHANG Q, et al. A lexicon-based graph neural network for Chinese NER[C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China, Nov 3-7, 2019. Stroudsburg: ACL, 2019: 1040-1050.
[56] TANG Z, WAN B, YANG L. Word-character graph convolution network for Chinese named entity recognition[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2020, 28: 1520-1532.
[57] PENG M L, XING X Y, ZHANG Q, et al. Distantly supervised named entity recognition using positive-unlabeled learning[C]// Proceedings of the 57th Conference of the Association for Computational Linguistics, Florence, Jul 28-Aug 2, 2019. Stroudsburg: ACL, 2019: 2409-2419.
[58] YANG Y S, CHEN W L, LI Z H, et al. Distantly supervised NER with partial annotation learning and reinforcement learning[C]// Proceedings of the 27th International Conference on Computational Linguistics, Santa Fe, Aug 20-26, 2018. Stroudsburg: ACL, 2018: 2159-2169.
[59] ZHANG H L, LIU L T, CHENG S Z, et al. Distant supervision for Chinese temporal tagging[C]// Proceedings of the 3rd China Conference on Knowledge Graph and Semantic Computing, Tianjin, Aug 14-17, 2018. Cham: Springer, 2018: 14-27.
[60] BIAN L J. Research on product entity recognition and domain migration based on deep learning and remote supervision[D]. Shanghai: Shanghai University of Finance and Economics, 2020. (in Chinese)
[61] SOUZA F, NOGUEIRA R, LOTUFO R. Portuguese named entity recognition using BERT-CRF[J]. arXiv:1909.10649, 2019.
[62] NASEEM U, KHUSHI M, REDDY V, et al. BioALBERT: a simple and effective pre-trained language model for biomedical named entity recognition[C]// Proceedings of the 2021 International Joint Conference on Neural Networks, Shenzhen, Jul 18-22, 2021. Piscataway: IEEE, 2021: 1-7.
[63] YANG Z W, MA J, CHEN H C, et al. HiTRANS: a hierarchical transformer network for nested named entity recognition[C]// Findings of the Association for Computational Linguistics: EMNLP 2021, Punta Cana, Nov 16-20, 2021. Stroudsburg: ACL, 2021: 124-132.
[64] LI N, GUAN H M, YANG P, et al. BERT-IDCNN-CRF for named entity recognition in Chinese[J]. Journal of Shandong University (Natural Science), 2020, 55(1): 102-109. (in Chinese)
[65] LI X Y, ZHANG H, ZHOU X H. Chinese clinical named entity recognition with variant neural structures based on BERT methods[J]. Journal of Biomedical Informatics, 2020, 107: 103422-103428.
[66] WU Y, HUANG J, XU C, et al. Research on named entity recognition of electronic medical records based on RoBERTa and radical-level feature[J]. Wireless Communications and Mobile Computing, 2021: 2489754.
[67] YAO L G, HUANG H S, WANG K W, et al. Fine-grained mechanical Chinese named entity recognition based on ALBERT-AttBiLSTM-CRF and transfer learning[J]. Symmetry, 2020, 12(12): 1986-2006.
[68] BROWN T, MANN B, RYDER N, et al. Language models are few-shot learners[C]// Advances in Neural Information Processing Systems 33, Dec 6-12, 2020: 1877-1901.
[69] CUI L Y, WU Y, LIU J, et al. Template-based named entity recognition using BART[C]// Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Bangkok, Aug 1-6, 2021. Stroudsburg: ACL, 2021: 1835-1845.
[70] CHEN X, ZHANG N, LI L, et al. LightNER: a lightweight generative framework with prompt-guided attention for low-resource NER[J]. arXiv:2109.00720, 2021.
[71] MA R, ZHOU X, GUI T, et al. Template-free prompt tuning for few-shot NER[J]. arXiv:2109.13532, 2021.
[72] LIU A T, XIAO W, ZHU H, et al. QaNER: prompting question answering models for few-shot named entity recognition[J]. arXiv:2203.01543, 2022.
[73] JIN Y L, XIE J F, WU D J. Chinese nested named entity recognition based on hierarchical tagging[J]. Journal of Shanghai University (Natural Science Edition), 2021, 27(3): 1-9. (in Chinese)