[1] BANERJEE P S, CHAKRABORTY B, TRIPATHI D, et al. A information retrieval based on question and answering and NER for unstructured information without using SQL[J]. Wireless Personal Communications, 2019, 108(3): 1909-1931.
[2] JIA Y, QI Y, SHANG H, et al. A practical approach to constructing a knowledge graph for cybersecurity[J]. Cybersecurity, 2018, 4(1): 53-60.
[3] TO H D, DO P. Extracting triples from Vietnamese text to create knowledge graph[C]//Proceedings of the 2020 12th International Conference on Knowledge and Systems Engineering, Can Tho, Nov 12-14, 2020. Piscataway: IEEE, 2020: 219-223.
[4] LI Y, CAO J, WANG Y. Implementation of intelligent question answering system based on basketball knowledge graph[C]//Proceedings of the 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference, Chengdu, Dec 20-22, 2019. Piscataway: IEEE, 2019: 2601-2604.
[5] COLLOBERT R, WESTON J, BOTTOU L, et al. Natural language processing (almost) from scratch[J]. The Journal of Machine Learning Research, 2011, 12: 2493-2537.
[6] ZHOU P, ZHENG S, XU J, et al. Joint extraction of multiple relations and entities by using a hybrid neural network[M]//Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data. Cham: Springer, 2017: 135-146.
[7] LI X, FENG J, MENG Y, et al. A unified MRC framework for named entity recognition[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 5849-5859.
[8] LI J, SUN A, HAN J, et al. A survey on deep learning for named entity recognition[J]. IEEE Transactions on Knowledge and Data Engineering, 2022, 34(1): 50-70.
[9] MIKOLOV T, CHEN K, CORRADO G, et al. Efficient estimation of word representations in vector space[C]//Proceedings of the 1st International Conference on Learning Representations, Scottsdale, May 2-4, 2013.
[10] PETERS M, NEUMANN M, IYYER M, et al. Deep contextualized word representations[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, Jun 1-6, 2018. Stroudsburg: ACL, 2018: 2227-2237.
[11] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Jun 3-5, 2019. Stroudsburg: ACL, 2019: 4171-4186.
[12] MOORE C R, FARRAG A, ASHKIN E. Using natural language processing to extract abnormal results from cancer screening reports[J]. Journal of Patient Safety, 2017, 13(3): 138-143.
[13] ZHANG Y, LIU M, HU S, et al. Development and multi-center validation of chest X-ray radiography interpretations based on natural language processing[J]. Communications Medicine, 2021, 1(1): 43.
[14] GAN W, SU B, LI Y. An electric system abnormal analysis framework based on natural language processing[C]//Proceedings of the 2020 8th International Conference on Advanced Cloud and Big Data, Taiyuan, Dec 5-6, 2020. Piscataway: IEEE, 2020: 149-152.
[15] CHANG W, XU Z, ZHOU S, et al. Research on detection methods based on Doc2vec abnormal comments[J]. Future Generation Computer Systems, 2018, 86: 656-662.
[16] FELICE M, BRISCOE T. Towards a standard evaluation method for grammatical error detection and correction[C]//Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, May 31-Jun 5, 2015. Stroudsburg: ACL, 2015: 578-587.
[17] SOORAJ S, MANJUSHA K, ANAND KUMAR M, et al. Deep learning based spell checker for Malayalam language[J]. Journal of Intelligent & Fuzzy Systems, 2018, 34(3): 1427-1434.
[18] NGUYEN T T H, JATOWT A, NGUYEN N V, et al. Neural machine translation with BERT for post-OCR error detection and correction[C]//Proceedings of the 2020 ACM/IEEE Joint Conference on Digital Libraries, New York, Aug 1-5, 2020. New York: ACM, 2020: 333-336.
[19] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, Dec 4-7, 2017: 6000-6010.
[20] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.
[21] CHO K, VAN MERRIËNBOER B, GULCEHRE C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, Doha, Oct 26-28, 2014. Stroudsburg: ACL, 2014: 1724-1734.
[22] DAUPHIN Y N, FAN A, AULI M, et al. Language modeling with gated convolutional networks[C]//Proceedings of the 34th International Conference on Machine Learning, Sydney, Aug 7-9, 2017: 933-941.
[23] LAFFERTY J, MCCALLUM A, PEREIRA F C N. Conditional random fields: probabilistic models for segmenting and labeling sequence data[C]//Proceedings of the 18th International Conference on Machine Learning, San Francisco, Jun 28-Jul 1, 2001. San Francisco: Morgan Kaufmann Publishers Inc., 2001: 282-289.
[24] FORNEY G D. The Viterbi algorithm[J]. Proceedings of the IEEE, 1973, 61(3): 268-278.
[25] ZHOU J T, ZHANG H, JIN D, et al. RoSeq: robust sequence labeling[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020, 31(7): 2304-2314.
[26] CHUNG J, GULCEHRE C, CHO K, et al. Empirical evaluation of gated recurrent neural networks on sequence modeling[J]. arXiv:1412.3555, 2014.