[1] OLTEANU A, CASTILLO C, DIAZ F, et al. CrisisLex: a lexicon for collecting and filtering microblogged communications in crises[C]//Proceedings of the 2014 International AAAI Conference on Web and Social Media. Menlo Park: AAAI, 2014: 376-385.
[2] IMRAN M, MITRA P, CASTILLO C. Twitter as a lifeline: human-annotated Twitter corpora for NLP of crisis-related messages[EB/OL]. [2023-09-21]. https://arxiv.org/abs/1605.05894.
[3] IMRAN M, CASTILLO C, DIAZ F, et al. Processing social media messages in mass emergency: survey summary[C]// Proceedings of the Companion of the Web Conference 2018, Lyon, Apr 23-27, 2018. New York: ACM, 2018: 507-511.
[4] 李泽荃, 张展, 张瑞新, 等. CrisisNLP-C:中文灾害数据集[J]. 华北科技学院学报, 2019, 16(5): 5.
LI Z Q, ZHANG Z, ZHANG R X, et al. CrisisNLP-C: a Chinese disaster dataset[J]. Journal of North China Institute of Science and Technology, 2019, 16(5): 5.
[5] 吴雪华, 毛进, 陈思菁, 等. 突发事件应急行动支撑信息的自动识别与分类研究[J]. 情报学报, 2021, 40(8): 817-830.
WU X H, MAO J, CHEN S J, et al. Research on automatic recognition and classification of emergency action support information for incidents[J]. Journal of the China Society for Scientific and Technical Information, 2021, 40(8): 817-830.
[6] 林佳瑞, 程志刚, 韩宇, 等. 基于BERT预训练模型的灾害推文分类方法[J]. 图学学报, 2022, 43(3): 530-536.
LIN J R, CHENG Z G, HAN Y, et al. Disaster-related tweet classification method based on BERT pre-training model[J]. Journal of Graphics, 2022, 43(3): 530-536.
[7] MACDERMOTT Á, MOTYLINSKI M, IQBAL F, et al. Using deep learning to detect social media ‘trolls’[J]. Forensic Science International: Digital Investigation, 2022, 43: 301446.
[8] AL-ADHAILEH M H, ALDHYANI T H H, ALGHAMDI A D. Online troll reviewer detection using deep learning techniques[J]. Applied Bionics and Biomechanics, 2022(1): 4637594.
[9] VUJICIC S S, MLADENOVIC M. An approach to automatic classification of hate speech in sports domain on social media[J]. Journal of Big Data, 2023, 10(1): 1-16.
[10] 赵一鸣, 潘沛, 毛进. 基于任务知识融合与文本数据增强的医学信息查询意图强度识别研究[J]. 数据分析与知识发现, 2023, 7(2): 38-47.
ZHAO Y M, PAN P, MAO J. Research on intent strength recognition of medical information query based on task knowledge fusion and text data augmentation[J]. Data Analysis and Knowledge Discovery, 2023, 7(2): 38-47.
[11] 施国良, 陈宇奇. 文本增强与预训练语言模型在网络问政留言分类中的集成对比研究[J]. 图书情报工作, 2021, 65(13): 12.
SHI G L, CHEN Y Q. Integrated comparative study of text augmentation and pre-trained language models in the classification of online government inquiry messages[J]. Library and Information Service, 2021, 65(13): 12.
[12] CAO L, LIU X, SHEN H. Adaptable focal loss for imbalanced text classification[C]//Proceedings of the 2021 International Conference on Parallel and Distributed Computing: Applications and Technologies. Cham: Springer, 2021: 466-475.
[13] SALTON G, WONG A, YANG C S. A vector space model for automatic indexing[J]. Communications of the ACM, 1975, 18(11): 613-620.
[14] 刘硕, 王庚润, 李英乐, 等. 中文短文本分类技术研究综述[J]. 信息工程大学学报, 2021, 22(3): 304-312.
LIU S, WANG G R, LI Y L, et al. A review of research on Chinese short text classification technology[J]. Journal of Information Engineering University, 2021, 22(3): 304-312.
[15] 殷亚博, 杨文忠, 杨慧婷, 等. 基于搜索改进的KNN文本分类算法[J]. 计算机工程与设计, 2018, 39(9): 2923-2928.
YIN Y B, YANG W Z, YANG H T, et al. KNN text classification algorithm based on search improvement[J]. Computer Engineering and Design, 2018, 39(9): 2923-2928.
[16] 赵博文. 基于朴素贝叶斯方法的文本分类算法研究[D]. 湘潭: 湘潭大学, 2020.
ZHAO B W. Research on text classification algorithm based on naive Bayesian method[D]. Xiangtan: Xiangtan University, 2020.
[17] 邵旻晖. 决策树典型算法研究综述[J]. 电脑知识与技术, 2018, 14(8): 175-177.
SHAO M H. Review of typical algorithms for decision trees[J]. Computer Knowledge and Technology, 2018, 14(8): 175-177.
[18] 张馨月, 宋绍成. 突发事件中基于支持向量机算法的文本分类研究[J]. 信息技术与信息化, 2022(8): 13-16.
ZHANG X Y, SONG S C. Research on text classification based on support vector machine algorithm in emergencies[J]. Information Technology and Informatization, 2022(8): 13-16.
[19] 淦亚婷, 安建业, 徐雪. 基于深度学习的短文本分类方法研究综述[J]. 计算机工程与应用, 2023, 59(4): 43-53.
GAN Y T, AN J Y, XU X. Survey of short text classification methods based on deep learning[J]. Computer Engineering and Applications, 2023, 59(4): 43-53.
[20] KIM Y. Convolutional neural networks for sentence classification[EB/OL]. [2023-09-21]. https://arxiv.org/abs/1408.5882.
[21] CHO K, VAN MERRIËNBOER B, GULCEHRE C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[EB/OL]. [2023-09-21]. https://arxiv.org/abs/1406.1078.
[22] YAO L, MAO C. Graph convolutional networks for text classification[C]//Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2019: 7370-7377.
[23] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems 30, Long Beach, Dec 4-9, 2017: 5998-6008.
[24] DAI Z, YANG Z, YANG Y, et al. Transformer-XL: attentive language models beyond a fixed-length context[C]//Proceedings of the 57th Conference of the Association for Computational Linguistics, Florence, Jul 28-Aug 2, 2019. Stroudsburg: ACL, 2019: 2978-2988.
[25] 苏剑林. 鱼与熊掌兼得: 融合检索和生成的SimBERT模型[EB/OL]. [2023-09-21]. https://spaces.ac.cn/archives/7427.
SU J L. Having both fish and bear's paw: the SimBERT model fusing retrieval and generation[EB/OL]. [2023-09-21]. https://spaces.ac.cn/archives/7427.
[26] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Jun 2-7, 2019. Stroudsburg: ACL, 2019: 4171-4186.
[27] DONG L, YANG N, WANG W, et al. Unified language model pre-training for natural language understanding and generation[C]//Advances in Neural Information Processing Systems 32, Vancouver, Dec 8-14, 2019.
[28] MASLEJ-KREŠŇÁKOVÁ V, SARNOVSKÝ M, JACKOVÁ J. Use of data augmentation techniques in detection of antisocial behavior using deep learning methods[J]. Future Internet, 2022, 14(9): 260.
[29] WEI J, ZOU K. EDA: easy data augmentation techniques for boosting performance on text classification tasks[C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2019: 6381-6387.
[30] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(2): 318-327.
[31] JING Z, LI P, WU B, et al. An adaptive focal loss function based on transfer learning for few-shot radar signal intra-pulse modulation classification[J]. Remote Sensing, 2022, 14(8): 1950.
[32] ALJOHANI N R, FAYOUMI A, HASSAN S U. A novel focal-loss and class-weight-aware convolutional neural network for the classification of in-text citations[J]. Journal of Information Science, 2023, 49(1): 79-92.
[33] LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[EB/OL]. [2023-09-21]. https://arxiv.org/abs/1907.11692.
[34] JOHNSON R, ZHANG T. Deep pyramid convolutional neural networks for text categorization[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2017: 562-570.