[1] KOWSARI K, MEIMANDI K J, HEIDARYSAFA M, et al. Text classification algorithms: a survey[J]. Information, 2019, 10(4): 150.
[2] SCHMIDT A, WIEGAND M. A survey on hate speech detection using natural language processing[C]//Proceedings of the 5th International Workshop on Natural Language Processing for Social Media. Stroudsburg: ACL, 2017: 1-10.
[3] PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[EB/OL]. [2023-10-29]. http://arxiv.org/abs/1802.05365.
[4] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[EB/OL]. [2023-04-12]. http://arxiv.org/abs/1810.04805.
[5] STRUSS J M, SIEGEL M, RUPPENHOFER J, et al. Overview of GermEval Task 2, 2019 shared task on the identification of offensive language[C]//Proceedings of the 15th Conference on Natural Language Processing, Friedrich-Alexander-Universität Erlangen-Nürnberg, Oct 9-11, 2019: 352-363.
[6] PARASCHIV A, CERCEL D C. UPB at GermEval-2019 Task 2: BERT-based offensive language classification of German Tweets[C]//Proceedings of the 15th Conference on Natural Language Processing, Friedrich-Alexander-Universität Erlangen-Nürnberg, Oct 9-11, 2019: 297-303.
[7] RISCH J, STOLL A, ZIEGELE M, et al. hpiDEDIS at GermEval 2019: offensive language identification using a German BERT model[C]//Proceedings of the 15th Conference on Natural Language Processing, Friedrich-Alexander-Universität Erlangen-Nürnberg, Oct 9-11, 2019: 231-237.
[8] DENG J, ZHOU J, SUN H, et al. COLD: a benchmark for Chinese offensive language detection[EB/OL]. [2024-01-24]. http://arxiv.org/abs/2201.06025.
[9] XIANG G, FAN B, WANG L, et al. Detecting offensive Tweets via topical feature discovery over a large scale Twitter corpus[C]//Proceedings of the 21st ACM International Conference on Information and Knowledge Management. New York: ACM, 2012: 1980-1984.
[10] CHEN Y, ZHOU Y, ZHU S, et al. Detecting offensive language in social media to protect adolescent online safety[C]//Proceedings of the 2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Conference on Social Computing. Piscataway: IEEE, 2012: 71-80.
[11] DINAKAR K, JONES B, HAVASI C, et al. Common sense reasoning for detection, prevention, and mitigation of cyberbullying[J]. ACM Transactions on Interactive Intelligent Systems, 2012, 2(3): 1-30.
[12] GITARI N D, ZHANG Z, DAMIEN H, et al. A lexicon-based approach for hate speech detection[J]. International Journal of Multimedia and Ubiquitous Engineering, 2015, 10(4): 215-230.
[13] VAN HEE C, LEFEVER E, VERHOEVEN B, et al. Detection and fine-grained classification of cyberbullying events[C]//Recent Advances in Natural Language Processing, Hissar, Sep 7-9, 2015: 672-680.
[14] 李锦, 夏鸿斌, 刘渊. 基于BERT的双特征融合注意力的方面情感分析模型[J]. 计算机科学与探索, 2024, 18(1): 205-216.
LI J, XIA H B, LIU Y. Dual features local-global attention model with BERT for aspect sentiment analysis[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(1): 205-216.
[15] 韩坤, 潘宏鹏, 刘忠轶. 融合BERT多层次特征的短视频网络舆情情感分析研究[J]. 计算机科学与探索, 2024, 18(4): 1010-1020.
HAN K, PAN H P, LIU Z Y. Research on sentiment analysis of short video network public opinion by integrating BERT multi-level features[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(4): 1010-1020.
[16] DADVAR M, TRIESCHNIGG D, ORDELMAN R, et al. Improving cyberbullying detection with user context[C]//Advances in Information Retrieval: 35th European Conference on IR Research. Berlin, Heidelberg: Springer, 2013: 693-696.
[17] BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners[C]//Advances in Neural Information Processing Systems 33, Dec 6-12, 2020: 1877-1901.
[18] WEI J, WANG X, SCHUURMANS D, et al. Chain-of-thought prompting elicits reasoning in large language models[EB/OL]. [2023-04-19]. http://arxiv.org/abs/2201.11903.
[19] ZHOU D, SCHÄRLI N, HOU L, et al. Least-to-most prompting enables complex reasoning in large language models[EB/OL]. [2024-03-18]. http://arxiv.org/abs/2205.10625.
[20] LI X L, LIANG P. Prefix-Tuning: optimizing continuous prompts for generation[EB/OL]. [2023-11-13]. http://arxiv.org/abs/2101.00190.
[21] LIU X, ZHENG Y, DU Z, et al. GPT understands, too[EB/OL]. [2024-04-09]. http://arxiv.org/abs/2103.10385.
[22] LIU X, JI K, FU Y, et al. P-Tuning v2: prompt tuning can be comparable to fine-tuning universally across scales and tasks[EB/OL]. [2024-04-09]. http://arxiv.org/abs/2110.07602.
[23] JAHAN M S, OUSSALAH M. A systematic review of hate speech automatic detection using natural language processing[EB/OL]. [2024-03-07]. http://arxiv.org/abs/2106.00742.
[24] ZHOU J, DENG J, MI F, et al. Towards identifying social bias in dialog systems: frame, datasets, and benchmarks[EB/OL]. [2023-05-15]. http://arxiv.org/abs/2202.08011.
[25] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[EB/OL]. [2023-05-15]. http://arxiv.org/abs/2010.03455.
[26] NIKOLOV A, RADIVCHEV V. Nikolov-Radivchev at SemEval-2019 Task 6: offensive Tweet classification with BERT and ensembles[C]//Proceedings of the 13th International Workshop on Semantic Evaluation. Stroudsburg: ACL, 2019: 691-695.
[27] RANASINGHE T, ZAMPIERI M, HETTIARACHCHI H. BRUMS at HASOC 2019: deep learning models for multilingual hate speech and offensive language identification[EB/OL]. [2024-04-27]. http://arxiv.org/abs/2004.06465.
[28] DOWLAGAR S, MAMIDI R. HASOCOne@FIRE-HASOC 2020: using BERT and multilingual BERT models for hate speech detection[EB/OL]. [2024-04-27]. http://arxiv.org/abs/2101.09007.
[29] DU Z, QIAN Y, LIU X, et al. GLM: general language model pretraining with autoregressive blank infilling[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2022: 320-335.