[1] CAI H J, TU Y F, ZHOU X S, et al. Aspect-category based sentiment analysis with hierarchical graph convolutional network[C]//Proceedings of the 28th International Conference on Computational Linguistics. Stroudsburg: ACL, 2020: 833-843.
[2] SCHMITT M, STEINHEBER S, SCHREIBER K, et al. Joint aspect and polarity classification for aspect-based sentiment analysis with end-to-end neural networks[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2018: 1109-1114.
[3] LIANG B, SU H, YIN R D, et al. Beta distribution guided aspect-aware graph for aspect category sentiment analysis with affective knowledge[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2021: 208-218.
[4] DAI Z H, DAI W, LIU Z H, et al. Multi-task multi-head attention memory network for fine-grained sentiment analysis[C]//Proceedings of the 8th CCF International Conference on Natural Language Processing and Chinese Computing. Cham: Springer, 2019: 609-620.
[5] GUO X, ZHANG G, WANG S G, et al. Multi-way matching based fine-grained sentiment analysis for user reviews[J]. Neural Computing and Applications, 2020, 32(10): 5409-5423.
[6] FU Y J, LIAO J, LI Y, et al. Multiple perspective attention based on double BiLSTM for aspect and sentiment pair extract[J]. Neurocomputing, 2021, 438: 302-311.
[7] HU M T, ZHAO S W, ZHANG L, et al. CAN: constrained attention networks for multi-aspect sentiment analysis[EB/OL]. [2024-04-23]. https://arxiv.org/abs/1812.10735.
[8] 孙小婉, 王英, 王鑫, 等. 面向双注意力网络的特定方面情感分析模型[J]. 计算机研究与发展, 2019, 56(11): 2384-2395.
SUN X W, WANG Y, WANG X, et al. Aspect-based sentiment analysis model based on dual-attention networks[J]. Journal of Computer Research and Development, 2019, 56(11): 2384-2395.
[9] 张文轩, 殷雁君, 智敏. 用于方面级情感分析的情感增强双图卷积网络[J]. 计算机科学与探索, 2024, 18(1): 217-230.
ZHANG W X, YIN Y J, ZHI M. Affection enhanced dual graph convolution network for aspect based sentiment analysis[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(1): 217-230.
[10] XUE W, LI T. Aspect based sentiment analysis with gated convolutional networks[EB/OL]. [2024-04-23]. https://arxiv.org/abs/1805.07043.
[11] LIU P F, YUAN W Z, FU J L, et al. Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing[J]. ACM Computing Surveys, 2023, 55(9): 1-35.
[12] JIANG Q, CHEN L T, XU R F, et al. A challenge dataset and effective models for aspect-based sentiment analysis[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2019: 6280-6285.
[13] YIN R D, SU H, LIANG B, et al. Extracting the collaboration of entity and attribute: gated interactive networks for aspect sentiment analysis[C]//Proceedings of the 9th CCF International Conference on Natural Language Processing and Chinese Computing. Cham: Springer, 2020: 802-814.
[14] LI Y C, YIN C X, ZHONG S H, et al. Multi-instance multi-label learning networks for aspect-category sentiment analysis[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2010.02656.
[15] SCHICK T, SCHÜTZE H. Exploiting cloze-questions for few-shot text classification and natural language inference[C]//Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics. Stroudsburg: ACL, 2021: 255-269.
[16] CHEN X, ZHANG N Y, XIE X, et al. KnowPrompt: knowledge-aware prompt-tuning with synergistic optimization for relation extraction[C]//Proceedings of the ACM Web Conference 2022. New York: ACM, 2022: 2778-2788.
[17] SCHICK T, SCHMID H, SCHÜTZE H. Automatically identifying words that can serve as labels for few-shot text classification[EB/OL]. [2024-05-14]. https://arxiv.org/abs/2010.13641.
[18] LIU X, ZHENG Y N, DU Z X, et al. GPT understands, too[J]. AI Open, 2024, 5: 208-215.
[19] GAO T Y, FISCH A, CHEN D Q. Making pre-trained language models better few-shot learners[EB/OL]. [2024-05-14]. https://arxiv.org/abs/2012.15723.
[20] HU S D, DING N, WANG H D, et al. Knowledgeable prompt-tuning: incorporating knowledge into prompt verbalizer for text classification[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2022: 2225-2240.
[21] BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates, 2020: 1877-1901.
[22] SHIN T, RAZEGHI Y, LOGAN R L, et al. AutoPrompt: eliciting knowledge from language models with automatically generated prompts[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2020: 4222-4235.
[23] YANG Z L, DAI Z H, YANG Y M, et al. XLNet: generalized autoregressive pretraining for language understanding[C]//Advances in Neural Information Processing Systems 32. Red Hook: Curran Associates, 2019: 5754-5764.
[24] DAI Z H, YANG Z L, YANG Y M, et al. Transformer-XL: attentive language models beyond a fixed-length context[EB/OL]. [2024-05-14]. https://arxiv.org/abs/1901.02860.
[25] LI Y F, GUO L Z, ZHOU Z H. Towards safe weakly supervised learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(1): 334-346.
[26] PONTIKI M, GALANIS D, PAPAGEORGIOU H, et al. SemEval-2015 Task 12: aspect based sentiment analysis[C]//Proceedings of the 9th International Workshop on Semantic Evaluation. Stroudsburg: ACL, 2015: 486-495.
[27] PONTIKI M, GALANIS D, PAPAGEORGIOU H, et al. SemEval-2016 Task 5: aspect based sentiment analysis[C]//Proceedings of the 10th International Workshop on Semantic Evaluation. Stroudsburg: ACL, 2016: 19-30.