[1] 张鲁宁, 左信, 刘建伟. 零样本学习研究进展[J]. 自动化学报, 2020, 46(1): 1-23.
ZHANG L N, ZUO X, LIU J W. Research and development on zero-shot learning[J]. Acta Automatica Sinica, 2020, 46(1): 1-23.
[2] HU M Q, LIU B. Mining and summarizing customer reviews[C]//Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2004: 168-177.
[3] WANG Y Q, HUANG M L, ZHU X Y, et al. Attention-based LSTM for aspect-level sentiment classification[C]//Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2016: 606-615.
[4] SUN C, HUANG L Y, QIU X P. Utilizing BERT for aspect-based sentiment analysis via constructing auxiliary sentence[C]//Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2019: 380-385.
[5] SEOH R, BIRLE I, TAK M, et al. Open aspect target sentiment classification with natural language prompts[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2021: 6311-6322.
[6] 肖泽管, 陈清亮. 融合多种类型语法信息的属性级情感分析模型[J]. 计算机科学与探索, 2022, 16(2): 395-402.
XIAO Z G, CHEN Q L. Aspect-based sentiment analysis model with multiple grammatical information[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(2): 395-402.
[7] TIAN Y H, CHEN G M, SONG Y. Enhancing aspect-level sentiment analysis with word dependencies[C]//Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Stroudsburg: ACL, 2021: 3726-3739.
[8] LI W, ZHANG H, WANG M. Dialogue sentiment quadruple analysis: a multilingual dataset[C]//Proceedings of the 2023 International Conference on Computational Linguistics, 2023: 45-60.
[9] LING Y, YU J F, XIA R. Vision-language pre-training for multimodal aspect-based sentiment analysis[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2022: 2149-2159.
[10] SOCHER R, GANJOO M, MANNING C D, et al. Zero-shot learning through cross-modal transfer[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems, 2013: 935-943.
[11] SAPPADLA P V, NAM J, MENCIA E L, et al. Using semantic similarity for multi-label zero-shot classification of text documents[C]//Proceedings of the 23rd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2016: 423-428.
[12] RIOS A, KAVULURU R. Few-shot and zero-shot multi-label learning for structured label spaces[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2018: 3132-3142.
[13] BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2020: 1877-1901.
[14] HALDER K, AKBIK A, KRAPAC J, et al. Task-aware representation of sentences for generic text classification[C]//Proceedings of the 28th International Conference on Computational Linguistics. Stroudsburg: ACL, 2020: 3202-3213.
[15] WANG Y S, CHI T C, ZHANG R H, et al. PESCO: prompt-enhanced self contrastive learning for zero-shot text classification[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 14897-14911.
[16] ZHANG H, LI Y Z, ZHU T F, et al. Commonsense-based adversarial learning framework for zero-shot stance detection[J]. Neurocomputing, 2024, 563: 126943.
[17] YIN W P, HAY J, ROTH D. Benchmarking zero-shot text classification: datasets, evaluation and entailment approach[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2019: 3914-3923.
[18] LIU J, TENG Z Y, CUI L Y, et al. Solving aspect category sentiment analysis as a text generation task[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2021: 4406-4416.
[19] ZHANG H X, ZHANG X F, HUANG H B, et al. Prompt-based meta-learning for few-shot text classification[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2022: 1342-1357.
[20] PETRONI F, ROCKTÄSCHEL T, RIEDEL S, et al. Language models as knowledge bases?[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2019: 2463-2473.
[21] SCHICK T, SCHÜTZE H. Exploiting cloze-questions for few-shot text classification and natural language inference[C]//Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Stroudsburg: ACL, 2021: 255-269.
[22] SCHICK T, SCHÜTZE H. It's not just size that matters: small language models are also few-shot learners[C]//Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2021: 2339-2352.
[23] 王昱婷, 刘一伊, 张儒清, 等. 基于提示学习的文本隐式情感分类[J]. 山西大学学报(自然科学版), 2023, 46(3): 509-517.
WANG Y T, LIU Y Y, ZHANG R Q, et al. Learning implicit sentiment via prompt tuning[J]. Journal of Shanxi University (Natural Science Edition), 2023, 46(3): 509-517.
[24] JIANG Z B, XU F, ARAKI J, et al. How can we know what language models know?[J]. Transactions of the Association for Computational Linguistics, 2020, 8: 423-438.
[25] HAVIV A, BERANT J, GLOBERSON A. BERTese: learning to speak to BERT[C]//Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Stroudsburg: ACL, 2021: 3618-3623.
[26] GAO T Y, FISCH A, CHEN D Q. Making pre-trained language models better few-shot learners[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2021: 3816-3830.
[27] LI J Y, TANG T Y, NIE J Y, et al. Learning to transfer prompts for text generation[C]//Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2022: 3506-3518.
[28] DIAO S Z, WANG P C, LIN Y, et al. Active prompting with chain-of-thought for large language models[EB/OL]. [2024-03-17]. https://arxiv.org/abs/2302.12246.
[29] LI Z K, PENG B L, HE P C, et al. Guiding large language models via directional stimulus prompting[EB/OL]. [2024-03-17]. https://arxiv.org/abs/2302.11520.
[30] PLAZA-DEL-ARCO F M, MARTIN-VALDIVIA M T, KLINGER R. Natural language inference prompts for zero-shot emotion classification in text across corpora[C]//Proceedings of the 29th International Conference on Computational Linguistics, 2022: 3506-3518.
[31] DATHATHRI S, MADOTTO A, LAN J, et al. Plug and play language models: a simple approach to controlled text generation[EB/OL]. [2024-03-17]. https://arxiv.org/abs/1912.02164.