[1] CHEN J, DONG H, WANG X, et al. Bias and debias in recommender system: a survey and future directions[J]. ACM Transactions on Information Systems, 2023, 41(3): 1-39.
[2] ROZANOVA J, VALENTINO M, FREITAS A. Estimating the causal effects of natural logic features in neural NLI models[EB/OL]. [2024-01-05]. https://arxiv.org/abs/2305.08572.
[3] SAXON M, WANG X Y, XU W, et al. PECO: examining single sentence label leakage in natural language inference datasets through progressive evaluation of cluster outliers[C]//Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 3053-3066.
[4] ZHOU F, MAO Y, YU L, et al. Causal-debias: unifying debiasing in pretrained language models and fine-tuning via causal invariant learning[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 4227-4241.
[5] KHETAN V, RAMNANI R, ANAND M, et al. Causal BERT: language models for causality detection between events expressed in text[C]//Intelligent Computing: Proceedings of the 2021 Computing Conference. Cham: Springer, 2022: 965-980.
[6] KORAKAKIS M, VLACHOS A. Improving the robustness of NLI models with minimax training[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 14322-14339.
[7] WANG W, ZHANG Y, LI H, et al. Causal recommendation: progresses and future directions[C]//Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2023: 3432-3435.
[8] TEZUKA T, KUROKI M. An unbiased estimator of the causal effect on the variance based on the back-door criterion in Gaussian linear structural equation models[J]. Journal of Multivariate Analysis, 2023, 197: 105201.
[9] LI Y, MA X Y, YANG G L, et al. Survey of causal inference for knowledge graphs and large language models[J]. Journal of Frontiers of Computer Science and Technology, 2023, 17(10): 2358-2376.
[10] HWANG J D, BHAGAVATULA C, LE BRAS R, et al. (Comet-) Atomic 2020: on symbolic and neural commonsense knowledge graphs[C]//Proceedings of the 2021 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2021: 6384-6392.
[11] GORDON A, BEJAN C, SAGAE K. Commonsense causal reasoning using millions of personal stories[C]//Proceedings of the 2011 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2011: 1180-1185.
[12] LUO Z, SHA Y, ZHU K Q, et al. Commonsense causal reasoning between short texts[C]//Proceedings of the 15th International Conference on Principles of Knowledge Representation and Reasoning. Menlo Park: AAAI, 2016: 421-430.
[13] SASAKI S, TAKASE S, INOUE N, et al. Handling multiword expressions in causality estimation[C]//Proceedings of the 12th International Conference on Computational Semantics. Stroudsburg: ACL, 2017.
[14] XIE Z P, MU F T. Distributed representation of words in cause and effect spaces[C]//Proceedings of the 2019 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2019: 7330-7337.
[15] MU F, LI W, XIE Z. Effect generation based on causal reasoning[C]//Findings of the Association for Computational Linguistics: EMNLP 2021. Stroudsburg: ACL, 2021: 527-533.
[16] UTAMA P A, MOOSAVI N S, GUREVYCH I. Mind the trade-off: debiasing NLU models without degrading the in-distribution performance[EB/OL]. [2024-01-05]. https://arxiv.org/abs/2005.00315.
[17] JANG T, WANG X. Difficulty-based sampling for debiased contrastive representation learning[C]//Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 24039-24048.
[18] MAHABADI R K, BELINKOV Y, HENDERSON J. End-to-end bias mitigation by modelling biases in corpora[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 8706-8716.
[19] LYU Y, LI P, YANG Y, et al. Feature-level debiased natural language understanding[C]//Proceedings of the 2023 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2023: 13353-13361.
[20] SCHUSTER T, SHAH D, YEO Y J S, et al. Towards debiasing fact verification models[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2019: 3419-3425.
[21] WU Y, GARDNER M, STENETORP P, et al. Generating data to mitigate spurious correlations in natural language inference datasets[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2022: 2660-2676.
[22] CHEN J G, ZHANG R Q, GUO J F, et al. Counterfactual inference for fact verification debiasing[J]. Journal of Chinese Information Processing, 2023, 37(10): 97-105.
[23] DU L, DING X, SUN Z, et al. Towards stable natural language understanding via information entropy guided debiasing[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 2868-2882.
[24] ZHOU X, BANSAL M. Towards robustifying NLI models against lexical dataset biases[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 8759-8771.
[25] ZANDIE R, SHEKHAR D, MAHOOR M. COGEN: abductive commonsense language generation[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 295-302.
[26] PAUL D, FRANK A. Social commonsense reasoning with multi-head knowledge attention[C]//Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg: ACL, 2020: 2969-2980.
[27] DU L, DING X, XIONG K, et al. ExCAR: event graph knowledge enhanced explainable causal reasoning[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2021: 2354-2363.
[28] PEARL J. Theoretical impediments to machine learning with seven sparks from the causal revolution[C]//Proceedings of the 11th ACM International Conference on Web Search and Data Mining. New York: ACM, 2018: 3.
[29] FENG F, ZHANG J, HE X, et al. Empowering language understanding with counterfactual reasoning[C]//Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Stroudsburg: ACL, 2021: 2226-2236.
[30] SIA S, BELYY A, ALMAHAIRI A, et al. Logical satisfiability of counterfactuals for faithful explanations in NLI[C]//Proceedings of the 2023 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2023: 9837-9845.
[31] CHEN Z, GAO Q, BOSSELUT A, et al. DISCO: distilling counterfactuals with large language models[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 5514-5528.
[32] DU L, DING X, LIU T, et al. Learning event graph knowledge for abductive reasoning[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2021: 5181-5190.
[33] FEDER A, KEITH K A, MANZOOR E, et al. Causal inference in natural language processing: estimation, prediction, interpretation and beyond[J]. Transactions of the Association for Computational Linguistics, 2022, 10: 1138-1158.
[34] BHAGAVATULA C, LE BRAS R, MALAVIYA C, et al. Abductive commonsense reasoning[C]//Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Apr 26-30, 2020.
[35] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2019: 4171-4186.
[36] LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[EB/OL]. [2024-01-05]. https://arxiv.org/abs/1907.11692.
[37] CLARK K, LUONG M T, LE Q V, et al. ELECTRA: pre-training text encoders as discriminators rather than generators[C]//Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Apr 26-30, 2020.
[38] ZHU Y, PANG L, LAN Y, et al. L2R2: leveraging ranking for abductive reasoning[C]//Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2020: 1961-1964.
[39] ZHAO W X, ZHOU K, LI J, et al. A survey of large language models[EB/OL]. [2024-01-05]. https://arxiv.org/abs/2303.18223.