[1] LEHNERT W G. The process of question answering: a computer simulation of cognition[M]. Bristol: Taylor & Francis, 2022.
[2] NAUN C C. Book review: introduction to modern information retrieval[J]. Library Resources & Technical Services, 2011, 55(4): 239-240.
[3] HURLEY P J. A concise introduction to logic[M]. Cengage Learning, 2014.
[4] QIU B, CHEN X, XU J, et al. A survey on neural machine reading comprehension[EB/OL]. [2023-09-23]. https://arxiv.org/abs/1906.03824.
[5] STORKS S, GAO Q, CHAI J Y. Commonsense reasoning for natural language understanding: a survey of benchmarks, resources, and approaches[EB/OL]. [2023-09-23]. https://arxiv.org/abs/1904.01172.
[6] 朱斯琪, 过弋, 王业相. 基于深度交互融合网络的多跳机器阅读理解[J]. 中文信息学报, 2022, 36(5): 67-75.
ZHU S Q, GUO Y, WANG Y X. Deep interactive fusion network for multi-hop reading comprehension[J]. Journal of Chinese Information Processing, 2022, 36(5): 67-75.
[7] LIU J, CUI L, LIU H, et al. LogiQA: a challenge dataset for machine reading comprehension with logical reasoning[EB/OL]. [2023-09-23]. https://arxiv.org/abs/2007.08124.
[8] LIU H, LIU J, CUI L, et al. LogiQA 2.0—an improved dataset for logical reasoning in natural language understanding[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023, 31: 2947-2962.
[9] YU W, JIANG Z, DONG Y, et al. ReClor: a reading comprehension dataset requiring logical reasoning[EB/OL]. [2023-09-23]. https://arxiv.org/abs/2002.04326.
[10] HUANG L, BRAS R L, BHAGAVATULA C, et al. Cosmos QA: machine reading comprehension with contextual commonsense reasoning[EB/OL]. [2023-09-23]. https://arxiv.org/abs/1909.00277.
[11] YANG Z, QI P, ZHANG S, et al. HotpotQA: a dataset for diverse, explainable multi-hop question answering[EB/OL]. [2023-09-23]. https://arxiv.org/abs/1809.09600.
[12] DUA D, WANG Y, DASIGI P, et al. DROP: a reading comprehension benchmark requiring discrete reasoning over paragraphs[EB/OL]. [2023-09-23]. https://arxiv.org/abs/1903.00161.
[13] QUINE W V O. Philosophy of logic[M]. Cambridge: Harvard University Press, 1986.
[14] BHAGAVATULA C, BRAS R L, MALAVIYA C, et al. Abductive commonsense reasoning[EB/OL]. [2023-09-23]. https://arxiv.org/abs/1908.05739.
[15] DEUTSCHER G. On the misuse of the notion of ‘abduction’ in linguistics[J]. Journal of Linguistics, 2002, 38(3): 469-485.
[16] PSILLOS S. An explorer upon untrodden ground: Peirce on abduction[M]//Handbook of the History of Logic. Amsterdam: North-Holland, 2011: 117-151.
[17] JIN H, LUO Y, GAO C, et al. ComQA: question answering over knowledge base via semantic matching[J]. IEEE Access, 2019, 7: 75235-75246.
[18] GU Y, PAHUJA V, CHENG G, et al. Knowledge base question answering: a semantic parsing perspective[EB/OL]. [2023-09-23]. https://arxiv.org/abs/2209.04994.
[19] 王小捷, 白子薇, 李可, 等. 机器阅读理解的研究进展[J]. 北京邮电大学学报, 2019, 42(6): 1-9.
WANG X J, BAI Z W, LI K, et al. Survey on machine reading comprehension[J]. Journal of Beijing University of Posts and Telecommunications, 2019, 42(6): 1-9.
[20] LIU S, ZHANG X, ZHANG S, et al. Neural machine reading comprehension: methods and trends[J]. Applied Sciences, 2019, 9(18): 3698.
[21] ZENG C, LI S, LI Q, et al. A survey on machine reading comprehension—tasks, evaluation metrics and benchmark datasets[J]. Applied Sciences, 2020, 10(21): 7640.
[22] 顾迎捷, 桂小林, 李德福, 等. 基于神经网络的机器阅读理解综述[J]. 软件学报, 2020, 31(7): 2095-2126.
GU Y J, GUI X L, LI D F, et al. Survey of machine reading comprehension based on neural network[J]. Journal of Software, 2020, 31(7): 2095-2126.
[23] VARGHESE N, PUNITHAVALLI M. A comprehensive survey on machine reading comprehension: models, benchmarked datasets, evaluation metrics, and trends[C]//Proceedings of the 2022 Congress on Intelligent Systems. Singapore: Springer, 2022: 1-15.
[24] 倪艺函, 兰艳艳, 庞亮, 等. 多跳式文本阅读理解方法综述[J]. 中文信息学报, 2022, 36(11): 1-19.
NI Y H, LAN Y Y, PANG L, et al. A survey of multi-hop reading comprehension for text[J]. Journal of Chinese Information Processing, 2022, 36(11): 1-19.
[25] HABERNAL I, WACHSMUTH H, GUREVYCH I, et al. The argument reasoning comprehension task: identification and reconstruction of implicit warrants[EB/OL]. [2023-09-23]. https://arxiv.org/abs/1708.01425.
[26] SINHA K, SODHANI S, DONG J, et al. CLUTRR: a diagnostic benchmark for inductive reasoning from text[EB/OL]. [2023-09-23]. https://arxiv.org/abs/1908.06177.
[27] SUN K, YU D, CHEN J, et al. DREAM: a challenge data set and models for dialogue-based reading comprehension[J]. Transactions of the Association for Computational Linguistics, 2019, 7: 217-231.
[28] NIE Y, WILLIAMS A, DINAN E, et al. Adversarial NLI: a new benchmark for natural language understanding[EB/OL]. [2023-09-23]. https://arxiv.org/abs/1910.14599.
[29] GURURANGAN S, SWAYAMDIPTA S, LEVY O, et al. Annotation artifacts in natural language inference data[EB/OL]. [2023-09-23]. https://arxiv.org/abs/1803.02324.
[30] CUI L, WU Y, LIU S, et al. MuTual: a dataset for multi-turn dialogue reasoning[EB/OL]. [2023-09-23]. https://arxiv.org/abs/2004.04494.
[31] HUANG H Y, CHOI E, YIH W. FlowQA: grasping flow in history for conversational machine comprehension[EB/OL]. [2023-09-23]. https://arxiv.org/abs/1810.06683.
[32] SEO M, KEMBHAVI A, FARHADI A, et al. Bidirectional attention flow for machine comprehension[EB/OL]. [2023-09-23]. https://arxiv.org/abs/1611.01603.
[33] GARCEZ A A, BADER S, BOWMAN H, et al. Neural-symbolic learning and reasoning: a survey and interpretation[J]. Neuro-Symbolic Artificial Intelligence: The State of the Art, 2022, 342(1): 327.
[34] CHEN X, LIANG C, YU A W, et al. Neural symbolic reader: scalable integration of distributed and symbolic representations for reading comprehension[C]//Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Apr 26-30, 2020.
[35] GARCEZ A A, BESOLD T R, DE RAEDT L, et al. Neural-symbolic learning and reasoning: contributions and challenges[C]//Proceedings of the 2015 AAAI Spring Symposia. Menlo Park: AAAI, 2015.
[36] DONG H, MAO J, LIN T, et al. Neural logic machines[EB/OL]. [2023-09-23]. https://arxiv.org/abs/1904.11694.
[37] GUPTA N, LIN K, ROTH D, et al. Neural module networks for reasoning over text[EB/OL]. [2023-09-23]. https://arxiv.org/abs/1912.04971.
[38] WANG S, ZHONG W, TANG D, et al. Logic-driven context extension and data augmentation for logical reasoning of text[C]//Findings of the Association for Computational Linguistics: ACL 2022. Stroudsburg: ACL, 2022: 1619-1629.
[39] LI X, CHENG G, CHEN Z, et al. AdaLoGN: adaptive logic graph network for reasoning-based machine reading comprehension[EB/OL]. [2023-09-23]. https://arxiv.org/abs/2203.08992.
[40] SCHLICHTKRULL M, KIPF T N, BLOEM P, et al. Modeling relational data with graph convolutional networks[C]//Proceedings of the 15th International Conference on Semantic Web, Heraklion, Jun 3-7, 2018. Cham: Springer, 2018: 593-607.
[41] SUN Y, CHENG G, QU Y. Reading comprehension with graph-based temporal-causal reasoning[C]//Proceedings of the 27th International Conference on Computational Linguistics. Stroudsburg: ACL, 2018: 806-817.
[42] HUANG Y, FANG M, CAO Y, et al. DAGN: discourse-aware graph network for logical reasoning[EB/OL]. [2023-09-23]. https://arxiv.org/abs/2103.14349.
[43] OUYANG S, ZHANG Z, ZHAO H. Fact-driven logical reasoning[EB/OL]. [2023-09-23]. https://arxiv.org/abs/2105.10334.
[44] CHEN J, ZHANG Z, ZHAO H. Modeling hierarchical reasoning chains by linking discourse units and key phrases for reading comprehension[EB/OL]. [2023-09-23]. https://arxiv.org/abs/2306.12069.
[45] XU F, LIU J, LIN Q, et al. Logiformer: a two-branch graph transformer network for interpretable logical reasoning[C]//Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2022: 1055-1065.
[46] JIAO F, GUO Y, SONG X, et al. MERIt: meta-path guided contrastive learning for logical reasoning[EB/OL]. [2023-09-23]. https://arxiv.org/abs/2203.00357.
[47] SANYAL S, XU Y, WANG S, et al. APOLLO: a simple approach for adaptive pretraining of language models for logical reasoning[EB/OL]. [2023-09-23]. https://arxiv.org/abs/2212.09282.
[48] PI X, ZHONG W, GAO Y, et al. LogiGAN: learning logical reasoning via adversarial pre-training[C]//Advances in Neural Information Processing Systems 35, New Orleans, Nov 28-Dec 9, 2022: 16290-16304.
[49] XU Z, YANG Z, CUI Y, et al. IDOL: indicator-oriented logic pre-training for logical reasoning[EB/OL]. [2023-09-23]. https://arxiv.org/abs/2306.15273.
[50] LIU H, TENG Z, CUI L, et al. LogiCoT: logical chain-of-thought instruction-tuning data collection with GPT-4[EB/OL]. [2023-09-23]. https://arxiv.org/abs/2305.12147.
[51] BAO Q, PENG A Y, DENG Z, et al. Contrastive learning with logic-driven data augmentation for logical reasoning over text[EB/OL]. [2023-09-23]. https://arxiv.org/abs/2305.12599.
[52] LAWRENCE J. Introduction to neural networks[M]. Nevada: California Scientific Software, 1993.
[53] ZHOU J, CUI G, HU S, et al. Graph neural networks: a review of methods and applications[J]. AI Open, 2020, 1: 57-81.
[54] WU Z, PAN S, CHEN F, et al. A comprehensive survey on graph neural networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020, 32(1): 4-24.
[55] ZHANG S, TONG H, XU J, et al. Graph convolutional networks: a comprehensive review[J]. Computational Social Networks, 2019, 6(1): 1-23.
[56] LI R, WANG S, ZHU F, et al. Adaptive graph convolutional neural networks[C]//Proceedings of the 2018 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2018: 3546-3553.
[57] LIU X, ZHANG F, HOU Z, et al. Self-supervised learning: generative or contrastive[J]. IEEE Transactions on Knowledge and Data Engineering, 2021, 35(1): 857-876.
[58] ZENG A, LIU X, DU Z, et al. GLM-130B: an open bilingual pre-trained model[EB/OL]. [2023-09-23]. https://arxiv.org/abs/2210.02414.
[59] BAI J, BAI S, CHU Y, et al. Qwen technical report[EB/OL]. [2023-10-15]. https://arxiv.org/abs/2309.16609.
[60] ZHANG H, CHEN J, JIANG F, et al. HuatuoGPT, towards taming language model to be a doctor[EB/OL]. [2023-10-15]. https://arxiv.org/abs/2305.15075.
[61] DAN Y, LEI Z, GU Y, et al. EduChat: a large-scale language model-based chatbot system for intelligent education[EB/OL]. [2023-10-15]. https://arxiv.org/abs/2308.02773.
[62] HOWARD J, RUDER S. Universal language model fine-tuning for text classification[EB/OL]. [2023-10-15]. https://arxiv.org/abs/1801.06146.
[63] CETTO M, NIKLAUS C, FREITAS A, et al. Graphene: semantically-linked propositions in open information extraction[EB/OL]. [2023-10-15]. https://arxiv.org/abs/1807.11276.
[64] MANN W C, THOMPSON S A. Rhetorical structure theory: toward a functional theory of text organization[J]. Text-Interdisciplinary Journal for the Study of Discourse, 1988, 8(3): 243-281.
[65] PRASAD R, DINESH N, LEE A, et al. The Penn discourse treebank 2.0[C]//Proceedings of the 2008 International Conference on Language Resources and Evaluation, Marrakech, May 26-Jun 1, 2008.
[66] COHAN A, DERNONCOURT F, KIM D S, et al. A discourse-aware attention model for abstractive summarization of long documents[EB/OL]. [2023-10-15]. https://arxiv.org/abs/1804.05685.
[67] HONNIBAL M, MONTANI I. spaCy 2: natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing[J]. To Appear, 2017, 7(1): 411-420.
[68] LEVI F W. Finite geometrical systems: six public lectures delivered in February[M]. Calcutta: University of Calcutta, 1942.
[69] CUI Y, CHEN Z, WEI S, et al. Attention-over-attention neural networks for reading comprehension[EB/OL]. [2023-10-15]. https://arxiv.org/abs/1607.04423.
[70] DHINGRA B, LIU H, YANG Z, et al. Gated-attention readers for text comprehension[EB/OL]. [2023-10-15]. https://arxiv.org/abs/1606.01549.
[71] BECK D, HAFFARI G, COHN T. Graph-to-sequence learning using gated graph neural networks[EB/OL]. [2023-10-15]. https://arxiv.org/abs/1806.09835.
[72] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[EB/OL]. [2023-10-15]. https://arxiv.org/abs/1810.04805.
[73] BOUD D, KEOGH R, WALKER D. Reflection: turning experience into learning[M]. London: Routledge, 2013.
[74] DI STEFANO G, GINO F, PISANO G P, et al. Making experience count: the role of reflection in individual learning[M]. Cambridge: Harvard Business School, 2016.
[75] LIU H, NING R, TENG Z, et al. Evaluating the logical reasoning ability of ChatGPT and GPT-4[EB/OL]. [2023-10-15]. https://arxiv.org/abs/2304.03439.
[76] YE S, HWANG H, YANG S, et al. In-context instruction learning[EB/OL]. [2023-10-15]. https://arxiv.org/abs/2302.14691.
[77] CHANG E Y. Examining GPT-4: capabilities, implications and future directions[C]//Proceedings of the 10th International Conference on Computational Science and Computational Intelligence, 2023.
[78] TOUVRON H, LAVRIL T, IZACARD G, et al. LLaMA: open and efficient foundation language models[EB/OL]. [2023-10-15]. https://arxiv.org/abs/2302.13971.
[79] LIU P, YUAN W, FU J, et al. Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing[J]. ACM Computing Surveys, 2023, 55(9): 1-35.
[80] WANG Z, ZHANG Z, LEE C Y, et al. Learning to prompt for continual learning[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 139-149.
[81] KOJIMA T, GU S S, REID M, et al. Large language models are zero-shot reasoners[C]//Advances in Neural Information Processing Systems 35, New Orleans, Nov 28-Dec 9, 2022: 22199-22213.
[82] KIM S, JOO S J, KIM D, et al. The CoT collection: improving zero-shot and few-shot learning of language models via chain-of-thought fine-tuning[EB/OL]. [2023-10-15]. https://arxiv.org/abs/2305.14045.
[83] LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[EB/OL]. [2023-10-15]. https://arxiv.org/abs/1907.11692.
[84] KUBLIK S, SABOO S. GPT-3[M]. Sebastopol: O'Reilly Media, Inc., 2022.
[85] YANG Z, DAI Z, YANG Y, et al. XLNet: generalized autoregressive pretraining for language understanding[C]//Advances in Neural Information Processing Systems 32, Vancouver, Dec 8-14, 2019: 5754-5764.
[86] PENG B, LI C, HE P, et al. Instruction tuning with GPT-4[EB/OL]. [2023-10-15]. https://arxiv.org/abs/2304.03277.
[87] QIAO S, OU Y, ZHANG N, et al. Reasoning with language model prompting: a survey[EB/OL]. [2023-10-15]. https://arxiv.org/abs/2212.09597. |